# Evaluate $\int_0^1\int_0^1 \left\{ \frac{e^x}{e^y} \right\}dxdy$
I want to compute the integral $$\int_0^1\int_0^1 \left\{ \frac{e^x}{e^y} \right\}dx\,dy,$$ where $\left\{ x \right\}$ is the fractional part function.
Following PROBLEMA 171, Prueba de a) (last paragraph of page 109 and first two paragraphs of page 110, here, in Spanish), I consider the case $k=1$.
When I take $x=\log u$ and $y=\log v$, I can show that $$\int_0^1\int_0^1 \left\{ \frac{e^x}{e^y} \right\}dx\,dy=\int_1^e\int_1^e \left\{ \frac{x}{y} \right\}\frac{1}{xy}dx\,dy=I_1+I_2,$$ since, following the strategy in the cited problem and taking $t=\frac{1}{u}$, $$I_1:=\int_1^e\int_1^x \left\{ \frac{x}{y} \right\}\frac{1}{xy}dy\,dx=\int_1^e\frac{1}{x}\int_{\frac{1}{x}}^1 \left\{ \frac{1}{t} \right\}\frac{dt}{t}dx=\int_1^e\int_1^x\frac{ \left\{ u \right\} }{u}du\,dx,$$ and since, if there are no mistakes, $$\int_1^x\frac{ \left\{ u \right\} }{u}du = \begin{cases} x-1-\log x, & \text{if } 1\leq x<2, \\ 1+\log 2+(x-2)-2\log x, & \text{if } 2\leq x\leq e, \end{cases}$$ then $$I_1=\int_1^2\frac{1}{x}(x-1-\log x)dx+\int_2^e\frac{1}{x}(1+\log 2+(x-2)-2\log x)dx,$$ which gives $I_1=-3+\log 2-\frac{\log^22}{2}+e$. On the other hand, following the cited problem, since $y>x$ we have $\left\{ \frac{x}{y} \right\}= \frac{x}{y}$, and the second integral is computed as $$I_2:=\int_1^e\int_x^e \left\{ \frac{x}{y} \right\}\frac{1}{xy}dy\,dx=\int_1^e\int_x^e \frac{1}{y^2}\,dy\,dx.$$ Thus I've computed $I_2=\frac{1}{e}$.
Question. I would like to know whether my computations with the fractional part function $\left\{ x \right\}$ are right (the evaluation of $\int_1^x\frac{ \left\{ u \right\} }{u}du$ and of $I_1$). Can you compute $$\int_0^1\int_0^1 \left\{ \frac{e^x}{e^y} \right\}^kdx\,dy$$ for the case $k=1$? (At least this case, as a verification of my computations; you are also welcome to provide similar identities for integers $k\geq 1$, as in the cited problem.) Thanks in advance.
• Do you have the book written by Ovidiu Furdui? It discusses this type of question. Mar 17 '16 at 8:35
• Is it exactly this integral? Then I will delete my post. Thank you very much; I have not read the book, only a free part provided by Springer. @Kf-Sansoo
– user243301
Mar 17 '16 at 8:38
• Don't delete it. Someone will step up and post the correct answer and you can learn from it. Mar 17 '16 at 8:39
• yes, you can delete all your questions and read a book Jul 27 '16 at 22:44
• My apologies if I disturbed you @user1952009
– user243301
Jul 28 '16 at 5:49
Here is a solution:
By WP we have $\{x\}=x-\lfloor x \rfloor$. Then $$\iint_0^1 \left \{ \text{e}^x\text{e}^{-y} \right \} dxdy=\iint_0^1 \text{e}^x\text{e}^{-y} dxdy-\iint_0^1 \left \lfloor \text{e}^x\text{e}^{-y} \right \rfloor dxdy.$$
We also have $\left \lfloor \text{e}^x\text{e}^{-y} \right \rfloor = 0$ for $x<y$, $2$ for $y<x-\ln 2$, and $1$ otherwise, in the region $0\leq x \leq 1$, $0 \leq y \leq 1$. Then $$\iint_0^1 \left \lfloor \text{e}^x\text{e}^{-y} \right \rfloor dxdy = \int_{0}^{1}\int_{x}^{1} 0\, dydx + \int_{0}^{1}\int_{0}^{x}1\,dydx + \int_{\ln 2}^{1}\int_{0}^{x-\ln 2}1\,dydx\\= 1 -\ln 2 + (\ln 2)^2 /2.$$
Putting this together gives $$\iint_0^1 \left \{ \text{e}^x\text{e}^{-y} \right \} dxdy= 1/\text{e} + \text{e} +\ln 2 - (\ln 2)^2/2 - 3 \approx 0.54$$
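As a quick sanity check (this snippet is my own addition, not part of the original thread), the closed form can be compared against a Monte Carlo estimate in R:

```r
# Monte Carlo estimate of the double integral of {e^(x-y)} over the unit square
set.seed(1)
n <- 2e6
x <- runif(n); y <- runif(n)
z <- exp(x - y)
mean(z - floor(z))                            # Monte Carlo estimate, about 0.539
1/exp(1) + exp(1) + log(2) - log(2)^2/2 - 3   # closed form above, about 0.5393
```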
Now we can use the same technique, along with the binomial formula, to find the solution to the general case. I assume $n \geq 1$. We have $$\iint_0^1 \left \{ \text{e}^{x-y} \right \}^n dxdy=\iint_0^1 \left ( \text{e}^{x-y}-\lfloor \text{e}^{x-y} \rfloor\right )^n dxdy\\ = \sum_{k=0}^n \begin{pmatrix}n \\ k\end{pmatrix} (-1)^k \iint_0^1 \text{e}^{(n-k)(x-y)}\lfloor \text{e}^{x-y} \rfloor^k dxdy.$$
We have $\lfloor\text{e}^{x-y} \rfloor^k = 0$ for $x<y$, $2^k$ for $y<x-\ln 2$, and $1$ otherwise, in the domain $0\leq x \leq 1$, $0 \leq y \leq 1$, except for the case $k=0$, in which case $\lfloor\text{e}^{x-y} \rfloor^k=1$. Then
$$\iint_0^1 \left \{ \text{e}^{x-y} \right \}^n dxdy = \sum_{k=0}^n \begin{pmatrix}n \\ k\end{pmatrix} (-1)^k \left [ \int_{0}^{1}\int_{0}^{x}\text{e}^{(n-k)(x-y)}dydx + \delta_{k,0}\int_{0}^{1}\int_{x}^{1}\text{e}^{(n-k)(x-y)}dydx\\ + (2^k-1)\int_{\ln 2}^{1}\int_{0}^{x-\ln 2}\text{e}^{(n-k)(x-y)}dydx \right],$$ where $\delta_{a,b}$ is the Kronecker delta. Solving the integrals and simplifying gives:
$$\iint_0^1 \left \{ \text{e}^{x-y} \right \}^n dxdy = \frac{(-1)^n}{2}(2^n-2^{n+1}\ln 2 + [2^n-1][\ln 2]^2+\ln 4)+\frac{n-1+\text{e}^{-n}}{n^2}+\sum_{k=0}^{n-1} \begin{pmatrix}n \\ k\end{pmatrix} \frac{(-1)^k}{(n-k)^2} \left [ (k-n-1+\text{e}^{n-k}) + (2^k-1)(2\text{e})^{-k}(2^k\text{e}^n+2^n\text{e}^k[k-1+n(\ln 2 -1)-k\ln 2]) \right]$$
• Many thanks for this expansion, incredible; now it serves as a reference for all users. Now I will try to read all this mathematics, from rodjohn, Blatter, and your proposition. Thanks.
– user243301
Mar 17 '16 at 10:53
Let $u=x-y$ and $v=x+y$. Then $$\left\{(x,y):0\le x,y\le1\right\} =\left\{(u,v):0\le\left|u\right|\le1,\left|v-1\right|\le1-\left|u\right|\right\}$$ The change of coordinates makes things a bit easier \begin{align} &\int_0^1\int_0^1\left\{\frac{e^x}{e^y}\right\}\,\mathrm{d}x\,\mathrm{d}y\\ &=\int_0^1\int_0^1\left\{e^{x-y}\right\}\,\mathrm{d}x\,\mathrm{d}y\\ &=\frac12\int_{-1}^0\int_{-u}^{2+u}\left\{e^u\right\}\,\mathrm{d}v\,\mathrm{d}u +\frac12\int_0^1\int_u^{2-u}\left\{e^u\right\}\,\mathrm{d}v\,\mathrm{d}u\\ &=\int_{-1}^0(1+u)\left\{e^u\right\}\,\mathrm{d}u +\int_0^1(1-u)\left\{e^u\right\}\,\mathrm{d}u\\ &=\int_0^1(1-u)\left\{e^{-u}\right\}\,\mathrm{d}u +\int_0^1(1-u)\left\{e^u\right\}\,\mathrm{d}u\\ &=\int_0^1(1-u)\,e^{-u}\,\mathrm{d}u +\int_0^{\log(2)}(1-u)\left(e^u-1\right)\,\mathrm{d}u +\int_{\log(2)}^1(1-u)\left(e^u-2\right)\,\mathrm{d}u\\ &=\int_0^1(1-u)\left(e^{-u}+e^u\right)\,\mathrm{d}u -\int_0^{\log(2)}(1-u)\,\mathrm{d}u -2\int_{\log(2)}^1(1-u)\,\mathrm{d}u\\[3pt] &=\left[(2-u)e^u+ue^{-u}\right]_0^1 -\left[u-\tfrac12u^2\right]_0^{\log(2)} -2\left[u-\tfrac12u^2\right]_{\log(2)}^1\\[6pt] &=\left[(2-u)e^u+ue^{-u}\right]_0^1 -\left[u-\tfrac12u^2\right]_0^1 -\left[u-\tfrac12u^2\right]_{\log(2)}^1\\[9pt] &=2\cosh(1)-2-\tfrac12-\tfrac12+\log(2)-\tfrac12\log(2)^2\\[12pt] &=2\cosh(1)-3+\log(2)-\tfrac12\log(2)^2 \end{align}
• Thanks very much for this answer with a change of coordinates; now I can review it.
– user243301
Mar 17 '16 at 10:41
The following solution is similar to SDiv's, but he was four minutes faster.
1. Note that $\{e^u\}=e^u$ when $u<0$, that $\{e^u\}=e^u-1$ when $0\leq u<\log2$, and that $\{e^u\}=e^u-2$ when $\log 2\leq u\leq 1$.
2. One has $$\int_{[0,1]^2} e^{x-y}\>{\rm d}(x,y)=\int_0^1 e^x\>dx\cdot\int_0^1 e^{-y}\>dy={(e-1)^2\over e}\ .$$
3. From the value obtained in 2 we have to subtract the area of the triangle $x-y\geq0$, as well as the area of the triangle $x-y\geq\log2$, in order to take care of the observations made in 1. It follows that $$\int_{[0,1]^2} \left\{{e^x\over e^y}\right\}\>{\rm d}(x,y)={(e-1)^2\over e}-{1\over2}-{1\over2}(1-\log2)^2\ .$$
• Thanks @ChristianBlatter, your answer seems very nice too; I will take more than 4 minutes to read it. And since you were as fair to SDiv as a just man should be, you have earned the following badge: $\bullet$ Just Man. Congratulations, it is a golden badge, and it is unique!
– user243301
Mar 17 '16 at 10:51
# Redistributing Income Through Hierarchy
### Abstract
Although the determinants of income are complex, the results are surprisingly uniform. To a first approximation, top incomes follow a power-law distribution, and the redistribution of income corresponds to a change in the power-law exponent. Given the messiness of the struggle for resources, why is the outcome so simple?
This paper explores the idea that the (re)distribution of top incomes is uniform because it is shaped by a ubiquitous feature of social life, namely hierarchy. Using a model first developed by Herbert Simon and Harold Lydall, I show that hierarchy can explain the power-law distribution of top incomes, including how income gets redistributed as the rich get richer.
### To study income is to be perplexed
In a famous 1933 speech, John Maynard Keynes lamented his discontent with capitalism:
It is not intelligent, it is not beautiful, it is not just, it is not virtuous — and it doesn’t deliver the goods. In short, we dislike it, and we are beginning to despise it. But when we wonder what to put in its place, we are extremely perplexed.
(Keynes, 1933)
Today, we might attribute a similar sentiment to researchers who study the distribution of income. Heterodox economists agree that the current distribution of income is ‘not virtuous’, and that the dominant approach to understanding income (marginal productivity theory) ‘doesn’t deliver the goods’. But when we look for a better approach to understanding inequality, we are ‘extremely perplexed’.
Like so many aspects of human society, the distribution of income is frustratingly complex — the joint result of ideology, politics, class struggle, and everything in between. Reviewing these complexities, Sandy Hager argues that it may be best to study inequality using a ‘plurality of methodological approaches’ (2020). I largely agree, but with one caveat. While the causes of inequality are surely complex, the outcome is not. Regardless of where we look, we find that top incomes follow a simple pattern: they are distributed according to a power law. That is, the probability of finding someone with income I is roughly proportional to I^{-\alpha} .
If the causes of income are complex, why can we model the result with a single parameter — the power-law exponent \alpha ? Moreover, why can we model income redistribution by shifting this parameter, and this parameter alone? Given the complexity of human society, the success of such a simple model seems unreasonable. How do the myriad of different forces driving inequality ‘conspire’ to create such a simple outcome?
One possibility is that the ultimate causes of inequality are indeed complex, but that they are mediated by a ‘proximate’ cause that is far simpler. If this mediator was ubiquitous, it could lead to the simple outcome that we observe (the power-law distribution of top incomes). So what might this mediator be?
I propose that it is hierarchy. Although largely ignored by mainstream economics, hierarchy is a common feature of human life. It seems to be the default mode for organizing large groups. And its use appears to have spread with industrialization (Fix, 2021a).
The distinguishing feature of hierarchy is the chain of command, which concentrates power at the top. It is this feature, I propose, that mediates the distribution of top incomes. For a power-law to emerge, all we need is for income to increase (roughly) exponentially with hierarchical rank. Varying this rate of increase then causes a redistribution of top incomes. The result is a proximate explanation of inequality that locates the source of power-law distributions in the chain-of-command structure of hierarchies (Figure 1).
Although this focus on hierarchy does not explain the ‘ultimate’ cause of inequality, it dramatically changes the way we think about the problem. It is one thing to look at top incomes and wonder what is causing them to increase. It is quite another thing to understand that top incomes can be directly linked to the hierarchical pay structure of individual firms.
In the latter case, we realize that each firm is a microcosm of the distribution of income at large. Moreover, when we link top incomes to hierarchy, we are implicitly connecting the distribution of income to the power structure of society. The consequence is rather incendiary. When top incomes increase, it suggests that firm hierarchies are becoming more despotic.
### The shape of top incomes
Before discussing how hierarchy relates to top incomes, we must cover some requisite knowledge about income and its (re)distribution. In the introduction to his 2014 treatise on inequality, Thomas Piketty observed:
Intellectual and political debate about the distribution of wealth has long been based on an abundance of prejudice and a paucity of fact.
(Piketty, 2014)
Today, thanks in large part to Piketty’s work, the ‘paucity of facts’ is no longer a problem (at least among people who are concerned with facts).1 Many people know that income inequality has risen dramatically in recent decades. Matters came to a head during the Occupy movement when the term ‘one-percenter’ became a well-known put down (Di Muzio, 2015). The term alludes to the growing divide between the income of the majority (the bottom 99%) and the income of the elite (the top 1%).
Figure 2 shows this divide — the income share of the US top 1%. The U-shaped trend is now well known. After World War II, US inequality declined rapidly and then remained low for 30 years. But from the 1980s onward, inequality rose dramatically.
The timing of this rising inequality has eluded few observers. It corresponded with a seismic shift in US politics — a turn from the post-War expansion of the welfare state to the ‘trickle down’ policies of the Reagan era. Given this conspicuous political shift, many researchers leap straight from the inequality evidence to a list of possible ‘causes’.
I sympathize with this move, but think that it is partially premature. Yes, we should look for correlates of inequality, of which there are many. (See, for instance, the work of Huber, Huo, & Stephens, 2017.) But we should also realize that looking only at the income share of a specific group (like the top 1%) gives a rather narrow window into the wider distribution of income.
Unfortunately, looking at the whole distribution of income takes some technical skills, which is likely why doing so is less popular than studying top income shares alone. Still, if we want to study growing inequality, we need to understand how all income is distributed.
#### Viewing the distribution of income in its entirety
In the interest of accessibility, I offer here a brief tutorial of how to visualize income distributions from top to bottom using log histograms. Readers familiar with this technique can skip to the next section.
The most basic way to visualize a distribution of income is to use a histogram. To construct a histogram, we put the data into size ‘bins’ and count how many observations occur within each bin. Then we plot the results.
Figure 3A shows a histogram of a hypothetical distribution of income. (For reference, this simulated society has about 10 million people, a median income of $30,000, and a top 1% income share of about 20%. It’s intended as a scaled-down version of the modern United States.) I have put individual incomes into bins that are $2,000 wide. On the vertical axis, I have plotted the number of people within each bin. Each point represents the person count, plotted at the midpoint of the income bin. This representation of a histogram, which connects bin counts with a line, is sometimes called a ‘frequency polygon’. But for ease of reference, I will simply call it a ‘histogram’.
Our Figure 3A histogram does not look like the familiar ‘bell curve’. Rather, it has a ‘fat’ right tail that continues far past the chart’s income cutoff of $100,000. This fat tail is a ubiquitous feature of distributions of income, and is the face of inequality in histogram form. It tells us that some individuals earn far more than the average person. The problem with our standard histogram is that we cannot see the rich — they are literally off the chart. To visualize the distribution of top incomes, we need a different approach. The best option is to move to a logarithmic histogram. A log histogram uses income bins that are logarithmically spaced. For instance, the first bin might go from $1 to $10, the second from $10 to $100, the third from $100 to $1,000, and so on.2 By using log spacing, we can reach enormous incomes with relatively few bins. The key is that we then plot both the bins and the corresponding counts on logarithmic scales. In the resulting logarithmic histogram, shown in Figure 3B, we can see the rich and the poor alike. The poor are on the left, with incomes that are far smaller than the median. And the rich are on the right, with incomes that are far larger than the median.
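To make the construction concrete, here is a small R sketch of a log histogram. The simulated sample (a lognormal ‘body’ plus a Pareto top tail) is my own stand-in, not the paper’s data:

```r
# Build a logarithmic histogram of a simulated income sample
set.seed(42)
body   <- rlnorm(990000, meanlog = log(30000), sdlog = 0.7)  # bottom 99% of incomes
top    <- 1e5 * runif(10000)^(-1/2)                          # Pareto top tail (alpha = 3)
income <- c(body, top)

breaks <- 10^seq(0, 9, by = 0.1)                      # log-spaced bin edges
h      <- hist(income, breaks = breaks, plot = FALSE)
mids   <- sqrt(breaks[-1] * breaks[-length(breaks)])  # geometric bin midpoints
keep   <- h$counts > 0

plot(mids[keep], h$counts[keep], log = "xy",
     xlab = "income", ylab = "count")                 # the power-law tail plots as a straight line
```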
In our log histogram, we can also see a key feature of top incomes: they tend to be distributed according to a power law.3 A power law is a type of distribution in which the probability of finding a person with income I is proportional to that income, raised to some exponent \alpha :
\displaystyle P(I) = c \cdot I^{-\alpha} (1)
Power law distributions have the interesting feature that if we plot their logarithmic histogram (as we have in Figure 3B), we get a straight line. The reason is beautifully simple. When we take the logarithm of both sides of Equation 1, we get a linear relation whose slope is -\alpha :
\displaystyle \log P(I) = \log c - \alpha \cdot \log I (2)
So the fact that the right tail of our log histogram looks like a straight line means that top incomes roughly follow a power law.
If we wish to compare the distribution of income at different points in time (or between different countries) there is one last step: we must ‘normalize’ the histogram. To do that we convert incomes from dollar values to relative values. In Figure 3C, I compare all incomes to the median. Next, we normalize the histogram counts so that they are unaffected by sample size. I do that in Figure 3C by converting bin counts to a ‘probability density’. This transformation defines the vertical scale so that the area under the histogram sums to 1.
Although our normalized histogram looks identical to the un-normalized version, it now has standardized axes. That means we can compare different distributions of income.
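Continuing the sketch above, the normalization steps and a rough read of the exponent look like this. The least-squares fit is only an illustration; the paper itself uses the Virkar-Clauset maximum-likelihood method described in the Appendix:

```r
# Normalize incomes by the median and convert counts to a probability density,
# so that (per Equation 2) the tail slope of log density vs log income is -alpha
rel_income <- income / median(income)
breaks2 <- 10^seq(-4, 5, by = 0.1)
h2      <- hist(rel_income, breaks = breaks2, plot = FALSE)
mids2   <- sqrt(breaks2[-1] * breaks2[-length(breaks2)])

tail_ok <- h2$density > 0 & mids2 > quantile(rel_income, 0.99)
fit     <- lm(log(h2$density[tail_ok]) ~ log(mids2[tail_ok]))
unname(-coef(fit)[2])   # rough alpha estimate; near 3 for this simulated sample
```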
### Income redistribution in the United States
Now that the reader has the requisite knowledge, we are ready to look at the distribution of US income in its entirety. Figure 4 shows the US distribution of income in 1970 and 2007. I have chosen these years because they are the dates of minimum (1970) and maximum (2007) inequality in recent US history. The change in the distribution of income is easy to spot.
Let us start, however, with what did not change between 1970 and 2007. To spot a lack of change, look for locations where the two histograms overlap. In Figure 4, we can see that this overlap occurs below the median income, where the two histograms are nearly identical. This similarity tells us that for the bottom half of Americans, little has changed (in terms of relative income) over the last 4 decades.
Among the American poor, though, there is one conspicuous difference between 1970 and 2007: in the latter year, the social safety net had been removed. This removal appears in Figure 4 as a leftward extension of the blue histogram into ever-more diminutive incomes. This is creeping poverty in histogram form. Today, many Americans earn less than 1% of the median income — something that was not true in 1970.
While creeping US poverty is worth studying, it is not the subject of this paper. Instead, I am concerned with the right-side of the histogram. Here we can see the egregious redistribution of top incomes. Between 1970 and 2007, the American rich got richer … much richer. Whereas in 1970, no one earned more than a few hundred times the median income, by 2007, a handful of Americans earned more than 1000 times the median.
It is easy to marvel at the absurd size of top US incomes. But here I am more concerned with the uniformity of income redistribution. As expected, top US incomes (roughly) follow a power-law distribution, evident as the straight right tail in both distributions. What is fascinating is that despite the complex reasons for growing US inequality, to a first approximation, all that changed between 1970 and 2007 is the slope of the distribution tail.
This simple result deserves an explanation. Why can we model the messy business of the rich getting richer by turning a single dial — the power-law exponent of top incomes?
### Income redistribution among all countries
Before we conclude that the rich getting richer is a simple process, we ought to look at more data. It could be, for instance, that the United States is a uniquely simple case, and that elsewhere, the redistribution of income is more complicated.
To test this possibility, let’s look at income redistribution in every country for which there is suitable data. Using data from the World Inequality Database, Figure 5 plots the income-redistribution trends for 176 different countries covering the years 1900 to 2019.
Rather than show the complete distribution for each country (in each year), I have plotted the top 1% income share against the power-law exponent of top incomes. To reiterate, this exponent measures the slope of the income distribution tail. A smaller exponent indicates a fatter tail. (For power-law fitting methods, see the Appendix.)
If income redistribution was a messy, heterogeneous process, we would expect no clear relation between top income shares and the power-law exponent of top incomes. But that is not what we find. Instead, we see in Figure 5 a very clear relation. Growing top income shares are associated with a decline in the power-law exponent of top incomes. In other words, there is startling uniformity in the way that societies redistribute income.
### Generating power laws
To understand the distribution of top incomes, we need to understand more about power laws. Where do they come from? How are they generated?
Although the causal mechanisms may appear complex, the mathematical mechanisms for generating power laws are surprisingly simple. I will discuss two main routes. (For a review of mechanisms for generating power laws, see Mitzenmacher, 2004.)
The first route to a power law is through income dynamics. Suppose an individual starts out with annual income I . Over time, their income grows and shrinks for reasons that we do not understand. But what we do know is that this income change can be modelled as a random number. After t years, the person’s new income is the product of successive random growth rates, g :
\displaystyle I_t = I_1 \cdot g_1 \cdot g_2 \cdot \ldots \cdot g_t (3)
Now suppose that everyone’s income behaves the same way: it is the product of a series of random growth rates. After many growth iterations, the resulting distribution of income will follow a lognormal distribution — a fact discovered by Robert Gibrat (1931).
To get a power-law distribution, we introduce one more requirement: a lower ‘wall’ that limits the smallness of incomes. If anyone’s income gets below this lower threshold, it gets ‘reflected’ in the opposite direction. After many growth iterations, income will be distributed according to a power law.
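To make the mechanism concrete, here is a toy R simulation of multiplicative random growth with a reflecting lower wall. It is my own sketch, with arbitrary parameters, and is separate from the hierarchy model used later in the paper:

```r
# Multiplicative random growth with a reflecting lower barrier
set.seed(1)
n_people     <- 50000
n_steps      <- 500
income_floor <- 1

income <- rep(1.5, n_people)
for (t in 1:n_steps) {
  g <- exp(rnorm(n_people, mean = -0.01, sd = 0.1))  # random growth with slightly negative drift
  income <- income * g
  below  <- income < income_floor
  income[below] <- income_floor^2 / income[below]    # reflect (in log terms) off the wall
}

# The upper tail is now roughly Pareto: the log-log CCDF is close to a straight line
inc <- sort(income, decreasing = TRUE)
plot(inc, seq_along(inc) / length(inc), log = "xy", type = "l",
     xlab = "income", ylab = "P(Income > I)")
```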
This ‘stochastic’ model of income was first articulated by David Champernowne (1953). While the model’s mathematics are beyond dispute, many political economists find its appeal to ‘randomness’ troubling. After all, incomes have definite causes (or so we believe). But to be fair to the Champernowne model, it does not claim that income dynamics are actually random, only that we can model them as such.
The Champernowne model tells us that we can understand the power-law distribution of top incomes without knowing anything about the complexities of human behavior. All that we need are general assumptions about the dynamics of income. I find this result fascinating because it is counter-intuitive. Yet it is also underwhelming because it does not tell us why people earn what they do. For that reason, I will focus on a second route to power laws — a route that can be tied to social structure.
The second route to a power law comes from merging two different exponential functions. Suppose two variables, x and y , are both exponential functions of a third variable, t :
\displaystyle x = e^{a \cdot t} (4)
\displaystyle y = e^{b \cdot t} (5)
If we combine these two functions and eliminate t , we find that x and y are related by a power law:4
\displaystyle y = x ^{b/a} (6)
So we can create a power law by merging two exponential functions. The question is, why would such functions apply to income? The answer, I propose, is simple. These are the equations that describe income in a hierarchy.
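A quick numerical check of Equation 6 (my own addition):

```r
# If x and y both grow exponentially in t, then y is a power function of x with exponent b/a
a <- 0.5; b <- 1.5
t <- seq(0, 10, by = 0.1)
x <- exp(a * t); y <- exp(b * t)
max(abs(y - x^(b / a)) / y)   # relative error is at floating-point level
```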
### Power-laws via hierarchy
Hierarchies are perhaps the dominant feature of our working lives. Yet paradoxically, they rarely enter into mainstream theories of income distribution. Fortunately, a handful of researchers have explored the distributional consequences of hierarchy. I build on their work here.
To my knowledge, the first person to explicitly model income within a hierarchy was the polymath Herbert Simon (1957). Simon noted that hierarchies are governed by a chain of command in which each superior controls multiple subordinates. The consequence is that the number of subordinates one controls increases exponentially with rank. At the same time, income within a hierarchy tends to increase exponentially with rank. Combining these two exponential functions gives a power law.
Simon, though, was not interested in the power-law distribution of top incomes. Instead, he was interested in another power law — the fact that CEO pay scales with the power of firm size:
\displaystyle \text{CEO pay} \propto (\text{Firm size}) ^ D (7)
Simon argued that this scaling (which was discovered by David Roberts in 1956) stemmed from hierarchy. It was caused by merging the exponential growth of subordinates (with hierarchical rank) and the exponential growth of pay (with hierarchical rank).
Although largely ignored by mainstream economists, Simon’s reasoning remains sound. In fact, we can extend it to every member of the hierarchy (not just CEOs). As Figure 6 indicates, relative income within hierarchies scales with the number of subordinates one controls. For ease of reference, I give ‘the total number of subordinates’ a shorthand name. I call it ‘hierarchical power’, defined as:
\displaystyle \text{hierarchical power} = 1 + \text{number of subordinates} (8)
Across a wide variety of institutions, relative income appears to scale with hierarchical power.
Two years after Herbert Simon published his results, Harold Lydall (1959) realized that the same model of hierarchy could explain the power-law distribution of top incomes. The mechanism was exactly the same — the merger of two exponential functions. (Interestingly, Lydall appears to have been unaware of Simon’s work.)
Like Simon, Lydall assumed that income grows exponentially with hierarchical rank. That gives exponential function number one. The second function comes from the number of people within each rank. As we move up the hierarchy, the number of people within each rank declines exponentially — a consequence of the nested chain of command. By merging these two exponential functions, Lydall showed that hierarchy could create a power-law distribution of income.
Because Simon and Lydall’s pioneering research was completed a half century ago, one would think that today there would be a burgeoning literature on the distributional consequences of hierarchy. Sadly, this is not the case. Instead, shortly after Simon and Lydall published their work, the study of income distribution became dominated by human capital theory, which focused on personal traits and neglected ‘structural’ explanations of income (Fix, 2021b). And so today, we know little about how hierarchy affects the distribution of income.
Despite the historical neglect, I think focusing on hierarchy is a promising way to understand income (Fix, 2018, 2019b, 2020). And as I discuss below, I think it is also a promising way to understand income redistribution.
### A sign from CEOs
To understand how income redistribution relates to hierarchy, I propose that we return to where Herbert Simon started: with CEOs. Over the last 40 years, the relative pay of US CEOs has increased dramatically. The timing of this pay explosion aligns tightly with rising US inequality. Figure 7 shows the trend.
The obvious conclusion, reached by many observers, is that runaway CEO pay is related to runaway inequality. Interestingly, however, there have been few attempts to generalize this finding into a model of income distribution.
The way to do this, I believe, is by treating CEOs as canaries in the coal mine. I propose that the exploding pay of CEOs is part of a wider redistribution of income within hierarchies. It is evidence that US firms are becoming more despotic.
I use the word ‘despotic’ in both a general sense (as in the abuse of power) and in a more technical sense, as follows. A key feature of hierarchies is that they concentrate power at the top — a feature that inevitably creates problems. Yes, rulers can use their power to benefit the group. But they can also use their power to enrich themselves. The more they do so, the more ‘despotic’ the hierarchy.
Importantly, despotism is not just a game for rulers. It is a game played by everyone in the hierarchy. The result, I propose, is that the more despotic the hierarchy becomes, the more rapidly income will increase with hierarchical power. It makes sense, then, to use the scaling of income with hierarchical power, D , as a measure of the ‘degree of hierarchical despotism’. The greater the value of D , the more despotic the hierarchy.
\displaystyle \text{relative income} \propto (\text{hierarchical power})^D (9)
To frame this idea, let’s return to the empirical evidence. In Figure 8, I have replotted (as grey points) the empirical trend between relative income and hierarchical power (the trend originally shown in Fig. 6). Over top of this data, I show scaling relations for different values of D .
In large hierarchies, the value of D affects top incomes dramatically. For instance, when D=0.1 , a CEO with one million subordinates will earn only about 4 times more than a bottom-ranked worker. But when D=1 , the same CEO will earn a million times more than an entry-level employee.
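These two numbers follow directly from Equation 9; a one-line check (my own addition):

```r
# Relative income of a CEO with one million subordinates, under two values of D
hp <- 1 + 1e6           # hierarchical power (Equation 8)
hp^0.1                  # about 4
hp^1                    # about one million
```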
#### US CEOs as canaries of hierarchical despotism
Based on the scatter in the empirical data (in Fig. 8), it seems clear that the ‘degree of despotism’ can vary between hierarchies. The question is, can the average degree of despotism also vary over time?
To answer this question definitively, we would need time-series data for the hierarchical pay structure of many different firms. Since such data does not exist, I propose a rougher approach: we use CEOs as despotism ‘canaries’. Among US CEOs, we know that income scales with hierarchical power (where the CEO’s hierarchical power is measured by firm size). What we do not know, though, is how this relation has changed with time.
To investigate this question, Figure 9 plots data for US CEO pay in two years: 1992 and 2007. In both years, the CEO pay ratio tends to increase with hierarchical power. Yet the rate of this increase differs. In 2007, CEO pay scaled more steeply with hierarchical power than it did in 1992. If CEOs are ‘canaries’ for a larger trend within firms, this result hints that US firms have become more despotic.
The next question is — does changing hierarchical despotism correspond with growing inequality? To test this possibility, we can generalize the method shown in Figure 9. In each year between 1992 and 2019, we regress the relative pay of US CEOs onto their hierarchical power. The result is a time-series estimate of the average degree of hierarchical despotism among US firms.
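In code, the yearly regression might look like the sketch below. The data frame `ceo` and its column names (`year`, `pay_ratio`, `firm_employment`) are hypothetical stand-ins for the Execucomp/Compustat data described in the Appendix:

```r
# Taking logs of Equation 9: log(pay ratio) = const + D * log(hierarchical power),
# so D is the slope of a log-log regression, estimated separately for each year.
# A CEO's hierarchical power is proxied here by total firm employment.
estimate_D <- function(df) {
  fit <- lm(log(pay_ratio) ~ log(firm_employment), data = df)
  unname(coef(fit)[2])
}

despotism_by_year <- sapply(split(ceo, ceo$year), estimate_D)
```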
We want to know whether this changing despotism relates to rising inequality. The evidence, shown in Figure 10, suggests that it does. As my estimates for hierarchical despotism rise, so does the income share of the US top 1%.
If US CEOs are indeed ‘canaries’ in the hierarchy, this evidence suggests that rising US inequality has been driven by growing despotism within firms. Ultimately, I would like to test this incendiary idea directly by peering into corporate hierarchies. But since big corporations are unlikely to open up their payroll structure anytime soon, we are forced to further test this idea using a more indirect route. On that note, let us return to the modelling work of Herbert Simon and Harold Lydall.
### Returning to the Simon-Lydall model
In the 1950s, Simon and Lydall both used a simple model of hierarchy to explain the power-law behavior of top incomes. Simon showed how hierarchy could explain why CEO pay scales with firm size. And Lydall demonstrated that hierarchy could create a power-law distribution of income.
The key feature of the Simon-Lydall model is the ‘span of control’, which is assumed to be constant. The ‘span’ determines how many direct subordinates each superior controls. If the span is constant throughout the group, we get hierarchies that look like the ones shown in Figure 11. A large span of control creates a ‘flat’ hierarchy. A small span of control creates a ‘steep’ hierarchy.
The second key element of the Simon-Lydall model is that income increases exponentially with hierarchical rank. Merge this exponential function with the exponential behavior of the chain of command, and out pop power laws. In what follows, I generalize the Simon-Lydall model to understand how hierarchy affects the distribution of top incomes.
Unlike Simon and Lydall (who used analytic methods), I will build a numerical model. The model starts not with hierarchies, but with the size distribution of firms. Empirical evidence suggests that firm sizes are distributed according to a power law (Axtell, 2001). Based on this observation, I simulate a size distribution of firms by drawing random numbers from a discrete power-law distribution. The simulation is designed to roughly match the size distribution of firms in the United States.
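As an illustration of this step (my own sketch; the paper's implementation may differ in detail), firm sizes can be drawn by rounding down a continuous Pareto sample with exponent alpha = 2:

```r
# Draw an (approximately) discrete power-law size distribution of firms
rfirm_sizes <- function(n, alpha = 2, x_min = 1) {
  u <- runif(n)
  floor(x_min * (1 - u)^(-1 / (alpha - 1)))   # continuous Pareto draw, rounded down
}

set.seed(123)
firm_size <- rfirm_sizes(1e5)
summary(firm_size)   # many tiny firms, a handful of giants
```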
The next step is to use the Simon-Lydall model to give each firm a hierarchical structure. Each individual in the firm is assigned a hierarchical rank, and from this rank we calculate their hierarchical power. (For the model equations, see the Appendix.)
I then model individual income as a function of hierarchical power. To make the model realistic, I introduce stochastic ‘noise’ into the power-income relation:
\displaystyle \text{income} = \text{noise} \cdot (\text{hierarchical power})^D (10)
The output of the model is a simulated distribution of income. What we want to understand, from the model, is how the degree of hierarchical despotism, D , affects the distribution of top incomes.
Figure 12 shows my results. I have plotted here the distribution of income (using a log histogram) for three iterations of the hierarchy model. Each iteration uses a different value for D . As expected, the model produces a power-law distribution of top incomes, evident as the straight line in the right tail. (Note that when D is small, the income ‘noise’ dominates the distribution of income, so we do not get a power law.)
What we are interested in is how the distribution of top incomes is affected by hierarchical despotism. On that front, the results are clear. Increasing hierarchical despotism ‘fattens’ the distribution tail. In short, it makes the rich get richer in a highly uniform way.
To summarize the evidence thus far, we know the following:
1. The United States has grown more unequal over the last 4 decades (Fig. 2);
2. This growing inequality occurred via a ‘fattening’ of the income distribution tail (Fig. 4);
3. Growing inequality is associated with a dramatic increase in US CEO pay (Fig. 7);
4. Like the redistribution of top incomes, the pay increases of US CEOs have an underlying uniformity: the rate at which income scales with hierarchical power seems to have increased (Fig. 9);
5. This increasing ‘hierarchical despotism’ among US CEOs correlates with rising US inequality (Fig. 10), suggesting that US hierarchies have become more despotic.
6. When we put changing hierarchical despotism into a model of hierarchy, we find that it produces a ‘fattening’ of the income distribution tail (Fig. 12).
All in all, this evidence strongly hints that hierarchy lies at the root of US income redistribution. But perhaps the US is a unique case. To test this possibility, the last step of the puzzle is to see if the hierarchy model can explain the redistribution of income observed across countries.
Recall from Figure 5 that across a wide swath of countries, greater inequality is associated with a smaller power-law exponent among top incomes. Figure 13 replots this data in grey. On top of the empirical data, I plot the trend produced by the hierarchy model. Each colored point represents a model iteration, with color indicating the degree of hierarchical despotism. As we ramp up despotism, the hierarchy model cuts through the middle of the path tracked by real-world countries.
Having noted the model’s success, there are a few caveats. First, the model cannot reproduce the low levels of inequality observed in countries like Soviet-era Bulgaria (bottom left of Figure 13). That is because even when we remove all returns to hierarchical rank, there is still income ‘noise’, which generates inequality. We could change this noise if we desired. But to keep the model as simple as possible, I leave the noise function constant.
Second, the hierarchy model assumes a constant size distribution of firms, similar to the distribution found in the United States. In the real world, the firm size distribution varies both across countries and across time within countries. (See Fix, 2017 for details.) A more complex model could incorporate this firm-size variation.
Finally, in the Simon-Lydall model, the span of control is a free parameter. In the model used here, I let the span vary randomly between 1.2 and 13 — a range consistent with what we know from case studies of hierarchy. (See the appendix in Fix, 2019b for a review.) In the real world, we expect the span of control to vary between firms and possibly between societies. Such patterns could be incorporated into a more complex model. That said, the span of control has a weak effect on inequality — far weaker than the effect of hierarchical despotism. (See Figure 14.)
To summarize, my model of hierarchy is highly stylized, neglecting many elements of the real world. But its purpose is not to be ultra-realistic, but instead, to isolate the effects of hierarchical despotism. And these effects are clear — increasing hierarchical despotism makes the rich get richer in much the same way as they do in the real world.
### Conclusions
Despite the complexities of human life, the distribution of top incomes follows a remarkably uniform pattern. To a first approximation, top incomes are distributed according to a power law. And when income gets redistributed, this power law changes. In short, it seems that we can model the rich getting richer with a single parameter — the power-law exponent \alpha . Such simplicity deserves an explanation.
The reason top incomes follow a uniform pattern, I have argued, is not because income has an ultimately simple cause. Instead, it is because the complex forces that shape income pass through a ubiquitous feature of human organization: hierarchy. Thus, I propose that hierarchy is a proximate cause of both the distribution of top incomes and the uniformity with which these incomes get redistributed when the rich get richer.
We have known since Lydall’s work in the 1950s that hierarchy can produce a power-law distribution of top incomes. The more complex model used here confirms Lydall’s result. I also find that by varying the rate that income increases with hierarchical rank, we vary the distribution of top incomes in much the same way as we observe in the real world. This result suggests that growing inequality is caused by a redistribution of income within hierarchies. Importantly, evidence from CEOs points at the same trend — namely, that growing inequality is associated with hierarchies becoming more ‘despotic’.
Appealing to hierarchy, I have admitted, does not explain the root cause of inequality. To do that, we would need to explain why income within hierarchies scales the way it does (something that I do not attempt here). So in a sense, the hierarchy model of income merely kicks the causal can: it explains one parameter (the power-law exponent of top incomes) in terms of another parameter (the degree of despotism within hierarchies).
Still, I consider that progress. It suggests that we can better understand the causes of inequality by studying the command structure of firms.
### Acknowledgements
This work was supported in part by the following individuals: Pierre, Norbert Hornstein, Rob Rieben, Tom Ross, James Young, Tim Ward, Mike Tench, Hilliard MacBeth, Grace and Garry Fix, John Medcalf, Fernando, Joe Clarkson, Michael Defibaugh, Steve Keen, Robin Shannon, and Brent Gulanowski.
#### Support this blog
Economics from the Top Down is where I share my ideas for how to create a better economics. If you liked this post, consider becoming a patron. You’ll help me continue my research, and continue to share it with readers like you.
### Appendix
Source data and code for this paper are available at the Open Science Framework: https://osf.io/h98gn/.
#### Top income shares
Data for top income shares comes from the World Inequality Database (WID). For the long-term trend in US inequality (Fig. 2), I use the average of series sfiinc992t and sfiinc999t. These series are the closest to the measurements presented in Piketty (2014). International data (Fig. 5) is from WID series sptinc992j.
#### US income density
To estimate the density function for the US distribution of income (Fig 4), I use income threshold data from series WID tfiinc999t. This series reports the income thresholds for various income percentiles. From these thresholds, I first construct the cumulative distribution of US income. Then I take the derivative of this function to estimate the density curve.
#### Estimating power-law exponents
To estimate the power-law exponent of the top 1% of incomes, I use the method outlined in Virkar & Clauset (2014). They describe a maximum-likelihood function for fitting power-laws to binned data. The required data is:
1. bin thresholds;
2. counts within each bin.
The WID series tptinc992j provides the needed data. It reports income thresholds for various income percentiles. I use the various percentiles as the ‘bins’. The percentile income thresholds are therefore the bin thresholds. And the bin count is simply the income percentile itself (i.e. the portion of the population it represents).
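A small sketch of this bin construction (my own illustration; the data frame `wid`, with columns `percentile` and `threshold`, and the population total `N` are hypothetical stand-ins for the WID series):

```r
# Convert percentile income thresholds into the binned data the estimator needs
N   <- 1e6                                     # assumed population size
wid <- wid[order(wid$percentile), ]

bin_lower <- wid$threshold                     # lower edge of each income bin
bin_count <- N * diff(c(wid$percentile, 1))    # people between successive thresholds
```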
The caveat is that any data can be ‘fitted’ with a power-law exponent. But this does not mean that the data itself is distributed according to a power law.
#### US CEO pay ratio
Data for the US CEO pay ratio (Fig. 7) is from the Economic Policy Institute (Mishel & Wolf, 2019). I have plotted data in which stock options are measured using ‘realized gains’. For why this is the most appropriate way to measure stock-option income see Hopkins & Lazonick (2016).
#### Relative income vs. hierarchical power
Data for the relative income within hierarchies (Fig. 6) is from a variety of sources:
• Case-Study Firms: Data is from Audas, Barmby, & Treble (2004); Baker, Gibbs, & Holmstrom (1993); Dohmen, Kriechel, & Pfann (2004); Lima (2000); Morais & Kakabadse (2014); Treble, Van Gameren, Bridges, & Barmby (2001). For details about these studies, see the appendix in Fix (2019b).
• CEOs: The data covers the years 2006–2019, and includes CEOs across many countries (but mostly within the US). CEO pay data is from Execucomp, series TOTAL_ALT2. I estimate the CEO’s hierarchical power from firm size (Compustat series EMP). I plot, in Fig. 6, the CEO’s income relative to the average employee. I estimate average income in the firm by dividing employment expenses (Compustat series XLR) by firm employment (Compustat series EMP). For more details, see Fix (2020).
Note that the CEO data is not strictly comparable to the other series in Fig. 6 because it measures pay relative to the firm average. All other series, however, measure pay relative to the average in the bottom rank of the hierarchy.
• US military: Data is from annual demographics reports (Demographics: Profile of the Military Community) between 2010 and 2019. I exclude warrant officers from the data. I calculate the pay within each rank as the average of the minimum and maximum pay by years of experience. For details, see Fix (2019a).
#### Hierarchical despotism of US CEOs
The CEO data used in Figures 9 and 10 is slightly different from the CEO data used in Fig. 6. For one thing, the Fig. 9-10 data includes only US CEOs. But more importantly, the Fig. 9-10 data measures CEO pay using Execucomp series TDC1, rather than series TOTAL_ALT2. The latter series offers a better accounting of stock-option income (using realized gains), but it begins in 2006. In contrast, series TDC1 uses the (more dubious) Black-Scholes method to estimate stock-option income; however, its data extends back to 1992.
#### Hierarchy model
The hierarchy model used in this paper is based on equations derived independently by Herbert Simon (1957) and Harold Lydall (1959). In this model, hierarchies have a constant span of control. We assume that there is one person in the top rank. The total membership in the hierarchy is then given by the following geometric series:
\displaystyle N_T = 1 + s +s^2 + \ldots + s^{n-1} (11)
Here n is the number of ranks, s is the span of control, and N_T is the total membership. Summing this geometric series gives:
\displaystyle N_T = \frac{1-s^{n}}{1-s} (12)
In my model of hierarchy, the input is the hierarchy size N_T and the span of control s . To model the hierarchy, we must first estimate the number of hierarchical ranks n . To do this, we solve the equation above for n , giving:
\displaystyle n = \left\lfloor~ \frac{\log \left[ 1 + N_T(s-1) \right]}{\log(s)} ~\right\rfloor (13)
Here \lfloor\rfloor denotes rounding down to the nearest integer. Next we calculate N_1 — the employment in the bottom hierarchical rank. To do this, we first note that the firm’s total membership N_T is given by the following geometric series:
\displaystyle N_T = N_1 \left( 1 + \frac{1}{s} + \frac{1}{s^2} + \ldots + \frac{1}{s^{n-1}} \right) (14)
Summing this series gives:
\displaystyle N_T = N_1 \left( \frac{1-1/s^{n}}{1-1/s} \right) (15)
Solving for N_1 gives:
\displaystyle N_1 = N_T \left( \frac{1 - 1/s}{1-1/s^{n}} \right) (16)
Given N_1 , membership in each hierarchical rank h is:
\displaystyle N_h = \left\lfloor \frac{N_1}{s^{h-1}} \right\rfloor (17)
Sometimes rounding errors cause the total employment of the modeled hierarchy to depart slightly from the size of the original input value. When this happens I add/subtract members from the bottom rank to correct the error.
Once the hierarchy has been constructed, income ( I ) is a function of hierarchical power:
\displaystyle I = N (\bar{P}_h)^D (18)
Here D is the ‘degree of hierarchical despotism’ — a free parameter that determines how rapidly income grows with hierarchical power. N is statistical noise generated by drawing random numbers from a lognormal distribution. (The noise function generates inequality equivalent to a Gini index of about 0.2.) \bar{P}_h is the average hierarchical power (per person) associated with rank h . It is defined as
\displaystyle \bar{P}_{h} = 1 + \bar{S}_h (19)
where \bar{S}_h is the average number of subordinates per member of rank h :
\displaystyle \bar{S}_h ~ = \sum_{i = 1}^{h -1} \frac{N_i}{N_h} (20)
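For readers who want to see the equations in one place, here is a minimal R sketch of Equations 13-20 for a single hierarchy. It is my own condensed illustration; the author's full C++ and R implementations are described just below:

```r
# Build one hierarchy of size N_T with span of control s, then assign incomes
build_hierarchy <- function(N_T, s, D, noise_sd = 0.36) {
  n  <- floor(log(1 + N_T * (s - 1)) / log(s))         # Eq. 13: number of ranks
  N1 <- N_T * (1 - 1 / s) / (1 - 1 / s^n)              # Eq. 16: bottom-rank membership
  N_h <- floor(N1 / s^(0:(n - 1)))                     # Eq. 17: membership of each rank
  N_h[1] <- N_h[1] + (N_T - sum(N_h))                  # absorb rounding error in the bottom rank

  # Eqs. 19-20: average hierarchical power of each rank
  S_h <- sapply(1:n, function(h) if (h == 1) 0 else sum(N_h[1:(h - 1)]) / N_h[h])
  P_h <- 1 + S_h

  # Eq. 18: income = noise * (hierarchical power)^D
  # (lognormal noise with sdlog = 0.36 corresponds to a Gini of roughly 0.2)
  rank  <- rep(1:n, times = N_h)
  noise <- rlnorm(length(rank), meanlog = 0, sdlog = noise_sd)
  noise * P_h[rank]^D
}

set.seed(1)
incomes <- build_hierarchy(N_T = 10000, s = 4, D = 0.5)
```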
The model is implemented numerically in C++, using the Armadillo linear algebra library (Sanderson & Curtin, 2016). For R users, I have created R functions implementing the model, available on GitHub.
#### Size distribution of firms
The input into the hierarchy algorithm is a size distribution of firms generated from a discrete power law distribution with \alpha=2 . The resulting distribution is similar to that found in the modern United States. See Fix (2020) for details.
#### The span of control
In the hierarchy model, the span of control is a free parameter. I let it vary between a low of 1.2 and a high of 13. As Figure 14 shows, this variation has a small effect on the power-law distribution of top incomes. Instead, the effect is dominated by the degree of hierarchical despotism.
### Notes
1. Although Piketty popularized the study of top income shares, he built on the work of many researchers, including Atkinson & Harrison (1978), Atkinson & Bourguignon (2001), Atkinson & Piketty (2010), and Alvaredo, Atkinson, Piketty, & Saez (2013).
2. Instead of using log-spaced bins, another option is to use linear bins but count the frequency of log(income). The results will be the same.
3. The power-law distribution of top incomes (and wealth) was discovered at the turn of the 20th century by Vilfredo Pareto (1897). For a sample of subsequent confirmations of Pareto’s discovery, see Di Guilmi, Gaffeo, & Gallegati (2003), Clementi & Gallegati (2005), Coelho, Richmond, Barry, & Hutzler (2008), Toda (2012), and Atkinson (2017).
4. Here are the algebraic steps. First, take the logarithm of both functions and solve for t :
\displaystyle \begin{aligned} t &= \frac{1}{a} \log x \\ \\ t &= \frac{1}{b} \log y \end{aligned}
Next, combine the two equations to eliminate t :
\displaystyle \log (y) = \frac{b}{a} \log (x)
Note that \frac{b}{a} \log (x) is equivalent to \log x^{b/a} . Therefore,
\displaystyle y = x ^{b/a}
### References
Alvaredo, F., Atkinson, A. B., Piketty, T., & Saez, E. (2013). The top 1 percent in international and historical perspective. The Journal of Economic Perspectives, 27(3), 3–20.
Atkinson, A. B. (2017). Pareto and the upper tail of the income distribution in the UK: 1799 to the present. Economica, 84(334), 129–156.
Atkinson, A. B., & Harrison, A. J. (1978). Distribution of personal wealth in Britain. Cambridge Univ Pr.
Atkinson, A., & Bourguignon, F. (2001). Income distribution. In International encyclopedia of the social and behavioral sciences (economics/public and welfare economics) (pp. 7265–7271). Amsterdam: Elsevier.
Atkinson, A. B., & Piketty, T. (2010). Top incomes: A global perspective. New York: Oxford University Press.
Audas, R., Barmby, T., & Treble, J. (2004). Luck, effort, and reward in an organizational hierarchy. Journal of Labor Economics, 22(2), 379–395.
Axtell, R. L. (2001). Zipf distribution of US firm sizes. Science, 293, 1818–1820.
Baker, G., Gibbs, M., & Holmstrom, B. (1993). Hierarchies and compensation: A case study. European Economic Review, 37(2-3), 366–378.
Champernowne, D. G. (1953). A model of income distribution. The Economic Journal, 63(250), 318–351.
Clementi, F., & Gallegati, M. (2005). Power law tails in the Italian personal income distribution. Physica A: Statistical Mechanics and Its Applications, 350(2-4), 427–438.
Coelho, R., Richmond, P., Barry, J., & Hutzler, S. (2008). Double power laws in income and wealth distributions. Physica A: Statistical Mechanics and Its Applications, 387(15), 3847–3851.
Di Guilmi, C., Gaffeo, E., & Gallegati, M. (2003). Power law scaling in world income distribution. Economics Bulletin.
Di Muzio, T. (2015). The 1% and the rest of us: A political economy of dominant ownership. Zed Books Ltd.
Dohmen, T. J., Kriechel, B., & Pfann, G. A. (2004). Monkey bars and ladders: The importance of lateral and vertical job mobility in internal labor market careers. Journal of Population Economics, 17(2), 193–228.
Fix, B. (2017). Energy and institution size. PLOS ONE, 12(2), e0171823.
Fix, B. (2018). Hierarchy and the power-law income distribution tail. Journal of Computational Social Science, 1(2), 471–491.
Fix, B. (2019a). How hierarchy can mediate the returns to education. Economics from the Top Down. https://economicsfromthetopdown.com/2019/12/20/how-hierarchy-can-mediate-the-returns-to-education/
Fix, B. (2019b). Personal income and hierarchical power. Journal of Economic Issues, 53(4), 928–945.
Fix, B. (2020). How the rich are different: Hierarchical power as the basis of income size and class. Journal of Computational Social Science, 1–52.
Fix, B. (2021a). Economic development and the death of the free market. Evolutionary and Institutional Economics Review, 1–46.
Fix, B. (2021b). The rise of human capital theory. Real-World Economics Review, (95), 29–41.
Gibrat, R. (1931). Les inegalites economiques. Recueil Sirey.
Hager, S. B. (2020). Varieties of top incomes? Socio-Economic Review, 18(4), 1175–1198.
Hopkins, M., & Lazonick, W. (2016). The mismeasure of mammon: Uses and abuses of executive pay data. Institute for New Economic Thinking, Working Paper No. 49, 1–60.
Huber, E., Huo, J., & Stephens, J. D. (2017). Power, policy, and top income shares. Socio-Economic Review, 0(0), 1–23. https://doi.org/10.1093/ser/mwx027
Keynes, J. M. (1933). National self-sufficiency. Studies: An Irish Quarterly Review, 22(86), 177–193.
Lima, F. (2000). Internal labor markets: A case study. FEUNL Working Paper, 378.
Lydall, H. F. (1959). The distribution of employment incomes. Econometrica: Journal of the Econometric Society, 27(1), 110–115.
Mishel, L., & Wolf, J. (2019). CEO compensation has grown 940% since 1978: Typical worker compensation has risen only 12% during that time. Economic Policy Institute, 171191. https://www.epi.org/publication/ceo-compensation-2018/
Mitzenmacher, M. (2004). A brief history of generative models for power law and lognormal distributions. Internet Mathematics, 1(2), 226–251.
Morais, F., & Kakabadse, N. K. (2014). The corporate Gini index (cgi) determinants and advantages: Lessons from a multinational retail company case study. International Journal of Disclosure and Governance, 11(4), 380–397.
Pareto, V. (1897). Cours d’economie politique (Vol. 1). Librairie Droz.
Piketty, T. (2014). Capital in the twenty-first century. Cambridge: Harvard University Press.
Roberts, D. R. (1956). A general theory of executive compensation based on statistically tested propositions. The Quarterly Journal of Economics, 70(2), 270–294.
Sanderson, C., & Curtin, R. (2016). Armadillo: A template-based C++ library for linear algebra. Journal of Open Source Software, 1(2), 26.
Simon, H. A. (1957). The compensation of executives. Sociometry, 20(1), 32–35.
Toda, A. A. (2012). The double power law in income distribution: Explanations and evidence. Journal of Economic Behavior & Organization, 84(1), 364–381.
Treble, J., Van Gameren, E., Bridges, S., & Barmby, T. (2001). The internal economics of the firm: Further evidence from personnel data. Labour Economics, 8(5), 531–552.
Virkar, Y., & Clauset, A. (2014). Power-law distributions in binned empirical data. The Annals of Applied Statistics, 8(1), 89–119.
1. Impressively excellent article! “Appealing to hierarchy, I have admitted, does not explain the root cause of inequality. To do that, we would need to explain why income within hierarchies scales the way it does…” So what is wrong with stratified positioning (“power”) being the root cause of this scaling, à la the “Matthew Effect”? Have economists ever tried to model this “Effect” not just as sociological reproduction but as inscribed into the (interest-bearing) money system?
https://gaiageld.com/matthew-effect-inequality/
2. Very interesting article.
However, it would have been fair to add that Atkinson (2017) was the first to reach the conclusion that a rise in the Pareto index corresponds to a fall in income concentration, which is the same as stating that the smaller the Pareto exponent (the fatter the distribution), the higher the income inequality.
Moreover, although hierarchies may indeed explain income inequality in firms, the problem of inequality is not confined to firms and CEOs only. As pointed out by Piketty (2014), inequality is a result of BOTH income and wealth distributions. One of the major analytical problems in this research area is how to relate income to wealth analytically.
• Yes, I’d say that among income distribution experts, it’s common knowledge that top income shares relate to the power-law exponent of the distribution tail. I’m not sure who got there first, but it certainly predates 2017.
• John E Kurman says:
I’m gonna guess the inflection point between power law and the left side is when your money makes more money than you do
3. Venkataraman Amarnath says:
Before 1980, middle managers, who rose from the lower ranks and planned and coordinated production independently of elite-executive control, shared not just the responsibilities but also the income and status gained from running their companies. Top executives enjoyed commensurately less control and captured lower incomes.
This situation changed slowly and then rapidly. CEOs with degrees from elite colleges, trained at the big three consulting firms, got rid of the middle managers and took control of the companies and the income.
This is one of the reasons for the rich getting richer.
• John E Kurman says:
The late great David Graeber covered this in Bullshit Jobs.
4. John E Kurman says:
If you told me Figure 4 was a historical study of a beehive, I’d call it colony collapse disorder. Or a social parasite problem.
5. […] As an example, let’s use R to scrape data from my post Redistributing Income Through Hierarchy. […]
6. Martin Zhekov says:
That sounds like the Richard Cantillon Effect from 1743
7. […] income share is easy to interpret. Second, the evidence suggests that inequality dynamics play out mostly among the rich. Measuring the top 1% income share is a simple way to capture this […]
8. […] Figure 3: Within hierarchies, income tends to increase with hierarchical power. This figure illustrates how income scales with hierarchical power within a variety of institutions. Red dots show data from a handful of firm case studies. Blue dots show data from the US military. Green dots show data for US CEOS. In the case-study firms and the US military, I measure income relative to the average in the bottom hierarchical rank. Each point indicates the average hierarchical power within a rank. For CEOs, I measure income relative to the average pay within the firm. I assume the CEO commands the firm, meaning their hierarchical power is equivalent to the firm’s total employment. For data sources, see the appendix in ‘Redistributing income through hierarchy’. […]
|
# If x and y are integers such that x^2+2x+2y+4=2x^2+3x+y-2, we can dedu
Math Revolution GMAT Instructor
06 Feb 2016, 19:00
If x and y are integers such that x^2+2x+2y+4=2x^2+3x+y-2, we can deduce that y is
A. not an even
B. an even
C. not an odd
D. an odd
E. a prime
* A solution will be posted in two days.
"Only $99 for 3 month Online Course" "Free Resources-30 day online access & Diagnostic Test" "Unlimited Access to over 120 free video lessons - try it yourself" SC Moderator Joined: 13 Apr 2015 Posts: 1702 Location: India Concentration: Strategy, General Management GMAT 1: 200 Q1 V1 GPA: 4 WE: Analyst (Retail) Re: If x and y are integers such that x^2+2x+2y+4=2x^2+3x+y-2, we can dedu [#permalink] ### Show Tags 07 Feb 2016, 00:45 x^2 + 2x + 2y + 4 = 2x^2 + 3x + y - 2 2(x + y + 2) = x^2 + 3x + y - 2 even = x^2 + 3x + y - 2 If x = even --> even + even + y - even = even --> y has to be even If x = odd --> odd + odd + y - even = even --> y has to be even Answer: B Intern Joined: 14 Jul 2015 Posts: 22 GMAT 1: 680 Q44 V40 GMAT 2: 710 Q49 V37 If x and y are integers such that x^2+2x+2y+4=2x^2+3x+y-2, we can dedu [#permalink] ### Show Tags 07 Feb 2016, 02:11 1. Given: $$x^2 + 2x + 2y + 4 = 2x^2 + 3x + y - 2$$ 2. $$y + 4 = x^2 - x - 2$$ 3. $$y = x^2 - x - 6$$ 4. We know that even - even = even, and that odd - odd = even 5. We see $$x^2 - x$$ in (3). This will always translate to one of the two statements: even - even or odd - odd. The result of either statement will be even. 6. We plug in (5) into our simplified formula: $$y = even - 6$$. Since even - even = even, we know that y = even. Therefore B, y must be even. Intern Joined: 07 Oct 2014 Posts: 9 Re: If x and y are integers such that x^2+2x+2y+4=2x^2+3x+y-2, we can dedu [#permalink] ### Show Tags 07 Feb 2016, 05:20 MathRevolution wrote: If x and y are integers such that x^2+2x+2y+4=2x^2+3x+y-2, we can deduce that y is A. not an even B. an even C. not an odd D. an odd E. a prime * A solution will be posted in two days. $$2x^2-x^2+3x-2x+y-2y-2-4=0$$ $$x^2+x-y-6=0$$ Let's assume x is even Then even+even-y-even=even even - y=even, y is even Let's assume x is odd Then odd-odd-y-even=even even-y-even=even y is even Y is even in any case Answer: B Math Revolution GMAT Instructor Joined: 16 Aug 2015 Posts: 6242 GMAT 1: 760 Q51 V42 GPA: 3.82 Re: If x and y are integers such that x^2+2x+2y+4=2x^2+3x+y-2, we can dedu [#permalink] ### Show Tags 08 Feb 2016, 18:37 If x and y are integers such that x^2+2x+2y+4=2x^2+3x+y-2, we can deduce that y is A. not an even B. an even C. not an odd D. an odd E. a prime --> In x^2+2x+2y+4=2x^2+3x+y-2, y=x^2+x-6=x(x+1)-6 is derived. Since x(x+1) is multiplication of consecutive integers, it is always an even number. Then, y=even number-6=even number. Therefore, the answer is B. _________________ MathRevolution: Finish GMAT Quant Section with 10 minutes to spare The one-and-only World’s First Variable Approach for DS and IVY Approach for PS with ease, speed and accuracy. "Only$99 for 3 month Online Course"
"Free Resources-30 day online access & Diagnostic Test"
"Unlimited Access to over 120 free video lessons - try it yourself"
Director
24 Jun 2018, 21:52
Don't B and C mean the same thing? Given that x and y are integers, if y is not an odd integer then it must be an even integer. If an integer is not odd then it must be even, must it not?
Unlike positive and negative, where an integer need not be positive if it is not negative (e.g. zero), even and odd have no intermediate category, unless I am missing something.
According to Wikipedia, "An integer that is not an odd number is an even number."
https://simple.wikipedia.org/wiki/Odd_number
If I am correct then the answer choices in this question are ambiguous. Kindly share your views too.
- Stne
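For what it's worth, the parity conclusion in the solutions above is easy to check by brute force; this is just a quick sketch I added, not part of the original thread:

```python
# y = x^2 + x - 6 follows from the given equation; verify it is even for many integers x.
assert all((x * x + x - 6) % 2 == 0 for x in range(-1000, 1001))
print("y is even for every integer x tested")
```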
|
# On the Existence of Controlled Maximal Contractive Sets for Multi Input Linear Systems
## Author and Article Information
Andrea Cristofaro
School of Science and Technology,
University of Camerino,
62032 Camerino (MC), Italy
e-mail: [email protected]
Contributed by the Dynamic Systems Division of ASME for publication in the Journal of Dynamic Systems, Measurement, and Control. Manuscript received March 8, 2012; final manuscript received December 6, 2012; published online March 28, 2013. Assoc. Editor: Sergey Nersesov.
J. Dyn. Sys., Meas., Control 135(3), 031017 (Mar 28, 2013) (8 pages) Paper No: DS-12-1078; doi: 10.1115/1.4023402 History: Received March 08, 2012; Revised December 06, 2012
## Abstract
In this work, the existence of maximal contractive sets for unstable multi input linear systems is discussed. It is shown that, under suitable algebraic conditions, linear feedback laws can be designed such that the set of values satisfying the saturation constraints is an invariant set for the closed-loop system, which is asymptotically stable.
## Figures
Fig. 1
Evolution of the state norm ||x(t)||
Fig. 2
Evolution of the controls u1(t)=K1*x(t) (dashed line), u2(t)=K2*x(t) (dotted line), and u3(t)=K3*x(t) (continuous line)
|
## CryptoDB
### Paper: On prime-order elliptic curves with embedding degrees k=3,4 and 6
Authors: Koray Karabina, Edlyn Teske
URL: http://eprint.iacr.org/2007/425
Abstract: We further analyze the solutions to the Diophantine equations from which prime-order elliptic curves of embedding degrees $k=3,4$ or $6$ (MNT curves) may be obtained. We give an explicit algorithm to generate such curves. We derive a heuristic lower bound for the number $E(z)$ of MNT curves with $k=6$ and discriminant $D\le z$, and compare this lower bound with experimental data.
##### BibTeX
@misc{eprint-2007-13705,
title={On prime-order elliptic curves with embedding degrees k=3,4 and 6},
booktitle={IACR Eprint archive},
keywords={public-key cryptography / Elliptic curves, pairing-based cryptosystems, embedding degree, MNT curves.},
url={http://eprint.iacr.org/2007/425},
note={ [email protected] 13830 received 12 Nov 2007, last revised 13 Nov 2007},
author={Koray Karabina and Edlyn Teske},
year=2007
}
|
# Physical meaning of time-varying Hamiltonian in Quantum Mechanics
I'm a self-taught in Quantum Mechanics, with the aim to understand Quantum Information theory. I have the following doubt which I cannot solve:
Assuming as a postulate that the evolution of a quantum system is governed by a unitary operator: $|\psi(t)\rangle = U(t,t_0)|\psi(t_0)\rangle$
The Schrodinger equation can then be derived: $i\hbar\frac{d}{dt}|\psi(t)\rangle = H(t)|\psi(t)\rangle$, where $H(t)$ is the Hamiltonian operator, since it can be seen as the "total energy of the system". When it is time-independent, the justification is physically reasonable since it is a conserved quantity and the system is closed (no problem here). If that is not the case, according to Dirac's textbook (p. 110), the system is "open":
If the energy depends on t, it means the system is acted on by external forces.
In my opinion, this assumption is also reasonable, according to the energy conservation principle.
My doubts arise from the fact that different textbooks (e.g. Nielsen-Chuang) state that:
[If the Hamiltonian is time-variant] The system is not, therefore, closed, but it does evolve according to Schrodinger’s equation with a time-varying Hamiltonian, to some good approximation.
Or they make the assumption that the "evolution postulate" is true iff the system is closed.
I can't really grasp the physical insight behind that. According to such a version, it seems that the Schrodinger equation is not universal or, in some sense, imprecise. This raises some questions for me: What is the correct version of the "evolution postulate"? Does it predict the evolution of any quantum system or only of closed ones? Why does a time-varying Hamiltonian not describe the real evolution of the system?
• Basically your doubt is generated by the bold text? Just for me to understand.... – Alchimista Aug 20 '17 at 15:38
• Yes, they are the "basic" points which generate my doubts – steg Aug 20 '17 at 16:44
|
# Showing that the set of natural numbers, $\omega$, is Dedekind infinite
Showing that the set of natural numbers, $$\omega$$, is Dedekind infinite. It is an easy task to show this directly by sending $$n$$ to $$2n$$, which produces an injective map that is not surjective.
But suppose I want to make use of the fact that there is a bijection between $$\omega$$ and $$\omega+1 = \omega \cup \{\omega\}$$; how might one produce an injective map from $$\omega$$ to $$\omega$$ that is not surjective?
Cheers and thanks
Simply compose a bijection $$f\colon\omega+1\to\omega$$ with the inclusion map $$\omega\subseteq\omega+1$$. In other words, simply restrict $$f$$ to $$\omega$$.
Since $$\omega$$ is a proper subset of $$\omega+1$$, the result will be an injection from $$\omega$$ to itself whose range is not $$\omega$$.
• Ah! Because I need the '+1' to hit every $x \in X$ using $f$, so if I just restrict $f$, then I am bound to miss some guy. It remains injective because restriction does not affect injectivity. – some1fromhell Nov 27 '19 at 15:44
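As a concrete illustration (one standard choice of bijection, not the only one): take $$f\colon\omega+1\to\omega$$ with $$f(\omega)=0$$ and $$f(n)=n+1$$ for $$n\in\omega$$. Restricting $$f$$ to $$\omega$$ gives the map $$n\mapsto n+1$$, which is injective but never takes the value $$0$$, so it is not surjective.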
|
# Paper squares
The paper rectangle measuring 69 cm and 46 cm should be cut into as many squares as possible. Calculate the lengths of squares and their number.
Correct result:
x = 23 cm
n = 6
#### Solution:
$x=\gcd(69,46)=23$
$n=(a/x) \cdot (b/x)=(69/23) \cdot (46/23)=3 \cdot 2=6$
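A quick computational check of this result (my own sketch; the variable names follow the solution above):

```python
from math import gcd

a, b = 69, 46            # rectangle sides in cm
x = gcd(a, b)            # largest square side that divides both dimensions exactly
n = (a // x) * (b // x)  # number of squares

print(x, n)  # 23 6
```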
## Next similar math problems:
• Tiles
A hall has dimensions 325 × 170 dm. What is the largest square tile size with which the entire hall can be tiled, and how many such tiles do we need?
• Tiles
How many 46 cm square tiles will cover a floor 22.08 m by 8.74 m?
• Playground
On the special playground, there are 81 square sectors, each with a side of 5 m. How many players can fit on the playground if each player needs a 75 m2 area to play?
• Plumber
A plumber had to cut a metal strip with dimensions 380 cm and 60 cm into the largest possible squares with no waste. Calculate the side length of the squares. How many squares does he cut?
• Square room
What is the size of the smallest square room, which can pave with tiles with dimensions 55 cm and 45 cm? How many such tiles is needed?
• Tiles
An area of 5 m × 4 m is given. One tile is 40 × 40 cm. How many tiles are needed for an area of 5 m × 4 m? And how many tiles need to be cut (if it is not possible for the tiles to fit exactly)?
• Area of rectangle
How many times will increase the area of the rectangle, if we increase twice the length and at the same time we decrease the width by the half?
• Rectangle A2dim
Calculate the side of the rectangle, if you know that its area is of 2590 m2 and one side is 74 m.
• Rectangular flowerbed
Around the rectangular flowerbed with dimensions of 5.25 m and 3.50 m, roses should be planted at the same distance from each other so that the roses are located in each corner of the flower bed and are consumed as little as possible. How far do we plant
• Troops
If the general sorts his troops into rows of nine, 6 are left over. How many soldiers does the regiment have if we know that there are fewer than 300?
• Groups
In the 6th class there are 60 girls and 72 boys. We want to divide them into groups so that the number of girls and boys is the same. How many groups can you create? How many girls will be in the group?
• Children's home
The children's home received a gift to Nicholas of 54 oranges, 81 chocolate figurines, and 135 apples. Every child received the same gift and nothing was left. a) How many packages could be prepared? b) what did the children find in the package?
• Bicycle wheels
Driving wheel of a bicycle has 54 teeth. The driven wheel has 22 teeth. After how many revolutions will meet the same teeth?
• School books
At the beginning of the school year, the teacher distributed 480 workbooks and 220 textbooks. How many pupils could have the most in the classroom?
• Street numbers
Lada came to aunt. On the way he noticed that the houses on the left side of the street have odd numbers on the right side and even numbers. The street where he lives aunt, there are 5 houses with an even number, which contains at least one digit number 6
• Common divisors
Find all common divisors of numbers 30 and 45.
• Decomposition
Make decomposition using prime numbers of number 155. Result write as prime factors (all, even multiple)
|
## Introduction
The CO2 electroreduction reaction (CO2RR) to valuable fuels and feedstocks, powered using renewable electricity, offers a sustainable approach to store intermittent renewable energy1. Prior CO2RR studies have reported the generation of C1 to C3 chemicals such as CO, methane, formate, ethylene, ethanol, and n-propanol2,3,4,5,6,7,8,9,10. Among these products, carbon-neutral methane produced from CO2RR is desired due to well-established natural gas infrastructure11.
Practical CO2RR systems need to produce a desired product with high selectivity, conversion rate, and energy efficiency (EE)12,13. In prior reports, most advances in improving the selectivity to methane in CO2RR operate at current densities below 50 mA cm−2 (ref. 14,15,16,17,18). Techno-economic analyses suggest that compelling CO2RR systems require current densities above 100 mA cm−2 (ref. 19), which prompted us to concentrate on improving the performance of CO2RR to methane in high current density regimes (>100 mA cm−2).
In CO2RR, *CO protonation to *CHO is the potential-determining step for methane formation, and it competes with C–C coupling toward C2 products20,21. In addition, *CO protonation competes with the hydrogen evolution reaction (HER), since both need *H (ref. 22). The simultaneous suppression of both HER and C–C coupling will improve methane selectivity.
Early studies by Hori et al.2 showed that Cu is the transition metal catalyst that generates methane and C2+ products; but that it did so with low product selectivity. Introducing a second metal into Cu has been shown to be a promising route to tune the product selectivity in CO2RR (refs. 23,24,25,26,27,28,29,30). Prior studies report that Au–Cu bimetallic catalysts of varying structures exhibit good selectivity to CO or alcohols, albeit with pure CO2 feeds (refs. 31,32,33,34). Here we present a strategy wherein we regulate *CO availability on Au–Cu catalysts, enabling selectivity to methane at high production rates in CO2RR. Density functional theory (DFT) calculations indicate that the introduction of Au in Cu not only steers the selectivity from C–C coupling to *CO protonation under low *CO coverage, but also tends to suppress HER relative to Cu. By implementing this concept experimentally, we achieve an FE of (56 ± 2)% to methane. The methane:H2 selectivity ratio is improved 1.6× compared with prior reports having a total current density above 100 mA cm−2 (Supplementary Table 1) (refs. 35,36,37,38,39).
## Results
### DFT calculations
In a previous study, we found that lowering the *CO coverage on a Cu surface improved the selectivity to methane in CO2RR while still suffering from prominent HER (ref. 35). Introducing a second element to Cu, such as Ag, has been shown to suppress HER (refs. 21,28). Au—like Ag—has a greater free energy of hydrogen adsorption than Cu, suggesting that it is also a poor HER catalyst40. We thus use Au–Cu as a representative example to assess methane selectivity on catalysts with HER-suppressing dopants under different *CO coverages. In addition, we note that selecting an element that is on the same side of the hydrogen adsorption volcano curve as Cu avoids any synergistic effects that may optimize the *H binding energy leading to better HER, such as with Cu–Ni or Cu–Pt (refs. 41,42,43).
Computationally, we built three Au–Cu surfaces by replacing one, two, or three surface Cu atoms of a (3 × 3 × 4) Cu(111) supercell with Au atoms, denoted Au1Cu35, Au2Cu34, and Au3Cu33, respectively. With DFT, we first calculated the reaction free energies of *CO to *CHO (∆G*CHO) for methane formation and C–C coupling (∆G*OCCOH) for C2 products on these Au–Cu surfaces under different *CO coverages (Fig. 1a, Supplementary Figs. 1–4, and Supplementary Tables 2 and 3). ∆G*CHO–∆G*OCCOH is used as a descriptor of the propensity for *CO protonation vs. C–C coupling. We found that the values of ∆G*CHO–∆G*OCCOH on Au–Cu surfaces decrease when one reduces *CO coverage from 4/9 to 2/9 monolayer (ML) (Fig. 1b), a trend similar to that on Cu. Thus lowering *CO coverage on Au–Cu surfaces is predicted to favor methane vs. C2 products, as previously shown on Cu (ref. 35). In addition, DFT calculation results show that the values of ∆G*CHO–∆G*OCCOH on Au–Cu surfaces are not always lower than the values of ∆G*CHO–∆G*OCCOH on Cu at different *CO coverages (Fig. 1b), suggesting that only under low *CO coverage do some Au–Cu surfaces show a higher ratio of methane to C2 products compared to Cu.
To compare the HER activities on Cu and Au–Cu surfaces, we calculated the reaction free energies of *H intermediate formation (∆G*H) (Fig. 1c and Supplementary Fig. 5). The results suggest that Au–Cu surfaces tend to suppress HER compared to Cu under high *H coverages (Fig. 1c).
Taken together, these DFT studies suggest that Au–Cu tends to promote methane selectivity with low *CO coverage, since this will suppress C–C coupling; and that Au–Cu will further advance CO2RR over HER compared to pure Cu.
### Preparation and characterization of catalysts
To achieve the goal of high selectivity to methane in CO2RR with high current densities, we sought to fabricate Au–Cu catalysts. We used a galvanic replacement approach enabled by the differing reduction potentials of Au and Cu (ref. 44). Firstly, we prepared a 100 nm thick layer of Cu catalysts on the surface of polytetrafluoroethylene (PTFE) nanofibers via sputter deposition (Supplementary Figs. 6 and 7). We then immersed the Cu/PTFE in an N2-saturated HAuCl4 aqueous solution at 65 °C for 15 min to prepare the Au–Cu catalysts on PTFE as the electrodes (Fig. 2a–d) via the galvanic replacement between Cu and AuCl4—this approach allows us to directly tune the ratio of Au and Cu on PTFE substrates.
Low-magnification scanning electron microscope (SEM) images and energy-dispersive X-ray spectroscopy (EDX) elemental mapping show a uniform distribution of elemental Cu and Au on PTFE nanofibers, accompanied by loosely distributed Au nanoparticles also on the nanofibers (Fig. 2a, b). The bright-field scanning transmission electron microscope (STEM) image with higher magnification and corresponding EDX elemental mapping further confirms that Au and Cu are distributed evenly on the PTFE nanofibers (Fig. 2d). High-resolution X-ray photoelectron spectroscopy (XPS) characterization of Au–Cu electrodes shows the presence of Au0 and Cu which has been partially oxidized to Cu+ in the air (Fig. 2e, f and Supplementary Figs. 8 and 9)45. The atomic percentage of Au in the catalyst surface is approximately 7% determined by XPS (denoted 7% Au–Cu), which is lower than previously reported Au–Cu alloy catalysts studied in CO2RR (refs. 31,32,33).
### Investigation of CO2 electroreduction
The CO2RR experiments were performed in a flow cell reactor with a three-electrode configuration (Supplementary Figs. 10 and 11) using CO2-saturated 1 M KHCO3 aqueous solution as the electrolyte. Previous studies show that, in CO2RR, both reaction rate—determined by current density—and CO2 concentration affect the concentration of *CO on the catalyst surface35. We thus evaluated the CO2RR performance of 7% Au–Cu electrodes by supplying gas streams consisting of different volume ratios of CO2 to N2 (Fig. 3, Supplementary Figs. 12 and 13, and Supplementary Table 4).
Figure 3a, b shows FEs of methane and ethylene on 7% Au–Cu catalysts in the current density range of 100–250 mA cm−2 at various CO2 concentrations (25% CO2, 50% CO2, 75% CO2, 84% CO2, 92% CO2, and pure CO2). At low current densities (≤150 mA cm−2), 7% Au–Cu delivers appreciable ethylene FEs under pure CO2 and CO2–N2 mixed streams (Fig. 3b). However, at high current densities (200–250 mA cm−2), relative to pure CO2, the methane FEs on 7% Au–Cu catalysts increase sharply in CO2–N2 mixed streams while the ethylene FEs decrease dramatically, which we ascribe to the low *CO coverage on catalyst surfaces as a result of the reduced CO2 concentration and high reaction rate. We note that, once they reach their peaks in these mixed streams, the methane FEs start to decrease with further increase in current density (Fig. 3a), a finding we attribute to the lack of *CO for the *CO protonation step of methane formation46. In particular, at 84% CO2, we achieve the highest CH4 FE of (56 ± 2)% on 7% Au–Cu catalysts with a CH4 production rate of (112 ± 4) mA cm−2. We calculated the CH4 cathodic EEs at different current densities under different CO2 concentrations (Fig. 3c): the highest CH4 cathodic EE of (24 ± 1)% was achieved at 200 mA cm−2 under 84% CO2.
To evaluate experimentally the selectivity between *CO protonation and C−C coupling reaction steps, we further calculated the ratios of methane FE to total C2+ FE ($${{\rm{FE}}}_{{{{\rm{CH}}}}_{4}}/{{\rm{FE}}}_{{\rm{C}}_{2+}}$$) on 7% Au–Cu catalysts at various CO2 concentrations (Fig. 3d). With low reaction rates (≤150 mA cm−2), the C2+ selectivity is higher than the methane selectivity regardless of the CO2 concentration. Under high current densities (200–250 mA cm−2), the $${\rm{FE}}_{{\rm{CH}}_{4}}/{\rm{FE}}_{{\rm{C}}_{2+}}$$ ratio on 7% Au–Cu catalysts in CO2–N2 mixed streams is much greater than that in pure CO2, suggesting that low *CO coverage on the surface of Au–Cu catalysts—as a consequence of a reduced CO2 concentration and high current density—promotes the *CO protonation step for methane production, consistent with our DFT calculations.
To explore the effect of Au concentration on CO2RR performance under CO2–N2 co-feeds, we also prepared 3% Au–Cu and 10% Au–Cu catalysts on PTFE through a similar galvanic replacement approach (Supplementary Figs. 7 and 14–19) and measured CO2RR performance of the 3% Au–Cu, 10% Au–Cu, and Cu catalysts at 84% CO2 for comparison (Fig. 4a–c, Supplementary Figs. 20 and 21, and Supplementary Table 5). At low reaction rates (≤150 mA cm−2), the methane FEs on 3% Au–Cu, 7% Au–Cu, 10% Au–Cu, and Cu catalysts are below 11% (Fig. 4a), while ethylene and ethanol are the main CO2RR products on these catalysts (Supplementary Fig. 20c and Supplementary Table 5). At high reaction rates (200–250 mA cm−2), methane becomes the main CO2RR product while ethylene FEs are below 12% on the 3% Au–Cu, 10% Au–Cu, and Cu catalysts; methane FEs on 3% Au–Cu, 10% Au–Cu, and Cu catalysts give peak values at 200 mA cm−2 and then decrease along with the increase in current density (Fig. 4a). These trends are similar to those observed on the 7% Au–Cu catalysts. By comparing the highest methane FEs on different catalysts, we note that, among the catalysts studied, only 7% Au–Cu catalysts deliver higher methane FE vs. Cu catalysts (Fig. 4b), suggesting the significance of controlling Au concentration in Au–Cu catalysts for promoting methane selectivity. The 7% Au–Cu delivers—compared to 3% Au–Cu and 10% Au–Cu—higher methane FE at 200 mA cm−2. We associate this with improved suppression of HER on 7% Au–Cu (Supplementary Fig. 20a) and note that both HER and *CO coverage impact methane FE. We also calculated the $${\rm{FE}}_{{\rm{CH}}_{4}}/{\rm{FE}}_{{\rm{C}}_{2+}}$$ ratios on Cu and three Au–Cu catalysts at 84% CO2: only at high current density—low *CO coverage—do some of the Au–Cu catalysts show higher $${\rm{FE}}_{{\rm{CH}}_{4}}/{\rm{FE}}_{{\rm{C}}_{2+}}$$ ratios compared to Cu, in agreement with DFT calculations.
At high current densities (200–250 mA cm−2) with high methane selectivity, the H2 FEs on 3% Au–Cu, 7% Au–Cu, and 10% Au–Cu catalysts are lower than that on the Cu catalysts (Supplementary Fig. 20a), suggesting that the introduction of Au in Cu tends to suppress HER when using dilute CO2 feeds. We further calculated the ratio of methane FE to H2 FE ($${\rm{FE}}_{{\rm{CH}}_{4}}/{\rm{FE}}_{{\rm{H}}_{2}}$$) on catalysts in high current density regimes (Fig. 4c): compared with Cu catalysts, 3% Au–Cu, 7% Au–Cu, and 10% Au–Cu catalysts exhibit higher $${\rm{FE}}_{{\rm{CH}}_{4}}/{\rm{FE}}_{{\rm{H}}_{2}}$$ ratios—with the highest value of 2.7 on 7% Au–Cu catalysts— indicating that the Au–Cu catalysts shifted the reaction from undesired HER toward *CO protonation for methane production.
To investigate the chemical state of Cu in the catalysts during CO2RR, we carried out operando X-ray absorption spectroscopy (XAS) at the Cu K-edge at a constant current density of 200 mA cm−2 with an 84% CO2 feed (Fig. 4d). The average valence states of Cu in 3% Au–Cu, 7% Au–Cu, 10% Au–Cu, and Cu catalysts are zero during CO2RR, demonstrating that the difference in product selectivity among these catalysts is associated with the metallic state of Cu in lieu of copper oxides5,47.
## Discussion
This work demonstrates that the introduction of Au in Cu facilitates *CO protonation for methane formation using CO2–N2 co-feeds and suppresses HER at high current densities. DFT results show that a decrease in *CO coverage on Au–Cu surfaces favors *CO protonation vs. C–C coupling; compared with Cu, Au–Cu suppresses HER, enabling methane selectivity improvements under dilute CO2 streams. Experimentally, we fabricated Au–Cu catalysts and regulated *CO availability by controlling the CO2 concentration and current density, wherein the selectivity ratio of methane to H2 exhibited the highest value of 2.7. We report a CO2-to-methane conversion with a high methane FE of (56 ± 2)% at a partial current density of (112 ± 4) mA cm−2 with a CO2–N2 co-feed. These findings suggest a promising strategy to convert CO2 to carbon-neutral methane with a combination of high selectivity, high conversion rate, and high cathodic EE through catalyst design and tuning local *CO coverage.
## Methods
### DFT calculations
In the Vienna ab initio simulation package, the generalized gradient approximation and the Perdew–Burke–Ernzerhof exchange-correlation functional was implemented for all DFT calculations48,49,50,51,52. The projector-augmented wave (PAW) method was used to treat the electron–ion interactions53,54 with an energy cut-off of 450 eV for the plane-wave basis set. The force and energy convergence for all DFT calculations were set to 0.01 eV Å−1 and 10−5 eV, respectively. A (3 × 3 × 4) Cu(111) supercell with the bottom two layers fixed was used to simulate the exposed Cu surface with a 15 Å vacuum gap. One, two, or three surface Cu atoms were substituted by Au atoms in the Au1Cu35, Au2Cu34, and Au3Cu33, respectively. A (3 × 3 × 1) Monkhorst–Pack k-points grid was used to optimize all the surface structures. In DFT calculations, we did not consider the isolated arrangement of Au dopants in Cu as the XAS characterization of the Au–Cu catalysts showed that Au atoms were not atomically dispersed in Cu (Supplementary Fig. 22 and Supplementary Table 6).
Surface *CO coverages of 2/9 ML, 3/9 ML, and 4/9 ML were studied, where 2/9 ML corresponds to two single-carbon adsorbed species or one double-carbon species on the surface of the supercell. To systematically determine the most stable geometry of each reaction intermediate in CO2RR and HER under different *CO coverages (2/9, 3/9, and 4/9 ML) on different surfaces (Cu36, Au1Cu35, Au2Cu34, and Au3Cu33), we considered different possibilities of *CO adsorption, protonation, and C–C coupling, as well as different directions of *OCCO protonation (more details of our computational workflow in Supplementary Fig. 23). We note that the models reported in this study include a charged water layer, i.e., an ML of six water molecules, one of which is a hydronium or charged water (H3O+) molecule, to consider both field and solvation effects55. The water structure was determined by ab initio molecular dynamics and adopted from a previous study56. The two competing CO2RR reaction steps as listed below57,58,59,60,61 were simulated for the three *CO coverages while *H adsorption was simulated for equivalent *H coverages
$$\ast {\rm{CO}}+{\rm{H}}_{2}{\rm{O}}+{\rm{e}}^{-}\to \ast {\rm{CHO}}+{\rm{OH}}^{-}$$
(1)
$$\ast {\rm{CO}}+\ast {\rm{CO}}+{{\rm{H}}}_{2}{\rm{O}}+{{\rm{e}}}^{-}\to \ast {\rm{OCCOH}}+{{\rm{OH}}}^{-}$$
(2)
$${{\rm{H}}}^{+}+{{\rm{e}}}^{-}+\ast \to \ast {\rm{H}}$$
(3)
The Gibbs free energy changes (∆G) for *CO protonation, C–C coupling, and *H adsorption were calculated without dipole corrections based on the computational hydrogen electrode (CHE) model62. The Gibbs free energy of adsorbed and non-adsorbed species (G) is calculated as
$$G=E+{\rm{ZPE}}+\int C_{p}\,{\rm{d}}T-TS$$
(4)
where E, ZPE, Cp, and S are the electronic energy directly obtained from DFT calculations, zero-point energy, heat capacity, and entropy, respectively (see Supplementary Table 2 for more details). T is set to room temperature (298.15 K) for a better comparison with the experimental measurements.
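As a rough illustration of equation (4), the thermal correction applied to each DFT electronic energy can be assembled as below; this is only a sketch with placeholder numbers, not values taken from this work:

```python
def gibbs_free_energy(E_dft, zpe, cp_integral, entropy, T=298.15):
    """G = E + ZPE + integral(Cp dT) - T*S; energies in eV, entropy in eV/K."""
    return E_dft + zpe + cp_integral - T * entropy

# Hypothetical numbers for one adsorbed intermediate (placeholders only):
G = gibbs_free_energy(E_dft=-250.00, zpe=0.55, cp_integral=0.08, entropy=0.0007)
print(round(G, 3))
```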
### Electrode preparation
All chemicals were used as received without further purification. All aqueous solutions were prepared using deionized water with a resistivity of 18.2 MΩ cm. The Cu/PTFE cathodes were prepared by sputtering 100 nm thickness of Cu catalysts (Cu target, 99.999%, Kurt J. Lesker company) on PTFE membranes (pore size: 450 nm, Beijing Zhongxingweiye Instrument Co., Ltd.) using a magnetron sputtering system. In the cathode, a porous PTFE membrane functions as a stable hydrophobic gas diffusion layer and prevents flooding during operation7. 3% Au–Cu, 7% Au–Cu, and 10% Au–Cu cathodes were prepared by immersing the Cu/PTFE electrodes in N2-saturated HAuCl4 aqueous solution (5 μmol L−1) at 40 °C for 30 min, 65 °C for 15 min, and 65 °C for 30 min, respectively. Ag/AgCl reference electrode (3 M KCl, BASi) and Ni foam (1.6 mm thickness, MTI Corporation) was used as the reference electrode and anode, respectively. Ni foam was used as an OER electrode in the anode due to its commercial availability and good stability35,63.
### Material characterization
SEM images and the corresponding EDX elemental mapping were taken using the Hitachi FE-SEM SU5000 microscope. HAADF-STEM and bright-field STEM images, and the corresponding EDX elemental mapping were taken using a Hitachi HF-3300 microscope at 300 kV. XRD was recorded on Rigaku SmartLab X-ray diffractometer with Cu-Kα radiation. The surface compositions of electrodes were determined by XPS (Thermo Scientific K-Alpha) using a monochromatic aluminum X-ray source. Operando Cu K-edge XAS spectra recorded in fluorescence yield were performed at the SuperXAS beamline at the Swiss Light Source. Ex situ XAS measurements were carried out at the Advanced Photon Source (Argonne National Laboratory). XAS data were processed by Athena and Artemis software included in a standard IFEFFIT package64.
### Electrochemical measurements
The electrochemical measurements were conducted in an electrochemical flow cell setup configuration with the three-electrode system at an electrochemical station (AUT50783). The geometric area of the cathode in the flow cell is 1 cm2, which is used for all current density calculations. 30 mL of CO2-saturated 1 M KHCO3 aqueous solution was introduced into the cathode chamber and the anode chamber at the rate of 10 mL min−1 by two pumps, respectively. An anion exchange membrane (Fumasep FAB-PK-130, Fuel Cell Store) was used to separate the cathode chamber and anode chamber. Pure CO2 gas (Linde, 99.99%) or N2-diluted CO2 gas with different CO2 concentrations (75% and 84%) was continuously supplied to the gas chamber of the flow cell at a flow rate of 90 mL min−1. The CO2RR performance was tested using constant-current electrolysis while purging CO2 into the catholyte during the whole electrochemical test. The potentials vs. Ag/AgCl reference electrode were converted to values vs. reversible hydrogen electrode using the equation
$${E}_{\rm{RHE}}={E}_{{\rm{Ag}}/{\rm{AgCl}}}+0.210\,{\rm{V}}+0.0591\times {\rm{pH}}$$
(5)
The ohmic loss between the working and reference electrodes was evaluated by electrochemical impedance spectroscopy technique and 80% iR compensation was applied to correct the potentials manually.
Gas products were analyzed using a gas chromatograph (PerkinElmer Clarus 600) equipped with thermal conductivity and flame ionization detectors. Liquid products were analyzed by nuclear magnetic resonance spectrometer (Agilent DD2 600 MHz) and dimethylsulfoxide was used as an internal standard.
We calculated the methane cathodic EE based on the equation as follows7:
$${\rm{Cathodic\ EE}}=\frac{(1.23+(-E_{{\rm{methane}}}))\times {\rm{FE}}_{{\rm{methane}}}}{1.23+(-E_{{\rm{applied}}})},$$
(6)
where the overpotential of oxygen evolution is assumed to be 0, Eapplied is the potential used in the experiment, FEmethane is the measured Faradaic efficiency of methane in percentage, and Emethane = 0.17 VRHE for CO2RR (ref. 65).
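A small sanity check of equation (6); the applied potential below is a placeholder of my own choosing, not a value measured in this work:

```python
def cathodic_ee(fe_methane, e_applied, e_methane=0.17):
    """Cathodic energy efficiency per equation (6); FE as a fraction, potentials in V vs. RHE."""
    return (1.23 - e_methane) * fe_methane / (1.23 - e_applied)

# Hypothetical example: FE = 56% at an assumed applied potential of -1.2 V vs. RHE.
print(round(cathodic_ee(0.56, -1.2), 3))  # ~0.244, i.e. roughly 24%
```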
|
Chemistry Allotropic Forms of Phosphorous and Phosphine
### Topics Covered :
● Allotropes of Phosphorous
● White Phosphorous
● Red Phosphorous
● Black Phosphorous
● Preparation of Phosphine
● Properties and Uses of Phosphine
### Phosphorus — Allotropic Forms :
Phosphorus is found in many allotropic forms, the important ones being white, red and black.
### White phosphorus :
● It is a translucent white waxy solid.
● It is poisonous, insoluble in water but soluble in carbon disulphide
● It glows in dark (chemiluminescence).
● It dissolves in boiling NaOH solution in an inert atmosphere giving PH_3.
P_4 + 3NaOH + 3H_2O → PH_3 + 3NaH_2PO_2 (sodium hypophosphite)
● White phosphorus is less stable and therefore more reactive than the other solid phases under normal conditions because of angular strain in the P_4 molecule, where the angles are only 60°.
● It readily catches fire in air to give dense white fumes of P_4O_10.
P_4 + 5O_2 → P_4O_10
● It consists of discrete tetrahedral P_4 molecules as shown in Fig. 7.2.
### Red phosphorus :
● Red phosphorus is obtained by heating white phosphorus at 573K in an inert atmosphere for several days.
● When red phosphorus is heated under high pressure, a series of phases of black phosphorus are formed.
● Red phosphorus possesses iron grey lustre.
● It is odourless, non-poisonous and insoluble in water as well as in carbon disulphide.
● Chemically, red phosphorus is much less reactive than white phosphorus.
● It does not glow in the dark.
● It is polymeric, consisting of chains of P_4 tetrahedra linked together in the manner shown in Fig. 7.3.
### Black phosphorus :
=> Black phosphorus has two forms, α-black phosphorus and β-black phosphorus.
α-Black Phosphorus
● It is formed when red phosphorus is heated in a sealed tube at 803 K.
● It can be sublimed in air and has opaque monoclinic or rhombohedral crystals.
● It does not oxidise in air.
β-Black Phosphorus
● It is prepared by heating white phosphorus at 473 K under high pressure.
● It does not burn in air up to 673 K.
### Phosphine :
Preparation, properties and uses of phosphine is as follow :
### Preparation :
=> Phosphine is prepared by the reaction of calcium phosphide with water or dilute HCl.
Ca_3P_2 + 6H_2O → 3Ca(OH)_2 + 2PH_3
Ca_3P_2 + 6HCl → 3CaCl_2 + 2PH_3
=> In the laboratory, it is prepared by heating white phosphorus with concentrated NaOH solution in an inert atmosphere of CO_2.
P_4 + 3NaOH + 3H_2O → PH_3 + 3NaH_2PO_2 (sodium hypophosphite)
● When pure, it is non-inflammable but becomes inflammable owing to the presence of P_2H_4 or P_4 vapours.
● To purify it from impurities, it is absorbed in HI to form phosphonium iodide, PH_4I, which on treating with KOH gives off phosphine.
PH_4I + KOH → KI + H_2O + PH_3
### Properties :
● It is a colourless gas with a rotten fish smell and is highly poisonous.
● It explodes in contact with traces of oxidising agents like HNO_3, Cl_2 and Br_2 vapours.
● It is slightly soluble in water. The solution of PH_3 in water decomposes in the presence of light giving red phosphorus and H_2.
● When absorbed in copper sulphate or mercuric chloride solution, the corresponding phosphides are obtained.
3CuSO_4 + 2PH_3 → Cu_3P_2 + 3H_2SO_4
3HgCl_2 + 2PH_3 → Hg_3P_2 + 6HCl
● Phosphine is weakly basic and, like ammonia, gives phosphonium compounds with acids, e.g.,
PH_3 + HBr → PH_4Br
### Uses :
● The spontaneous combustion of phosphine is technically used in Holme’s signals. Containers containing calcium carbide and calcium phosphide are pierced and thrown in the sea when the gases evolved burn and serve as a signal.
● It is also used in smoke screens.
Q 3070391216
In what way can it be proved that PH_3 is basic in nature?
Solution:
PH_3 reacts with acids like HI to form PH_4I which shows that it is basic in nature.
PH_3 + HI → PH_4I
Due to the lone pair on the phosphorus atom, PH_3 acts as a Lewis base in the above reaction.
|
How do we measure the concentration of a solution in "parts per million"?
You need the concentration in $g \cdot L^{-1}$, and convert this to $\text{ppm}$. How?
$\text{ppm}$ as you know is parts per million; literally $\text{milligrams per litre}$, i.e. $10^{-3}\ g \cdot L^{-1}$ (ppm because there are a million mg in $1\ L$ of water). It is typically used for trace quantities, where the density of the solution is almost identical to that of the pure solvent.
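A trivial sketch of the conversion (my own example values):

```python
def g_per_litre_to_ppm(conc_g_per_l):
    """For dilute aqueous solutions, 1 ppm = 1 mg/L = 1e-3 g/L."""
    return conc_g_per_l * 1000.0

print(g_per_litre_to_ppm(0.025))  # 0.025 g/L -> 25.0 ppm
```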
|
## Calculus 8th Edition
$$g(x)=4x-17$$
To find $g$ such that $g \circ f=h$, we have$$g(f(x))=h(x) \quad \Rightarrow \quad g(x+4)=4x-1$$ $$\Rightarrow \quad g(x+4)=4(x+4)-17$$ $$\Rightarrow \quad g(x)=4x-17.$$
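A quick symbolic check (a sketch; here f(x) = x + 4 and h(x) = 4x - 1 are inferred from the worked solution above):

```python
import sympy as sp

x = sp.symbols('x')
f = x + 4
h = 4 * x - 1
g = lambda t: 4 * t - 17

# g(f(x)) should reduce to h(x)
print(sp.simplify(g(f) - h) == 0)  # True
```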
|
Differentials word problem help
• September 24th 2009, 08:04 PM
Baginoman
Differentials word problem help
Hey guys, can't seem to build up the formula for this question. Trying to evaluate dy and delta y for the indicated values...
A cube with sides 10inches long is covered with a coat of fiberglass 0.2 inch thick. Use differentials to estimate the volume of the fiberglass shell.
(Wink)
• September 24th 2009, 09:20 PM
The Power
My guess would be to use the volume formula with an upper limit of 10.02 and a lower limit of 10: find the antiderivative of the formula, plug in the new limits, and take F(b) - F(a). Sorry for not being so clear.
Something like this I assume since the volume formula of a cube is
$a^3$
$\int a^3$
$\frac{a^4}{4}$
• September 27th 2009, 10:15 PM
Baginoman
We haven't gotten into integrals yet. So I'm assuming there's another way?
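For what it's worth, the usual differential estimate for this problem looks like the sketch below; I am assuming the 0.2 inch coat sits on every face, so each side length grows by 0.4 inch (if your course treats the change as 0.2 inch, adjust da accordingly):

```python
a, da = 10.0, 0.4            # side length and its change (0.2 in added on opposite faces)
dV = 3 * a**2 * da           # differential of V = a^3 estimates the shell volume
exact = (a + da)**3 - a**3   # exact shell volume, for comparison

print(dV, exact)             # 120.0 vs about 124.86 cubic inches
```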
|
## Homework Question 8.47
$\Delta U=q+w$
Lily Guo 1D
Posts: 64
Joined: Fri Sep 29, 2017 7:03 am
### Homework Question 8.47
For 8.47, why does the solutions manual use the equation ΔH = ΔU + PΔV instead of just ΔU = q + w? I'm also confused as to why work would be positive instead of negative, because I thought that expansion work done ON a system is negative.
Felicia Fong 2G
Posts: 31
Joined: Sat Jul 22, 2017 3:00 am
### Re: Homework Question 8.47
I think expansion work done on a system is positive. Work done by the system would be negative.
Sarah_Stay_1D
Posts: 57
Joined: Sat Jul 22, 2017 3:00 am
Been upvoted: 1 time
### Re: Homework Question 8.47
Lily Guo 1D wrote: For 8.47, why does the solutions manual use the equation ΔH = ΔU + PΔV instead of just ΔU = q + w? I'm also confused as to why work would be positive instead of negative, because I thought that expansion work done ON a system is negative.
You are in a sense actually using ΔU = q + w. First you can rearrange the equation ΔH = ΔU + PΔV to be ΔU = ΔH - PΔV. Since we know that at constant pressure ΔH = q, we can substitute q for ΔH. We also know that w = -PΔV, so we can substitute w for -PΔV. So now you have the familiar equation ΔU = q + w. Then you just substitute the values given in the problem.
Angela 1K
Posts: 80
Joined: Fri Sep 29, 2017 7:05 am
### Re: Homework Question 8.47
But that ultimately gives you a negative work, whereas work done ON a system should be positive???
Lily Guo 1D
Posts: 64
Joined: Fri Sep 29, 2017 7:03 am
### Re: Homework Question 8.47
Angela 2I wrote:But that ultimately give you a negative work, whereas work done ON a system should be positive???
I just realized my mistake. Work done ON a system is positive, but EXPANSION work means that work is done on the surroundings/BY the system because the system is expanding. Therefore, work would be negative. That's my bad for not catching that, sorry!
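As a numeric illustration of the sign conventions discussed above (my own sketch with made-up numbers, not the actual values from problem 8.47):

```python
# At constant pressure q = deltaH, and expansion work done BY the system is w = -P*deltaV.
delta_H   = -100.0   # kJ, hypothetical heat released at constant pressure
p_delta_V = 5.0      # kJ, hypothetical value of P*deltaV for an expansion (deltaV > 0)

q = delta_H
w = -p_delta_V       # negative: the expanding system does work on the surroundings
delta_U = q + w      # first law: deltaU = q + w = deltaH - P*deltaV

print(q, w, delta_U)  # -100.0 -5.0 -105.0
```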
|
# Article: Origin, limits, and systematic position of Scaphites
Publication: Palaeontology
Volume: 8
Part: 3
Publication Date: October 1965
Page(s): 397–453
Author(s): Jost Wiedmann
DOI:
### How to Cite
WIEDMANN, J. 1965. Origin, limits, and systematic position of Scaphites. Palaeontology, 8, 3, 397–453.
### Online Version Hosted By
The Palaeontological Association (Free Access)
PDF:
[Free Access]
## Abstract
Up to the present, interpretations concerning the origin and systematic position of the Cretaceous heteromorph Scaphites have been extremely divergent. On one hand, scaphitids have been regarded as a monophyletic group of either lytoceratid (e.g. Spath 1933, 1934; Wright 1953, 1957) or ammonitid origin (Luppov and Drushtchic 1958; Drushtchic 1962), on the other hand, as a more or less polyphyletic accumulation (Nowak 1911; Reeside 1927a; Schindewolf 1961). Wright and Wright (1951) established a superfamily Scaphitaceae, directly connected with the lytoceratid stock, while in the recent Russian literature they are placed in the Ammonitina. Reeside distributed the scaphitids among four different ammonitid lineages. All these possibilities of scaphitid classification are discussed here. A monophyletic but hamitid origin of the true scaphitids is asserted; 'Otoscaphitinae' Wright are regarded as heterogeneous (Otoscaphites is a true Scaphites, but Worthoceras should be placed in Ptychoceratinae), and Labeceratidae Spath are referred to the anisoceratids. The suture line of the restricted Scaphites was found to be quadrilobate throughout, as in all other heteromorphs. This makes the superfamily rank unnecessary, in the author's opinion, and places the remaining family Scaphitidae in the Ancylocerataceae, as recently defined by Wiedmann (1962b).
|
# What is the difference between correlated equilibrium and mixed equilibrium?
What is the difference between correlated equilibrium and mixed equilibrium?
Here's what I understand :
Unlike a pure Nash equilibrium, a mixed equilibrium corresponds to when each player has a probability distribution he follows instead of pure actions (single action with probability one).
Also, what I understand by correlated equilibrium is that it corresponds to when someone tells each player to follow some particular probability distribution, and each player plays according to that.
What I don't understand is what is the difference between these two? Can't one be modeled as the other?
In a mixed (strategy) Nash equilibrium, the players' actions (strategies) are independent random variables. In other words, if you know that player 1 (randomly) chose $x$, that doesn't give you any additional information about what player 2 might do.
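To make the independence point concrete, here is a small sketch using the game of Chicken, with illustrative payoffs (Dare,Dare)=(0,0), (Dare,Chicken)=(7,2), (Chicken,Dare)=(2,7), (Chicken,Chicken)=(6,6); under these payoffs each player Dares with probability 1/3 in the mixed Nash equilibrium. In the mixed equilibrium the joint distribution over action profiles is the product of the players' marginals; in the classic correlated equilibrium for this game it is not:

```python
import numpy as np

# Mixed NE: both players Dare independently with probability 1/3.
p_dare = 1 / 3
mixed_joint = np.outer([p_dare, 1 - p_dare], [p_dare, 1 - p_dare])

# Classic correlated equilibrium: a mediator draws (D,C), (C,D), (C,C)
# with probability 1/3 each and privately recommends each player's action.
corr_joint = np.array([[0.0, 1 / 3],
                       [1 / 3, 1 / 3]])

def is_product_of_marginals(joint):
    row, col = joint.sum(axis=1), joint.sum(axis=0)
    return np.allclose(joint, np.outer(row, col))

print(is_product_of_marginals(mixed_joint))  # True  -> independent randomization
print(is_product_of_marginals(corr_joint))   # False -> recommendations are correlated
```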
|
man fig2sty (Commandes) - LaTeX Layout Generator
NAME
fig2sty - LaTeX Layout Generator
SYNOPSIS
fig2sty [-debug] [-baselineskip=..] [-fontsize=..]
[-reference=(grid|upper|lower)] [-offset=..]
[-figversion=..] [-nopipe] [-fig2dev=...] input-file
-debug
    Show base lines in background picture.
-baselineskip=.., -fontsize=.., -reference=(grid|upper|lower), -offset=..
    Default values. May be overridden by tags.
-figversion=..
    XFig format version which will be fed to fig2dev to generate the background picture.
-nopipe
    If your version of fig2dev does not accept piped input use this switch.
-fig2dev=...
    If you have installed several versions of fig2dev you can specify which version to use.
DESCRIPTION
fig2sty allows you to generate fancy layouts with LaTeX. The basic idea is to draw layout definitions interactively with XFig and transform this definition to a LaTeX style file. You can then use LaTeX to typeset your text into arbitrarily shaped polygons (frames) within the layout.
You can even add any graphical elements in the layout definition which will appear in the LaTeX output as a background picture.
How to draw the layout definition.
Any closed polygon or box may be a frame. But it must not have an area fill, such that your coloured background boxes will not interfere with your frames. Tags are associated with each frame. A tag is simply a text key=value marked as special and positioned within the bounding box of the respective frame. If there are any ambiguities, simply put your frame and its tags into a compound.
Several key words are interpreted by fig2sty: This tag is compulsory; a frame will not be detected as such if you don't provide a type tag. You can have several type tags in your layout, but you may as well have several frames associated with the same type tag. Text will then flow between all the frames of common type. The type tag must neither contain numerals nor any special characters, just plain alpha characters. If you care about the ordering of the text flow you can use the following tag: Text will flow between frames of common type in increasing size of n. Text will be typeset in a fixed line grid. Baselineskip provides the distance between neighbouring lines. All frames of a certain type share the same baselineskip. If you provide several of them, the tag within the frame of lowest n will be used. If you have rectangular frames of equal widths only you may set baselineskip=any. Text will then be set in ordinary horizontal mode. For any other kind of frame or multiple rectangles with differing width baselineskip=any is forbidden. Lines of all frames will be aligned if you assign the value grid to this tag. Other possibilities are upper (lower) which will align to the upper (lower) boundary of your frame. Each frame has its private reference. You can shift your base lines by an arbitrary amount by providing the offset tag. Each frame has its private offset. Used only as minimal distance from top baseline to the upper boundary of the frame. LaTeX code to be prepended to the text. LaTeX code to be appended to the text.
Default values for the tags are baselineskip=12, reference=grid, offset=0, but you can override these defaults with the command line options. All dimensions are given in TeX pt.
How the baselines are chosen
First of all, you can't have lines split horizontally. Lines will always go from the leftmost intersection of the base line with the frame to the rightmost one. There is not really any technical reason for this (except for laziness, one of the virtues...). If you find this to be a major restriction, please let me know.
The baselines are chosen such that the distance to the reference point is a multiple of baselineskip. fig2sty will ensure that no textual element of size fontsize will overlap the given frame. This is indeed the only use of the fontsize tag. fontsize=baselineskip is usually a good choice except when using small letters at wide vertical spacing. The choice of the reference point is based on the reference tag. The value grid means that a global reference point (the upper-left corner of the layout) will be used. This allows text in different frames to be aligned. reference=upper means that text will touch the upper boundary of your frame. The analogous functionality is provided by the lower value.
How to use in your LaTeX document.
If you have two frames of type 'abstract' and 'text' within the layout fancylayout you type:
\documentclass{article}
\usepackage{figtosty}
\usepackage{fancylayout}
\begin{document}
...
\begin{fancylayout}
\begin{figframe}{abstract}
blah blah
\end{figframe}
\begin{figframe}{text}
blah blah
\end{figframe}
\end{fancylayout}
...
\end{document}
INSTALLATION
You will need XFig.pm in your Perl library path, figtosty.sty in your LaTeX search path, and fig2sty in your binary search path.
KNOWN BUGS
list environments do not respect frame boundaries
some mysterious bug with splines
doesn't work with \sloppy
AUTHOR
M. Rohner, [email protected]
|
# Transfer Function for a negative feedback loop
Discussion in 'Math' started by smarch, Nov 20, 2010.
1. ### smarch Thread Starter Active Member
Hi guys would appreciate the help.
I am having trouble with this question.
From the JPEG I have attached:
Determine the equivalent T.F. G(s) in form of G(s) = $\frac{p}{qs+r}$, where p, q, r are constants.
I have tried working it out, but got into a mess. Help please, how do you do it?
Thanks
Attachment: TF.jpg
Last edited: Nov 20, 2010
2. ### guitarguy12387 Active Member
In general, for a closed loop system, you have:
TF = (direct path gain)/(1 - sum of loop gain)
From there it is just algebra to get into the right form
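For what it's worth, the algebra can be sanity-checked symbolically. The block values below are placeholders (the real ones have to be read off the attached TF.jpg), so only the shape of the calculation is meant to carry over:

import sympy as sp

s = sp.symbols('s')
G, H = sp.symbols('G H')        # forward-path block and feedback block (placeholders)

# Closed-loop transfer function: forward gain over (1 - loop gain).
T = G / (1 - G * H)

# Example with made-up blocks: a 10/(3s) forward path and a constant feedback gain of -2.
example = sp.cancel(T.subs({G: 10 / (3 * s), H: -2}))
print(example)                  # -> 10/(3*s + 20), i.e. the p/(q*s + r) form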
3. ### smarch Thread Starter Active Member
I come to :
C(s) = R(s).(10/3s - 5) - C(s)(40/3s - 20)
Am I on the right track?
I have limited notes on this and have tried to follow it as they are in the notes, but it is quite a different problem to that one.
4. ### guitarguy12387 Active Member
Yeah, that looks about right, I think.
Now just solve for C(s)/R(s)
5. ### smarch Thread Starter Active Member
How do you complete it all the way through to get G(s) = $\frac{p}{qs+r}$
6. ### Georacer Moderator
First read here: http://en.wikipedia.org/wiki/Control_theory in the section Closed Loop Transfer Function. In your case you have C=5, P=$\frac{2}{3s}-1$ and F=-4;
Try to calculate the Transfer Function H(s).
Post it and we will discuss more.
7. ### krishna chaitanya New Member
TF = (direct path gain)/(1 - sum of loop gain).
That will give the answer for this one.
|
# Neutral element
A neutral element is a special element of an algebraic structure. It is characterized by the fact that every element is mapped to itself when combined with the neutral element.
## Definition
Let $(S, *)$ be a magma (a set with a binary operation). Then an element $e \in S$ is called
• left neutral if $e * a = a$ for all $a \in S$,
• right neutral if $a * e = a$ for all $a \in S$,
• neutral if $e$ is both left neutral and right neutral.
If the operation is commutative, the three notions coincide. But if it is not commutative, there can be a right-neutral element that is not left-neutral, or a left-neutral element that is not right-neutral.
A semigroup $S$ with a neutral element is called a monoid. If every element in $S$ also has an inverse element in $S$, then $S$ is a group.
Often the symbol $\cdot$ is used for the operation $*$; one then speaks of a multiplicatively written semigroup. A neutral element is then called an identity element and is symbolized by $1$. As is common with ordinary multiplication, the multiplication dot $\cdot$ can be omitted in many situations.
A semigroup can also be written additively by using the symbol $+$ for the operation $*$. A neutral element is then called a zero element and is symbolized by $0$.
## Properties
• If a semigroup $S$ has both right-neutral and left-neutral elements, then all of these elements coincide and $S$ has exactly one neutral element. For if $a * e = a$ and $f * a = a$ for all $a \in S$, then $f = f * e = e$.
• The neutral element of a monoid is uniquely determined.
• But if a semigroup has no right-neutral element, then it can have several left-neutral elements. The simplest example is any set with at least two elements, equipped with the operation $a * b := b$: every element is left neutral, but none is right neutral. Similarly, there are also semigroups with right-neutral but without left-neutral elements.
• This can also occur with the multiplication in rings. One example is the subring
$$R = \left\{ \left. \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix} \right| a, b \in K \right\}$$
of the 2-by-2 matrices over any field $K$. It is easy to check that $R$ is a non-commutative ring. The elements that are left neutral with respect to the multiplication are exactly the matrices
$$\begin{pmatrix} 1 & x \\ 0 & 0 \end{pmatrix}$$
with $x \in K$. By what was said above, the multiplication in $R$ cannot have any right-neutral elements.
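For a quick check of this claim, multiply out both products:
$$\begin{pmatrix} 1 & x \\ 0 & 0 \end{pmatrix}\begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix}=\begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix}, \qquad \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 1 & x \\ 0 & 0 \end{pmatrix}=\begin{pmatrix} a & ax \\ 0 & 0 \end{pmatrix},$$
so every matrix of the stated form is left neutral, while multiplying by it from the right changes the second column whenever $ax \neq b$.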
|
# Using two short mobile HF antennas to make a vertical dipole
Instead of using a metal vehicle roof or multiple radials as the counterpoise for a shortened vertical antenna, can one use two such antennas, one pointed downward, to construct a balanced vertical dipole and mount the pair up higher? Are there any advantages or disadvantages to doing this over using a bunch of random-length radials around the base of a lower vertical? (Other than needing to string the feed line out roughly horizontally for some distance in wavelengths...)
To begin the discussion, it is helpful to understand the effects of shortening any antenna to a length below resonance. In all cases, the directivity of the antenna is reduced but this tends to be a fairly uniform reduction regardless of how much shortening occurs. The reduction in length also reduces the radiation resistance of the antenna. This is significant in that as radiation resistance is reduced, the effect of losses in the antenna is amplified. Efficiency of an antenna is defined as:
$$Efficiency=\frac {R_r}{R_r+R_l} \tag 1$$
where $R_r$ is the radiation resistance of the entire antenna (not simply the resistive component of the feedpoint impedance$^1$) and $R_l$ is the resistive loss.
The gain of the antenna is given as:
$$Gain=Directivity*Efficiency \tag 2$$
So it becomes clear that as an antenna is shortened, both the directivity and the efficiency tend to go down, having a compound effect in reducing the gain of the antenna.
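As a rough numerical illustration of equations (1) and (2) (the resistances and directivities below are invented for the example, not measured values):

from math import log10

def antenna_gain_dbi(radiation_resistance, loss_resistance, directivity_dbi):
    # Equation (1): efficiency from radiation resistance and loss resistance.
    efficiency = radiation_resistance / (radiation_resistance + loss_resistance)
    # Equation (2), expressed in dB: gain = directivity + 10*log10(efficiency).
    return directivity_dbi + 10 * log10(efficiency)

# Near-full-size quarter-wave element vs. a heavily shortened one,
# both with the same 10 ohm loss resistance.
print(round(antenna_gain_dbi(36.0, 10.0, 5.2), 1))   # about 4.1 dBi
print(round(antenna_gain_dbi(4.0, 10.0, 4.8), 1))    # about -0.6 dBi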
A 1/4 wave vertical antenna with a substantial ground plane over real ground and a vertical, 1/2 wave dipole over real ground will have approximately the same gain of ~ 0 dBi. This includes all of the effects listed above.
So now to the question of which is better, the answer comes down to which is the more efficient component - the vertical's ground plane or the second shortened element that makes up the dipole? This of course depends on the specific comparison and the construction elements involved. But we can generalize an answer by saying if the ground plane is highly efficient and the dipole is made up of two of the comparatively inefficient elements as used for the vertical component of the vertical antenna, the dipole will have twice the inefficiency and therefore have a gain of -3 dB compared to the vertical antenna.
Note 1:
There is a common error made in many antenna texts and on Internet postings that equate radiation resistance of a lossless antenna to the resistive component of the feedpoint impedance. While this is often the case with common amateur radio antennas, there are many exceptions. For example, as amateurs we are well versed in the lossless, center fed, 72 ohm feedpoint impedance of a 1/2 wave dipole in free space. But if we take the same antenna and feed it 1/3 from the end instead of the center, we will observe a much higher resistive component of the feedpoint impedance. But the radiation resistance of the antenna is still 72 ohms.
It is not a trivial exercise to correctly determine the radiation resistance of an antenna. Fortunately, many antenna designs have already been characterized. We are also fortunate that in the amateur radio community, most of our linear antennas are 1/2 wavelength or shorter and we tend to feed them at the current maxima point. When these conditions are met, then the resistive component of the feedpoint impedance of the antenna generally consists of the radiation resistance of the entire antenna plus the resistive losses of the antenna.
An example of where this knowledge can be very helpful is determining how many radials to add to a 1/4 wavelength vertical antenna in order to maximize its efficiency. If the antenna meets the earlier stated requirements, then monitoring the resistive component of the feedpoint impedance will show the increase in efficiency (reduced resistive losses) as each radial is added. When the efficiency improvement becomes asymptotic, it is no longer productive to add additional radials.
• "where Rr is the radiation resistance and Rl are the resistive losses." normalized to the feedpoint impedance (or any other fixed impedance). Otherwise you might say for example that a folded dipole has a DC resistance of 1 ohm, but by making the impedance step-up of the antenna very high, radiation resistance is 10,000 ohms and thus efficiency is increased. – Phil Frost - W8II Jan 9 '18 at 14:12
• Or a simpler example, feeding a dipole off-center where the impedance is higher. Radiation resistance is increased, but efficiency is not, because the resistive losses are transformed to a higher impedance by the same ratio. – Phil Frost - W8II Jan 9 '18 at 14:38
• @philfrost The Rr that I referred to in my formula is the Rr of the entire antenna, not the feedpoint Rr. This is a common problem in antenna texts as well. To use the efficiency formula properly, Rr must be for the entire antenna. – Glenn W9IQ Jan 9 '18 at 14:52
• Sure, but the entire antenna, as seen from the same perspective as the radiation resistance, right? Same frequency, same feedpoint, etc. – Phil Frost - W8II Jan 9 '18 at 14:59
• I am not sure what you mean by that question. Rr for the entire antenna remains the same regardless of where the feedpoint is located. Of course Rr is frequency dependent. – Glenn W9IQ Jan 9 '18 at 15:07
People can and do use a pair of mobile antennas to make shortened dipoles. A "hamstick dipole" is one example.
One application for such an antenna is in a mobile station which for whatever reason requires a horizontal polarization.
It's also a quick way to make a small antenna where space would not allow for a full-sized dipole, such as an attic antenna, or a station in an apartment.
Or, a station that needs to be portable, where radials are not easily installed and hanging a wire dipole is not feasible, perhaps for field day.
The advantages are much the same as for dipoles generally, most usually that no radials need be installed. For installation on a vehicle with a metal body, the vehicle body makes a sufficient ground plane. But I can think of a few reasons a dipole may still be desirable.
For line-of-sight propagation the radio horizon is increased. For higher frequencies there's likely no need to use a shortened antenna, but perhaps on 6 and 10 meters this may be of some value for local communication.
Getting the antenna higher also reduces ground losses, by reducing ground current density. Although the metal body provides some counterpoise, the Earth ground is still significant. Since losses are proportional to the square of current ($P = I^2R$), moving the antenna away from the ground reduces ground losses.
If the antenna can be raised to at least a half wavelength, reflection from the ground will create an image antenna that is effectively a phased array which increases gain at low radiation angles.
Or if the antenna is raised to a lesser height and turned horizontal, that image antenna increases upwards radiation which may be useful for NVIS propagation.
|
# \startalign and \startcases are not compatible
In ConTeXt, it seems that \startalign and \startcases are not compatible. If so, what should I do when I want to align an equation containing a case-statement and a following equation?
\starttext
\placeformula
\startformula
\startalign % < -- Causes compilation error
\NC f(x) \NC
=\startcases
\NC 1, \MC x >0 \NR
\NC 0,\MC x \leq 0 \NR
\stopcases
\NC g(x) \NC =x^2 +2x +1
\stopalign % < -- Causes compilation error
\stopformula
\stoptext
Each line of a multi-line math align environment must be of the form:
\NC .... \NC .... \NR
So, you have to close the first line with \NR after the case environment:
\starttext
\startformula \startalign
\NC f(x) \NC = \startcases
\NC 1, \MC x > 0 \NR
\NC 0, \MC x \leq 0 \NR
\stopcases \NR
\NC g(x) \NC = x^2 +2x +1 \NR
\stopalign \stopformula
\stoptext
• @TeXnician Totally – DG' Feb 22 at 12:35
• Oh, the first =0 in my question is superfluous. Since the text is invented, I overlooked that. Shall I edit it away for future reader? – Aminopterin Feb 22 at 13:09
• @DG' I changed the language a little bit. Hope that you don't mind. – Aditya Feb 22 at 16:02
• @Aditya On the contrary, I appreciate it – DG' Feb 22 at 16:11
• @TeXnician: sorry I misread your comment – Aditya Feb 23 at 5:17
|
# C++ API Overview
This section documents the Gurobi C++ interface. This manual begins with a quick overview of the classes exposed in the interface and the most important methods on those classes. It then continues with a comprehensive presentation of all of the available classes and methods.
If you are new to the Gurobi Optimizer, we suggest that you start with the Quick Start Guide or the Example Tour. These documents provide concrete examples of how to use the classes and methods described here.
Environments
The first step in using the Gurobi C++ interface is to create an environment object. Environments are represented using the GRBEnv class. An environment acts as the container for all data associated with a set of optimization runs. You will generally only need one environment object in your program.
Models
You can create one or more optimization models within an environment. Each model is represented as an object of class GRBModel. A model consists of a set of decision variables (objects of class GRBVar), a linear or quadratic objective function on those variables (specified using GRBModel::setObjective), and a set of constraints on these variables (objects of class GRBConstr, GRBQConstr, or GRBSOS). Each variable has an associated lower bound, upper bound, and type (continuous, binary, etc.). Each linear or quadratic constraint has an associated sense (less-than-or-equal, greater-than-or-equal, or equal), and right-hand side value.
Linear constraints are specified by building linear expressions (objects of class GRBLinExpr), and then specifying relationships between these expressions (for example, requiring that one expression be equal to another). Quadratic constraints are built in a similar fashion, but using quadratic expressions (objects of class GRBQuadExpr) instead.
We often refer to the class of an optimization model. A model with a linear objective function, linear constraints, and continuous variables is a Linear Program (LP). If the objective is quadratic, the model is a Quadratic Program (QP). If any of the constraints are quadratic, the model is a Quadratically-Constrained Program (QCP). We'll sometimes also discuss a special case of QCP, the Second-Order Cone Program (SOCP). If the model contains any integer variables, semi-continuous variables, semi-integer variables, or Special Ordered Set (SOS) constraints, the model is a Mixed Integer Program (MIP). We'll also sometimes discuss special cases of MIP, including Mixed Integer Linear Programs (MILP), Mixed Integer Quadratic Programs (MIQP), Mixed Integer Quadratically-Constrained Programs (MIQCP), and Mixed Integer Second-Order Cone Programs (MISOCP). The Gurobi Optimizer handles all of these model classes.
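To make the preceding concrete, here is a minimal sketch of building a model; the data and names are invented for illustration and are not taken from the Gurobi example collection:

#include "gurobi_c++.h"

int main() {
  GRBEnv env;                       // container for this set of optimization runs
  GRBModel model(env);              // an empty optimization model

  // Decision variables: lower bound, upper bound, objective coefficient, type, name.
  GRBVar x = model.addVar(0.0, GRB_INFINITY, 0.0, GRB_CONTINUOUS, "x");
  GRBVar y = model.addVar(0.0, 1.0, 0.0, GRB_BINARY, "y");

  // Build a linear expression, then use it in a constraint and set an objective.
  GRBLinExpr lhs = 2 * x + 3 * y;
  GRBConstr c0 = model.addConstr(lhs <= 10.0, "c0");   // handle kept for later sketches
  model.setObjective(x + 5 * y, GRB_MAXIMIZE);
  return 0;
}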
Solving a Model
Once you have built a model, you can call GRBModel::optimize to compute a solution. By default, optimize will use the concurrent optimizer to solve LP models, the barrier algorithm to solve QP and QCP models, and the branch-and-cut algorithm to solve mixed integer models. The solution is stored in a set of attributes of the model. These attributes can be queried using a set of attribute query methods on the GRBModel, GRBVar, GRBConstr, and GRBQConstr classes.
The Gurobi algorithms keep careful track of the state of the model, so calls to GRBModel::optimize will only perform further optimization if relevant data has changed since the model was last optimized. If you would like to discard previously computed solution information and restart the optimization from scratch without changing the model, you can call GRBModel::reset.
After a MIP model has been solved, you can call GRBModel::fixedModel to compute the associated fixed model. This model is identical to the input model, except that all integer variables are fixed to their values in the MIP solution. In some applications, it is useful to compute information on this continuous version of the MIP model (e.g., dual variables, sensitivity information, etc.).
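Continuing the invented sketch from the Models section, the optimize-then-query pattern might look like this (requires <iostream>):

model.optimize();

if (model.get(GRB_IntAttr_Status) == GRB_OPTIMAL) {
  std::cout << "objective: " << model.get(GRB_DoubleAttr_ObjVal) << std::endl;
  std::cout << "x = " << x.get(GRB_DoubleAttr_X) << std::endl;   // solution value of x
}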
Infeasible Models
You have a few options if a model is found to be infeasible. You can try to diagnose the cause of the infeasibility, attempt to repair the infeasibility, or both. To obtain information that can be useful for diagnosing the cause of an infeasibility, call GRBModel::computeIIS to compute an Irreducible Inconsistent Subsystem (IIS). This method can be used for both continuous and MIP models, but you should be aware that the MIP version can be quite expensive. This method populates a set of IIS attributes.
To attempt to repair an infeasibility, call GRBModel::feasRelax to compute a feasibility relaxation for the model. This relaxation allows you to find a solution that minimizes the magnitude of the constraint violation.
Querying and Modifying Attributes
Most of the information associated with a Gurobi model is stored in a set of attributes. Some attributes are associated with the variables of the model, some with the constraints of the model, and some with the model itself. To give a simple example, solving an optimization model causes the X variable attribute to be populated. Attributes such as X that are computed by the Gurobi optimizer cannot be modified directly by the user, while others, such as the variable lower bound (the LB attribute) can.
Attributes are queried using GRBVar::get, GRBConstr::get, GRBQConstr::get, or GRBModel::get, and modified using GRBVar::set, GRBConstr::set, GRBQConstr::set, or GRBModel::set. Attributes are grouped into a set of enums by type (GRB_CharAttr, GRB_DoubleAttr, GRB_IntAttr, GRB_StringAttr). The get() and set() methods are overloaded, so the type of the attribute determines the type of the returned value. Thus, constr.get(GRB_DoubleAttr_RHS) returns a double, while constr.get(GRB_CharAttr_Sense) returns a char.
If you wish to retrieve attribute values for a set of variables or constraints, it is usually more efficient to use the array methods on the associated GRBModel object. Methods GRBModel::get and GRBModel::set include signatures that allow you to query or modify attribute values for arrays of variables or constraints.
The full list of attributes can be found in the Attributes section.
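A short sketch of the attribute interface, again continuing the invented example (the attribute constants shown are the standard C++ ones):

// Query attributes on individual objects...
double lb  = x.get(GRB_DoubleAttr_LB);       // variable lower bound
char sense = c0.get(GRB_CharAttr_Sense);     // constraint sense

// ...modify the writable ones...
x.set(GRB_DoubleAttr_LB, 1.0);
c0.set(GRB_DoubleAttr_RHS, 12.0);

// ...or query a whole array of variables at once (after optimize()).
GRBVar vars[] = {x, y};
double* xvals = model.get(GRB_DoubleAttr_X, vars, 2);
delete[] xvals;                              // caller frees arrays returned by Gurobi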
Most modifications to an existing model are done through the attribute interface (e.g., changes to variable bounds, constraint right-hand sides, etc.). The main exceptions are modifications to the constraint matrix and the objective function.
The constraint matrix can be modified in a few ways. The first is to call the chgCoeffs method on a GRBModel object to change individual matrix coefficients. This method can be used to modify the value of an existing non-zero, to set an existing non-zero to zero, or to create a new non-zero. The constraint matrix is also modified when you remove a variable or constraint from the model (through the GRBModel::remove method). The non-zero values associated with the deleted constraint or variable are removed along with the constraint or variable itself.
The model objective function can also be modified in a few ways. The easiest is to build an expression that captures the objective function (a GRBLinExpr or GRBQuadExpr object), and then pass that expression to method GRBModel::setObjective. If you wish to modify the objective, you can simply call setObjective again with a new GRBLinExpr or GRBQuadExpr object.
For linear objective functions, an alternative to setObjective is to use the Obj variable attribute to modify individual linear objective coefficients.
If your variables have piecewise-linear objectives, you can specify them using the setPWLObj method. Call this method once for each relevant variable. The Gurobi simplex solver includes algorithmic support for convex piecewise-linear objective functions, so for continuous models you should see a substantial performance benefit from using this feature. To clear a previously specified piecewise-linear objective function, simply set the Obj attribute on the corresponding variable to 0.
One very important item to note about attribute and model modifications in the Gurobi optimizer is that they are performed in a lazy fashion, meaning that they don't actually affect the model until the next call to optimize or update on that model object. This approach provides the advantage that the model remains unchanged while you are in the process of making multiple modifications. The downside, of course, is that you have to remember to call update in order to see the effect of your changes.
If you forget to call update, your program won't crash. The most common symptom of a missing update is a NOT_IN_MODEL exception, which indicates that the object you are trying to reference isn't in the model yet.
If you find the need to call update inconvenient, you can adjust the behavior of lazy updates with the UpdateMode parameter. By setting this parameter to 1, you can use newly added variables and constraints immediately for building or modifying the model. This setting does have a few downsides, though. It causes Gurobi to use a small amount of additional internal storage, and it introduces a small performance overhead. In addition, this setting may cause Gurobi to make less aggressive use of warm-start information when you modify a model and resolve it using simplex.
Managing Parameters
The Gurobi optimizer provides a set of parameters to allow you to control many of the details of the optimization process. Factors like feasibility and optimality tolerances, choices of algorithms, strategies for exploring the MIP search tree, etc., can be controlled by modifying Gurobi parameters before beginning the optimization. Parameters are set using methods on a GRBEnv object (e.g., GRBEnv::set). Current values may also be retrieved with GRBEnv::get. Parameters can be of type int, double, or string. You can also read a set of parameter settings from a file using GRBEnv::readParams, or write the set of changed parameters using GRBEnv::writeParams.
We also include an automated parameter tuning tool that explores many different sets of parameter changes in order to find a set that improves performance. You can call GRBModel::tune to invoke the tuning tool on a model. Refer to the parameter tuning tool section for more information.
One thing we should note is that each model gets its own copy of the environment when it is created. Parameter changes to the original environment therefore have no effect on existing models. Use GRBModel::getEnv to retrieve the environment associated with a particular model if you want to change a parameter for that model.
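For example (a sketch; the parameter names shown are standard Gurobi parameters):

GRBEnv env;
env.set(GRB_DoubleParam_TimeLimit, 60.0);      // applies to models created from env afterwards
env.set(GRB_IntParam_OutputFlag, 0);

GRBModel model(env);                           // gets its own copy of the environment
model.getEnv().set(GRB_IntParam_Threads, 4);   // affects only this model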
The full list of Gurobi parameters can be found in the Parameters section.
Memory Management
Memory management must always be considered in C++ programs. In particular, the Gurobi library and the user program share the same C++ heap, so the user must be aware of certain aspects of how the Gurobi library uses this heap. The basic rules for managing memory when using the Gurobi optimizer are as follows:
• As with other dynamically allocated C++ objects, GRBEnv or GRBModel objects should be freed using the associated destructors. In other words, given a GRBModel object m, you should call delete m when you are no longer using m.
• Objects that are associated with a model (e.g., GRBConstr, GRBSOS, and GRBVar objects) are managed by the model. In particular, deleting a model will delete all of the associated objects. Similarly, removing an object from a model (using GRBModel::remove) will also delete the object.
• Some Gurobi methods return an array of objects or values. For example, GRBModel::addVars returns an array of GRBVar objects. It is the user's responsibility to free the returned array (using delete[]). The reference manual indicates when a method returns a heap-allocated result.
One consequence of these rules is that you must be careful not to use an object once it has been freed. This is no doubt quite clear for environments and models, where you call the destructors explicitly, but may be less clear for constraints and variables, which are implicitly deleted when the associated model is deleted.
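A minimal sketch of these rules (sizes and names invented):

GRBEnv* env = new GRBEnv();
GRBModel* model = new GRBModel(*env);

GRBVar* vars = model->addVars(10, GRB_CONTINUOUS);   // heap-allocated array of handles
// ... build and solve ...
delete[] vars;    // free the array returned by addVars
delete model;     // deleting the model also deletes its variables and constraints
delete env;       // free the environment last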
Monitoring Progress - Logging and Callbacks
Progress of the optimization can be monitored through Gurobi logging. By default, Gurobi will send output to the screen. A few simple controls are available for modifying the default logging behavior. If you would like to direct output to a file as well as to the screen, specify the log file name in the GRBEnv constructor. You can modify the LogFile parameter if you wish to redirect the log to a different file after creating the environment object. The frequency of logging output can be controlled with the DisplayInterval parameter, and logging can be turned off entirely with the OutputFlag parameter. A detailed description of the Gurobi log file can be found in the Logging section.
More detailed progress monitoring can be done through the GRBCallback class. The GRBModel::setCallback method allows you to receive a periodic callback from the Gurobi optimizer. You do this by sub-classing the GRBCallback abstract class, and writing your own callback() method on this class. You can call GRBCallback::getDoubleInfo, GRBCallback::getIntInfo, GRBCallback::getStringInfo, or GRBCallback::getSolution from within the callback to obtain additional information about the state of the optimization.
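A sketch of a monitoring callback (the callback constants are the standard ones; the printed message is invented; assumes gurobi_c++.h and <iostream> are included):

class ProgressCallback : public GRBCallback {
 protected:
  void callback() override {
    if (where == GRB_CB_MIP) {
      double best  = getDoubleInfo(GRB_CB_MIP_OBJBST);   // best incumbent objective
      double bound = getDoubleInfo(GRB_CB_MIP_OBJBND);   // best objective bound
      std::cout << "absolute gap so far: " << best - bound << std::endl;
    }
  }
};

// ...
ProgressCallback cb;
model.setCallback(&cb);
model.optimize();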
Modifying Solver Behavior - Callbacks
Callbacks can also be used to modify the behavior of the Gurobi optimizer. The simplest control callback is GRBCallback::abort, which asks the optimizer to terminate at the earliest convenient point. Method GRBCallback::setSolution allows you to inject a feasible solution (or partial solution) during the solution of a MIP model. Methods GRBCallback::addCut and GRBCallback::addLazy allow you to add cutting planes and lazy constraints during a MIP optimization, respectively.
Error Handling
All of the methods in the Gurobi C++ library can throw an exception of type GRBException. When an exception occurs, additional information on the error can be obtained by retrieving the error code (using method GRBException::getErrorCode), or by retrieving the exception message (using method GRBException::getMessage). The list of possible error return codes can be found in the Error Codes section.
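The usual pattern is a sketch like this:

try {
  GRBEnv env;
  GRBModel model(env);
  // ... build and optimize ...
} catch (GRBException& e) {
  std::cerr << "Gurobi error " << e.getErrorCode() << ": " << e.getMessage() << std::endl;
}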
|
# Measuring the Exposure to Sound Samples in Subjective Experiments
Yan, L; Chen, K; Gomez, F; Stoop, R (2012). Measuring the Exposure to Sound Samples in Subjective Experiments. In: Acoustics 2012, Hong Kong, China, 13 May 2012 - 18 May 2012.
## Abstract
Traditional measures of environmental noise exposure concentrate on time and power (e.g. Ldn). For short measurements, time is, however, of secondary importance and the approach may come up with misleading results. In this paper, we propose a novel method based on short-term dose values evaluated along the playing time of the sound samples, to solve this problem. A comprehensive study on potentially influencing factors is carried out, discussing the partitioning method for short-term period analysis, the statistical treatment of the short-term dose values and four different frequency weightings. Eleven indices are then used to measure the exposure of the fixed duration sound sample. This lays the groundwork for the dose-annoyance relationship via subjective experiments.
|
# Circular Measures (IGCSE A LEVEL 9709)
An arc equal in length to the radius of a circle subtends an angle of $1$ radian at the centre.
Useful Relations
\begin{aligned} 2\pi \text{ radians}=360 \text{ degrees}\\\\ \pi \text{ radians}=180 \text{ degrees}\\\\ 1\text{ radian}=\dfrac{180}{\pi} \text{ degrees}\\\\ 1\text{ degree}=\dfrac{\pi}{180} \text{ radians}\\\\ \end{aligned}
Arc Length and Area of a Sector
• When $\theta$ is measured in radians, the length of arc $AB$ is $r\theta$ .
• When $\theta$ is measured in radians, the area of sector $AOB$ is $\dfrac{1}{2}r^2\theta$ .
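For a quick numerical check of these two formulas (the values are chosen only for illustration): a sector with radius $r=6$ cm and angle $\theta=\dfrac{\pi}{4}$ radians has
\begin{aligned} \text{arc length} &= r\theta = 6\cdot\dfrac{\pi}{4} = \dfrac{3\pi}{2} \text{ cm},\\\\ \text{area} &= \dfrac{1}{2}r^2\theta = \dfrac{1}{2}\cdot 6^2\cdot\dfrac{\pi}{4} = \dfrac{9\pi}{2} \text{ cm}^2. \end{aligned}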
1. The diagram shows an equilateral triangle, $PQR$, with side length $5$ cm. $M$ is the midpoint of the line $QR$. An arc of a circle, centre $P$, touches $QR$ at $M$ and meets $PQ$ at $X$ and $PR$ at $Y$. Find in terms of $\pi$ and $\sqrt{3}:$
(a) the total perimeter of the shaded region.
(b) the total area of the shaded region.
Join $PM$; then $\triangle P Q M$ and $\triangle P R M$ are $30^{\circ}-60^{\circ}$ right triangles,
\begin{aligned} &\\ \therefore \ P M &=\dfrac{\sqrt{3}}{2}P Q\\\\ &=\dfrac{5 \sqrt{3}}{2}\\\\ \text{arc length of } X Y &=P M \cdot \dfrac{\pi}{3}\\\\ &=\dfrac{5 \sqrt{3}}{2} \cdot \dfrac{\pi}{3}\\\\ &=\dfrac{5 \sqrt{3} \pi}{6} \mathrm{~cm}\\\\ \text{perimeter of shaded region} &=5+\dfrac{5 \sqrt{3} \pi}{6} \mathrm{~cm}\\\\ \text{total area of shaded region}& = \text{area of triangle - area of sector}&\\\\ &=\dfrac{\sqrt{3}}{4} \times 5^{2}-\dfrac{1}{2} \times \dfrac{5 \sqrt{3}}{2} \times \dfrac{\pi}{6} \\\\ &=\dfrac{25 \sqrt{3}}{4}-\dfrac{5 \sqrt{3} \pi}{24} \\\\ &=\dfrac{5 \sqrt{3}}{24}[30-\pi] \mathrm{cm}^{2} \end{aligned}
2. In the diagram, $OAB$ is a sector of a circle with centre $O$ and radius $8$ cm. Angle $BOA$ is $\alpha$ radians. $OAC$ is a semicircle with diameter $OA$. The area of the semicircle $OAC$ is twice the area of the sector $OAB$.
(a) Find $\alpha$ in terms of $\pi$.
(b) Find the perimeter of the complete figure in terms of $\pi$.
\begin{aligned} \text{radius of sector } O A B &=8 \mathrm{~cm}\\\\ \text{radius of semicircle } O A C &=4 \mathrm{~cm}\\\\ \text{area of sector } O A B &=\dfrac{1}{2}\left(8^{2}\right) \alpha\\\\ &=32 \alpha\\\\ \text{area of semicircle } O A C &=\dfrac{1}{2}\left(4^{2}\right) \pi\\\\ &=8 \pi \mathrm{cm}^{2}\\\\ \text{By the problem},&\\\\ 8 \pi &=2(32 \alpha)\\\\ \alpha &=\dfrac{\pi}{8}\\\\ \therefore\ \text{length of arc } A B=8 \alpha\\\\ &=8\left(\dfrac{\pi}{8}\right) \\\\ &=\pi \mathrm{cm}\\\\ \end{aligned}
\begin{aligned} &\text{perimeter } \text{of given figure}\\\\ &=O B+\text { length of arc } A B+\text { length of semicircle } \\\\ &=8+\pi+4 \pi \\\\ &=8+5 \pi \end{aligned}
3. The diagram shows triangle $ABC$ in which $AB$ is perpendicular to $BC$. The length of $AB$ is $4$ cm and angle $CAB$ is $\alpha$ radians. The arc $DE$ with centre $A$ and radius $2$ cm meets $AC$ at $D$ and $AB$ at $E$ . Find, in terms of $\alpha$,
(a) the area of the shaded region,
(b) the perimeter of the shaded region.
\begin{aligned} \text{length of arc } DE &=2\alpha \text{ cm }\\\\ BE &=2 \text{ cm }\\\\ \dfrac{BC}{AB} &=\ \tan\alpha\\\\ \therefore\ BC &=\ AB\tan\alpha\\\\ &=\ 4\tan\alpha \text{ cm }\\\\ \dfrac{AC}{AB} &=\ \sec\alpha\\\\ \therefore\ AC &=\ AB\sec\alpha\\\\ &=\ 4\sec\alpha \text{ cm }\\\\ \therefore\ DC &=\ 4\sec\alpha - 2\\\\ \end{aligned}
\begin{aligned} & \text{area of shaded region } \\\\ =&\ \text{area of triangle} - \text{ area of sector}\\\\ =&\ \dfrac{1}{2}(AB \cdot BC) - \dfrac{1}{2}AD^2\cdot\alpha\\\\ =&\ \dfrac{1}{2}(4 \cdot 4\tan\alpha) - \dfrac{1}{2}\cdot2^2\cdot\alpha\\\\ =&\ 8\tan\alpha - 2\cdot\alpha\\\\ =&\ 2(4\tan\alpha - \alpha)\\\\ & \text{perimeter of shaded region } \\\\ =&\ DC + BC + BE + \text{length of arc } DE\\\\ =&\ 4\sec\alpha - 2 + 4\tan\alpha + 2 + 2\alpha\\\\ =&\ 2(2\sec\alpha + 2\tan\alpha + \alpha) \end{aligned}
4. The diagram shows a circle with centre $A$ and radius $r$. Diameters $CAD$ and $BAE$ are perpendicular to each other. A larger circle has centre $B$ and passes through $C$ and $D$.
(a) Show that the radius of the larger circle is $r\sqrt{2}$.
(b) Find the area of the shaded region in terms of $r$.
$B C = \text{ radius of larger circle.}\\\\$
$\triangle B A C \text{is a } 45^{\circ}-45^{\circ} \text{ right triangle.}\\\\$
$\therefore\ B C=r \sqrt{2}$
\begin{aligned} & \\ A_{1}=\text { area of semicircle } C E D &=\dfrac{1}{2} \pi r^{2} \\\\ A_{2}=\text { area of sector } C B D &=\dfrac{1}{2} B C^{2} \times \angle C B D \\\\ &=\dfrac{1}{2}(r \sqrt{2})^{2} \times\left(\dfrac{\pi}{2}\right) \\\\ &=\dfrac{1}{2} \pi r^{2}\\\\ A_{3}=\text { area of } \triangle C B D &=\dfrac{1}{2} B C \times B D \\\\ &=\dfrac{1}{2}(r \sqrt{2})(r \sqrt{2}) \\\\ &=r^{2} \\\\ \therefore\ \text { area of shaded region } &=A_{1}-\left(A_{2}-A_{3}\right) \\\\ &=\dfrac{1}{2} \pi r^{2}-\left(\dfrac{1}{2} \pi r^{2}-r^{2}\right) \\\\ &= r^{2} \end{aligned}
5. The diagram shows a sector $OAB$ of a circle with centre $O$ and radius $r$. Angle AOB is $\theta$ radians. The point $C$ on $OA$ is such that $BC$ is perpendicular to $OA$. The point $D$ is on $BC$ and the circular arc $AD$ has centre $C$. (a) Find $AC$ in terms of $r$ and $\theta$. (b) Find the perimeter of the shaded region $ABD$ when $\theta= \dfrac{1}{3}\pi$ and $r = 4$, giving your answer as an exact value.
\begin{aligned} \triangle O B C & \text{ is a right triangle.}\\\\ \therefore B C &=r \sin \theta \\\\ O C &=r \cos \theta \\\\ A C &=O A-O C \\\\ &=r-r \cos \theta \\\\ &=r(1-\cos \theta) \\\\ C D &=A C\ \ (\because \text { radii of small sector }) \\\\ \therefore\ C D &=r(1-\cos \theta)\\\\ r=4, \theta &=\dfrac{\pi}{3} \text { (given) } \\\\ \textbf { length of arc } A D &=A C \times \dfrac{\pi}{2} \\\\ &=\dfrac{\pi r}{2}(1-\cos \theta) \\\\ &=\dfrac{\pi \times 4}{2}\left(1-\cos \dfrac{\pi}{3}\right) \\\\ &=\pi \\\\ B D &=B C-C D \\\\ &=r \sin \theta-r(1-\cos \theta) \\\\ &=r(\sin \theta+\cos \theta-1)\\\\ &=4\left(\sin \dfrac{\pi}{3}+\cos \dfrac{\pi}{3}-1\right) \\\\ &=4\left(\dfrac{\sqrt{3}}{2}+\dfrac{1}{2}-1\right) \\\\ &=2(\sqrt{3}-1) \\\\ \textbf { length of arc } A B &=r \theta \\\\ &=\dfrac{4 \pi}{3} \\\\ \therefore\ \textbf{ Perimeter of shaded} & \textbf{ region }\\\\ &=\pi+\dfrac{4 \pi}{3}+2(\sqrt{3}-1) \\\\ &=\dfrac{7 \pi}{3}+2(\sqrt{3}-1) \end{aligned}
6. The diagram shows a sector, $P O Q$, of a circle, centre $O$, with radius $4 \mathrm{~cm}$. The length of arc $P Q$ is $7 \mathrm{~cm}$. The lines $P X$ and $Q X$ are tangents to the circle at $P$ and $Q$, respectively.
(a) Find angle $P O Q$, in radians.
(b) Find the length of $P X$.
(c) Find the area of the shaded region.
radius $(r)=4 \mathrm{~cm}\\\\$
length of arc $P Q(s)=7 \mathrm{~cm}\\\\$
$\angle P O Q(\theta)=\dfrac{s}{r}=\dfrac{7}{4} \text{ radians}\\\\$
$\triangle P O X \text{ is a right triangle.}\\\\$
\begin{aligned} \therefore P X &=O P \tan \left(\dfrac{\theta}{2}\right) \\\\\ &=4 \tan \left(\dfrac{7}{8}\right) \mathrm{cm}\\\\ \end{aligned}
\begin{aligned} &\text{area of shaded region}\\\\ =&\ 2 \times \text { area of } \triangle P O X-\text { area of sector } POQ \\\\ =&\ 2\left(\dfrac{1}{2} \times O P \times P X\right)-\dfrac{1}{2} O P^{2} \times \theta \\\\ =&\ 16 \tan \left(\dfrac{7}{8}\right)-\dfrac{1}{2} \times 16 \times \dfrac{7}{4} \\\\ =&\ 16 \tan \left(\dfrac{7}{8}\right)-14 \mathrm{~cm}^{2} \end{aligned}
7. The diagram shows a sector, $P O R$, of a circle, centre $O$, with radius $8 \mathrm{~cm}$ and sector angle $\dfrac{\pi}{3}$ radians. The lines $O R$ and $Q R$ are perpendicular and $O P Q$ is a straight line. Find the exact area of the shaded region.
$\triangle O Q R \text{ is a } 30^{\circ}-60^{\circ}\text{ right triangle.}\\\\$
$\therefore\ QR=8 \sqrt{3}\\\\$
\begin{aligned} \text{area of }\triangle OQR &=\dfrac{1}{2} \times 8 \times 8 \sqrt{3} \\\\ &=32 \sqrt{3} \mathrm{~cm}^{2} \\\\ \text{area of sector } OPR &=\dfrac{1}{2} \times 8^{2} \times \dfrac{\pi}{3}\\\\ &=\dfrac{32 \pi}{3} \mathrm{~cm}^{2}\\\\ \end{aligned}
\begin{aligned} & \text{ area of shaded region}\\\\ =&\ \text{area of } \triangle O Q R - \text{area of sector } OPR\\\\ =&\ 32 \sqrt{3}-\dfrac{32 \pi}{3} \\ =&\ \dfrac{32}{3}(3 \sqrt{3}-\pi) \mathrm{cm}^{2} \end{aligned}
8. The diagram shows a sector, $A O B$, of a circle, centre $O$, with radius $5 \mathrm{~cm}$ and sector angle $\dfrac{\pi}{3}$ radians. The lines $A P$ and $B P$ are tangents to the circle at $A$ and $B$, respectively.
(a) Find the exact length of $A P$.
(b) Find the exact area of the shaded region.
$\text{Since } AP \text{ is a tangent,}\\\\$
$\triangle O A P \text{ is a right triangle.}\\\\$
$\therefore\ A P=5 \tan \left(\dfrac{\pi}{6}\right)\\\\$
$\quad \text{area of shaded region}\\\\$
$=\ 2 \times$ area of $\triangle O A P$ - area of sector $A O B\\\\$
$=\ 2\left(\dfrac{1}{2} \times O A \times A P\right)-\dfrac{1}{2} O A^{2} \times \angle A O B\\\\$
$=\ 25 \tan \left(\dfrac{\pi}{6}\right)-\dfrac{25}{2} \times \dfrac{\pi}{3}\\\\$
$=\ 25 \tan \left(\dfrac{\pi}{6}\right)-\dfrac{25 \pi}{6}\\\\$
$=\ 25\left[\tan \left(\dfrac{\pi}{6}\right)-\dfrac{\pi}{6}\right]$
9. The diagram shows three touching circles with radii $6 \mathrm{~cm}, 4 \mathrm{~cm}$ and $2 \mathrm{~cm}$. Find the area of the shaded region.
Since $A C^{2}+B C^{2}=8^{2}+6^{2}=100$ and $A B^{2}=10^{2}=100.\\\\$
$A B^{2}=A C^{2}+B C^{2}.\\\\$
$\therefore \triangle A B C$ is a right triangle.
\begin{aligned} &\\ \angle A&=\tan ^{-1}\left(\dfrac{3}{4}\right) \\\\ \angle B&=\tan ^{-1}\left(\dfrac{4}{3}\right) \\\\ \angle C&=\dfrac{\pi}{2}\\\\ \text { area of } \triangle A B C &=\dfrac{1}{2} \times A C \times B C \\\\ &=\dfrac{1}{2} \times 8 \times 6 \\\\ &=24 \mathrm{~cm}^{2} \\\\ P=\dfrac{1}{2} \times 6^{2} \times \tan ^{-1}\left(\dfrac{3}{4}\right) &=18 \tan ^{-1}\left(\dfrac{3}{4}\right) \mathrm{cm}^{2} \\\\ Q=\dfrac{1}{2} \times 2^{2} \times \dfrac{\pi}{2} &=\pi \mathrm{cm}^{2} \\\\ R=\dfrac{1}{2} \times 4^{2} \times \tan ^{-1}\left(\dfrac{4}{3}\right) &=8 \tan ^{-1}\left(\dfrac{4}{3}\right) \mathrm{cm}^{2}\\\\ \end{aligned}
\begin{aligned} &\text{area of shaded region}\\\\ =& \ \text { area of } \triangle A B C-(P+Q+R) \\ =&\ 24-\left(18 \tan ^{-1}\left(\dfrac{3}{4}\right)+\pi+8 \tan ^{-1}\left(\dfrac{4}{3}\right)\right) \\ =&\ 1.857 \mathrm{~cm}^{2} \end{aligned}
10. The diagram shows a semicircle, centre $O$, with radius $8 \mathrm{~cm} .$ $F H$ is the arc of a circle, centre $E$. Find the area of:
(a) triangle $E O F$
(b) sector $F O G$
(c) sector $F E H$
\begin{aligned} &\textbf{area of } \triangle E O F\\\\ =&\ \dfrac{1}{2} \times E O \times F I \\\\ =&\ \dfrac{1}{2} \times 8 \times 8 \sin (\pi-2) \\\\ =&\ 32 \sin (\pi-2) \mathrm{cm}^{2}\\\\ &\textbf{area of sector } FOG\\\\ =&\ \dfrac{1}{2} \times O F^{2} \times \angle F O G \\\\ =&\ \dfrac{1}{2} \times 8^{2} \times(\pi-2) \\\\ =&\ 32(\pi-2) \mathrm{cm}^{2} \\\\ \angle F E H =&\ \dfrac{1}{2} \angle F O G \\\\ =&\ \dfrac{1}{2}(\pi-2)\\\\ E F^{2}=& 8^{2}+8^{2}-2\left(8^{2}\right) \cos (2) \\\\ =& 128(1-\cos (2)) \\\\ & \textbf{ area of sector FEH } \\\\ =&\ \dfrac{1}{2} E F^{2} \times \angle F E H \\\\ =&\ \dfrac{1}{2} \times 128 \times(1-\cos (2)) \times \dfrac{1}{2}(\pi-2) \\\\ =&\ 32(\pi-2)(1-\cos (2)) \mathrm{cm}^{2}\\\\ & \ \textbf{area of shaded region}\\\\ =&\ \textbf{area of sector } FOG- \textbf{area of } FOH\\\\ =&\ \textbf{ area of sector } FOG- (\textbf{area of sector } FEH - \textbf{area of } \triangle EOF) \\\\ =&\ 32(\pi-2)-32(\pi-2)(1-\cos (2))+32 \sin (\pi-2) \\\\ =&\ 32[(\pi-2) \cos (2)+\sin (\pi-2)] \\\\ =&\ 13.895 \mathrm{~cm}^{2}\\\\ \end{aligned}
11. The diagram shows a sector, $E O G$, of a circle, centre $O$, with radius $r \mathrm{~cm} .$ The line $G F$ is a tangent to the circle at $G$, and $E$ is the midpoint of $O F$.
(a) The perimeter of the shaded region is $P \mathrm{~cm}$. Show that $P=\dfrac{r}{3}(3+3 \sqrt{3}+\pi)$.
(b) The area of the shaded region is $A \mathrm{~cm}^{2}$. Show that $A=\dfrac{r^{2}}{6}(3 \sqrt{3}-\pi)$.
Since $E$ is the midpoint of $OF$, $O E=E F=r \mathrm{~cm}.\\\\$
Since $GF$ is a tangent, $OG\perp GF.\\\\$
$\therefore\ \triangle O G F \text{ is a } 30^{\circ}-60^{\circ} \text{ right triangle.}\\\\$
$\therefore\ G F=\sqrt{3}\,r \mathrm{~cm}\\\\$
length of arc $EG$ $=r \cdot \dfrac{\pi}{3} \mathrm{~cm}\\\\$
\begin{aligned} \therefore P &=r+\sqrt{3} r+r \cdot \dfrac{\pi}{3} \\\\ &=\dfrac{r}{3}(3+3 \sqrt{3}+\pi)\\\\ \end{aligned}
\begin{aligned} &\textbf{ area of sector OEG}\\\\ =&\ \dfrac{1}{2} r^{2}\left(\dfrac{\pi}{3}\right) \\\\ =&\ \dfrac{\pi}{6} r^{2}\\\\ &\textbf{area of shaded region}\\\\ =& \text { area of triangle - area of sector } \\\\ =&\ \dfrac{1}{2} r \cdot \sqrt{3} r-\dfrac{\pi}{6} r^{2} \\\\ =&\ \dfrac{\sqrt{3}}{2} r^{2}-\dfrac{\pi}{6} r^{2} \\\\ =&\ \dfrac{r^{2}}{6}(3 \sqrt{3}-\pi) \end{aligned}
12. The diagram shows two circles with radius $r \mathrm{~cm}$. The centre of each circle lies on the circumference of the other circle. Find, in terms of $r$, the exact area of the shaded region.
Let $A$ and $B$ denote the area of respective regions.
$\therefore\ A=$ area of equilateral $\triangle\\\\$
$\quad\ =\dfrac{\sqrt{3}}{4} r^{2}\\\\$
\begin{aligned} A+B&= \text{ area of sector}\\\\ &=\dfrac{1}{2} r^{2}\left(\dfrac{\pi}{3}\right) \\\\ &=\dfrac{\pi}{6} r^{2} \\\\ B &=A+B-A \\\\ &=\dfrac{\pi}{6} r^{2}-\dfrac{\sqrt{3}}{4} r^{2} \\\\ &=\dfrac{r^{2}}{12}(2 \pi-3 \sqrt{3})\\\\ \textbf{Area of } & \textbf{ shaded region}\\\\ &=2 A+4 B \\\\ &=2(A+B)+2 B \\\\ &=2\left(\dfrac{\pi}{6} r^{2}\right)+\dfrac{r^{2}}{6}(2 \pi-3 \sqrt{3}) \\\\ &=\dfrac{r^{2}}{6}(4 \pi-3 \sqrt{3}) \end{aligned}
13. The diagram shows a square of side length $10$ cm. A quarter circle, of radius $10$ cm, is drawn from each vertex of the square. Find the exact area of the shaded region.
Let $A, B$ and $C$ denote the areas of the respective regions. Now, consider the diagram below.
$P Q=Q R=P R=$ radii of congruent circles.
$\therefore$ area of $\triangle P Q R=\dfrac{\sqrt{3}}{4}\left(10^{2}\right)=25 \sqrt{3} \mathrm{~cm}^{2}\\\\$
area of sector $P Q R=\dfrac{1}{2}\left(10^{2}\right) \dfrac{\pi}{3}\\\\$
$\hspace{3cm} =\dfrac{50 \pi}{3} \mathrm{~cm}^{2}\\\\$
Let $E$ be the area of yellow-shaded region as shown below.
\begin{aligned} \therefore\ E &=\text { area of sector }-\text { area of } \triangle \\\\ &=\dfrac{50 \pi}{3}-25 \sqrt{3} \mathrm{~cm}^{2} \\\\ B+C+E &=\dfrac{1}{2}\left(10^{2}\right)\left(\dfrac{\pi}{6}\right) \\\\ &=\dfrac{25 \pi}{3} \mathrm{~cm}^{2} \\\\ \therefore\ B+C &=\dfrac{25 \pi}{3}-\dfrac{50 \pi}{3}+25 \sqrt{3} \\\\ &=25 \sqrt{3}-\dfrac{25 \pi}{3} \mathrm{~cm}^{2}\\\\ \end{aligned}
Now return to the given diagram.
\begin{aligned} A &=\text { area of square }-4(B+C) \\\\ &=10^{2}-4\left[25 \sqrt{3}-\dfrac{25 \pi}{3}\right] \\\\ &=100-100\left[\sqrt{3}-\dfrac{\pi}{3}\right] \\\\ &=100\left[1-\sqrt{3}+\dfrac{\pi}{3}\right] \mathrm{cm}^{2} \end{aligned}
14. The diagram shows a circle with radius $1$ cm, centre $O$. Triangle AOB is right angled and its hypotenuse AB is a tangent to the circle at $P$. Angle $BAO = x$ radians.
(a) Find an expression for the length of AB in terms of $\tan x$.
(b) Find the value of $x$ for which the two shaded areas are equal.
\begin{aligned} \dfrac{1}{A P} &=\tan x \\\\ A P &=\dfrac{1}{\tan x} \\\\ \dfrac{P B}{1} &=\tan x \\\\ P B &=\tan x \\\\ A B &=A P+P B \\\\ &=\tan x+\dfrac{1}{\tan x}\\\\ \textbf{Area } & \textbf{ of triangle}\\\\ &=\dfrac{1}{2} \times A B \times O P \\\\ &=\dfrac{1}{2}\left(\tan x+\dfrac{1}{\tan x}\right)\\\\ \textbf{Area } & \textbf{ of sector}\\\\ &=\dfrac{1}{2}\left(\dfrac{3 \pi}{2}\right) \\\\ &=\dfrac{3 \pi}{4}\\\\ \textbf{By the problem}&\\\\ \dfrac{1}{2}\left(\tan x+\dfrac{1}{\tan x}\right) &=\dfrac{3 \pi}{4} \\\\ \dfrac{1}{2}\left(\dfrac{\sin x}{\cos x}+\dfrac{\cos x}{\sin x}\right) &=\dfrac{3 \pi}{4} \\\\ \dfrac{\sin ^{2} x+\cos ^{2} x}{2 \sin x \cos x} &=\dfrac{3 \pi}{4} \\\\ \dfrac{1}{\sin 2 x} &=\dfrac{3 \pi}{4} \\\\ \sin 2 x &=\dfrac{4}{3 \pi} \end{aligned}
15. The diagram shows a sector, $AOB$, of a circle, centre $O$, with radius $R$ cm and sector angle $\dfrac{π}{3}$ radians.
An inner circle of radius $r$ cm touches the three sides of the sector.
(a) Show that $R = 3r$.
(b) Show that $\dfrac{\text{area of inner circle}}{\text{area of sector}}=\dfrac{2}{3}$ .
Let $X$, $P$ and $Y$ be the points of tangency, as shown in the figure.
The line joining the points $O$ and $P$ passes through the centre of the inscribed circle.
Let that centre be $Q.\\\\$
$\therefore Q X=Q Y=QP=r\\\\$
Since $OP$ bisects $\angle AOB,\\\\$
$\angle A O P=\angle P O B=\dfrac{\pi}{6} \mathrm{rad}\\\\$
$\therefore \triangle OQX$ and $\triangle OQY$ are $30^{\circ}-60^{\circ}$ right triangles.
Thus, $OQ =2 r$ and $O P=R=3 r\\\\$.
\begin{aligned} \dfrac{\text { Area of inner circle }}{\text { Area of sector }}&=\dfrac{\pi r^{2}}{\dfrac{1}{2} R^{2}\left(\dfrac{\pi}{3}\right)}\\\\ &=\dfrac{\pi r^{2}}{\dfrac{1}{2} \cdot \dfrac{\pi}{3} \cdot 9 r^{2}} \\\\ &=\dfrac{2}{3} \end{aligned}
16. The diagram shows a metal plate made by fixing together two pieces, $OABCD$ (shaded) and $OAED$ (unshaded). The piece $OABCD$ is a minor sector of a circle with centre $O$ and radius $2r$. The piece $OAED$ is a major sector of a circle with centre $O$ and radius $r$. Angle $AOD$ is $\alpha$ radians. Simplifying your answers where possible, find, in terms of $\alpha, \pi$ and $r$,
(a) the perimeter of the metal plate,
(b) the area of the metal plate.
It is now given that the shaded and unshaded pieces are equal in area.
(c) Find $\alpha$ in terms of $\pi$.
\begin{aligned} l_{1} &=\text { length of arc AED } \\\\ &=r(2 \pi-\alpha) \\\\ l_{2} &=\text { length of arc BD } \\\\ &=2 r \alpha\\\\ \textbf{Perimeter } & \textbf{ of metal plate}\\\\ &=l_{1}+l_{2}+2 r \\\\ &=r(2 \pi-\alpha)+2 r \alpha+2 r \\\\ &=r(2 \pi-\alpha+2 \alpha+2) \\\\ &=r(2 \pi+\alpha+2)\\\\ \textbf{Area of } & \textbf{ metal plate}\\\\ &=\dfrac{1}{2}(2r)^{2}\alpha+\dfrac{1}{2}r^{2}(2\pi-\alpha) \\\\ &=2r^{2}\alpha+\pi r^{2}-\dfrac{1}{2}r^{2}\alpha \\\\ &=\dfrac{r^{2}}{2}(2\pi+3\alpha)\\\\ \textbf{By the problem,}&\\\\ \text { shaded area } &=\text { unshaded area } \\\\ \dfrac{1}{2}(2 r)^{2} \alpha &=\dfrac{1}{2} r^{2}(2 \pi-\alpha) \\\\ 4 \alpha &=2 \pi-\alpha \\\\ 5 \alpha &=2 \pi \\\\ \alpha &=\dfrac{2 \pi}{5} \end{aligned}
Respectfully awaiting the reader's views!
|
# Detect outliers in mixture of Gaussians
I have a ton of univariate samples ($x_i \in \mathbb{R}^+$). I'd like an automated method to check for outliers and identify the outliers, if any are present. A reasonable model for the distribution of the non-outliers is a mixture of Gaussians. The number of Gaussians in the mixture and their parameters are not known a priori. Can you suggest a simple method for identifying outliers? Do you have any recommendations? It'd be nice if it were simple to code up in Python.
Something quick and dirty -- say, easy to understand, easy to implement, and pretty effective-- beats something complex but optimal. For example, I'm a bit reluctant to wade into something fancy based upon expectation maximization.
Example parameters: I might have 10,000 samples or so. The distribution of non-outliers might be a mixture of 2 Gaussians; or I might have a mixture of a few hundred Gaussians.
Update: People have asked how anything could possibly be an outlier, given these assumptions. (Presumably, the unstated concern is that this problem may be unsolvable: if every data set is always explainable by some mixture model, then there's no basis to ever identify anything as an outlier.) That's a fair question, so let me try to respond. In my application domain, I can reasonably assume that there will be dozens of samples from each component Gaussian. e.g., I might have 40,000 samples from a mixture of 100 Gaussians, where each Gaussian component has a probability no lower than 0.001 (so it is almost guaranteed that I have at least 10 samples from each Gaussian). I realize I didn't state this assumption earlier, and I apologize for that. However, with this additional assumption, I believe the problem is solvable. There exist examples of data sets where one or more points can be considered outliers (they cannot reasonably be explained by any mixture model). For example, consider a data set that has a single isolated point that is very far from all others: if it's far enough away, it can't be explained by the Gaussian mixture model and thus can be recognized as an outlier. In conclusion, I believe that the problem is well-defined and is solvable (given the additional assumption stated here): there do exist example situations where some points can reasonably be identified as outliers.
Note that I'm not trying to propose a special or unusual definition of outlier. I am happy to use the standard notion of outlier (e.g., a point that cannot reasonably be explained as having been generated by the hypothesized process, because it is too unlikely to have been generated by that process).
-
10,000 samples how big each? – Peter Ellis Feb 11 '12 at 9:33
Each sample is a single real number. The samples are $x_1,x_2,\dots,x_n$, where $n$ might be 10,000 or so, where $x_i \in \mathbb{R}^+$. – D.W. Feb 11 '12 at 17:44
Your new definition of an "outlier" really says you're looking for clusters that are unusually small in size. After all, this definition does not stipulate that an "outlier" should be unusually large or small; it only refers to small groups of values that are "far from all others." This suggests you simply apply clustering methods (of which there are many). – whuber Feb 11 '12 at 19:30
@whuber, I don't understand. Perhaps there's been a miscommunication? I'm not looking for a cluster of outliers, and I'm not saying I expect outliers to appear in clusters. And I didn't intend to provide anything different from the standard definition of outlier. I'm merely pointing out that there are cases when one can clearly identify that a particular point is likely an outlier, despite the fact that non-outliers come from a mixture model. Am I making no sense? – D.W. Feb 12 '12 at 0:05
Your edit defines outliers as being part of clusters of fewer than 10 points that are far from all others. – whuber Feb 12 '12 at 3:36
I have suggested, in comments, that an "outlier" in this situation might be defined as a member of a "small" cluster centered at an "extreme" value. The meanings of the quoted terms need to be quantified, but apparently they can be: "small" would be a cluster of less than 10 values and "extreme" can be determined as outlying relative to the set of component means in the mixture model. In this case, outliers can be found with simple post-processing of any reasonable cluster analysis of the data.
Choices have to be made in fine-tuning this approach. These choices will depend on the nature of the data and therefore cannot be completely specified in a general answer like this. Instead, let's analyze some data. I use R due to its popularity on this site and succinctness (even compared to Python).
First, create some data as described in the question:
set.seed(17) # For reproducible results
centers <- rnorm(100, mean=100, sd=20)
x <- c(centers + rnorm(100*100, mean=0, sd=1),
rnorm(100, mean=250, sd=1),
rnorm(9, mean=300, sd=1))
This command specifies 102 components: 100 of them are situated like 100 independent draws from a normal(100, 20) distribution (and will therefore tend to lie between 50 and 150); one of them is centered at 250, and one is centered at 300. It then draws 100 values independently from each component (using a common standard deviation of 1) but, in the last component centered at 300, it draws only 9 values. According to the characterization of outliers, the 100 values centered at 250 do not constitute outliers: they should be viewed as a component of the mixture, albeit situated far from the others. However, one cluster of nine high values consists entirely of outliers. We need to detect these but no others.
Most omnibus univariate outlier-detection procedures would either not detect any of these 109 highest values or would indicate all 109 are outliers.
Suppose we have a good sense of the standard deviations of the components (obtained from prior information or from exploring the data). Use this to construct a kernel density estimate of the mixture:
d <- density(x, bw=1, n=1000)
plot(d, main="Kernel density")
The (almost invisible) blip at the extreme right qualifies as a set of outliers: its small area (less than 10/10109 = 0.001 of the total) indicates it consists of just a few values and its situation at one extreme of the x-axis earns it the appellation of "outlier" rather than "inlier." Checking these things is straightforward:
x0 <- d$x[d$y > 1000/length(x) * dnorm(5)]
gaps <- tail(x0, -1) - head(x0, -1)
hist(gaps, main="Gap Counts")
The density estimate d is represented by a 1D grid of 1000 bins. These commands have retained all bins in which the density is sufficiently large. For "large" I chose a very small value, to make sure that even the density of a single isolated value is picked up, but not so small that obviously separated components are merged.
Evidently the gap distribution has two high outliers (which can automatically be detected using any simple procedure, even an ad hoc one). One characterization is that they both exceed 25 (in this example). Let's find the values associated with them:
large.gaps <- gaps > 25
ranges <- rbind(tail(x0, -1)[large.gaps], c(tail(head(x0, -1)[large.gaps], -1), max(x)))
The output is
[,1] [,2]
[1,] 243.9937 295.7732
[2,] 256.3758 300.9340
Within the range of data (from 25 to 301) these gaps determine two potential outlying ranges, one from 244 to 256 (column 1) and another from 296 to 301 (column 2). Let's see how many values lie within these ranges:
lapply(apply(ranges, 2, function(r){x[r[1] <= x & x <= r[2]]}), length)
The result is
[[1]]
[1] 100
[[2]]
[1] 9
The 100 is too large to be unusual: that's one of the components of the mixture. But the 9 is small enough. It remains to see whether any of these components might be considered outlying (as opposed to inlying):
apply(ranges, 2, mean)
The result is
[1] 250.1848 298.3536
The center of the 100-point cluster is at 250 and the center of the 9-point cluster is at 298, far enough from the rest of the data to constitute a cluster of outliers. We conclude there are nine outliers. Specifically, these are the values determined by column 2 of ranges,
x[ranges[1,2] <= x & x <= ranges[2,2]]
In order, they are
299.0379 300.0376 300.2696 300.3892 300.4250 300.5659 300.7018 300.8436 300.9340
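For a Python-leaning reader, a rough sketch of the same general idea -- fit a mixture, then flag very small components sitting at extreme locations -- might look like the following. scikit-learn is assumed to be available, and the component-count range, the size threshold of 10, and the percentile cut-offs are all ad hoc choices, not part of the analysis above:

import numpy as np
from sklearn.mixture import GaussianMixture

def flag_outliers(x, max_components=30, min_cluster_size=10):
    # Flag points that land in very small mixture components at extreme locations.
    X = np.asarray(x, dtype=float).reshape(-1, 1)
    # Choose the number of components by BIC.
    fits = [GaussianMixture(k, random_state=0).fit(X) for k in range(1, max_components + 1)]
    best = min(fits, key=lambda m: m.bic(X))
    labels = best.predict(X)
    counts = np.bincount(labels, minlength=best.n_components)
    means = best.means_.ravel()
    # "Small" components whose centres are extreme relative to the other centres.
    small = np.where(counts < min_cluster_size)[0]
    lo, hi = np.percentile(means, [5, 95])
    suspect = [k for k in small if means[k] < lo or means[k] > hi]
    return np.isin(labels, suspect)   # boolean mask over x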
-
I'm not sure I understand the issue here, but the MAD-Median rule:
$\frac{|X-M|}{MADN}>2.24$, where $M$ is the median and $MADN$ is the $\frac{\text{median absolute deviation from the median}}{0.6745}$
is pretty commonly used. Wilcox's WRS package in R has an out() function that fits this and returns the cases to keep and cases to drop, and I'm sure it would be easy to code in other languages. On the face of it this would be an answer to your question - one of many of course because there is a vast literature on outliers.
You may need a more restrictive definition of "outlier", of course. If you are happy with any observations that are consistent with a mixed distribution of 100s of Gaussian variables it is hard to imagine anything being ruled an outlier.
-
Hi @Peter. I am not sure, but I think that formula has a problem of not accounting for sample size. With large samples, even with a single normal distribution, more outliers are to be expected. – Peter Flom Feb 11 '12 at 13:20
Seems like a useful rule, thanks! I wonder if this rule might have been designed for use with approximately Gaussian data (rather than a mixture of multiple Gaussians). I notice that in some cases it might miss some outliers. e.g., if the mixture is $0.5 \mathcal{N}(30, 1) + 0.5 \mathcal{N}(70, 1)$, then an observation of the value 10 is likely an outlier, but will not be detected by the MAD-Median rule. However, that might be a minor quibble. This rule seems like a nice one to try. Thank you! – D.W. Feb 11 '12 at 17:53
@PeterFlom, I would imagine that you can just increase the constant $2.24$ a bit to account for the increased sample size. For instance, changing $2.24$ to $3.5$ seems like it might keep the false alarm rate at or below $1/10^6$ (based on the rule of thumb that, for a Gaussian, $MADN$ is a good estimate of $\sigma$; and based upon my speculation that the false alarm rate for a mixture of Gaussians should be even lower than for a single Gaussian). However do tell me if I'm missing something or if I've erred. – D.W. Feb 11 '12 at 18:03
Simple deviation methods like these fail when the number of outliers is unknown and can exceed 1. With non-robust criteria, groups of outliers can "mask" their presence by inflating the SD; with robust criteria (like one based on MAD and medians), a large number of non-outlying groups can be identified as "outlying." That will be the case in this problem setup if the SDs of the individual mixture components are small compared to the spread of their centers. – whuber Feb 11 '12 at 19:34
@D.W. Let the dataset consist of 100 draws from a Normal$(0,1)$ distribution ($N(0,1)$), 100 from $N(-9,1)$, and 50 from $N(1000,1)$. The median will be around $1$ and the MAD will be around $10$, whence the 50 last draws will have standardized values around $(1000-1)/(10 \cdot 0.67)\approx 150$, all of them apparently strong outliers by Peter Ellis's criterion. It makes no sense to declare the top 20% of any dataset to be "outliers," especially when you expect the data to be the mixture you described. – whuber Feb 12 '12 at 20:57
If your range of possible distributions of non-outliers is so broad, I don't think you can have any outliers. But perhaps you can impose some restrictions on the mixture?
For example, if N = 10,000 and it's a mixture of 9,900 draws from $\mathcal{N}(10, 10)$ and 100 draws from $\mathcal{N}(50, 100)$, then some very large values would be non-outliers.
In addition, in general, automated searching for outliers can only be a first step.
-
Thanks. Good points. See my update to the question for more information that explains how you can have outliers. Automated searching: yes, I realize that automated search is only a first step. A human will examine all items that have been flagged as a possible outlier, and there are other ways (out of scope for this question) for identifying outliers. However, I don't want to bother the human too much more than necessary. – D.W. Feb 11 '12 at 17:46
I don't think the additional restriction really solves things enough to make the problem solvable in an automated way. Look at my example. x <- c(rnorm(9900,10,10), rnorm(100,50,100)) quantile(x, .999) x[x>175] The first time I tried this, I got a maximum of 317.8, and a 2nd highest of 233, then things were tightly bunched. Is 317 "far" from 233? I think so. But it's a combination of 2 normals – Peter Flom Feb 11 '12 at 21:58
I don't understand why you think your example proves the problem is not solvable. In your example, presumably fancy methods like EM could reconstruct the parameters of the mixture model (100 observations from the $\mathcal{N}(50,100)$ distribution should be plenty), compute $p$-values for each observed value, and then identify outliers. (In your example, once we know the parameters, 317 is only 2.7 standard deviations above the mean 50, so not an outlier.) So it seems it should be possible to detect outliers. I don't follow why you've concluded it is impossible. – D.W. Feb 12 '12 at 0:21
I don't see how EM could reconstruct the mixture model, if told only that it is a combination of normal distributions. Maybe it could, but I don't see how it could do so precisely. And something that is an outlier from one mixture of normals would not be an outlier from another mixture of normals. – Peter Flom Feb 12 '12 at 12:48
I think the key to resolving this is the implicit criterion that each mixture component has enough probability to guarantee that it contributes a large number (10 or more, e.g.) of the data. Then isolated clusters (of one to nine points) identified by fitting a mixture model would violate this assumption and constitute either "outliers" (for extreme values) or "inliers" (for non-extreme values). – whuber Feb 12 '12 at 21:05
|
# How to create an injective function to generate pseudo-random numbers with seed
Let's call A the set of all the n-digit natural numbers (base 10).
So with n=3, they would be 000, 001, 002, ... 999
## Basic question:
I need to create a mathematical function with these features:
• it maps numbers from A to A (it assigns to every number in A another number in A);
• it's injective (it never maps distinct numbers to the same number);
• the list of values produced by mapping the numbers in order needs to look like a list of random numbers (see below for an explanation);
• I need to control this randomness with a seed (a number that determines the function: the same seed always produces the same mapping).
When I talk about pseudo random generation I mean the numbers mapped need to look like a random series:
For example
000 -> 956
001 -> 289
002 -> 392
003 -> 003
004 -> 128
How can I generate a function like this?
## Extended question:
How can I do the same using, instead of n-digit natural numbers, sequences of n symbols taken from a custom alphabet (for example [0,1,2,A,K,B])?
• What you seeks seems to be a random permutation, see en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle for an algorithm – gammatester Sep 5 '18 at 18:09
• Yes! That's it... I can use a pseudo-random number generator (with seed) to shuffle the set, so I can obtain the same shuffle order with the same seed. Thanks! – mugnozzo Sep 6 '18 at 8:05
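To make the suggestion in the comments concrete, here is a minimal sketch (my own, with made-up function names) of a seeded Fisher-Yates shuffle of the whole set A: it is bijective, looks random, is reproducible from the seed, and handles the custom-alphabet variant as well. It enumerates all |alphabet|^n words, so it is only practical for small n; for large n one would want a format-preserving construction instead.

```python
import itertools
import random

def make_permutation(alphabet, n, seed):
    """Seeded bijection on the set of length-n words over the given alphabet."""
    words = [''.join(p) for p in itertools.product(alphabet, repeat=n)]
    shuffled = words[:]                      # copy, then shuffle deterministically
    random.Random(seed).shuffle(shuffled)    # Fisher-Yates with a seeded RNG
    return dict(zip(words, shuffled))        # injective map: word -> word

f = make_permutation("0123456789", 3, seed=42)
print(f["000"], f["001"], f["002"])          # same seed -> same outputs every run

g = make_permutation(["0", "1", "2", "A", "K", "B"], 2, seed=7)
print(g["0A"], g["KB"])                      # the custom-alphabet case
```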
|
How to convert "433 kg" to "mg" in scientific notation?
Sep 11, 2016
see explanation
Explanation:
mg = milligram
$1\ \text{kg} = 1{,}000{,}000\ \text{mg} = 1 \times 10^{6}\ \text{mg}$
$433\ \text{kg} = 433 \times 1 \times 10^{6}\ \text{mg} = 4.33 \times 10^{8}\ \text{mg}$
Sep 11, 2016
"433 kg" = ?
Well, you can go from $\text{kg}$ to $\text{g}$ to $\text{mg}$ to make it easier. There's never any reason to skip steps, especially when you aren't comfortable yet...
$433\ \cancel{\text{kg}} \times \frac{10^3\ \cancel{\text{g}}}{\cancel{\text{kg}}} \times \frac{10^3\ \text{mg}}{\cancel{\text{g}}}$
$= 433 \times {10}^{6} \text{mg}$
$= \textcolor{b l u e}{4.33 \times {10}^{8} \text{mg}}$
Remember that scientific notation asks you to have the base number as $1. \overline{0} < x < 9. \overline{9}$. So slip that decimal point two magnitudes back.
$433 \times {10}^{6} \text{mg}$
$= 433 \times {10}^{8 - 2} \text{mg}$
$= 433 \times {10}^{- 2} \times {10}^{8} \text{mg}$
$= 4.33 \times {10}^{8} \text{mg}$
|
# Problem 643
## $2$-Friendly
Two positive integers $a$ and $b$ are $2$-friendly when $\text{gcd}(a,b)=2^t,t>0$. For example, $24$ and $40$ are $2$-friendly because $\text{gcd}(24,40)=8=2^3$ while $24$ and $36$ are not because $\text{gcd}(24,36)=12=2^2\cdot 3$, which is not a power of $2$.
Let $f(n)$ be the number of pairs, $(p,q)$, of positive integers with $1\le p<q\le n$ such that $p$ and $q$ are $2$-friendly. You are given $f(10^2)=1031$ and $f(10^6)=321418433$ modulo $1\ 000\ 000\ 007$.
Find $f(10^{11})$ modulo $1\ 000\ 000\ 007$.
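For small n the definition can be checked directly by brute force (my own sketch; it is far too slow for $n=10^{11}$, which needs a genuine counting argument):

```python
from math import gcd

def is_power_of_two(m):
    return m > 1 and (m & (m - 1)) == 0   # 2^t with t > 0

def f(n):
    return sum(1
               for q in range(2, n + 1)
               for p in range(1, q)
               if is_power_of_two(gcd(p, q)))

print(f(100))   # should reproduce the given value f(10^2) = 1031
```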
|
12.09.2018
# Geodesic flows: notes 7
## Geodesic flows, notes 7: recap
## 0. The goal
The goal is to show that a function on a suitable Riemannian manifold $(M,g)$ is uniquely determined by its integrals over all maximal geodesics. To this end, we must understand the geometry of geodesics in great detail. The geodesic X-ray transform takes a function $f\colon M\to\R$ to the function $If\colon\Gamma\to\R$ of its integrals over geodesics, where $\Gamma$ is the set of all geodesics on $M$. This $I$ is a linear integral transform, and the question is whether it is injective.
## 1. Symplectic manifolds and Hamilton flows
Definition. A symplectic manifold is a pair $$(N, \sigma)$$ where $$N$$ is a $$2n$$-dimensional smooth manifold, and $$\sigma$$ is a symplectic form, that is, a closed $$2$$-form on $$N$$ which is nondegenerate in the sense that for any $$\rho \in N$$, the map $$I_{\rho}\colon T_{\rho} N \to T^*_{\rho} N, I_{\rho}(s) = \sigma(s, \,\cdot\,)$$ is bijective.
Example 1. The space $$\mathbb{R}^{2n}$$ has a standard symplectic structure given by the $$2$$-form $$\sigma = dx_1 \wedge \,dx_{n+1} + \ldots + dx_n \wedge \,dx_{2n}$$.
Example 2. More generally, if $$M$$ is an $$n$$-dimensional $$C^{\infty}$$ manifold, then $$N = T^* M$$ becomes a symplectic manifold as follows: if $$\pi\colon T^* M \to M$$ is the natural projection, there is a $$1$$-form $$\lambda$$ on $$N$$ (called the Liouville form) defined by $$\lambda_{\rho} = \pi^* \rho, \rho \in T^* M$$. Then $$\sigma = d\lambda$$ is a closed $$2$$-form. If $$x$$ are local coordinates on $$M$$, and if $$(x,\xi)$$ are associated local coordinates (called canonical coordinates) on $$T^* M$$, then in these local coordinates \begin{align*} \lambda &= \xi_j \,dx^j, \\ \sigma &= d\xi_j \wedge \,dx^j. \end{align*} It follows that $$\sigma$$ is nondegenerate and hence a symplectic form.
A Riemannian metric is an isomorphism $TM\to T^*M$, so it can be used to give a natural symplectic structure on the tangent bundle of a Riemannian manifold.
Definition. Let $$(N,\sigma)$$ be a symplectic manifold. Given any function $$f \in C^{\infty}(N)$$, the Hamilton vector field of $$f$$ is the vector field $$H_f$$ on $$N$$ defined by $H_f = I^{-1}(df)$ where $$df$$ is the exterior derivative of $$f$$ (a $$1$$-form on $$N$$), and $$I$$ is the isomorphism $$TN \to T^* N$$ given by the nondegenerate $$2$$-form $$\sigma$$.
Example 3. In $$\mathbb{R}^{2n}$$ one has $$I(s,t)=(t,-s)$$, $$s, t \in \mathbb{R}^n$$, and $H_f = \nabla_{\xi} f \cdot \nabla_x - \nabla_x f \cdot \nabla_{\xi}.$
Definition. Let $$(N,\sigma)$$ be a symplectic manifold, and let $$f \in C^{\infty}(N)$$. Denote by $$\varphi_t$$ the flow on $$N$$ induced by $$H_f$$, that is, $\varphi_t\colon \rho(0) \mapsto \rho(t) \text{ where } \dot{\rho}(t) = H_f(\rho(t)).$
Any Hamilton flow map is symplectic ($$(\varphi_t)^* \sigma = \sigma$$) and consequently volume-preserving.
## 2. The geodesic flow
The geodesic flow on a Riemannian manifold $(M,g)$ is a dynamical system on $T^*M$ (or $TM$, the two are naturally isomorphic via the Riemann metric). A geodesic is uniquely determined by its initial position and velocity. The cotangent bundle $T^*M$ is a symplectic manifold, and the geodesic flow can be realized as a Hamilton flow. It is given by the Hamilton function $$f\colon T^* M \to \mathbb{R}, \ \ f(x,\xi) = \frac{1}{2} |\xi|_{g^{-1}}^2 = \frac{1}{2} g^{jk}(x) \xi_j \xi_k.$$ The Hamiltonian equation of motion becomes exactly the geodesic equation. One can also see the geodesic flow from the Lagrangian point of view, or geometrically via local length minimization.
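In canonical coordinates this claim is a short computation: Hamilton's equations for this $f$ read $$\dot{x}^j = \frac{\partial f}{\partial \xi_j} = g^{jk}(x)\,\xi_k, \qquad \dot{\xi}_j = -\frac{\partial f}{\partial x^j} = -\tfrac{1}{2}\,\partial_{x^j} g^{kl}(x)\,\xi_k \xi_l.$$ Substituting $\xi_j = g_{jk}\dot{x}^k$ from the first equation into the second and expressing the derivatives of the metric through the Christoffel symbols gives $\ddot{x}^j + \Gamma^j_{kl}\,\dot{x}^k\dot{x}^l = 0$, which is the geodesic equation.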
## 4. The Sasaki metric
The tangent bundle of a smooth manifold is a smooth manifold (of double dimension). There is a canonical Riemannian metric on the tangent bundle of a Riemannian manifold. This is the Sasaki metric.
The tangent bundle describes possible directions of motion on $M$; each point on $TM$ contains a point $x\in M$ and a vector $v\in T_xM$. Similarly, $TTM$ describes the directions of motion on $TM$. It is natural to split motion in two components: motion within a fiber (vertically) or motion of the base point only (horizontally). This division is most clear when $M=\R^n$; then $TM=\R^{2n}$ and $TTM=\R^{4n}$.
For any $\theta=(x,v)\in TM$, we split $T_\theta TM=H(\theta)\oplus V(\theta)$. Note that if $\dim(M)=n$, then $\dim(H(\theta))=\dim(V(\theta))=\dim(T_xM)=n$. It turns out that there are natural isomorphisms $H(\theta)\to T_xM$ and $V(\theta)\to T_xM$.
Let $\pi\colon TM\to M$ be the canonical projection. The vertical fiber is then $V(\theta)=\ker(d_\theta\pi)$ — there is no movement in the base.
There is a connection map $K\colon TTM\to TM$. An element $\xi\in T_\theta TM$ describes (to first order) a curve on $TM$, that is, a curve on $M$ together with a vector field along it. The covariant derivative of this vector field along the curve is $K_\theta(\xi)\in T_xM$. The horizontal fiber is then $H(\theta)=\ker(K_\theta)$ — there is no movement in the fiber (parallel transport).
The natural isomorphisms are $d_\theta\pi|_{H(\theta)}\colon H(\theta)\to T_xM$ and $K_\theta|_{V(\theta)}\colon V(\theta)\to T_xM$. The Sasaki metric is obtained by declaring these to be isometries (inherit the metric from $T_xM$) and $H(\theta)\perp V(\theta)$. We can split any vector $TTM\ni\eta=(\eta_h,\eta_v)$. In this notation $$\ip{\eta}{\xi}_\Sa = \ip{\eta_h}{\xi_h} + \ip{\eta_v}{\xi_v}.$$
## 6. Coordinate representations of the Sasaki metric
For any coordinates on $M$, there are corresponding coordinates on $TM$ given by the coordinate functions and their differentials. If $x$ denotes the coordinates on $M$, let $(x,y)$ be the corresponding coordinates on $TM$. Let also $(x,y,X,Y)$ be the corresponding coordinates on $TTM$.
The vectors $$\{\delta_{x^j}=\partial_{x^j} - \Gamma^l_{jk} y^k \partial_{y^l}\}_{j=1}^n$$ are a basis for the subspace $$H(\theta)$$, where $$\theta =(x_0,y_0)$$. The vectors $$\{\partial_{y^k}\}_{k=1}^n$$ are a basis for the subspace $$V(\theta)$$, $$\theta =(x_0,y_0)$$.
The operators $d\pi$ and $K$ can be described in these coordinates: \begin{align*} K_\theta(\delta_{x^j}) &= 0,\\ d_\theta\pi(\delta_{x^j}) &= \partial_{x^j},\\ d_\theta\pi(\partial_{y^k}) &= 0,\\ K_\theta(\partial_{y^k}) &= \partial_{x^k}. \end{align*}
Given vectors $$\xi,\eta \in T_\theta TM$$ and writing them in the basis given by $\delta_{x^j}$ and $\partial_{y^k}$, i.e. $$\xi = X^i \delta_{x^i} + Y^k \partial_{y^k}, \quad \eta = \tilde X^i \delta_{x^i} + \tilde Y^k \partial_{y^k},$$ we get that $$\langle \xi, \eta \rangle_{\text{Sasaki}} = \langle X^j \partial_{x^j}, \tilde X^k \partial_{x^k} \rangle + \langle Y^j \partial_{x^j}, \tilde Y^k \partial_{x^k} \rangle = g_{jk} X^j \tilde X^k + g_{jk} Y^j \tilde Y^k.$$
|
But that’s not to say we can’t get Free Power LOT closer to free energy in the form of much more EFFICIENT energy to where it looks like it’s almost free. Take LED technology as Free Power prime example. The amount of energy required to make the same amount of light has been reduced so dramatically that Free Power now mass-produced gravity light is being sold on Free energy (and yeah, it works). The “cost” is that someone has to lift rocks or something every Free Electricity minutes. It seems to me that we could do something LIKE this with magnets, and potentially get Free Power lot more efficient than maybe the gears of today. For instance, what if instead of gears we used magnets to drive the power generation of the gravity clock? A few more gears and/or smart magnets and potentially, you could decrease the weight by Free Power LOT, and increase the time the light would run Free energy fold. Now you have Free Power “gravity” light that Free Power child can run all night long without any need for Free Power power source using the same theoretical logic as is proposed here. Free energy ? Ridiculous. “Conservation of energy ” is one of the most fundamental laws of physics. Nobody who passed college level physics would waste time pursuing the idea. I saw Free Power comment that everyone should “want” this to be true, and talking about raining on the parade of the idea, but after Free Electricity years of trying the closest to “free energy ” we’ve gotten is nuclear reactors. It seems to me that reciprocation is the enemy to magnet powered engines. Remember the old Mazda Wankel advertisements?
Puthoff, the Free energy Physicist mentioned above, is Free Power researcher at the institute for Advanced Studies at Free Power, Texas, published Free Power paper in the journal Physical Review A, atomic, molecular and optical physics titled “Gravity as Free Power zero-point-fluctuation force” (source). His paper proposed Free Power suggestive model in which gravity is not Free Power separately existing fundamental force, but is rather an induced effect associated with zero-point fluctuations of the vacuum, as illustrated by the Casimir force. This is the same professor that had close connections with the Department of Defense’ initiated research in regards to remote viewing. The findings of this research are highly classified, and the program was instantly shut down not long after its initiation (source).
We need to stop listening to articles that say what we can’t have. Life is to powerful and abundant and running without our help. We have the resources and creative thinking to match life with our thoughts. Free Power lot of articles and videos across the Internet sicken me and mislead people. The inventors need to stand out more in the corners of earth. The intelligent thinking is here and freely given power is here. We are just connecting the dots. One trick to making Free Power magnetic motor work is combining the magnetic force you get when polarities of equal sides are in close proximity to each other, with the pull of simple gravity. Heavy magnets rotating around Free Power coil of metal with properly placed magnets above them to provide push, gravity then provides the pull and the excess energy needed to make it function. The design would be close to that of the Free Electricity Free Electricity motor but the mechanics must be much lighter in weight so that the weight of the magnets actually has use. A lot of people could do well to ignore all the rules of physics sometimes. Rules are there to be broken and all the rules have done is stunt technology advances. Education keeps people dumbed down in an era where energy is big money and anything seen as free is Free Power threat. Open your eyes to the real possibilities. Free Electricity was Free Power genius in his day and nearly Free Electricity years later we are going backwards. One thing is for sure, magnets are fantastic objects. It’s not free energy as eventually even the best will demagnetise but it’s close enough for me.
However, it must be noted that this was how things were then. Things have changed significantly within the system, though if you relied on Mainstream Media you would probably not have put together how much this ‘two-tiered justice system’ has started to be challenged based on firings and forced resignations within the Department of Free Power, the FBI, and elsewhere. This post from Q-Anon probably gives us the best compilation of these actions:
Nernst’s law is overridden by Heisenberg’s law, where negative and positive vis states contribute to the ground state’s fine structure Darwinian term, and Noether’s third law, where trajectories and orientations equipart in all dimensions thus cannot vanish. Hi Paulin. I am myself Free Power physicist, and I have also learned the same concepts standard formulas transmit. However, Free Electricity points are relevant. Free Power. The equations on physics and the concepts one can extract from them are aimed to describe how the universe works and are dependent on empirical evidence, not the other way around. Thinking that equations and the concepts behind dogmatically rule empirical phenomena is falling into pre-illustrative times. Free Electricity. Particle and quantum physics have actually gotten results that break classical thermodynamics law of conservation of energy. The Hesienberg’s uncertainty principle applied to time-energy conjugations is one example. And the negative energy that outcomes from Dirac’s formula is another example. Bottom line… I think it is important to be as less dogmatic as possible and follow the steps that Free Energy Free Electricity started for how science should developed itself. My Name is Free Energy Sr and i have made Free Power Magnetic motor.
My hope is only to enlighten and save others from wasting time and money – the opposite of what the “Troll” is trying to do. Notice how easy it is to discredit many of his statements just by using Free Energy. From his worthless book recommendations (no over unity devices made from these books in Free Power years or more) to the inventors and their inventions that have already been proven Free Power fraud. Take the time and read ALL his posts and notice his tactics: Free Power. Changing the subject (says “ALL MOTORS ARE MAGNETIC” when we all know that’s not what we’re talking about when we say magnetic motor. Free Electricity. Almost never responding to Free Power direct question. Free Electricity. Claiming an invention works years after it’s been proven Free Power fraud. Free Power. Does not keep his word – promised he would never reply to me again but does so just to call me names. Free Power. Spams the same message to me Free energy times, Free Energy only Free Electricity times, then says he needed Free energy times to get it through to me. He can’t even keep track of his own lies. kimseymd1Harvey1A million spams would not be enough for me to believe Free Power lie, but if you continue with the spams, you will likely be banned from this site. Something the rest of us would look forward to. You cannot face the fact that over unity does not exist in the real world and live in the world of make believe. You should seek psychiatric help before you turn violent. jayanth Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far!
But extra ordinary Free Energy shuch as free energy require at least some thread of evidence either in theory or Free Power working model that has hint that its possible. Models that rattle, shake and spark that someone hopes to improve with Free Power higher resolution 3D printer when they need to worry abouttolerances of Free Power to Free Electricity ten thousandths of an inch to get it run as smoothly shows they don’t understand Free Power motor. The entire discussion shows Free Power real lack of under standing. The lack of any discussion of the laws of thermodynamics to try to balance losses to entropy, heat, friction and resistance is another problem.
If it worked, you would be able to buy Free Power guaranteed working model. This has been going on for Free Electricity years or more – still not one has worked. Ignorance of the laws of physics, does not allow you to break those laws. Im not suppose to write here, but what you people here believe is possible, are true. The only problem is if one wants to create what we call “Magnetic Rotation”, one can not use the fields. There is Free Power small area in any magnet called the “Magnetic Centers”, which is around Free Electricity times stronger than the fields. The sequence is before pole center and after face center, and there for unlike other motors one must mesh the stationary centers and work the rotation from the inner of the center to the outer. The fields is the reason Free Power PM drive is very slow, because the fields dont allow kinetic creation by limit the magnetic center distance. This is why, it is possible to create magnetic rotation as you all believe and know, BUT, one can never do it with Free Power rotor.
Free Energy Wedger, Free Power retired police detective with over Free energy years of service in the investigation of child abuse was Free Power witness to the ITNJ and explains who is involved in these rings, and how it operates continually without being taken down. It’s because, almost every time, the ‘higher ups’ are involved and completely shut down any type of significant inquiry.
#### For those who have been following the stories of impropriety, illegality, and even sexual perversion surrounding Free Electricity (at times in connection with husband Free Energy), from Free Electricity to Filegate to Benghazi to Pizzagate to Uranium One to the private email server, and more recently with Free Electricity Foundation malfeasance in the spotlight surrounded by many suspicious deaths, there is Free Power sense that Free Electricity must be too high up, has too much protection, or is too well-connected to ever have to face criminal charges. Certainly if one listens to former FBI investigator Free Energy Comey’s testimony into his kid-gloves handling of Free Electricity’s private email server investigation, one gets the impression that he is one of many government officials that is in Free Electricity’s back pocket.
Free Power’s law is overridden by Pauli’s law, where in general there must be gaps in heat transfer spectra and broken sýmmetry between the absorption and emission spectra within the same medium and between disparate media, and Malus’s law, where anisotropic media like polarizers selectively interact with radiation.
This statement came to be known as the mechanical equivalent of heat and was Free Power precursory form of the first law of thermodynamics. By 1865, the Free Energy physicist Free Energy Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from Free Power combustion reaction in Free Power coal furnace to boil water, and use this heat to vaporize steam, and then use the enhanced high-pressure energy of the vaporized steam to push Free Power piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i. e. , the water molecules in the cylinder, do on each other as they pass or transform from one step of or state of the engine cycle to the next, e. g. , from (P1, V1) to (P2, V2). Clausius originally called this the “transformation content” of the body, and then later changed the name to entropy. Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e. g. , to push the piston. Clausius defined this transformation heat as dQ = T dS. In 1873, Free Energy Free Power published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Free Power of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i. e. , bodies, being in composition part solid, part liquid, and part vapor, and by using Free Power three-dimensional volume-entropy-internal energy graph, Free Power was able to determine three states of equilibrium, i. e. , “necessarily stable”, “neutral”, and “unstable”, and whether or not changes will ensue. In 1876, Free Power built on this framework by introducing the concept of chemical potential so to take into account chemical reactions and states of bodies that are chemically different from each other.
I am doing more research for increasing power output so that it can be used in future in cars. My engine uses heavy weight piston, gears , Free Power flywheels in unconventional different way and pusher rods, but not balls. It was necessary for me to take example of ball to explain my basic idea I used in my concept. (the ball system is very much analogous to the piston-gear system I am using in my engine). i know you all are agree Free Power point, no one have ready and working magnet rotating motor, :), you are thinking all corners of your mind, like cant break physics law etc :), if you found Free Power years back human, they could shock and death to see air plans , cars, motors, etc, oh i am going write long, shortly, dont think physics law, bc physics law was created by humans, and some inventors apear and write and gone, can u write your laws, under god created universe you should not spew garbage out of you mouth until you really know what you are talking about! Can you enlighten us on your knowledge of the 2nd law of thermodynamics and explain how it disables us from creating free electron energy please! if you cant then you have no right to say that it cant work! people like you have kept the world form advancements. No “free energy magnetic motor” has ever worked. Never. Not Once. Not Ever. Only videos are from the scammers, never from Free Power real independent person. That’s why only the plans are available. When it won’t work, they blame it on you, and keep your money.
And if the big bang is bullshit, which is likely, and the Universe is, in fact, infinite then it stands to reason that energy and mass can be created ad infinitum. Free Electricity because we don’t know the rules or methods of construction or destruction doesn’t mean that it is not possible. It just means that we haven’t figured it out yet. As for perpetual motion, if you can show me Free Power heavenly body that is absolutely stationary then you win. But that has never once been observed. Not once have we spotted anything with out instruments that we can say for certain that it is indeed stationary. So perpetual motion is not only real but it is inescapable. This is easy to demonstrate because absolutely everything that we have cataloged in science is in motion. Nothing in the universe is stationary. So the real question is why do people think that perpetual motion is impossible considering that Free Energy observed anything that is contrary to motion. Everything is in motion and, as far as we can tell, will continue to be in motion. Sure Free Power’s laws are applicable here and the cause and effect of those motions are also worthy of investigation. Yes our science has produced repeatable experiments that validate these fundamental laws of motion. But these laws are relative to the frame of reference. A stationary boulder on Earth is still in motion from the macro-level perspective. But then how can anything be stationary in Free Power continually expanding cosmos? Where is that energy the produces the force? Where does it come from?
Never before has pedophilia and ritualistic child abuse been on the radar of so many people. Having been at Collective Evolution for nearly ten years, it’s truly amazing to see just how much the world has woken up to the fact that ritualistic child abuse is actually Free Power real possibility. The people who have been implicated in this type of activity over the years are powerful, from high ranking military people, all the way down to the several politicians around the world, and more.
Figure Free Electricity. Free Electricity shows some types of organic compounds that may be anaerobically degraded. Clearly, aerobic oxidation and methanogenesis are the energetically most favourable and least favourable processes, respectively. Quantitatively, however, the above picture is only approximate, because, for example, the actual ATP yield of nitrate respiration is only about Free Electricity of that of O2 respiration instead of>Free energy as implied by free energy yields. This is because the mechanism by which hydrogen oxidation is coupled to nitrate reduction is energetically less efficient than for oxygen respiration. In general, the efficiency of energy conservation is not high. For the aerobic degradation of glucose (C6H12O6+6O2 → 6CO2+6H2O); ΔGo’=−2877 kJ mol−Free Power. The process is known to yield Free Electricity mol of ATP. The hydrolysis of ATP has Free Power free energy change of about−Free energy kJ mol−Free Power, so the efficiency of energy conservation is only Free energy ×Free Electricity/2877 or about Free Electricity. The remaining Free Electricity is lost as metabolic heat. Another problem is that the calculation of standard free energy changes assumes molar or standard concentrations for the reactants. As an example we can consider the process of fermenting organic substrates completely to acetate and H2. As discussed in Chapter Free Power. Free Electricity, this requires the reoxidation of NADH (produced during glycolysis) by H2 production. From Table A. Free Electricity we have Eo’=−0. Free Electricity Free Power for NAD/NADH and Eo’=−0. Free Power Free Power for H2O/H2. Assuming pH2=Free Power atm, we have from Equations A. Free Power and A. Free energy that ΔGo’=+Free Power. Free Power kJ, which shows that the reaction is impossible. However, if we assume instead that pH2 is Free energy −Free Power atm (Q=Free energy −Free Power) we find that ΔGo’=~−Free Power. Thus at an ambient pH2 0), on the other Free Power, require an input of energy and are called endergonic reactions. In this case, the products, or final state, have more free energy than the reactants, or initial state. Endergonic reactions are non-spontaneous, meaning that energy must be added before they can proceed. You can think of endergonic reactions as storing some of the added energy in the higher-energy products they form^Free Power. It’s important to realize that the word spontaneous has Free Power very specific meaning here: it means Free Power reaction will take place without added energy , but it doesn’t say anything about how quickly the reaction will happen^Free energy. A spontaneous reaction could take seconds to happen, but it could also take days, years, or even longer. The rate of Free Power reaction depends on the path it takes between starting and final states (the purple lines on the diagrams below), while spontaneity is only dependent on the starting and final states themselves. We’ll explore reaction rates further when we look at activation energy. This is an endergonic reaction, with ∆G = +Free Electricity. Free Electricity+Free Electricity. Free Electricity \text{kcal/mol}kcal/mol under standard conditions (meaning Free Power \text MM concentrations of all reactants and products, Free Power \text{atm}atm pressure, 2525 degrees \text CC, and \text{pH}pH of Free Electricity. 07. 0). 
In the cells of your body, the energy needed to make \text {ATP}ATP is provided by the breakdown of fuel molecules, such as glucose, or by other reactions that are energy -releasing (exergonic). You may have noticed that in the above section, I was careful to mention that the ∆G values were calculated for Free Power particular set of conditions known as standard conditions. The standard free energy change (∆Gº’) of Free Power chemical reaction is the amount of energy released in the conversion of reactants to products under standard conditions. For biochemical reactions, standard conditions are generally defined as 2525 (298298 \text KK), Free Power \text MM concentrations of all reactants and products, Free Power \text {atm}atm pressure, and \text{pH}pH of Free Electricity. 07. 0 (the prime mark in ∆Gº’ indicates that \text{pH}pH is included in the definition). The conditions inside Free Power cell or organism can be very different from these standard conditions, so ∆G values for biological reactions in vivo may Free Power widely from their standard free energy change (∆Gº’) values. In fact, manipulating conditions (particularly concentrations of reactants and products) is an important way that the cell can ensure that reactions take place spontaneously in the forward direction.
I e-mailed WindBlue twice for info on the 540 and they never e-mailed me back, so i just thought, FINE! To heck with ya. Ill build my own. Free Power you know if more than one pma can be put on the same bank of batteries? Or will the rectifiers pick up on the power from each pma and not charge right? I know that is the way it is with car alt’s. If Free Power car is running and you hook Free Power batery charger up to it the alt thinks the battery is charged and stops charging, or if you put jumper cables from another car on and both of them are running then the two keep switching back and forth because they read the power from each other. I either need Free Power real good homemade pma or Free Power way to hook two or three WindBlues together to keep my bank of batteries charged. Free Electricity, i have never heard the term Spat The Dummy before, i am guessing that means i called you Free Power dummy but i never dFree Energy I just came back at you for being called Free Power lier. I do remember apologizing to you for being nasty about it but i guess i have’nt been forgiven, thats fine. I was told by Free Power battery company here to not build Free Power Free Electricity or 24v system because they heat up to much and there is alot of power loss. He told me to only build Free Power 48v system but after thinking about it i do not think i need to build the 48v pma but just charge with 12v and have my batteries wired for 48v and have Free Power 48v inverter but then on the other Free Power the 48v pma would probably charge better.
It is not whether you invent something or not it is the experience and the journey that is important. To sit on your hands and do nothing is Free Power waste of life. My electrical engineer friend is saying to mine, that it can not be done. Those with closed minds have no imagination. This and persistance is what it takes to succeed. The hell with the laws of physics. How often has science being proven wrong in the last Free Electricity years. Dont let them say you are Free Power fool. That is what keeps our breed going. Dont ever give up. I’ll ignore your attempt at sarcasm. That is an old video. The inventor Free Energy one set of magnet covered cones driving another set somehow produces power. No explanation, no test results, no published information.
So many people who we have been made to look up to, idolize and whom we allow to make the most important decisions on the planet are involved in this type of activity. Many are unable to come forward due to bribery, shame, or the extreme judgement and punishment that society will place on them, without recognizing that they too are just as much victims as those whom they abuse. Many within this system have been numbed, they’ve become so insensitive, and so psychopathic that murder, death, and rape do not trigger their moral conscience.
Why? Because I didn’t have the correct angle or distance. It did, however, start to move on its own. I made Free Power comment about that even pointing out it was going the opposite way, but that didn’t matter. This is Free Power video somebody made of Free Power completed unit. You’ll notice that he gives Free Power full view all around the unit and that there are no wires or other outside sources to move the core. Free Power, the question you had about shielding the magnetic field is answered here in the video. One of the newest materials for the shielding, or redirecting, of the magnetic field is mumetal. You can get neodymium magnets via eBay really cheaply. That way you won’t feel so bad when it doesn’t work. Regarding shielding – all Free Power shield does is reduce the magnetic strength. Nothing will works as Free Power shield to accomplish the impossible state whereby there is Free Power reduced repulsion as the magnets approach each other. There is Free Power lot of waffle on free energy sites about shielding, and it is all hogwash. Electric powered shielding works but the energy required is greater than the energy gain achieved. It is Free Power pointless exercise. Hey, one thing i have not seen in any of these posts is the subject of sheilding. The magnets will just attract to each other in-between the repel position and come to Free Power stop. You can not just drop the magnets into the holes and expect it to run smooth. Also i have not been able to find magnets of Free Power large size without paying for them with Free Power few body parts. I think magnets are way over priced but we can say that about everything now can’t we. If you can get them at Free Power good price let me know.
Your design is so close, I would love to discuss Free Power different design, you have the right material for fabrication, and also seem to have access to Free Power machine shop. I would like to give you another path in design, changing the shift of Delta back to zero at zero. Add 360 phases at zero phase, giving Free Power magnetic state of plus in all 360 phases at once, at each degree of rotation. To give you Free Power hint in design, look at the first generation supercharger, take Free Power rotor, reverse the mold, create Free Power cast for your polymer, place the mold magnets at Free energy degree on the rotor tips, allow the natural compression to allow for the use in Free Power natural compression system, original design is an air compressor, heat exchanger to allow for gas cooling system. Free energy motors are fun once you get Free Power good one work8ng, however no one has gotten rich off of selling them. I’m Free Power poor expert on free energy. Yup that’s right poor. I have designed Free Electricity motors of all kinds. I’ve been doing this for Free Electricity years and still no pay offs. Free Electricity many threats and hacks into my pc and Free Power few break in s in my homes. It’s all true. Big brother won’t stop keeping us down. I’ve made millions if volt free energy systems. Took Free Power long time to figure out.
The idea of Free Power magnetic motor has been around for many years. Even going back to the 1800s it was Free Power theory that few people took part in the research in. Those that did were scoffed and made to look like fools. (Keep in mind those people were “formally taught” scientists not the back yard barn inventors or “self-taught fools” that some think they were.) Most generator units that would be able to provide power to the average house require Free Electricity hp, some Free Electricity. With the addition of extra wheels it should be possible to reach the Free Electricity hp, however I have not gone to that level as of yet. Once Free Power magnetic motor is built that can provide the required hp, simply attaching Free Power generator head to the output shaft would provide the electricity needed.
Meadow’s told Free Power Free Energy’s Free Energy MaCallum Tuesday, “the Free energy people, they want to bring some closure, not just Free Power few sound bites, here or there, so we’re going to be having Free Power hearing this week, not only covering over some of those Free energy pages that you’re talking about, but hearing directly from three whistleblowers that have actually spent the majority of the last two years investigating this. ”
“What is the reality of the universe? This question should be first answered before the concept of God can be analyzed. Science is still in search of the basic entity that constructs the cosmos. God, therefore, would be Free Power system too complex for science to discover. Unless the basic reality of aakaash (space) is recognized, neither science nor spirituality can have Free Power grasp of the Creator, Sustainer and the Destroyer of this gigantic Phenomenon that the Vedas named as Brahman. ” – Tewari from his book, “spiritual foundations. ”
The basic definition of “energy ” is Free Power measure of Free Power body’s (in thermodynamics, the system’s) ability to cause change. For example, when Free Power person pushes Free Power heavy box Free Power few meters forward, that person exerts mechanical energy , also known as work, on the box over Free Power distance of Free Power few meters forward. The mathematical definition of this form of energy is the product of the force exerted on the object and the distance by which the box moved (Work=Force x Distance). Because the person changed the stationary position of the box, that person exerted energy on that box. The work exerted can also be called “useful energy ”. Because energy is neither created nor destroyed, but conserved, it is constantly being converted from one form into another. For the case of the person pushing the box, the energy in the form of internal (or potential) energy obtained through metabolism was converted into work in order to push the box. This energy conversion, however, is not linear. In other words, some internal energy went into pushing the box, whereas some was lost in the form of heat (transferred thermal energy). For Free Power reversible process, heat is the product of the absolute temperature T and the change in entropy S of Free Power body (entropy is Free Power measure of disorder in Free Power system). The difference between the change in internal energy , which is ΔU, and the energy lost in the form of heat is what is called the “useful energy ” of the body, or the work of the body performed on an object. In thermodynamics, this is what is known as “free energy ”. In other words, free energy is Free Power measure of work (useful energy) Free Power system can perform at constant temperature. Mathematically, free energy is expressed as:
|
# Why don't we get reputation points on Meta?
This is a weird question but I can't resist. I had to ask. Why don't we receive reputation points when some posts get upvotes in Mathematics Meta Stack Exchange?
• What would be achieved by that? What behavior would you be trying to encourage? – JonathanZ supports MonicaC Aug 13 at 18:34
• I just asked why we don't get reputation points. I am not trying to encourage any kind of behaviour. – Shubhrajit Bhattacharya Aug 13 at 18:40
• One issue is that users tend to use up and down votes on meta to indicate agreement or disagreement...as opposed to "Good" vs. "Bad" question. Here, for instance, you've got $3$ up votes and $3$ down which, to me, suggests that people are divided on the issue, not that people are split on the quality of your question. – lulu Aug 13 at 22:13
• Much of meta takes place in the comments, which reputation doesn't measure. So "meta rep." would not be a genuine reflection of meta participation (unless it is measured in a different way from "main rep.". – user1729 Aug 14 at 13:52
• Also, why the close votes? This seems like a normal, reasonable question to me. – user1729 Aug 14 at 14:16
• @lulu But this is different on meta.stackexchange.com right? – Anindya Prithvi Aug 15 at 7:54
• @AnindyaPrithvi I don't speak for everyone, but I think that what I am describing is common practice. Right now, for instance, this question has $6$ upvotes and $5$ down. Do $5$ people consider it a bad question, worthy of demerit? But it's a perfectly reasonable question. I think those $5$ users believe that there should not be reputation attached to posts here and they are indicating their view with their vote. – lulu Aug 15 at 10:45
• @lulu .... I had the impression that meta.stackexchange.com deducts points for doing so... but its function is supposed to be the same as any meta SE. – Anindya Prithvi Aug 15 at 11:11
|
# Category theory
Category theory is a general theory of mathematical structures and their relations that was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Nowadays, category theory is used in almost all areas of mathematics, and in some areas of computer science. In particular, many constructions of new mathematical objects from previous ones that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality.
A category is formed by two sorts of objects, the objects of the category, and the morphisms, which relate two objects called the source and the target of the morphism. One often says that a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one, and morphism composition has similar properties as function composition (associativity and existence of identity morphisms). Morphisms are often some sort of function, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid.
The second fundamental concept of category theory is the concept of a functor, which plays the role of a morphism between two categories ${\displaystyle C_{1}}$ and ${\displaystyle C_{2}:}$ it maps objects of ${\displaystyle C_{1}}$ to objects of ${\displaystyle C_{2}}$ and morphisms of ${\displaystyle C_{1}}$ to morphisms of ${\displaystyle C_{2}}$ in such a way that sources are mapped to sources and targets are mapped to targets (or, in the case of a contravariant functor, sources are mapped to targets and vice-versa). A third fundamental concept is a natural transformation that may be viewed as a morphism of functors.
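As a toy illustration of the last two paragraphs (my own example, not from the article): a monoid is a one-object category whose morphisms are its elements, and a monoid homomorphism is then a functor. The sketch below spot-checks the functor laws for the length map from the monoid of strings under concatenation to the monoid of integers under addition.

```python
compose_str = lambda g, f: f + g   # morphisms of the first category: strings; composition = concatenation
id_str = ""                        # identity morphism of its single object
compose_int = lambda g, f: f + g   # morphisms of the second category: non-negative integers; composition = addition
id_int = 0

F = len                            # candidate functor, acting on morphisms

assert F(id_str) == id_int                                   # F preserves the identity
for f, g in [("ab", "xyz"), ("", "q"), ("hello", "")]:
    assert F(compose_str(g, f)) == compose_int(F(g), F(f))   # F preserves composition
print("len is a functor between these one-object categories")
```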
This article uses material from the Wikipedia article Category theory, and is written by contributors. Text is available under a CC BY-SA 4.0 International License; additional terms may apply. Images, videos and audio are available under their respective licenses.
|
# Item
Ricci-flat metrics on the cone over CP^2 # \overline\CP^2
Bykov, D. (in preparation). Ricci-flat metrics on the cone over CP^2 # \overline\CP^2.
Genre: Paper
### Files
1712.07227.pdf (Preprint), 851KB
Name: 1712.07227.pdf
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
### Creators
Creators:
Bykov, Dmitri1, Author
Affiliations:
1Quantum Gravity & Unified Theories, AEI-Golm, MPI for Gravitational Physics, Max Planck Society, ou_24014
### Content
Free keywords: High Energy Physics - Theory, hep-th,General Relativity and Quantum Cosmology, gr-qc,Mathematics, Differential Geometry, math.DG
Abstract: We describe a framework for constructing the Ricci-flat metrics on the total space of the canonical bundle over $\mathbb{CP}^2 \# \overline{\mathbb{CP}^2}$ (the del Pezzo surface of rank one). We construct explicitly the first-order deformation of the so-called `orthotoric metric' on this manifold. We also show that the deformation of the corresponding conformal Killing-Yano form does not exist.
### Details
Language(s): English
Dates: 2017-12-19
Publication Status: Not specified
Pages: 56 pages, 4 figures
Identifiers: arXiv: 1712.07227
URI: http://arxiv.org/abs/1712.07227
|
# Operator-precedence grammars symmetrical rule
I have a basic question, but I cannot find an answer anywhere on the Web. I have the following grammar rules:
1. S -> if C ;
2. C -> O < O
3. O -> O + i
4. O -> i
The part that I'm interested in is rules 1, 2 and 4.
I have calculated First(O) = {i, +} and Last(O) = {i}.
So, First(C) = {i,+,<} and Last(C) = {<, i}.
Also, if <* First(C) and Last(C) >* ;
I have built the precedence table and now I want to parse the input: if < ;
So when I parse it, I first have if on top of my stack. Then I look at <; since if <* <, I push < on top of the stack. Now I compare < to ;: ; has higher precedence according to the rules, so I must pop. The last part is trivial, and leads me to S, thus accepting the sentence.
BUT, as we can clearly see from the rules, this sentence should not be accepted (we have only a bare <, which is not enough to form C, so there is nothing to reduce).
I have noticed that this happens because there is a symmetrical rule (rule 2 in our case), which means that if <* {i, <} and {<, i} >* ;, so we cannot verify whether there was indeed a < or only an i.
What is the solution to this problem? How do we handle these special cases where a symmetrical rule's precedence relations overlap? I'm sure there must be one, because this grammar is conceptually close to a programming language parsing problem itself.
It's not the symmetry of rule 2 which is the problem (whatever you mean by symmetry). The problem is (more or less) inherent to the operator precedence algorithm.
O-P parsing was dropped from the Dragon book between the first and second editions, presumably because the authors no longer believed it to be a useful parsing technique [Note 1]. So I'm quoting from the first edition, which I still have on my bookshelf for odd sentimental reasons [Note 2]. And what it says is: (p. 203)
As a general parsing technique, operator-precedence has a number of disadvantages… since the relationship between a grammar for the language being parsed and the operator-precedence parser itself is tenuous, one cannot always be sure the parser accepts exactly the desired language.
And that's the case. Operator-precedence grammars tend to accept a superset of the desired language, and you need to augment them with a number of checks to ensure that invalid sentences like the one you threw at your parser don't actually get recognized.
The classic O-P construction -- the one I think you are referring to with your use of $First$ and $Last$ -- has two problems, one inherent to O-P parsing and the other which can be (mostly) avoided. The first problem is that all non-terminals look the same to an operator-precedence grammar. That's the essence of the O-P parsing technique, so there's not much that can be done about it. The other problem is that $First$ (and $Last$) don't even distinguish between the presence and absence of a non-terminal. For this reason, $First(O)$ (not to be confused with the $LL$ algorithm's $FIRST_k$ function) includes $\fbox{+}$, and $First(C)$ includes $\fbox{<}$. In both cases, the terminal would not be in the $LL$ $FIRST$ set, because it does not appear exactly at the beginning of the right-hand side, but rather immediately following the non-terminal at the beginning.
Given that the $First$ function can't distinguish between terminals which follow non-terminals and terminals which just start a production, it is hardly surprising that the parser doesn't really care whether or not $\fbox{<}$ has an argument. Regardless of whether there is an expression before the $\fbox{<}$ or not, the grammar simply sees that $\fbox{if} \lessdot \fbox{<}$. And consequently, it doesn't complain when the two tokens are consecutive.
Now, in theory, the O-P parser uses precedence relations only to find a possible handle. Once it does so, the parser should find the grammar rule for which the handle is the right-hand side, if any, or throw an error. But I've rarely seen a description of O-P parsing which actually does that. Instead, the simplifying assumption is made that you can figure out which rule to reduce with by just looking at the last terminal on the stack, and that the input is correct [Note 3].
It's actually pretty easy to extend O-P to get around this problem. The basic idea is to divide the relations into two classes: those which apply to adjacent terminals, and those which apply to terminals separated by a single-non-terminal. (Since the grammar is an operator grammar, those are the only two possibilities.) I refer to these as 0- and 1-superscripted relations, where the number refers to the number of intervening non-terminals.
To compute these relations, we start by computing two groups of $First$ and $Last$ sets, using the same superscript notation. So $First^0(N)$ is the set of terminals which immediately start a right-hand side of $N$, and $First^1(N)$ is the set of terminals which immediately follow a non-terminal which immediately starts a right-hand side. These two sets are not necessarily disjoint, for example in the case of an operator like $\fbox{-}$ which can be either prefix or infix, but their union is the classic $First(N)$ set.
More precisely, using the standard convention that lower-case letters $a$, $b$, $c$… represent terminals, upper-case letters $A$, $B$, $C$… represent non-terminals while $P$, $Q$… represent productions, and greek letters $\alpha$, $\beta$, $\gamma$… represent possibly-empty sequences of grammar symbols, either terminals or non-terminals, we define:
$$First^0(N) = \{ a : N \Rightarrow^* a\beta \}$$ $$First^1(N) = \{ a : N \Rightarrow^* B a\beta \}$$ $$Last^0(N) = \{ a : N \Rightarrow^* \beta a\}$$ $$Last^1(N) = \{ a : N \Rightarrow^* \beta a B\}$$ Correspondingly, we define two of each precedence relationship.
$$a \lessdot^0 b \iff \exists N,B : N \to \alpha a B \beta, b \in First^0(B)$$ $$a \lessdot^1 b \iff \exists N,B : N \to \alpha a B \beta, b \in First^1(B)$$ $$a \gtrdot^0 b \iff \exists N,A : N \to \alpha A b \beta, a \in Last^0(A)$$ $$a \gtrdot^1 b \iff \exists N,A : N \to \alpha A b \beta, a \in Last^1(A)$$ $$a \doteq^0 b \iff \exists N : N \to \alpha a b \beta$$ $$a \doteq^1 b \iff \exists N,X : N \to \alpha a X b \beta$$ In other words, for each production $N \to \alpha A x \beta$, we add $Last^0(A) \gtrdot^0 x$ and $Last^1(A) \gtrdot^1 x$.
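As a concrete illustration, here is a minimal Python sketch (my own, hypothetical -- not from any textbook) that computes $First^0$ and $First^1$ by fixed-point iteration for an operator grammar supplied as a dictionary mapping each non-terminal to its list of right-hand sides; $Last^0$ and $Last^1$ can be computed the same way on reversed right-hand sides.

def first_sets(grammar, is_terminal):
    # grammar: {N: [rhs, ...]}, each rhs a non-empty list of symbols.
    # Operator grammar assumed: no empty RHS, no two adjacent non-terminals.
    first0 = {N: set() for N in grammar}   # terminals that can start a derivation of N
    first1 = {N: set() for N in grammar}   # terminals right after a leading non-terminal
    changed = True
    while changed:
        changed = False
        for N, rhss in grammar.items():
            for rhs in rhss:
                before = (len(first0[N]), len(first1[N]))
                head = rhs[0]
                if is_terminal(head):
                    first0[N].add(head)
                else:
                    first0[N] |= first0[head]
                    first1[N] |= first1[head]
                    if len(rhs) > 1:        # in an operator grammar rhs[1] is a terminal
                        first1[N].add(rhs[1])
                if (len(first0[N]), len(first1[N])) != before:
                    changed = True
    return first0, first1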
To parse, we use essentially the standard algorithm taking account of the presence or absence of non-terminals. Conceptually, we store both terminals and non-terminals on the stack. (In practice, I would keep separate stacks and use a flag on the terminal to indicate whether it has a non-terminal on top of it or not -- there can only be one, because it is an operator grammar.) At each step, we compare the incoming terminal with the topmost terminal on the stack. If the actual top of the stack is a non-terminal, we use the 1-superscripted relations; otherwise, we use the 0-superscripted relations. As before, if no relation exists between the two terminals an error is signalled (and that will catch your error).
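To make the bookkeeping concrete, here is a rough Python sketch of that loop (again my own, and deliberately simplified: it assumes the relation tables rel0 and rel1 have already been built from the sets above, including rows for the $ end marker, and it does not look up the actual production for the handle, which a real implementation should do).

def op_parse(tokens, rel0, rel1):
    # rel0[a][b] / rel1[a][b] contain '<', '=', '>' or are absent.
    # rel1 is consulted when a non-terminal sits on top of terminal a.
    stack = [('$', False)]            # (terminal, has_nonterminal_on_top)
    tokens = list(tokens) + ['$']
    i = 0
    while True:
        top, has_nt = stack[-1]
        b = tokens[i]
        if top == '$' and b == '$':
            return has_nt             # accept iff exactly one non-terminal remains
        rel = (rel1 if has_nt else rel0).get(top, {}).get(b)
        if rel in ('<', '='):         # shift: b starts or continues a handle
            stack.append((b, False))
            i += 1
        elif rel == '>':              # reduce: pop the handle
            while True:
                t, _ = stack.pop()
                if not stack:
                    raise SyntaxError('no handle found')
                below, below_nt = stack[-1]
                r = (rel1 if below_nt else rel0).get(below, {}).get(t)
                if r == '<':
                    break             # the handle started at t
            stack[-1] = (stack[-1][0], True)   # a non-terminal now covers the stack top
        else:
            raise SyntaxError('no precedence relation between %r and %r' % (top, b))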
Now, to assist with the identification of the production corresponding to the handle, we can store a little bit more information on the stack. When we push a terminal onto the stack, it is either because there was a $\lessdot$ relation with the previous stacked terminal, in which case we are starting a right-hand side, or there was a $\doteq$ relation with the previous terminal, which must be part of the same right-hand side. In either case, we can record the prefix of the appropriate RHS (up to the current point) instead of the terminal. That has two advantages:
• it lets us distinguish between prefix and infix operators in a natural way; and
• it really lets us assign them different precedences, because we can actually define the precedence relations as relations between RHS location and incoming terminal, instead of terminal and incoming terminal, thereby allowing a bit more discrimination. (That's needed to give unary minus tighter binding.)
Now before you rush out and start coding a parser generator based on this idea, please re-read [Note 1], because at this point we are 90% of the way to rediscovering $LR$ parsing. I really believe that if you follow this algorithm closely, you will gain some useful intuitions for understanding $LR$ parsing. And it should be obvious that it is only a small step between "extended" O-P parsing and a table-driven $LALR(1)$ parser, at which point you might as well haul out bison or some other yacc-derivative and let it build your parser for you. [Note 4]
### Notes
1. Personally, I would have dropped LL-parsing, but who am I to judge :) It's not that O-P parsing is at all useful once the LALR(1) construction algorithm is known. It isn't, and the problem you're having is a case in point. Any language parseable by O-P can be parsed correctly and unambiguously with LALR(1), and with the same computational complexity. (Indeed, with essentially the same amount of time and space.) The only reason to keep O-P hanging around at all is that (again, in my opinion), thinking about O-P parsing will (if you think about it the right way) lead you to the key insights which lead to LR parsing.
2. I like the cover.
3. In practice, there is another classic problem with O-P parsing, which was mentioned in the text I replaced with an … in the Dragon book quote: the algorithm as described can't distinguish between prefix and infix uses of $\fbox{-}$. The usual hack used to fix that problem is to use a small state machine (two states, typically) which is sufficient to reveal whether or not an operator was preceded by an operand. That's also sufficient to throw an error if an operator cannot be used as a prefix operator, so practical O-P parsers don't really have a problem. A while back, I described such an algorithm in a StackOverflow answer.
4. I'm sure I'm not the first person to have thought up this idea, in several decades of very clever people thinking about parsing algorithms, but I haven't ever seen it described anywhere. As I said, I worked it out while I was trying to grasp the mechanics of bottom-up parsing, and afterwards it seemed a bit redundant. But if anyone passing by happens to know a literature reference, I'd appreciate a pointer.
• Awesome reply, thank you. I am a student, and so far, only the classic O-P construction has been presented to me, but indeed, LALR(1) seems much more powerful at a first glance. Two more small questions though :D 1. When you defined the precedence relationships with the two First functions, the old rules still apply, right? I mean that alpha =* x in your example? Or are alpha and beta just strings of terminals and non-terminals? and 2. On note 2, you said that you like the cover.. what cover? :) – 7lym Jan 7 '17 at 11:07
• We use the second edition of the Dragon Book :). I understand now what you meant about the cover :)) I also understood the idea you stated; I just wanted to make sure, and I had sketched something similar as a "bugfix" but wanted to double-check with others' opinions. Thanks again for this reply, you've been really helpful, I'll mark this as the answer :) – 7lym Jan 7 '17 at 17:01
• OK; I tried to be more precise both about notation and definitions, but I still haven't really described the final algorithm to my satisfaction. I might get back to it. Let me emphasize that what I've sketched here is not the version of operator-precedence parsing you'll find in any standard textbook (as far as I know), but it is in some sense a formalisation of the actual code you will find in the wild, which understands things like unary minus. The nonterminal-aware sets and relations are something I came up with myself, many years ago, as I was grappling with exactly your problem. – rici Jan 7 '17 at 18:03
• @rici I am following you for compilers; your answers really help me. I have posted one question that I think you can answer. Can you please check it: 'cs.stackexchange.com/questions/131108/…' – Prasanna Oct 12 '20 at 14:05
• @PrasannaSasne: That question really doesn't belong on this site. It's a programming question. Move it to Stack Overflow and I'll try to respond. – rici Oct 12 '20 at 17:41
|
Showing items 1-20 of 240
• #### Leaky Cell Model of Hard Spheres
(9-03-20)
We study packings of hard spheres on lattices. The partition function, and therefore the pressure, may be written solely in terms of the accessible free volume, i.e., the volume of space that a sphere can explore without ...
• #### Relativistic Hardy Inequalities in Magnetic Fields
(2014-12-31)
We deal with Dirac operators with external homogeneous magnetic fields. Hardy-type inequalities related to these operators are investigated: for a suitable class of transversal magnetic fields, we prove a Hardy inequality ...
• #### Vortex filament equation for a regular polygon
(2014-12-31)
In this paper, we study the evolution of the vortex filament equation,$$X_t = X_s \wedge X_{ss},$$with $X(s, 0)$ being a regular planar polygon. Using algebraic techniques, supported by full numerical simulations, we give ...
• #### Spectral asymptotics of the Dirichlet Laplacian in a conical layer
(2015-05-01)
The spectrum of the Dirichlet Laplacian on conical layers is analysed through two aspects: the infiniteness of the discrete eigenvalues and their expansions in the small aperture limit. On the one hand, we prove that, for ...
• #### The dynamics of vortex filaments with corners
(2015-07-01)
This paper focuses on surveying some recent results obtained by the author together with V. Banica on the evolution of a vortex filament with one corner according to the so-called binormal flow. The case of a regular polygon ...
• #### The Vortex Filament Equation as a Pseudorandom Generator
(2015-08-01)
In this paper, we consider the evolution of the so-called vortex filament equation (VFE), $$X_t = X_s \wedge X_{ss},$$ taking a planar regular polygon of M sides as initial datum. We study VFE from a completely novel ...
• #### The initial value problem for the binormal flow with rough data
(2015-12-31)
In this article we consider the initial value problem of the binormal flow with initial data given by curves that are regular except at one point where they have a corner. We prove that under suitable conditions on the ...
• #### Shell interactions for Dirac operators: On the point spectrum and the confinement
(2015-12-31)
Spectral properties and the confinement phenomenon for the coupling $H + V$ are studied, where $H =-i\alpha \cdot \nabla + m\beta$ is the free Dirac operator in $\mathbb{R}^3$ and $V$ is a measure-valued potential. The ...
• #### Erratum to: Relativistic Hardy Inequalities in Magnetic Fields [J Stat Phys, 154, (2014), 866-876, DOI 10.1007/s10955-014-0915-0]
(2015-12-31)
[No abstract available]
• #### A Mean-field model for spin dynamics in multilayered ferromagnetic media
(2015-12-31)
In this paper, we develop a mean-field model for describing the dynamics of spintransfer torque in multilayered ferromagnetic media. Specifically, we use the techniques of Wigner transform and moment closure to connect the ...
• #### Mean-field dynamics of the spin-magnetization coupling in ferromagnetic materials: Application to current-driven domain wall motions
(2015-12-31)
In this paper, we present a mean-field model of the spin-magnetization coupling in ferromagnetic materials. The model includes non-isotropic diffusion for spin dynamics, which is crucial in capturing strong spin-magnetization ...
• #### An atomistic/continuum coupling method using enriched bases
(2015-12-31)
A common observation from an atomistic to continuum coupling method is that the error is often generated and concentrated near the interface, where the two models are combined. In this paper, a new method is proposed to ...
• #### Mixed weak type estimates: Examples and counterexamples related to a problem of E. Sawyer
(2016-01-01)
In this paper we study mixed weighted weak-type inequal- ities for families of functions, which can be applied to study classic operators in harmonic analysis. Our main theorem extends the key result from [CMP2].
• #### Global Uniqueness for The Calderón Problem with Lipschitz Conductivities
(2016-01-01)
We prove uniqueness for the Calderón problem with Lipschitz conductivities in higher dimensions. Combined with the recent work of Haberman, who treated the three- and four-dimensional cases, this confirms a conjecture of ...
• #### Inverse scattering for a random potential
(2016-05)
In this paper we consider an inverse problem for the $n$-dimensional random Schrödinger equation $(\Delta-q+k^2)u = 0$. We study the scattering of plane waves in the presence of a potential $q$ which is assumed to be a ...
• #### An Isoperimetric-Type Inequality for Electrostatic Shell Interactions for Dirac Operators
(2016-06-01)
In this article we investigate spectral properties of the coupling $H + V_{\lambda}$, where $H =-i\alpha \cdot \nabla + m\beta$ is the free Dirac operator in $\mathbb{R}^3$, $m>0$ and $V_{\lambda}$ is an electrostatic shell ...
• #### Reverse Hölder Property for Strong Weights and General Measures
(2016-06-30)
We present dimension-free reverse Hölder inequalities for strong $A^{\ast}_p$ weights, $1 \le p < \infty$. We also provide a proof for the full range of local integrability of $A^{\ast}_1$ weights. The common ingredient ...
• #### Quantitative weighted mixed weak-type inequalities for classical operators
(2016-06-30)
We improve on several mixed weak type inequalities both for the Hardy-Littlewood maximal function and for Calderón-Zygmund operators. These type of inequalities were considered by Muckenhoupt and Wheeden and later on by ...
• #### On the bound states of Schrödinger operators with $\delta$-interactions on conical surfaces
(2016-06-30)
In dimension greater than or equal to three, we investigate the spectrum of a Schrödinger operator with a $\delta$-interaction supported on a cone whose cross section is the sphere of codimension two. After decomposing ...
• #### A note on the off-diagonal Muckenhoupt-Wheeden conjecture
(2016-07-01)
We obtain the off-diagonal Muckenhoupt-Wheeden conjecture for Calderón-Zygmund operators. Namely, given $1 < p < q < \infty$ and a pair of weights $(u; v)$, if the Hardy-Littlewood maximal function satisfies the following ...
|
# The action of the unitary divisors group on the set of divisors and odd perfect numbers
Let $$n$$ be a natural number. Let $$U_n = \{d \in \mathbb{N}\mid d\mid n \text{ and } \gcd(d,n/d)=1 \}$$ be the set of unitary divisors, $$D_n$$ be the set of divisors and $$S_n=\{d \in \mathbb{N}\mid d^2 \mid n\}$$ be the set of square divisors of $$n$$.
The set $$U_n$$ is a group with $$a\oplus b := \frac{ab}{\gcd(a,b)^2}$$. It operates on $$D_n$$ via:
$$u \oplus d := \frac{ud}{\gcd(u,d)^2}$$
The orbits of this operation "seem" to be
$$U_n \oplus d = d \cdot U_{\frac{n}{d^2}} \text{ for each } d \in S_n$$
From this conjecture it follows (also one can prove this directly since both sides are multiplicative and equal on prime powers):
$$\sigma(n) = \sum_{d\in S_n} d\sigma^*(\frac{n}{d^2})$$
where $$\sigma^*$$ denotes the sum of unitary divisors.
Since $$\sigma^*(k)$$ is divisible by $$2^{\omega(k)}$$ if $$k$$ is odd, where $$\omega$$ counts the number of distinct prime divisors of $$k$$, for an odd perfect number $$n$$ we get (let now $$n$$ be an odd perfect number):
$$2n = \sigma(n) = \sum_{d \in S_n} d \sigma^*(\frac{n}{d^2}) = \sum_{d \in S_n} d 2^{\omega(n/d^2)} k_d$$
where $$k_d = \frac{\sigma^*(n/d^2)}{2^{\omega(n/d^2)}}$$ are natural numbers. Let $$\hat{d}$$ be the largest square divisor of $$n$$. Then: $$\omega(n/d^2)\ge \omega(n/\hat{d}^2)$$.
Hence we get:
$$2n = 2^{\omega(n/\hat{d}^2)} \sum_{d \in S_n} d l_d$$ for some natural numbers $$l_d$$.
If the prime $$2$$ does not divide the prime power $$2^{\omega(n/\hat{d}^2)}$$, we must have $$\omega(n/\hat{d}^2)=0$$, hence $$n=\hat{d}^2$$ is a square number, which is in contradiction to Euler's theorem on odd perfect numbers.
So the prime $$2$$ must divide the prime power $$2^{\omega(n/\hat{d}^2)}$$ and we get:
$$n = 2^{\omega(n/\hat{d}^2)-1} \sum_{d \in S_n} d l_d$$
with $$l_d = \frac{\sigma^*(n/d^2)}{2^{\omega(n/d^2)}}$$. Hence the odd perfect number satisfies:
$$n = \sum_{d^2\mid n} d \frac{\sigma^*(n/d^2)}{2^{\omega(n/d^2)}}=:a(n)$$
Hence an odd perfect number satisfies:
$$n = a(n)$$
So my idea was to study the function $$a(n)$$, which is multiplicative on odd numbers, on the right hand side and what properties it has to maybe derive insights into odd perfect numbers.
The question is if it ever can happen that an odd number $$n$$ satisfies: $$n=a(n)$$? (checked for $$n=2k+1$$ and $$1 \le k \le 10^7$$)
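For readers who want to repeat the experiment, here is a small, unoptimized Python sketch (mine, not the code used for the original check) that implements $$a(n)$$ exactly as defined above and searches for odd fixed points. It uses the fact that, for odd $$m$$, $$2^{\omega(m)}$$ divides $$\sigma^*(m)$$, so every term is an integer.

def unitary_sigma(m):
    # sigma*(m): sum of unitary divisors, i.e. product of (1 + p^k) over p^k || m
    total, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            pk = 1
            while m % p == 0:
                m //= p
                pk *= p
            total *= 1 + pk
        p += 1
    if m > 1:
        total *= 1 + m
    return total

def omega(m):
    # number of distinct prime factors of m
    count, p = 0, 2
    while p * p <= m:
        if m % p == 0:
            count += 1
            while m % p == 0:
                m //= p
        p += 1
    return count + (1 if m > 1 else 0)

def a(n):
    # a(n) = sum over d with d^2 | n of d * sigma*(n/d^2) / 2^omega(n/d^2), for odd n
    total, d = 0, 1
    while d * d <= n:
        if n % (d * d) == 0:
            m = n // (d * d)
            total += d * unitary_sigma(m) // 2 ** omega(m)
        d += 1
    return total

for n in range(3, 100001, 2):   # extend the range as patience allows
    if a(n) == n:
        print(n)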
Edit: Conjecture: For all odd $$n \ge 3$$ we have $$a(n) < n$$. This would prove that there exists no odd perfect number.
This conjecture could be proved as follows: Since $$a(n)$$ is multiplicative, it is enough to show that for an odd prime power $$p^k$$ we have
$$a(p^k) < p^k$$
The values of $$a$$ at prime powers are not difficult to compute and they are:
$$a(p^{2k+1})= \frac{p^{2(k+1)}-1}{2(p-1)}$$
and
$$a(p^{2k}) = \frac{p^{2k+1}+p^{k+1}-p^k-1}{2(p-1)}$$
However, I am not very good at proving inequalities, so:
If someone has an idea how to prove the following inequalities for odd primes $$p$$ that would be very nice:
$$p^{2k+1} > \frac{p^{2(k+1)}-1}{2(p-1)}, \text{ for all } k \ge 0$$
and
$$p^{2k} > \frac{p^{2k+1}+p^{k+1}-p^k-1}{2(p-1)}, \text{ for all } k \ge 1$$
The inequalities have been proved here: https://math.stackexchange.com/questions/3807399/two-inequalities-for-proving-that-there-are-no-odd-perfect-numbers
• I think this question should be broken up into separate questions. The final one is not appropriate for MO, which is not for proof verification (MSE would be better) - I'm also not sure what statement 'the proof' is trying to prove? Aug 27 '20 at 13:43
• Yes, I know, I meant what relevance do they have to the question. Do you mean to take $n$ to be an odd perfect number throughout the question, even the very first part? if so you should state this. Aug 27 '20 at 14:04
• In the equation defining $a(n)$, shouldn't the denominator be $2^{\omega(n/\hat{d}^2)}$? Aug 29 '20 at 16:49
• It does matter. As written, that specific equation is false even for odd perfect numbers. Aug 29 '20 at 16:59
• @GjergjiZaimi gave a good answer below. But I want to note something more general: The group operation in this case corresponds directly to the symmetric difference operation on sets where we are thinking of a number n corresponding to the set S of prime powers $p^a$ such that $p^a ||n$ . So there's little actual number theoretic content here, and we shouldn't expect that thinking about this group operation will help us specifically with the problem of understanding odd perfect numbers. Aug 29 '20 at 18:35
1. You don't need to bring these actions of abelian groups on various sets of divisors. The identity $$\sigma(n)=\sum_{d^2|n}d\sigma^{*}(\frac{n}{d^2})$$ is easy to check directly, without appeal to anything fancy.
2. Let's call $$\alpha(n)$$ the number of prime divisors of $$n$$ which appear with an odd exponent in the factorization of $$n$$. This is what you call $$\omega(n/\hat{d}^2)$$. You are right in observing that $$2^{\alpha(n)}$$ divides $$\sigma(n)$$. This is where Euler's result comes from: If $$n$$ is an odd perfect number then $$\alpha(n)=1$$.
3. It seems you want to define a new function $$a(n)=\frac{\sigma(n)}{2^{\alpha(n)}}$$, and you conjecture that $$a(n) < n$$ for all odd numbers $$n$$. If true, this conjecture would imply that there are no odd perfect numbers. Unfortunately it is false. For example the inequality is reversed at $$n=3^35^2 7^2$$.
• Faleminderit Gjergji. You are right, one could prove the first equality without the group, but how do you come up with such an equality without the group structure in the first place?
– user6671
Aug 29 '20 at 17:57
• However, it seems that $a(n)< n$ whenever $n$ is not of the form $m^2$ or $pm^2$ for a prime $p$, not necessarily with $\gcd(p,m)=1$. If $n$ is of the latter form it seems that $a(n)>n$.
– user6671
Aug 29 '20 at 19:29
• It's false in that case too, but the counterexamples are larger. Aug 29 '20 at 19:34
• Can you give a counterexample?
– user6671
Aug 29 '20 at 19:37
• There exist numbers $n$ with $\sigma(n)/n$ arbitrarily large. Moreover $\sigma(n)/n$ increases when you increase the exponent of any of the prime divisors of $n$. Together these two facts imply that $a(n)<n$ is violated infinitely often for any value of $\alpha(n)$. Aug 29 '20 at 19:41
|
1. How to find the Oxidation Number for N in N2O5 ...
To find the correct oxidation state of N in N2O5 (Dinitrogen pentoxide), and each element in the molecule, we use a few rules and some
2. What is the oxidation number of nitrogen in N_2O_5?
And there you have it, nitrogen has a +5 oxidation number in dinitrogen pentoxide. Answer link. Related topic. Oxidation Numbers Questions
3. How is the oxidation number of N2O5 determined?
There are some things you have to have in mind up front: N2O5 is a neutral molecule and oxygen almost always has a -2 oxidation state in molecules (only
4. What is the oxidation state of nitrogen in N2O5?
It is +5, even by the structure. [structure image] Notice that it is a resonance structure of: [structure image]
5. N2O5 Oxidation Number
The oxidation number of N in N2O5 is +5. The oxidation number of O in N2O5 is -2. FAQ Lewis Diagram. Element, Oxidation Number (Avg), Atoms, Count
6. What is the oxidation state of nitrogen in N2O5 ?
Let the oxidation state of N be x. The oxidation state of O is −2 as it is in oxide form. ⇒ 2x + 5(−2) = 0 ⇒ x = 10/2 = +5.
7. Assign an oxidation number for N in N2O5(g).
Five oxygen atoms are present with a preferred oxidation state of -2 each. This means a total negative oxidation state of -10. The molecule has no net charge.
8. Find the oxidation number of N in N2O5
Find the oxidation number of N in N2O5 · Unless oxygen is combined with fluorine or isolated from other atoms the oxidation number of oxygen atoms is always
9. Assign oxidation numbers to all of the elements in the following
Assign oxidation numbers to all of the elements in the following compounds e) N2O5 f) GeCl2 g) HF h) Na2O2 i) SO42-. AI Recommended Answer: 1. Assign oxidation
10. What are the oxidation numbers of nitrogen in N2O5 and Ca(NO3)2?
And since it is a neutral molecule, the charges sum to zero. So the sum of all the oxidation numbers of all the elements is zero, so we write 2 + 2x − 12 = 0, which gives x = +5.
|
# Analyze computational results using Python, Pandas and LaTeX
## Introduction
Every researcher may use their own set of tools for analyzing the data of their experiments and computing statistics. Writing scripts to automatically compute and include tables in your article can save you a lot of time. A great tool used by scientists is the R project. The latter is free software and a programming language designed for statistical analysis and graphics. I strongly recommend having a look at this tutorial if you want to learn how to use it. In this post, we will investigate another great programming language: Python. An advantage of Python over R is that it is extremely popular and even more general-purpose. Hence, it offers a huge number of libraries for almost every use case. In particular, we will use a well-known data analysis and manipulation tool: Pandas. This library can perform a lot of stuff, such as loading/saving data in various formats (CSV, Excel...), filtering, and computing numerous statistics. If you are not familiar with Python, there exist plenty of tutorials online, even interactive ones.
## Preparing the data
Suppose we implemented three algorithms to solve a minimization problem. We run experiments on three instances. Let's name our algorithms alice, bob, and carol, and our instances inst1, inst2, and inst3. For the sake of simplicity, we assume that our algorithms are run only once on each instance. We want to compare the algorithms in a beautiful LaTeX article. In particular, we would like to obtain for each algorithm the average and the maximum relative gaps with respect to the best-known solution, as well as the average computational times. To simplify, we stored the raw results in a single CSV file (data.csv) where each row contains the characteristics of a single run: instance, algorithm, obj (objective value), and time (computational time).
instance,algorithm,obj,time
inst1,alice,7,16
inst1,bob,16,19
inst1,carol,6,14
inst2,alice,2,11
inst2,bob,3,18
inst2,carol,15,17
inst3,alice,9,15
inst3,bob,17,10
inst3,carol,19,13
Now, the data analysis can start. We create a python script named analyze.py and load the CSV file.
# We need to import Pandas to use it
import pandas as pd
# Load the contents of "data.csv" in a DataFrame object
df = pd.read_csv("data.csv")
# Display the DataFrame object
print(df)
The last instruction should display the following:
instance algorithm obj time
0 inst1 alice 7 16
1 inst1 bob 16 19
2 inst1 carol 6 14
3 inst2 alice 2 11
4 inst2 bob 3 18
5 inst2 carol 15 17
6 inst3 alice 9 15
7 inst3 bob 17 10
8 inst3 carol 19 13
The contents of data.csv are stored in an object df of type DataFrame. This object offers powerful features to transform existing data or create new data. I recall that we want to compute the minimum/average relative gaps and the average computational times.
## Computing new data
### Best-known solution
First of all, let's compute the best-known solution for each instance, required to compute the relative gaps. To do so, we group rows by instance and take the minimum obj value in each group, as follows:
# Obtain a Series object that contains the best-known solutions
bks = df.groupby("instance")["obj"].transform(min)
Don't panic, let's decompose this instruction. First, df.groupby("instance") creates a GroupBy object. From the latter, we tell Pandas that we need only the obj column, so we use the [...] operator. Finally, the .transform(...) method applies a function (min) on each group. It returns a Series filled with transformed values but the original shape is preserved. As a consequence, we can directly use the bks (best-known solution) variable as a new column for df, using the .assign(...) method.
# Add the "bks" columndf = df.assign(bks=bks)# Display the updated DataFrameprint(df)
We can observe that the column has been successfully added:
instance algorithm obj time bks
0 inst1 alice 7 16 6
1 inst1 bob 16 19 6
2 inst1 carol 6 14 6
3 inst2 alice 2 11 2
4 inst2 bob 3 18 2
5 inst2 carol 15 17 2
6 inst3 alice 9 15 9
7 inst3 bob 17 10 9
8 inst3 carol 19 13 9
### Relative gap
Legit question: why did we add the bks column? Well, it is not mandatory but doing so makes it easy to compute our relative gap given by the formula: $\frac{\text{obj} - \text{bks}}{\text{obj}}$. Since we are familiar with .assign(...), let's do it in one line:
# Add the "gap" columndf = df.assign(gap=(df["obj"] - df["bks"]) / (df["obj"]))# Display the updated DataFrameprint(df)
If obj can have a zero value, you need to adapt the formula (e.g. (df["obj"] - df["bks"]) / (df["obj"] + 1)). In our case, the result is:
instance algorithm obj time bks gap
0 inst1 alice 7 16 6 0.142857
1 inst1 bob 16 19 6 0.625000
2 inst1 carol 6 14 6 0.000000
3 inst2 alice 2 11 2 0.000000
4 inst2 bob 3 18 2 0.333333
5 inst2 carol 15 17 2 0.866667
6 inst3 alice 9 15 9 0.000000
7 inst3 bob 17 10 9 0.470588
8 inst3 carol 19 13 9 0.526316
## Summarizing the results
We are ready to compute the average/maximum relative gap and the average computational time for each algorithm. First, group the rows by algorithm in a temporary variable df_g.
# Group by algorithm
df_g = df.groupby("algorithm")
Remember that the df_g is an object of type GroupBy. The GroupBy class offers handy methods to compute the most common statistics, in particular, .min(), .max(), and .mean(). We create a new DataFrame summarizing the results:
# Compute the average and the maximum gap, plus the average time
df_summary = pd.DataFrame(
    {
        "avg_gap": df_g["gap"].mean().mul(100),
        "max_gap": df_g["gap"].max().mul(100),
        "avg_time": df_g["time"].mean(),
    }
)
# Display the summary
print(df_summary)
Let's decompose the previous code a little bit. We create a DataFrame from a dictionary where each element is a Series, thus df_summary will contain three columns. The df_g["gap"].mean() instruction tells that we want to operate on the gap column only, then compute the average of gap in each group. We call .mul(100) to multiply values by 100, since they are percentages. Similarly, it is easy to understand the meaning of the other columns. See the contents of df_summary:
avg_gap max_gap avg_time
algorithm
alice 4.761905 14.285714 14.000000
bob 47.630719 62.500000 15.666667
carol 46.432749 86.666667 14.666667
Looks pretty, doesn't it?
## Output table
I know two ways of importing the table into a LaTeX article. One way is to call the .to_latex() method. This converts the table into a valid LaTeX code. The other way consists in saving the table to a CSV file (.to_csv()) and letting LaTeX processing it. Both approaches have their pros and their cons, depending on your case. If you need to reuse the same data for several outputs (e.g. a table and a figure), I recommend the second approach. Otherwise, the first approach is fine for common situations.
### From Python
Let's start using the .to_latex() method.
# Display the DataFrame in LaTeX format
print(df_summary.to_latex())
# Export the DataFrame to a LaTeX file
df_summary.to_latex("summary.tex")
This outputs the following LaTeX code.
\begin{tabular}{lrrr}
\toprule
{} & avg\_gap & max\_gap & avg\_time \\
algorithm & & & \\
\midrule
alice & 4.761905 & 14.285714 & 14.000000 \\
bob & 47.630719 & 62.500000 & 15.666667 \\
carol & 46.432749 & 86.666667 & 14.666667 \\
\bottomrule
\end{tabular}
Well, we require some modifications:
• Our algorithms are so great, we want to name them with a capital letter.
• Remove the row starting with algorithm.
• Rename the columns as avg gap (%), max gap (%), and avg time (s).
• All values need two decimals only.
Since the table is pretty small, we could just edit it by hand. But what if we had one hundred algorithms to compare? We can exploit Pandas to automatically post-process our table. First, change the name of the rows and columns. To do so, we use the .rename(...) method.
# Rename rows (indexes) and columns
df_summary.rename(
    index={"alice": "Alice", "bob": "Bob", "carol": "Carol"},
    columns={
        "avg_gap": "avg gap (%)",
        "max_gap": "max gap (%)",
        "avg_time": "avg time (s)",
    },
    inplace=True,
)
Note that the inplace argument indicates the method should modify directly df_summary. Next, we format the values according to our needs. Through its formatters argument, .to_latex() allows us to apply a function on each column as follows.
# Export the DataFrame to LaTeX
df_summary.to_latex(
    "summary.tex",
    formatters={
        "avg gap (%)": "{:.2f}".format,
        "max gap (%)": "{:.2f}".format,
        "avg time (s)": "{:.2f}".format,
    },
)
Yeah! It provides the result I want. Yet, I think I can improve this Python script. In particular, I don't like to copy-paste the transformed column names, because it means I need to edit them in several places in case I change my mind. Thus, I prefer to define the formatting using the original CSV column names (avg_gap, ...). We can do this elegantly with the magic of dict comprehension:
index = {"alice": "Alice", "bob": "Bob", "carol": "Carol"}columns = { "avg_gap": "avg gap (%)", "max_gap": "max gap (%)", "avg_time": "avg time (s)",}formatters = { "avg_gap": "{:.2f}".format, "max_gap": "{:.2f}".format, "avg_time": "{:.2f}".format,}# Rename rows (indexes) and columnsdf_summary.rename(index=index, columns=columns, inplace=True)# Export the DataFrame to LaTeXdf_summary.to_latex( "summary.tex", formatters={columns[c]: f for c, f in formatters.items()}, index_names=False,)
Finally, we set index_names=False to remove the algorithm row. Here we go! The content of summary.tex should be:
\begin{tabular}{lrrr}
\toprule
{} & avg gap (\%) & max gap (\%) & avg time (s) \\
\midrule
Alice & 4.76 & 14.29 & 14.00 \\
Bob & 47.63 & 62.50 & 15.67 \\
Carol & 46.43 & 86.67 & 14.67 \\
\bottomrule
\end{tabular}
We can copy-paste this code into our LaTeX article. Personally, I prefer to save it as a separate file and use \input{summary}. Here is my LaTeX template:
\documentclass{article}
\usepackage{booktabs}
\begin{document}
\begin{table}
  \centering
  \caption{Comparison of algorithms}
  \input{summary}
\end{table}
\end{document}
### From LaTeX
The previous method exports the table to ready-to-use LaTeX using Python. Whereas editing the table from Python is often handier than editing it from LaTeX, I still like the second approach. LaTeX has the ability to import CSV files thanks to packages such as csvsimple and pgfplotstable. One of the great advantages of using a CSV file is that we can use the latter as a single source of truth. Why is this an advantage? For example, we can display the same data simultaneously as a table and as a chart. In the following, we assume that our df_summary table has been left unchanged (its columns are still named avg_gap, max_gap, and avg_time). Although we might be able to do it in LaTeX, we choose to rename the algorithm names in the Python script before exporting the table to summary.csv.
# Rename rows (indexes)
index = {"alice": "Alice", "bob": "Bob", "carol": "Carol"}
df_summary.rename(index=index, inplace=True)
# Export the DataFrame to CSV
df_summary.to_csv("summary.csv")
From now, we use pgfplotstable to load the CSV file. Our LaTeX article follows this template:
\documentclass{article}
\usepackage{booktabs}
\usepackage{pgfplotstable}
\pgfplotstableread[col sep=comma]{summary.csv}{\summarytable}
\begin{document}
\begin{table}
  \centering
  \caption{Comparison of algorithms}
  \pgfplotstabletypeset[<options...>]{\summarytable}
\end{table}
\end{document}
The \pgfplotstableread command imports the CSV file to the \summarytable variable. Note that we need to specify col sep=comma, otherwise it is assumed that values are separated by white spaces. The \pgfplotstabletypeset command outputs a table from the \summarytable variable. All we need to do is to define the options to satisfy our requirements. Since there are plenty of them, we will go through them step by step.
First, we can specify which columns of the CSV file we are using. Although this is optional (we are using all the columns), I recommend doing so in case we update our CSV file with more columns.
columns={algorithm, avg_gap, max_gap, avg_time},
Next, let's format the column algorithm:
columns/{algorithm}/.style={
column name={},
column type=l,
string type},
We decided that the column algorithm has no name and its content is aligned to the left (l). We specified that this column contains strings, not numbers (otherwise it will raise an error). Similarly, we format the column avg_gap:
columns/{avg_gap}/.style={
column name={avg gap (\%)},
column type=r,
precision=2,
fixed,
fixed zerofill},
This time, we want the column to be aligned to the right (r). The precision argument determines the number of decimals to show. Moreover, the numbers should be in fixed notation and filled with zeros. Except for the column name, the options for max_gap and avg_time are identical. To make our table look pretty, we add some \toprule, \midrule, and \bottomrule.
every head row/.style={before row=\toprule, after row=\midrule},
every last row/.style={after row=\bottomrule},
Please find here the complete code of the table:
\begin{table}
  \centering
  \caption{Comparison of algorithms}
  \pgfplotstabletypeset[
    columns={algorithm, avg_gap, max_gap, avg_time},
    columns/{algorithm}/.style={
      column name={},
      column type=l,
      string type},
    columns/{avg_gap}/.style={
      column name={avg gap (\%)},
      column type=r,
      precision=2,
      fixed,
      fixed zerofill},
    columns/{max_gap}/.style={
      column name={max gap (\%)},
      column type=r,
      precision=2,
      fixed,
      fixed zerofill},
    columns/{avg_time}/.style={
      column name={avg time (s)},
      column type=r,
      precision=2,
      fixed,
      fixed zerofill},
    every head row/.style={before row=\toprule, after row=\midrule},
    every last row/.style={after row=\bottomrule},
  ]{\summarytable}
\end{table}
Previously, I said that we can display a chart using the same source data. This can be done by including the pgfplots package.
\usepackage{pgfplots}
The following code creates a simple vertical bar chart embedded in a figure for the average gap.
\begin{figure}
  \centering
  \begin{tikzpicture}
    \begin{axis}[
        ybar,
        xlabel={Algorithm},
        xtick=data,
        xticklabels from table={\summarytable}{algorithm},
        ylabel={Average gap (\%)},
        ymin=0,
        bar width=40,
        enlarge x limits=0.25]
      \addplot table [x expr=\coordindex, y={avg_gap}]{\summarytable};
    \end{axis}
  \end{tikzpicture}
  \caption{Comparison of algorithms}
\end{figure}
Commands and arguments of pgfplots are detailed in the manual. And voilà! We obtain both a table and a chart from a single source of data.
## Conclusion
I described a method for computing statistics and creating LaTeX tables programmatically using Python and Pandas. We discussed two ways to include tables in a LaTeX article. Preparing such scripts can help save time and avoid mistakes, compared to writing values manually. Along with R, Pandas is a very mature and powerful library that is not limited to our use case. Check out the manual for more details.
|
# Femtosecond valley polarization and topological resonances in transition metal dichalcogenides
### Abstract
We theoretically introduce the fundamentally fastest induction of a significant population and valley polarization in a monolayer of a transition metal dichalcogenide (i.e., MoS$_2$ and WS$_2$). This may be extended to other two-dimensional materials with the same symmetry. This valley polarization can be written and read out by a pulse consisting of just a single optical oscillation with a duration of a few femtoseconds and an amplitude of $\sim$0.25 V/Å. Under these conditions, we predict an effect of topological resonance, which is due to the Bloch motion of electrons in the reciprocal space where electron population textures are formed due to non-Abelian Berry curvature. The predicted phenomena can be applied for information storage and processing in PHz-band optoelectronics.
In Physical Review B
|
# I Quantized vortices in superfluid
#### korea_mania
I am doing a final year project on vortex interactions and have searched through several research articles about quantum hydrodynamics. Most said that "Any rotational motion of a superfluid is sustained only by quantized vortices." Is this something provable from the Gross-Pitaevskii equation or the hydrodynamic equations? It seems that most texts assume this without explanation.
#### SpinFlop
Begin by writing the density of your superfluid condensate in terms of its normalized macroscopic wave function
$$n_{0}(\mathbf r) = \left|\psi_{0}(\mathbf r)\right|^{2}$$
In general the wave function is complex, so we can write
$$\psi_{0}(\mathbf r) = \sqrt{n_{0}(\mathbf r)}\exp^{i\theta(\mathbf r)}$$
This is all you need to start with in order to get your desired results. With this in hand first compute the current density using the standard prescription from quantum mechanics:
$$\mathbf j_{0}(\mathbf r) = \frac{\hbar}{2mi}\left[\psi_{0}^{*}(\mathbf r)\nabla\psi_{0}(\mathbf r) - \psi_{0}(\mathbf r)\nabla\psi_{0}^{*}(\mathbf r) \right]$$
You should get the result
$$\mathbf v_{s}(\mathbf r) = \frac{\hbar}{m}\nabla\theta(\mathbf r)$$
where $\mathbf v_{s}(\mathbf r)$ is the velocity of the superfluid and is given by $\mathbf j_{0}(\mathbf r) = n_{0}(\mathbf r)\mathbf v_{s}(\mathbf r)$.
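For completeness, the intermediate step goes as follows: with the polar form $\psi_{0} = \sqrt{n_{0}}\,e^{i\theta}$,
$$\nabla\psi_{0} = \left(\nabla\sqrt{n_{0}} + i\sqrt{n_{0}}\,\nabla\theta\right)e^{i\theta}, \qquad \psi_{0}^{*}\nabla\psi_{0} - \psi_{0}\nabla\psi_{0}^{*} = 2i\,n_{0}\nabla\theta,$$
so that
$$\mathbf j_{0}(\mathbf r) = \frac{\hbar}{m}\,n_{0}(\mathbf r)\nabla\theta(\mathbf r) = n_{0}(\mathbf r)\,\mathbf v_{s}(\mathbf r).$$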
An important takeaway point here is that superflow only takes place when the phase $\theta(\mathbf r)$ varies in space. As well, since the curl of a gradient is always zero we immediately see that the flow is also irrotational, ie:
$$\nabla\times\mathbf v_{s}(\mathbf r) = 0$$
Of course, if you had a closed tube then you can still get a finite circulation around it, which we can define as:
$$\kappa = \oint\mathbf v_{s}(\mathbf r)\cdot d\mathbf r = \frac{\hbar}{m}\delta\theta$$
Where the last result follows immediately from the fundamental theorem of vector calculus and $\delta\theta$ is just the change in phase angle going around the tube. However, for the macroscopic wave function to be uniquely defined we must have that $\delta\theta = 2\pi n$ where n is the number of times the phase winds through $2\pi$ around the closed path, the so-called topological winding number. So now we see that the transfer of angular momentum into a superfluid is quantized. Indeed, if you were to rotate the normal state fluid in the tube and then cool it into the superfluid phase, then you would see that the circulation of the superfluid would not increase continuously, but jump in steps of h/m, known as phase slip events.
Now suppose you have a cup (cylindrical container) instead of a closed tube and you rotate the normal state fluid and cool it down. It turns out that you can still get circulation in the superfluid. To see this, note that in cylindrical coordinates the circular flow is given by $v_{\phi}$ and in order for this to satisfy $\nabla\times\mathbf v_{s}(\mathbf r) = 0$ we must have that
$$\frac{1}{r}\frac{\partial}{\partial r}(rv_{\phi}) = 0$$
Solving this we get the vortex result
$$\mathbf v_{s}(r) = \frac{\kappa}{2\pi r}\hat\phi$$
where, from before, our flow quantization guarantees $\kappa = n(h/m)$. Since $n = \pm 1$ corresponds to the lowest energy state, these are what we observe in practice.
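As a rough order of magnitude (a numerical aside added here, not part of the original derivation): for superfluid $^4$He, with $m \approx 6.65\times 10^{-27}$ kg, the quantum of circulation is
$$\frac{h}{m} = \frac{6.63\times 10^{-34}\ \mathrm{J\,s}}{6.65\times 10^{-27}\ \mathrm{kg}} \approx 1.0\times 10^{-7}\ \mathrm{m^2/s}.$$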
"Quantized vortices in superfluid"
|
# How do scientists measure the spin-parity of a resonance?
I have seen many plots and data tables which display the cross-section vs. center of mass energy for a particular nuclear reaction at a given angle. Here is an example.
You can see that there are a few 'humps', which are the resonances. Now I can [somewhat] see how one could obtain the excitation energy and partial width for a resonance, but how does one measure the spin-parity (J$^{\pi}$) of a resonance? I see in most of those plots and tables that the researchers have also obtained the respective spin-parities for their respective resonances, but I have no idea how one could arrive at that.
• You can't and don't get it from only the data plotted here. No time to write a full answer now. Feb 2 '15 at 17:56
• Oh I see. That's good to hear. Please get back to this when you can (I'll be very happy). Feb 2 '15 at 20:40
• Ah I see. Do you know of any links where I can read more about methods of obtaining the $J^{\pi}$ of resonances? Mar 2 '15 at 14:50
|
My opinion is that talking about languages is usually a very unproductive use of time. If these discussions are just cat fights, then it is even worse. Unless you are a language designer, you are writing a language, or you are studying a language, you should just use a language and nobody should care.
But good articles on language pros and cons are always nice. They may be helpful to choose the right tool for the problem. For this reason I link a good article on Go. And that's the last time I'll talk about Go.
Go: the Good, the Bad and the Ugly by Sylvain Wallez
### Game Design: Taxonomy of Fishing Mini-games
Fishing is probably the most common mini-game in gaming history. Before I started working on this article, I never realized how many games include fishing as a mini-game. The list is huge. Fishing is everywhere. It seems that it is not possible to have a game without the possibility for the character to have a relaxing time fishing in a pond.
Everybody loves fishing! At least in games. We can imagine a deep reason for that. There must be something that attracts designers, gamers, and humans in general to the ancient art of fishing. However, for the time being, we are not interested in this question. Instead, we want to explore the huge design space of "fishing games".
In fact, the action of fishing has been dissected for decades by game designers. It is fascinating to see how many implementations exist for the same real-life action. So, it is time to see what they produced, what the possibilities are, and how we can do something new in this domain.
### The State of Game Development in Rust
Game Development is one of the fields in which Rust can gain a lot of traction. As a modern compiled language with performance comparable to C++, Rust can finally free us from the tyranny of C++'s bloated feature set, hard-to-link dependencies, and header/implementation file double-madness (I am obviously exaggerating, btw).
However, if this freedom arrives, it will arrive through a very slow process. To make it slower, memory safety is not a huge priority in videogames compared to the ability to prototype quickly. The borrow checker and the strict compiler are an obstacle in this regard. On the other hand, memory safety also means easier multi-threading. And this is sweet!
Fortunately, the annoyances of the borrow checker will get less in the way as people become more confident with the language, and as tooling gets better and better. I am confident we may see Rust carve out its space in this domain.
But this is the future. What about now?
### MovingAI pathfinding benchmark parser in Rust
You know I worked a lot with pathfinding. In academia, the MovingAI benchmark created by the MovingAI Lab of the University of Denver is a must for benchmarking pathfinding algorithms. It includes synthetic maps and maps from commercial videogames.
Parsing the benchmark data and the maps, creating the map data structure, and more, is one of the most boring things I needed to do for testing my algorithms. For this reason, I think a common library for working with the map specifications is a must.
For this reason, and because I enjoy coding in Rust a lot, I wrote a MovingAI map parser for Rust.
The repository is here. The library is also on crates.io. It is still unstable because I want to be sure that the public API is consistent with the requirements. I am also not very solid on the conventions of Rust APIs. So, I welcome some help here. :)
## Example
However, look how convenient it is for writing pathfinding algorithms! All the important stuff (neighbors, map, and so on) is just out of the box. This is an A* algorithm I wrote in literally 5 minutes.
// A* shortest path algorithm.
fn shortest_path(map: &MovingAiMap, start: Coords2D, goal: Coords2D) -> Option<f64> {
let mut heap = BinaryHeap::new();
let mut visited = Vec::<Coords2D>::new();
heap.push(SearchNode { f: 0.0, g:0.0, h: distance(start, goal), current: start });
while let Some(SearchNode { f: _f, g, h: _h, current }) = heap.pop() {
if current == goal { return Some(g); }
if visited.contains(&current) {
continue;
}
visited.push(current);
for neigh in map.neighbors(current) {
let new_h = distance(neigh, goal);
let i = distance(neigh, current);
let next = SearchNode { f: g+i+new_h, g: g+i, h: new_h, current: neigh };
heap.push(next);
}
}
// Goal not reachable
None
}
### Choosing between Behavior Tree and GOAP (Planning)
I would like to expand the answer I gave on /r/gamedesign some days ago. The main point of the question was: how can I decide whether it is "better" to implement the decision-making layer of our game AI with Behavior Trees (BTs) or with more advanced plan-based techniques such as Goal Oriented Action Planning (GOAP) or SHOP.
### GameDesign Math: RPG Level-based Progression
Level-based progression is an extremely popular way of providing the player with the feeling of "getting stronger". The system was born with Role-Playing Games (RPGs), but it is nowadays embedded in practically every game; some more, some less. Even if it is perfectly possible to provide a feeling of progression without levels and experience points, level-based progression is easy, direct, linear, and fits well in many (too many?) game genres.
But designing a good experience-level progression is important. Many games do it without much thinking. They just slap experience points and levels on and that's it. The general idea is that the higher your level, the more experience you need to advance. This is true, but it is just a small part of the design. In game design, you must keep in mind the effect of your gameplay element on the player, and it must be useful to convey the emotion you want to convey. Not the other way around.
I cannot give a full analysis of level-based progression system, but I can use simple math to explore the effects, limits and the design space of it.
## Why use a level-based progression system?
Judging from the number of games that have a level-based progression system in place, the real question is "why not?". However, whenever we see something extremely successful we should ask ourselves why it is so popular.
The reason is: to give the player a sense of progression. Players want to see that they are getting stronger. And there is no better way than seeing "numbers getting bigger": levels, damage, HPs. The player has spent hours playing to become powerful and these big numbers are here to prove it!
In many games, our skill cannot be directly measured. Sure, we can feel this warm sense of progression in Super Mario Bros when we go back to the first level and we can go blazing fast. But this is nothing compared to the feeling of going back to the starting place and dealing ONE MILLION DAMAGE to that level 1 monster that gave us so many troubles.
But there is also another reason designers like to introduce levels in their game. They are a handy way for the designers to control the flow of the game. And they offer the player a clear indicator of this too. Nothing stops a player from rushing through the game like slamming a monster several levels higher than the player in the player's way. This kind of artificial difficulty can be done extremely badly, and when it is, games can be ruined by it. But if well-tuned, it is really effective.
Another question is: if we like big numbers, why use levels in the first place? Why don't we just use experience points and offer a "smooth" progression? Because it's not satisfying! We want to see numbers get bigger, but we want to perceive the change. That's the reward after the "work".
It is just like eating a pizza. We can eat a pizza on the week-end after a week of strict diet, or we can have just a couple bites of pizza every day. At the end, we will probably eat the same amount of pizza, but I think one solution is definitely more satisfying than the other.
## Understanding the progression mechanics
Now that we know why we use level-based progression, it is time to play with some numbers. Note that this is not strictly necessary, but understanding the math behind gameplay elements is a pet peeve of mine, and I think it is helpful to understand better what to change, and how, to achieve a particular goal. Also, because if you don't do the math, your players will do it for you.
At the basis of level-based progression there are experience points. Mathematically speaking, level progression is a function mapping a certain amount of experience to a certain level.
$L = f(E)$
When designing the level progression, we are designing this function. How much experience (and so time) do players have to invest into the game to gain a level?
In practice, however, when designing a level-based progression, it is easier to find the inverse function: that is, a function that, given a level, tells us how much experience we need to reach it. This is usually called the experience curve.
We can already have some intuition. If our experience curve is linear, then every level needs the same extra amount of experience: 10 (total) for Level 2, 20 for Level 3, 30 for Level 4 and so on. If our experience curve is exponential, we need ever more experience, and therefore we level up more slowly in the end game. If our experience curve is logarithmic, instead, we need ever less experience and, therefore, we level up faster the more we play. They are all valid experience curves; everything depends on what kind of game you want.
Here, we will explore the most famous experience curve, the exponential one. The exponential curve is constructed starting from a single concept. Suppose you have a starting amount of experience at level 1: $a$. In order to reach level 2 you want the player to double, triple, etc., this initial experience. So
$E(2) = a + ba$
That is, the experience at level 2 should be the starting experience plus $b$ times that. For level 3, we do the same: we want to have $b$ times the increment at level 2.
$E(3) = (a+ba) + b(ba) = a + ba + b^2a$
In general, at level $L$:
$E(L) = a + ba + b^2a + \ldots + b^{L-1}a$
That is a geometric series, which can luckily be expressed in closed form.
$E(L) = a \frac{1-b^L}{1-b}$
See? A nice exponential experience curve. But this time you know why it is like this. You know the meaning of the parameters and how to tweak them in order to obtain what you want.
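If you prefer to play with the numbers directly, here is a tiny Python sketch of the curve above (the parameter values are just placeholders; tune them to your game). It tabulates the total experience per level and the per-level increments:

# Experience curve E(L) = a * (1 - b**L) / (1 - b) and its per-level increments
def experience(level, a=10.0, b=1.5):
    return a * (1 - b ** level) / (1 - b)

def experience_to_next(level, a=10.0, b=1.5):
    return experience(level + 1, a, b) - experience(level, a, b)

for level in range(1, 11):
    print(level, round(experience(level)), round(experience_to_next(level)))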
Given an experience curve, one of the important properties we can infer is the level progression over time. How fast will a player travel along the level progression from the bottom to the top? How much time does the player need to put into the game to level up from 10 to 11? And from 79 to 80? How can I tune the experience of a certain area?
These are all interesting questions. We can find an answer by looking at the experience curve. The first step is to invert the function to obtain the level progression function, that is, how the level advances given the experience.
$L(E) =\frac{1}{log(b)} \log(\frac{a + (b - 1) E}{a})$
Who. That’s ugly. But there is just some parameter noise. For the sake of the discussion, we assume $b = e$ (that’s not unreasonable) and $a=1$.
$L(E) = \log(1+ (e - 1) E)$
Much better. Now, we need to consider the experience as a function of time. Obviously, we cannot know the real function, but we can have a general idea of "how much experience we expect the player to collect at each time". Do we expect the player to always get the same experience over time? Do we expect them to always get more experience? This is very common, and implemented with high-level monsters or quests giving more experience points.
Then, we can derive some kind of leveling speed function.
$\frac{\partial L}{\partial t} = \frac{(e-1)}{(1 + (e-1) E)} \frac{\partial E}{\partial t}$
And here I stop for now. I like this stuff, but more than this is probably unnecessary. The important thing is that you try to model your progression as a function of time by inverting the experience curve and plugging in some "experience function". This will help you get a rough estimate of the time and effort needed for leveling up in your game.
## Real Case Examples
What do experience curves look like in real cases? Pretty much similar. I'll give you the example of RuneScape.
$E(L) = floor \left( \frac{\sum_{n=1}^{L-1} floor \left( n + 300 \cdot 2^{\frac{n}{7}} \right)}{4} \right)$
That’s definitely a more complex formula. Why is done like this? No idea. However, we can identify that it is an exponential function, in the same spirit of the one discussed above.
World of Warcraft's legacy formula, instead, is not analytic. Instead, we have a formula for the experience required to level up at a certain level.
$\Delta E(L) = ((8 \times L) + Diff(L)) \times MXP(L) \times RF(L)$
Where $Diff$ is a difficulty factor, $MXP$ is the basic experience given by a monster of level $L$, and $RF$ is a generic scaling factor. This formula starts as a quadratic experience curve and then explodes into an exponential one (thanks to the Diff factor), giving us this strange shape (note that this is the derivative of the experience curve).
In Diablo 3, the formula is instead a nightmare (there is a typo in the formula, but I do not want to rewrite it in LaTeX on WordPress…).
Where $y$ is $\Delta E(L)$ and $x$ is $L$. Why are these constants chosen in this way? I don't know. Probably fine tuning.
## Conclusion
In the end, I hope you have fun with experience curves. There are thousands of different ways to do them. Just remember that it is not always just an exponential curve. Time and experience are linked together, and modeling the experience curve can give you a lot of insight on how to avoid "grindy" parts in your game and keep the player in the flow.
### Small rant about “blockchain” overuse
A lot of startups are using “blockchain” as a hyped replacement for “distributed database”. Well, a blockchain is the most inefficient and slow “distributed database” ever created. A blockchain’s strength is not in being a database! Stop doing that!
The blockchain’s power is in avoiding divergent transactions and guaranteeing a non-falsifiable, immutable history. That is, no node in the system can alter the past of the chain. That’s it. These are big but specific problems.
Nevertheless, there are industry people saying that “blockchain immutability” is the biggest problem of a blockchain.
And so we have tons of blockchains/alt-coins, but ones that are “centralized” or “owned” by someone. But if I need to trust “someone”, why use a blockchain system in the first place? Just use a damn distributed database!
### How to add a logo in Rust documentation
One of the features I like the most in Rust is automatic documentation. Documentation is a pillar of language ergonomics, and I love that Rust puts so much effort into making documentation, and documenting code, a much more pleasant experience.
Rust’s autogenerated documentation (with cargo doc) looks good, every crate on crates.io gets its documentation published on docs.rs, and, most important, every code example in the code is compiled and run as a “unit test”, making sure that all the examples are up to date!
I love that. But what if we want to customize the documentation a bit? For instance, by adding a crate logo or a custom favicon? Googling this does not provide a lot of information. That’s why I am writing this small how-to.
### Questions about Deep Learning and the nature of knowledge
If there is something that can be taken as a fact in the AI and Machine Learning domain, it is that recent years have been dominated by Deep Learning and other Neural Network based techniques. When I say dominated, I mean that it looks like the only way to achieve something in Machine Learning, and it is absorbing the greater part of AI enthusiasts’ energy and attention.
This is undoubtedly a good thing. Having a strong AI technique that can solve so many hard challenges is a huge step forward for humanity. However, like everything in life, Deep Learning, despite being highly successful in some applications, carries with it several limitations that, in other applications, make its use unfeasible or even dangerous.
### The Most Promising Programming Languages for 2018
This is the time of the year in which I propose 5 emerging/new languages that you should keep an eye on in the next year. I’ve done it last year, and the year before, and the year before that.
This year, however, I am not in the mood for doing it. There are several reasons why. The first one is that this year there has not been a lot of movement among new programming languages. I am sure there are many, but none got enough attention to make it into a list. Therefore, I am concerned that I would just be repeating myself, talking about the same stuff.
|
# Assignment 2: Integrating Culture and Diversity in Decision Making: The CEO and Organizational Culture Profile: Google
Human Resources Management (HRM)
Assignment 2: Integrating Culture and Diversity in Decision Making: The CEO and Organizational Culture Profile
Due Week 4 and worth 100 points
Choose one (1) of the following organizations to research: Google, Zappos, Southwest, Hewlett Packard, Xerox, W.L. Gore, DuPont, or Procter & Gamble. Use a variety of resources (company Website, newspaper, company blogs, etc.) to research the culture of the selected organization. Note: Use Question 6 as your conclusion. An abstract is not necessary for this assignment.
Write a three to four (3-4) page paper in which you:
Provide a brief (one [1] paragraph) description of the organization you chose to research.
Examine the culture of the selected organization.
Explain how you determined that the selected organization showed the signs of the culture that you have identified.
Determine the factors that caused the organization to embody this particular culture.
Determine what type of leader would be best suited for this organization. Support your position.
Imagine that there is a decline in the demand of product(s) or services supplied by the selected organization. Determine what the change in culture would need to be in response to this situation.
Use at least three (3) quality academic resources in this assignment. Note: Wikipedia and other Websites do not qualify as academic resources.
Be typed, double spaced, using Times New Roman font (size 12), with one-inch margins on all sides; citations and references must follow APA or school-specific format. Check with your professor for any additional instructions.
Include a cover page containing the title of the assignment, the student’s name, the professor’s name, the course title, and the date. The cover page and the reference page are not included in the required assignment page length.
The specific course learning outcomes associated with this assignment are:
Explore how individual differences, personality traits, and perspectives impact the productivity of an organization.
Review learning theories and their relationship to organizational performance.
Use technology and information resources to research issues in organizational behavior.
Write clearly and concisely about organizational behavior using proper writing mechanics.
|
## 26.2 Map-rect
Map-reduce allows large calculations (e.g., log likelihoods) to be broken into components which may be calculated modularly (e.g., data blocks) and combined (e.g., by summation and incrementing the target log density).
A map function is a higher-order function that applies an argument function to every member of some collection, returning a collection of the results. For example, mapping the square function, $$f(x) = x^2$$, over the vector $$[3, 5, 10]$$ produces the vector $$[9, 25, 100]$$. In other words, map applies the square function elementwise.
The output of mapping a sequence is often fed into a reduction. A reduction function takes an arbitrarily long sequence of inputs and returns a single output. Examples of reduction functions are summation (with the return being a single value) or sorting (with the return being a sorted sequence). The combination of mapping and reducing is so common it has its own name, map-reduce.
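To make the idea concrete outside of Stan, here is a toy sketch in Python (illustrative only; the per-block function is a stand-in for something like a per-block log likelihood):

```python
# Toy map-reduce: map a function over data blocks, then reduce by summation.
data_blocks = [[3, 5, 10], [1, 2], [4]]

def block_value(block):
    # Stand-in for a per-block computation such as a log likelihood.
    return sum(x ** 2 for x in block)

mapped = map(block_value, data_blocks)  # one result per block
total = sum(mapped)                     # reduction by summation
print(total)  # 134 + 5 + 16 = 155
```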
### 26.2.1 Map function
In order to generalize the form of functions and results that are possible and accommodate both parameters (which need derivatives) and data values (which don’t), Stan’s map function operates on more than just a sequence of inputs.
#### Map function signature
Stan’s map function has the following signature
vector map_rect((vector, vector, array[] real, array[] int):vector f,
vector phi, vector[] thetas,
data array[,] real x_rs, data array[,] int x_is);
The arrays thetas of parameters, x_rs of real data, and x_is of integer data have the suffix “s” to indicate they are arrays. These arrays must all be the same size, as they will be mapped in parallel by the function f. The value of phi is reused in each mapped operation.
The _rect suffix in the name arises because the data structures it takes as arguments are rectangular. In order to deal with ragged inputs, they must be padded out to rectangular form.
The last two arguments are two dimensional arrays of real and integer data values. These argument types are marked with the data qualifier to indicate that they must only contain variables originating in the data or transformed data blocks. This will allow such data to be pinned to a processor on which it is being processed to reduce communication overhead.
The notation (vector, vector, array[] real, array[] int):vector indicates that the function argument f must have the following signature.
vector f(vector phi, vector theta,
data array[] real x_r, data array[] int x_i);
Although f will often return a vector of size one, the built-in flexibility allows general multivariate functions to be mapped, even raggedly.
#### Map function semantics
Stan’s map function applies the function f to the shared parameters along with one element each of the job parameters, real data, and integer data arrays. Each of the arguments theta, x_r, and x_i must be arrays of the same size. If the arrays are all size N, the result is defined as follows.
map_rect(f, phi, thetas, xs, ns)
= f(phi, thetas[1], xs[1], ns[1]) . f(phi, thetas[2], xs[2], ns[2])
. ... . f(phi, thetas[N], xs[N], ns[N])
The dot operators in the notation above are meant to indicate concatenation (implemented as append_row in Stan). The output of each application of f is a vector, and the sequence of N vectors is concatenated together to return a single vector.
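The same semantics can be sketched in Python (illustrative only, not Stan; here np.concatenate plays the role of append_row):

```python
# Sketch of map_rect semantics: apply f to the shared parameters plus one
# element of each parallel array, then concatenate the resulting vectors.
import numpy as np

def map_rect(f, phi, thetas, xs, ns):
    pieces = [f(phi, theta, x, n) for theta, x, n in zip(thetas, xs, ns)]
    return np.concatenate(pieces)  # concatenation of the per-shard vectors

# Example: each call returns a length-1 vector, so the output has one entry per shard.
f = lambda phi, theta, x, n: np.array([phi[0] * sum(x) + sum(n)])
print(map_rect(f, np.array([2.0]), [[], [], []], [[1.0], [2.0], [3.0]], [[1], [2], [3]]))
# [3. 6. 9.]
```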
### 26.2.2 Example: logistic regression
An example should help to clarify both the syntax and semantics of the mapping operation and how it may be combined with reductions built into Stan to provide a map-reduce implementation.
#### Unmapped logistic regression
Consider the following simple logistic regression model, which is coded unconventionally to accommodate direct translation to a mapped implementation.
data {
array[12] int y;
array[12] real x;
}
parameters {
vector[2] beta;
}
model {
beta ~ std_normal();
y ~ bernoulli_logit(beta[1] + beta[2] * to_vector(x));
}
The program is unusual in that it (a) hardcodes the data size, which is not required by the map function but is just used here for simplicity, (b) represents the predictors as a real array even though it needs to be used as a vector, and (c) represents the regression coefficients (intercept and slope) as a vector even though they’re used individually. The bernoulli_logit distribution is used because the argument is on the logit scale—it implicitly applies the inverse logit function to map the argument to a probability.
#### Mapped logistic regression
The unmapped logistic regression model described in the previous subsection may be implemented using Stan’s rectangular mapping functionality as follows.
functions {
vector lr(vector beta, vector theta, array[] real x, array[] int y) {
real lp = bernoulli_logit_lpmf(y | beta[1]
+ to_vector(x) * beta[2]);
return [lp]';
}
}
data {
array[12] int y;
array[12] real x;
}
transformed data {
// K = 3 shards
array[3, 4] int ys = { y[1:4], y[5:8], y[9:12] };
array[3, 4] real xs = { x[1:4], x[5:8], x[9:12] };
array[3] vector[0] theta;
}
parameters {
vector[2] beta;
}
model {
beta ~ std_normal();
target += sum(map_rect(lr, beta, theta, xs, ys));
}
The first piece of the code is the actual function to compute the logistic regression. The argument beta will contain the regression coefficients (intercept and slope), as before. The second argument theta of job-specific parameters is not used, but nevertheless must be present. The modeled data y is passed as an array of integers and the predictors x as an array of real values. The function body then computes the log probability mass of y and assigns it to the local variable lp. This variable is then used in [lp]' to construct a row vector and then transpose it to a vector to return.
The data are taken in as before. There is an additional transformed data block that breaks the data up into three shards.43
The value 3 is also hard coded; a more practical program would allow the number of shards to be controlled. There are three parallel arrays defined here, each of size three, corresponding to the number of shards. The array ys contains the modeled data variables; each element of the array ys is an array of size four. The second array xs is for the predictors, and each element of it is also of size four. These contained arrays are the same size because the predictors x stand in a one-to-one relationship with the modeled data y. The final array theta is also of size three; its elements are empty vectors, because there are no shard-specific parameters.
The parameters and the prior are as before. The likelihood is now coded using map-reduce. The function lr to compute the log probability mass is mapped over the data xs and ys, which contain the original predictors and outcomes broken into shards. The parameters beta are in the first argument because they are shared across shards. There are no shard-specific parameters, so the array of job-specific parameters theta contains only empty vectors.
### 26.2.3 Example: hierarchical logistic regression
Consider a hierarchical model of American presidential voting behavior based on state of residence.44
Each of the fifty states $$k \in \{1,\dotsc,50\}$$ will have its own slope $$\beta_k$$ and intercept $$\alpha_k$$ to model the log odds of voting for the Republican candidate as a function of income. Suppose there are $$N$$ voters and with voter $$n \in 1{:}N$$ being in state $$s[n]$$ with income $$x_n$$. The likelihood for the vote $$y_n \in \{ 0, 1 \}$$ is $y_n \sim \textsf{Bernoulli} \Big( \operatorname{logit}^{-1}\left( \alpha_{s[n]} + \beta_{s[n]} \, x_n \right) \Big).$
The slopes and intercepts get hierarchical priors, \begin{align*} \alpha_k &\sim \textsf{normal}(\mu_{\alpha}, \sigma_{\alpha}) \\ \beta_k &\sim \textsf{normal}(\mu_{\beta}, \sigma_{\beta}) \end{align*}
#### Unmapped implementation
This model can be coded up in Stan directly as follows.
data {
int<lower=0> K;
int<lower=0> N;
array[N] int<lower=1, upper=K> kk;
vector[N] x;
array[N] int<lower=0, upper=1> y;
}
parameters {
matrix[K, 2] beta;
vector[2] mu;
vector<lower=0>[2] sigma;
}
model {
mu ~ normal(0, 2);
sigma ~ normal(0, 2);
for (i in 1:2) {
beta[ , i] ~ normal(mu[i], sigma[i]);
}
y ~ bernoulli_logit(beta[kk, 1] + beta[kk, 2] .* x);
}
For this model the vector of predictors x is coded as a vector, corresponding to how it is used in the likelihood. The priors for mu and sigma are vectorized. The priors on the two components of beta (intercept and slope, respectively) are stored in a $$K \times 2$$ matrix.
The likelihood is also vectorized using multi-indexing with index kk for the states and elementwise multiplication (.*) for the income x. The vectorized likelihood works out to the same thing as the following less efficient looped form.
for (n in 1:N) {
y[n] ~ bernoulli_logit(beta[kk[n], 1] + beta[kk[n], 2] * x[n]);
}
#### Mapped implementation
The mapped version of the model will map over the states K. This means the group-level parameters, real data, and integer data must be arrays of the same size.
The mapped implementation requires a function to be mapped. The following function evaluates both the likelihood for the data observed for a group as well as the prior for the group-specific parameters (the name bl_glm derives from the fact that it’s a generalized linear model with a Bernoulli likelihood and logistic link function).
functions {
vector bl_glm(vector mu_sigma, vector beta,
array[] real x, array[] int y) {
vector[2] mu = mu_sigma[1:2];
vector[2] sigma = mu_sigma[3:4];
real lp = normal_lpdf(beta | mu, sigma);
real ll = bernoulli_logit_lpmf(y | beta[1] + beta[2] * to_vector(x));
return [lp + ll]';
}
}
The shared parameter mu_sigma contains the locations (mu_sigma[1:2]) and scales (mu_sigma[3:4]) of the priors, which are extracted in the first two lines of the program. The variable lp is assigned the log density of the prior on beta. The vector beta is of size two, as are the vectors mu and sigma, so everything lines up for the vectorization. Next, the variable ll is assigned to the log likelihood contribution for the group. Here beta[1] is the intercept of the regression and beta[2] the slope. The predictor array x needs to be converted to a vector to allow the multiplication.
The data block is identical to that of the previous program, but repeated here for convenience. A transformed data block computes the data structures needed for the mapping by organizing the data into arrays indexed by group.
data {
int<lower=0> K;
int<lower=0> N;
array[N] int<lower=1, upper=K> kk;
vector[N] x;
array[N] int<lower=0, upper=1> y;
}
transformed data {
int<lower=0> J = N / K;
array[K, J] real x_r;
array[K, J] int<lower=0, upper=1> x_i;
{
int pos = 1;
for (k in 1:K) {
int end = pos + J - 1;
x_r[k] = to_array_1d(x[pos:end]);
x_i[k] = to_array_1d(y[pos:end]);
pos += J;
}
}
}
The integer J is set to the number of observations per group.45
The real data array x_r holds the predictors and the integer data array x_i holds the outcomes. The grouped data arrays are constructed by slicing the predictor vector x (and converting it to an array) and slicing the outcome array y.
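For intuition, the same grouping can be sketched outside Stan in Python (illustrative only; it assumes K divides N, mirroring the footnote's assumption):

```python
# Sketch of the transformed data block: split N observations into K shards of size J.
import numpy as np

K, N = 5, 20
x = np.random.normal(size=N)            # predictors
y = np.random.binomial(1, 0.5, size=N)  # binary outcomes

J = N // K
x_r = x.reshape(K, J)  # predictors grouped by shard (consecutive slices)
x_i = y.reshape(K, J)  # outcomes grouped by shard
print(x_r.shape, x_i.shape)  # (5, 4) (5, 4)
```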
Given the transformed data with groupings, the parameters are the same as the previous program. The model has the same priors for the hyperparameters mu and sigma, but moves the prior for beta and the likelihood to the mapped function.
parameters {
array[K] vector[2] beta;
vector[2] mu;
vector<lower=0>[2] sigma;
}
model {
mu ~ normal(0, 2);
sigma ~ normal(0, 2);
target += sum(map_rect(bl_glm, append_row(mu, sigma), beta, x_r, x_i));
}
The model as written here computes the priors for each group’s parameters along with the likelihood contribution for the group. An alternative mapping would leave the prior in the model block and only map the likelihood. In a serial setting this shouldn’t make much of a difference, but with parallelization, there is reduced communication (the prior’s parameters need not be transmitted) and also reduced parallelization with the version that leaves the prior in the model block.
### 26.2.4 Ragged inputs and outputs
The previous examples included rectangular data structures and single outputs. Despite the name, this is not technically required by map_rect.
#### Ragged inputs
If each group has a different number of observations, then the rectangular data structures for predictors and outcomes will need to be padded out to be rectangular. In addition, the size of the ragged structure will need to be passed as integer data. This holds for shards with varying numbers of parameters as well as varying numbers of data points.
#### Ragged outputs
The output of each mapped function is concatenated in order of inputs to produce the output of map_rect. When every shard returns a singleton (size one) array, the result is the same size as the number of shards and is easy to deal with downstream. If functions return longer arrays, they can still be structured using the to_matrix function if they are rectangular.
If the outputs are of varying sizes, then there will have to be some way to convert it back to a usable form based on the input, because there is no way to directly return sizes or a ragged structure.
### References
Gelman, Andrew, and Jennifer Hill. 2007. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge, United Kingdom: Cambridge University Press.
1. The term “shard” is borrowed from databases, where it refers to a slice of the rows of a database. That is exactly what it is here if we think of rows of a dataframe. Stan’s shards are more general in that they need not correspond to rows of a dataframe.↩︎
2. This example is a simplified form of the model described in ↩︎
3. This makes the strong assumption that each group has the same number of observations!↩︎
|
# What is the distance between (-2,1) and (3,7) ?
Dec 7, 2015
The distance between $\left(- 2 , 1\right)$ and $\left(3 , 7\right)$ is $\sqrt{61}$ units.
#### Explanation:
We can use the distance formula to find the distance between any two given points, where $d =$the distance between the points $\left({x}_{1} , {y}_{1}\right)$ and $\left({x}_{2} , {y}_{2}\right)$:
$d = \sqrt{{\left({x}_{2} - {x}_{1}\right)}^{2} + {\left({y}_{2} - {y}_{1}\right)}^{2}}$
If we plug in our points, our equation will be:
$d = \sqrt{{\left(3 - \left(- 2\right)\right)}^{2} + {\left(7 - 1\right)}^{2}}$
This can be simplified to $d = \sqrt{(5)^2 + (6)^2}$
And then: $d = \sqrt{25 + 36}$, which is $d = \sqrt{61}$.
You can't simplify this further, so your final answer is $\sqrt{61}$ units.
Usually, the square root of a quantity would be $+$ or $-$ , but in this case, the quantity is only positive because it represents distance, which can never be negative.
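As a quick check of the arithmetic above (a sketch using Python's standard library; math.dist requires Python 3.8+):

```python
# Both expressions below evaluate to approximately 7.8102.
import math
print(math.dist((-2, 1), (3, 7)), math.sqrt(61))
```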
|
# Limit of an exponential sequence
Show that $$\lim_{n\to\infty}\frac{\exp^\left(\frac{-x}{n}\right)-1}{\frac{-x}{n}}=-1$$
So what I thought about is filling in the definition of the exponential function; what I get there is $$\lim_{n\to\infty} \frac{\sum_{k=0}^\infty\frac{\left(\frac{-x}{n}\right)^k}{k!}-1}{\frac{-x}{n}}=-1$$ I then eliminate the $-1$ by adding it to the sum and, after that, do some index switching, so that I finally get $$\lim_{n\to\infty}(-1)\cdot\sum_{k=0}^\infty\frac{\left(\frac{-x}{n}\right)^k}{(k+1)!}=-1$$ If I then try to evaluate the limit of the series, I get 0 by the quotient criterion, which is apparently wrong. Any ideas? Kind regards
• The limit should be $1$, instead of $-1$. – Jaideep Khare Feb 14 '18 at 10:29
The limit should be $1$.
If you want evaluate this limit by using the definition of the exponential function as a power series then, for $x\not=0$, $$\frac{\exp^\left(\frac{-x}{n}\right)-1}{\frac{-x}{n}}-1= \frac{\sum_{k=0}^\infty\frac{\left(\frac{-x}{n}\right)^k}{k!}-1}{\frac{-x}{n}}-1=-\frac{x}{n}\sum_{k=2}^\infty\frac{\left(\frac{-x}{n}\right)^{k-2}}{k!}.$$ Now note that $$\left|\sum_{k=2}^\infty\frac{\left(\frac{-x}{n}\right)^{k-2}}{k!}\right| \leq \sum_{k=2}^\infty\frac{\left(\frac{|x|}{n}\right)^{k-2}}{(k-2)!}=e^{|x|/n}.$$ Hence, as $n$ goes to infinity, $$\left|\frac{\exp^\left(\frac{-x}{n}\right)-1}{\frac{-x}{n}}-1\right|\leq \frac{|x|e^{|x|/n}}{n}\to 0$$ and we may conclude that $$\lim_{n\to\infty}\frac{\exp^\left(\frac{-x}{n}\right)-1}{\frac{-x}{n}}=1.$$
An alternative, simpler way: note that $$\exp\left(-\frac{x}{n}\right)=1-\frac{x}{n}+\mathcal{O}\left(\frac{x^2}{n^2}\right)$$ truncating the series at the second term. Thus $$\lim_{n\to\infty}\frac{\exp^\left(\frac{-x}{n}\right)-1}{\frac{-x}{n}}=\lim_{n\to\infty}\frac{1-\frac{x}{n}-1}{\frac{-x}{n}}=1$$
One way to find that limit is to use the fact that $$\lim_{x\to 0}\frac{\mathrm{e}^x-1}{x}=1$$ since $\frac{-x}n\to0$ as $n\to\infty$ by setting $t=\frac{-x}n$ your limit becomes $$\lim_{n\to\infty}\frac{\mathrm{e}^{-x/n}-1}{-x/n}=\lim_{t\to0}\frac{\mathrm{e}^t-1}{t}$$
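A quick numerical sanity check of the limit (a sketch; the value x = 2.5 and the sequence of n are arbitrary):

```python
# The ratio (e^t - 1)/t with t = -x/n approaches 1 as n grows.
import math

x = 2.5
for n in (10, 100, 1000, 10000):
    t = -x / n
    print(n, (math.exp(t) - 1) / t)
```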
|
# pyblp.CustomMoment¶
class pyblp.CustomMoment(value, observations, compute_custom, compute_custom_derivatives=None, market_ids=None, market_weights=None, name='Custom')
Configuration for custom micro moments.
This configuration requires a value $$\mathscr{V}_m$$ computed, for example, from survey data. It also requires a function that computes the simulated counterpart of this value,
(1)$v_{mt} = \sum_{i \in I_t} w_{it} v_{imt},$
a simulated integral over agent-specific micro values $$v_{imt}$$ computed according to a custom function. These are averaged across a set of markets $$T_m$$ and compared with $$\mathscr{V}_m$$, which gives $$\bar{g}_{M,m}$$ in (34).
Parameters
• value (float) – Value $$\mathscr{V}_m$$ of the statistic estimated from micro data.
• observations (int) – Number of micro data observations $$N_m$$ used to estimate $$\mathscr{V}_m$$, which is used to properly scale micro moment covariances in (35).
• compute_custom (callable) –
Function that computes $$v_{imt}$$ in a single market $$t$$, which is of the following form:
compute_custom(t, sigma, pi, rho, products, agents, delta, mu, probabilities) -> custom
where
• t is the ID of the market in which the $$v_{imt}$$ should be computed;
• sigma is the Cholesky root of the covariance matrix for unobserved taste heterogeneity, $$\Sigma$$, which will be empty if there are no such parameters;
• pi are parameters that measure how agent tastes vary with demographics, $$\Pi$$, which will be empty if there are no such parameters;
• rho is a $$J_t \times 1$$ vector with parameters that measure within nesting group correlations for each product, $$\rho_{h(j)}$$, which will be empty if there is no nesting structure;
• products is a Products instance containing product data for the current market;
• agents is an Agents instance containing agent data for the current market;
• delta is a $$J_t \times 1$$ vector of mean utilities $$\delta_{jt}$$;
• mu is a $$J_t \times I_t$$ matrix of agent-specific utilities $$\mu_{ijt}$$;
• probabilities is a $$J_t \times I_t$$ matrix of choice probabilities $$s_{ijt}$$; and
• custom is an $$I_t \times 1$$ vector of agent-specific micro values $$v_{imt}$$.
• compute_custom_derivatives (callable, optional) –
Function that computes $$\frac{\partial v_{imt}}{\partial \theta_p}$$ in a single market $$t$$, which is of the following form:
compute_custom_derivatives(t, sigma, pi, rho, products, agents, delta, mu, probabilities, p, derivatives) -> custom_derivatives
where the first few arguments are the same as above,
• p is the index $$p \in \{0, \dots, P - 1\}$$ of $$\theta_p$$ (for the ordering of the $$P$$ parameters in $$\theta$$, see ProblemResults.theta),
• derivatives is a $$J_t \times I_t$$ matrix of derivatives $$\frac{\partial s_{ijt}}{\partial \theta_p}$$, and
• custom_derivatives is an $$I_t \times 1$$ vector of agent-specific micro value derivatives $$\frac{\partial v_{imt}}{\partial \theta_p}$$.
If this function is left unspecified, you must set finite_differences=True in Problem.solve() when using custom moments. This may slow down optimization and slightly reduce the numerical accuracy of standard errors.
If you specify this function, to check that you have implemented derivatives correctly, you can pass optimization=Optimization('return') to Problem.solve() when evaluating the gradient with finite_differences=True and finite_differences=False. If the numerical gradient is close to the analytic one, this suggests that you have implemented derivatives correctly.
• market_ids (array-like, optional) – Distinct market IDs over which the micro moments will be averaged to get $$\bar{g}_{M,m}$$. These are also the only markets in which the moments will be computed. By default, the moments are computed for and averaged across all markets.
• market_weights (array-like, optional) – Weights for averaging micro moments over specified market_ids. By default, these are $$1 / T_m$$.
• name (str, optional) – Name of the custom moment, which will be used when displaying information about micro moments. By default, this is "Custom".
Examples
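A minimal sketch of constructing a custom moment with the signature documented above; the statistic (each agent's probability of choosing any inside good), the survey value, and the observation count are hypothetical placeholders:

```python
# Sketch of a custom micro moment using only the documented arguments.
import pyblp

def compute_expected_inside_share(t, sigma, pi, rho, products, agents, delta, mu, probabilities):
    # probabilities is J_t x I_t; summing over products gives each agent's
    # probability of choosing any inside good, i.e. the I_t values v_imt.
    return probabilities.sum(axis=0)

inside_share_moment = pyblp.CustomMoment(
    value=0.37,          # hypothetical value estimated from micro data
    observations=1000,   # hypothetical number of micro observations
    compute_custom=compute_expected_inside_share,
    name="Expected inside share",
)
```

Because compute_custom_derivatives is omitted in this sketch, finite_differences=True would be needed in Problem.solve(), as noted above.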
Methods
|
# Electronic Excitations in Carbon Nanostructures: Building-Block Approach
Title: Electronic Excitations in Carbon Nanostructures: Building-Block Approach
Publication Type: external talk
Author: Hambach, R
Date: 10/15
Year of Publication: 2010
Place Published: ETSF Workshop, Berlin
Abstract: The description of nanostructures using a plane-wave basis set usually requires large supercells in order to avoid spurious Coulomb interactions between the replicas. In particular, the calculations of electron energy-loss spectra for low-dimensional systems like graphene or carbon nanotubes become numerically very demanding or even unfeasible. We overcome this problem by means of a building-block approach: combining effective-medium theory and ab-initio calculations, we can describe the collective excitations in nanostructures (like carbon nanotubes) starting from the microscopic polarisability of their building blocks (bulk graphite). To this end, Maxwell's equations are solved using the full frequency- and momentum-dependent microscopic dielectric function $\epsilon(q,q',\omega)$ of the bulk material. The latter is calculated from first principles within the random phase approximation [1]. Besides an important gain in calculation time, this method allows us to analyse the loss spectra of nanostructures in terms of their normal-mode excitations. We apply the building-block approach to study angular-resolved loss spectra for graphene and single-wall carbon nanotubes and find very good agreement with full ab-initio calculations of these systems and corresponding experiments. Our findings can also be used for an efficient theoretical description of spatially-resolved electron energy-loss experiments. [1] AbInit: www.abinit.org, DP-code: www.dp-code.org
|
# Calculating A from this equation
I am having trouble with the following question
If $A$ and $B$ are positive integers and $A^2 + B^2 = 36$, then what is $A$? The choices are 6, 7, 8, 9, or 10.
How does one show that answer is 10?
Surely by mistake. You seem to be having a run of mistakes in your sources. – Gerry Myerson Jul 11 '12 at 4:49
Is it $36$ or $136$? – user17762 Jul 11 '12 at 4:50
A^2 - B^2 =36 so A=10 , B=8 OR Marvis's comment – Zeta.Investigator Jul 11 '12 at 4:55
The answer to the problem as written is not $10$. The only possibility is $36=6^2+0^2$. – André Nicolas Jul 11 '12 at 12:47
The only possible (integer) solutions are: $$A = 0,\quad B= ±6$$ or $$A = ±6,\quad B= 0.$$
If the question would have been $A^2+B^2 = 136$ on the other hand, then the solutions would be: $$A = ±10,\quad B = ±6$$ and $$A = ±6,\quad B = ±10.$$
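For completeness, a small brute-force search (a sketch) confirms both readings of the question:

```python
# Enumerate non-negative integer solutions of a^2 + b^2 = target.
for target in (36, 136):
    sols = [(a, b) for a in range(12) for b in range(12) if a * a + b * b == target]
    print(target, sols)
# 36  -> [(0, 6), (6, 0)]
# 136 -> [(6, 10), (10, 6)]
```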
|
# I (Fluid Mechanics) stream function graph
1. Mar 29, 2017
### DoctorCasa
Hello, I got this stream function
ψ(r,θ)=sqrt(r)*sin(θ/2)
I have to draw the streamlines of this function; however, I'm not sure how to approach this. Should I treat it just like a normal function that depends on theta and r, or is it more like a polar plot?
When I drew the streamlines using polar coordinates, I got a family of cardioids (heart-like curves). When I plotted this in MATLAB, keeping r constant, I got a sine, which is pretty obvious, but I don't think that's correct.
What do you guys think?
2. Mar 30, 2017
### eys_physics
I think you are supposed to plot $\Psi(r,\theta)=C=\mathrm{constant}$ for different values of C. For each value you will get a curve in the x-y plane.
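For instance, a quick contour plot along those lines (a minimal sketch, assuming NumPy and Matplotlib are available; the plotting range and contour levels are arbitrary):

```python
# Plot streamlines Psi(r, theta) = sqrt(r) * sin(theta / 2) = C in the x-y plane.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
r = np.sqrt(x**2 + y**2)
theta = np.arctan2(y, x) % (2 * np.pi)   # keep theta in [0, 2*pi)
psi = np.sqrt(r) * np.sin(theta / 2)

plt.contour(x, y, psi, levels=np.linspace(0.1, 1.2, 8))  # curves psi = C
plt.gca().set_aspect("equal")
plt.show()
```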
|
# Tag Info
0
Just to complement Ali Moh's answer: We can define a topological current in the same way we do for the $\phi^4$ kink, $$J_\mu=C\epsilon_{\mu\nu}\partial^\nu\phi(t,x),$$ where $C$ is a normalization constant and $\epsilon_{01}= -1$. The topological charge then is $Q=\int_{-\infty}^\infty J_t \, dx=C\int_{-\infty}^\infty\partial_x\phi$ ...
3
You have to define what a soliton is. The most accepted definition in field theory is that a soliton is a stable, localized and finite energy/energy density solution of the equations of motions of the theory. A vortex ring is localized in space, it has finite energy and definitely is the solution of some equation of motion. Then if it is stable, it can ...
2
My 2 cents on it is that in QM (be it "standard" QM or QFT) one describes only the state of a particle. Having said that, the most general state for a single particle is indeed a wave packet. Now, if you localise certainly a particle at some point in time, then later on it will be associated with a spreading wave packet because of Heisenberg indeterminacy ...
6
A particle is not a wavepacket. And there are no particle states for interacting theories. We define particle states in QFT by expanding the free field into its Fourier modes and using these modes as creation/annihilation operators for particle states - the mode of momentum $p$ creates the particle state $\lvert p\rangle$ with momentum $p$. The Hilbert ...
|
Question
# In order to prevent the spoilage of potato chips, they are packed in plastic bags in an atmosphere of?
Hint: Think about a gas which is very stable, won’t make the eatables stale, and is also abundant in nature.
Potato chips bags are filled with Nitrogen ($N_2$) gas.
They are not filled with air because the oxygen in the air reacts with the contents of the chips (oxidation), which leads to the spoilage of the eatables. The eatables become rancid and start smelling.
Nitrogen creates an inert atmosphere which helps to preserve the chips inside.
Inert (Noble) gases are not used because they are expensive and $N_2$ gas is also abundant in nature.
The triple bond in $N_2$ makes it very stable and hence not very reactive, so it is often considered inert.
Note: Such problems are inspired from real life examples. Observation of real life scenarios and developing an understanding of them is important. These problems can be solved by considering the behavior of gases.
|
# Stokes Equation in "two-fold saddle point" form?
Are there papers that deal with the (nondimensionalized) Stokes equation for incompressible fluid flow in a "doubly mixed" form like the following?
\begin{align*} 0&=\underline{\epsilon} + \frac{1}{2} (\vec{\nabla} \vec{u}+ (\vec{\nabla} \vec{u})^{T})\\ \vec{f}&= \vec{\nabla} \cdot \underline{\epsilon} + \nabla p\\ 0&= \vec{\nabla} \cdot \vec{u} \end{align*}
Or using Einstein notation
\begin{align*} 0 &= \epsilon_{ij}+ \frac{1}{2} (u_{i,j}+u_{j,i})\\ f_i &= \epsilon_{ij,j}+p_{,i}\\ 0&= u_{i,i} \end{align*}
Here, much like the mixed form for linear elasticity we write the symmetric tensor $\underline{\epsilon}$ as the symmetric gradient of $\vec{u}$. I am looking for any references in the literature for finite element methods that solve the Stokes equation this way (as a system of three first order PDE's).
• I've wondered about this too. Is it better to enforce the incompressibility constraint as $u_{i,i} = 0$ or $\epsilon_{ii} = 0$? Apr 25 '16 at 17:29
|
# What is the remainder when 1! +2*2! +3*3! +4*4! +… +12*12! Is divided by 13?
ANS: $12$.
Let $S=1!+2\times2!+3\times3!+\dots+12\times12!$.
It can be seen that $T_{n}=n\times n!=(n+1-1)\times n!$.
or $T_{n}=(n+1)n!-n!=(n+1)!-n!$
Substitute $n=1,2,3…12$, we get
$S=(2!-1!)+(3!-2!)+\dots+(13!-12!)=13!-1!=13!-1$
So,
$rem(\frac{S}{13})=rem(\frac{13!-1}{13})=rem(\frac{13!}{13})-rem(\frac{1}{13})=0-1=-1$
or rem=$-1+13=12$.
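A quick numerical check of this result (a sketch using Python's standard library):

```python
# Verify that 1*1! + 2*2! + ... + 12*12! equals 13! - 1 and leaves remainder 12 mod 13.
from math import factorial

S = sum(n * factorial(n) for n in range(1, 13))
print(S == factorial(13) - 1)  # True
print(S % 13)                  # 12
```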
## 10 Replies to “What is the remainder when 1! +2*2! +3*3! +4*4! +… +12*12! Is divided by 13?”
1. Shubham Illuminati Vij says:
You can write n*n! as (n+1-1)*n! = (n+1)! – n!
now
1*1! +2*2! +3*3! +4*4! +… +n*n! for n=12
= 2! – 1! + 3!-2! + 4! – 3! + …. (n+1)! – n!
You can cancel alternate terms, which gives
= (n+1)! – 1! Now put n=12
= 13! -1
= -1 (mod 13), since 13! = 0 (mod 13)
= 12 (mod 13) Ans 🙂
2. Himanshu Bhardwaj says:
## 12
The trick in these type of questions is often observing the pattern
1! + 2*2! = 1 + 4 = 5 = 3! – 1
1! + 2*2! + 3*3! = 1 + 4 + 18 = 23 = 4! – 1
1! + 2*2! + 3*3! + 4*4! = 1 + 4 + 18 + 96 = 119 = 5! – 1
1! + 2*2! + 3*3! + 4*4! + 5*5! = 1 + 4 + 18 + 96 + 600 = 719 = 6! – 1
So, we can say
1! +2*2! +3*3! +4*4! +… +12*12! = 13! – 1
=> Rem [(1! +2*2! +3*3! +4*4! +… +12*12!) / 13] = -1 = 12
I have answered a bunch of very similar questions on remainders. You can get the complete list here: Remainder Theorem and related concepts for CAT Preparation by Ravi Handa on CAT Preparation
3. Ajay Sharma says:
We can solve this question easily by redistributing the multipliers of the terms.
$R[\frac {1! + 2 \times 2! + 3 \times 3! + 4 \times 4! +… + 12 \times 12!}{13}]$
$=R[\frac {(2–1)1! + (3–1) \times 2! + (4–1) \times 3! +… +(13–1) \times 12!}{13}]$
$=R[\frac {(2! – 1!) + (3! – 2!) + (4! – 3!) + …… + (13! – 12!)}{13}]$
$=R[\frac {(13! – 1)}{13}]$
$=R[\frac {13!}{13}] – R[\frac {1}{13}]$
=0 – 1
4. Deepak Rathee says:
={1.1! + 2.2! + 3.3! + 4.4! +…………+ 12.12!}/13
={(2-1).1! + (3-1).2! + (4-1).3! + (5-1).4! +…………+ (13-1).12!}/13
={2.1!-1.1! + 3.2!-1.2! + 4.3!-1.3! + 5.4!-1.4! +…………+ 13.12!-1.12!}/13
={2!-1! + 3!-2! + 4!-3! + 5!-4! +…………+ 13!-12!}/13
=(13! – 1)/13
=(13! -13+13 – 1)/13
=(13! -13 + 12)/13
=(12!-1) + 12/13
remainder = 12
5. Mohamed Rameez says:
It forms a pattern…
Let’s take
(1!+2*2!)/3=2
(1!+2*2!+3*3!)/4=3
(1!+2*2!+3*3!+4*4!)/5=4
so the remainder is 1 less than the divisor
so in this case the remainder is 12…
6. Ajay Sharma says:
n*n! can be written as (n+1)! – n!
For n=12
now 2!-1! + 3!-2! + 4!-3! + ……. + (n+1)! – n!.
Cancel alternate terms, so we get 13! – 1!. Since 13! (mod 13) = 0, the remainder is 13 – 1 = 12.
7. Nitish Grover says:
Step 1 : Write the T(n)th Term for the pattern .
The T(n)th term for this pattern is n*n!, which is equivalent to (n+1 – 1)*n! = (n+1)*n! – 1*n! = (n+1)! – n!
Step 2: Add them from n = 1 to n = n; in this case n = 12. So imagine adding the first 2 terms:
[ 2! – 1! ] + [ 3! – 2! ] … we note something here: the terms cancel out. If you carry on this pattern you'll find it gives you 13! – 1! (for any n it will give you (n+1)! – 1!).
Step 3: Calculate the remainder of 13! – 1 divided by 13. You can say it is -1 as 13! is divisible by 13, and -1 here just means 13 – 1 = 12, hence the answer.
The trick was :
1. To note the T(n) th term
2. Deduce the pattern , in this case it was of the form T(n+1)-T(n)
3. Rest is simple computation.
8. Sarthak Dash says:
1! +2*2! = 5 = 3!-1
1! + 2*2! + 3*3! = 23 = 4!-1
Similarly
1! + 2*2! + 3*3! +….+12*12! = 13!-1
So
1! + 2*2!+…+12*12! / 13
= 13! – 1 / 13
= 13! / 13 – 1/13
= 0 – 1 / 13
= -1/13
= -1+13
= 12 —> remainder
9. Nitish Grover says:
The sum of n terms in the series is of the form (n+1)! – 1!. Hence the sum of 12 terms in the series is 13! – 1!, which when divided by 13 will leave remainder -1, or in the positive sense 12. I reckon this is a CAT question.
10. Rajendra Rajput says:
tn= n.n!
tn=(n+1-1)n!
tn=(n+1)!-n!
Sn=13!-1!
remainder= 12
|
For a, b ∈ R define a = b to mean that |x| = |y|. Equivalence Partitioning is also known as Equivalence Class Partitioning. d) symmetric relation S. swarley. Find the set of equivalence class representatives. Then , , etc. Practice: Modulo operator. Lecture 7: Equivalence classes. 2. symmetric (∀x,y if xRy then yRx): every e… In any case, always remember that when we are working with any equivalence relation on a set A if $$a \in A$$, then the equivalence class [$$a$$] is a subset of $$A$$. c) An input or output range of values such that each value in the range becomes a test case. Equivalence Class Testing-Black Box Software Testing Techniques The use of equivalence classes as the basis for functional testing and is appropriate in situations like: a) When exhaustive testing is desired. Consider the congruence 45≡3(mod 7). b)For two such equivalence classes, notice that [a] + [b] & [a] x [b] are well-defined regardless of which representatives, a & b, are used. c) {3,4,6}, {7} MY VIDEO RELATED TO THE MATHEMATICAL STUDY WHICH HELP TO SOLVE YOUR PROBLEMS EASY. d) An input or output range of values such that every tenth value in the range becomes a test case. webdhoom.com. This set of Discrete Mathematics Multiple Choice Questions & Answers (MCQs) focuses on “Relations – Equivalence Classes and Partitions”. The above relation is not reflexive, because (for example) there is no edge from a to a. b) {3}, {4,6}, {5}, {7} View Answer, 3. It is a software testing technique or black-box testing that divides input domain into classes of data, and with the help of these classes of data, test cases can be derived. View Answer, 4. a) equivalence relation View Answer. b) {−21, −18, −11, −4, 3, 10, 17, 24} Now we have that the equivalence relation is the one that comes from exercise 16. Any help would be appreciated. Congruence modulo . Theorem 3.6: Let F be any partition of the set S. Define a relation on S by x R y iff there is a set in F which contains both x and y. Then . Equivalence Classes. It can be shown that any two equivalence classes are either equal or disjoint, hence the collection of equivalence classes forms a partition of X. (R is symmetric). Identify the invalid Equivalence class. Modular arithmetic. So the answer is ‘A’ Question #2) c) {−24, -19, -15, 5, 0, 6, 10} Testing Techniques, Error, Bug and Defect. Which of these groups of numbers would fall into the same equivalence class? 
here is complete set of 1000+ Multiple Choice Questions and Answers, Prev - Discrete Mathematics Questions and Answers – Relations – Partial Orderings, Next - Discrete Mathematics Questions and Answers – Graphs – Diagraph, Discrete Mathematics Questions and Answers – Relations – Partial Orderings, Discrete Mathematics Questions and Answers – Graphs – Diagraph, C++ Programming Examples on Graph Problems & Algorithms, C Algorithms, Problems & Programming Examples, Engineering Mathematics Questions and Answers, Training Classes on C, Linux & SAN – Group Photos, Java Programming Examples on Utility Classes, Discrete Mathematics Questions and Answers – Logics – Logical Equivalences, Discrete Mathematics Questions and Answers – Discrete Probability – Mean and Variance of Random Variables, Discrete Mathematics Questions and Answers – Groups – Closure and Associativity, Discrete Mathematics Questions and Answers – Types of Matrices, Discrete Mathematics Questions and Answers – Properties of Matrices, Discrete Mathematics Questions and Answers – Operations on Matrices, Discrete Mathematics Questions and Answers – Discrete Probability – Generating Functions, Discrete Mathematics Questions and Answers, Discrete Mathematics Questions and Answers – Discrete Probability – Power Series, Discrete Mathematics Questions and Answers – Groups – Cosets, Discrete Mathematics Questions and Answers – Discrete Probability – Logarithmic Series, Disjoint-Set Data Structure Multiple Choice Questions and Answers (MCQs), Discrete Mathematics Questions and Answers – Advanced Counting Techniques – Recurrence Relation. d) {…, 3, 8, 15, 21, …} Here R is known as _________ 19, 24 and 21 fall under valid class. I'm just not really sure how to apply that to the question. If I choose one of the equivalence classes and give a DFA for the class, then the DFA is a "subDFA" of M, with states from the class. Thus, the first two triangles are in the same equivalence class, while the third and fourth triangles are each in their own equivalence … a) 125 The next £28000 is taxed at 22%. The technique is to divide (i.e. An employee has £4000 of salary tax free. E.g. E.g. EQUIVALENCE CLASSES 3 An operation on equivalence classes that does not depend on the choice of representa-tive is called well-de ned; by the proof above, addition of equivalence classes is well-de ned. Now your probably thinking that modular arithmetic is kinda useless because you keep getting the same answers over and over again. d) 35893 Transcript. Now we have that the equivalence relation is the one that comes from exercise 16. Go through the equivalence relation examples and solutions provided here. c) {…, 0, 4, 8, 16, …} Practice: Congruence relation. The above are not handled by BVA technique as we can see massive redundancy in the tables of test cases. Let R be the equivalence relation on A × A defined by (a, b)R(c, d) iff a + d = b + c . b) {2, 4, 9, 11, 15,…} c) {,(1,1), (1,2), (2,1), (2,3), (3,4)} © 2011-2020 Sanfoundry. This is part A. b) (a2+c) ∈ Z a) 17 b) 19 c) 24 d) 21. equivalence class [MATH.] Question 1 Let A ={1, 2, 3, 4}. This is the currently selected item. Suppose a relation R = {(3, 3), (5, 5), (5, 3), (5, 5), (6, 6)} on S = {3, 5, 6}. Which of the following relations is the reflexive relation over the set {1, 2, 3, 4}? 
An equivalence class is defined as a subset of the form, where is an element of and the notation "" is used to mean that there is an equivalence relation between and .It can be shown that any two equivalence classes are either equal or disjoint, hence the collection of equivalence classes forms a partition of . c) symmetric relation were given an equivalence relation and were asked to find the equivalence class of the or compare one to with respect to this equivalents relation. Sanfoundry Global Education & Learning Series – Discrete Mathematics. c) (ab+cd)/2 ∈ Z b) reflexive relation and symmetric relation 1. b) When there is a strong need to avoid redundancy. Equivalence relations. University Math Help. View Answer, 10. webdhoom.com . Collecting everything that is equivalent to gives us and similarly for , we get . and it's easy to see that all other equivalence classes will be circles centered at … But the question is to identify invalid equivalence class. a) (a-b) ∈ Z Which of the following is an equivalence relation on R, for a, b ∈ Z? We can draw a binary relation A on R as a graph, with a vertex for each element of A and an arrow for each pair in R. For example, the following diagram represents the relation {(a,b),(b,e),(b,f),(c,d),(g,h),(h,g),(g,g)}: Using these diagrams, we can describe the three equivalence relation properties visually: 1. reflexive (∀x,xRx): every node should have a self-loop. For the second part, I don't fully understand the concept of what an equivalence class is or what the question means. The equivalence class of under the equivalence is the set . View Answer, 9. my video related to the mathematical study which help to solve your problems easy. A black box testing technique used only by developers, b. All the data items lying in an equivalence class are assumed to be processed in the same way by the software application to be tested when passed as input. Equivalence class partitioning is a black-box testing technique or specification-based testing technique in which we group the input data into logical partitions called equivalence classes. a) {−21, −18, −11, −4, 3, 10, 16} The classes will be as follows: Class I: values < 18 => invalid class Class II: 18 to 25 => valid class Class III: values > 25 => invalid class 17 fall under invalid class. A black box testing technique than can only be used during system testing, c. A black box testing technique appropriate to all levels of testing, d. A white box testing technique appropriate for component testing, a. THIS VIDEO SPECIALLY RELATED TO THE TOPIC EQUIVALENCE CLASSES. the equivalence classes of R form a partition of the set S. More interesting is the fact that the converse of this statement is true. So this class becomes our valid class. An equivalence class is a subset of data which is delegate of a larger class. Any further amount is taxed at 40%. testinganswers.com - One of the most popular software testing blog with best testing tutorials and interview questions. So this class becomes our valid class. The classes will be as follows: This is part A. Equivalence Partitioning also called as equivalence class partitioning. reading: MCS 10.10; define equivalence classes; talk about well-defined functions on equivalence classes; Drawing binary relations. a. d) {5, 25, 125,…} View Answer, 5. The classes will be as follows: Class I: values < 18 => invalid class Class II: 18 to 25 => valid class Class III: values > 25 => invalid class. Practice: Modular addition. 
We now look at how equivalence relation on partitions the original set . It is a software testing technique that divides the input test data of the application under test into each partition at least once of equivalent data from which test cases can be derived. Forums. 1. What is modular arithmetic? Feb 17, 2010 #1 Hey all, I was wondering if anyone could shed some light on this question. We know that each integer has an equivalence class for the equivalence relation of congruence modulo 3. If [x] is an equivalence relation in R. Find the equivalence relation for [17]. 17 fall under an invalid class. Then , , etc. In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be proposed in the same way. c) 9.34 * 791 What is an equivalence partition (also known as an equivalence class)? b) reflexive relation b) An input or output range of values such that only one value in the range becomes a test case. The leftmost two triangles are congruent, while the third and fourth triangles are not congruent to any other triangle shown here. d) 72 And the equivalence . Therefore xFx. Here R is known as _____ a) equivalence relation b) reflexive relation c) symmetric relation d) transitive relation Feb 2010 4 0. To practice all areas of Discrete Mathematics, here is complete set of 1000+ Multiple Choice Questions and Answers. b) An input or output range of values such that only one value in the range becomes a test case. Question 3 (Choice 2) An equivalence relation R in A divides it into equivalence classes 1, 2, 3. Consider the relation on given by if . The quotient remainder theorem. Then . Solution: The text box accepts numeric values in the range 18 to 25 (18 and 25 are also part of the class). View Answer, 8. Equivalence Partitioning Method is also known as Equivalence class partitioning (ECP). Latest and complete information on manual testing methodologies, automation testing tools and bug tracking tools. Less than 1, 1 through 15, more than 15, b. Congruence is an example of an equivalence relation. Equivalence Partitioning is also known as Equivalence Class Partitioning. This gives us the set . For a, b ∈ Z define a | b to mean that a divides b is a relation which does not satisfy ___________ Consider the equivalence relation on the integers defined by: aRb if and only if a is congruent to b mod 9 a) What are the equivalence classes? * * Iteration can be reset to the first equivalence class by using * the resetLoopIterator method of the main class. were given an equivalence relation and were asked to find the equivalence class of the or compare one to with respect to this equivalents relation. and it's easy to see that all other equivalence classes will be circles centered at the origin. So suppose that [x] R and [y] R have a … In mathematics, an equivalence relation is a binary relation that is reflexive, symmetric and transitive.The relation "is equal to" is the canonical example of an equivalence relation. c) 16 of all elements of which are equivalent to . 2. Determine the number of possible relations in an antisymmetric set with 19 elements. a) irreflexive and symmetric relation You’re right! Equivalence Relation Examples. Equivalence Classes . c) {-17, 17} Join our social networks below and stay updated with latest contests, videos, internships and jobs! b) 2.02 * 1087 webdhoom.com. Modulo Challenge. 
An equivalence class is defined as a subset of the form {x in X : x R a}, where a is an element of X and "x R y" means that x and y are related by an equivalence relation R. An equivalence relation is reflexive, symmetric and transitive, and every equivalence relation on a set partitions that set into disjoint equivalence classes; conversely, a partition of a set determines an equivalence relation. Typical exercises: determine the set of all integers a such that a ≡ 3 (mod 7) and −21 ≤ a ≤ 21; decide which collections of subsets of {3, 4, 5, 6, 7} are partitions of that set; describe the equivalence classes of congruence modulo 3 (there are exactly three of them); for the relation "|x| = |y|" on the plane, the equivalence classes are circles centred at the origin.
In software testing, equivalence partitioning (also called equivalence class partitioning, ECP) divides the input data of a program into groups — equivalence classes — that are expected to be handled the same way, so selecting one representative input from each group is enough to design the test cases; all values in a class are assumed to produce the same behaviour, and invalid classes are identified alongside valid ones. An equivalence class is thus a subset of data that is representative of a larger class. Boundary values are not covered by this technique; they are handled by boundary value analysis.
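A loose Python sketch of the idea of equivalence partitioning (the "age" field, its bounds, and the classes below are invented for illustration):

```python
# Hypothetical example: an "age" input split into one valid and two invalid
# equivalence classes; one representative value per class is enough for a test.
partitions = {
    "invalid_low":  range(-5, 18),     # below the accepted range
    "valid":        range(18, 60),     # accepted ages
    "invalid_high": range(60, 200),    # above the accepted range
}

def accepts_age(age):                  # toy system under test
    return 18 <= age < 60

# Pick one representative from each class to build the test cases.
for name, values in partitions.items():
    representative = next(iter(values))
    print(name, representative, "->", accepts_age(representative))
```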
|
# American Institute of Mathematical Sciences
October 2015, 20(8): 2477-2495. doi: 10.3934/dcdsb.2015.20.2477
## Computation of local ISS Lyapunov functions with low gains via linear programming
1 School of Mathematics and Physics, Chinese University of Geosciences (Wuhan), 430074, Wuhan, China 2 Lehrstuhl für Angewandte Mathematik, Universität Bayreuth, 95440 Bayreuth, Germany, Germany 3 School of Science and Engineering, Reykjavik University, Menntavegi 1, Reykjavik, IS-101 4 Fakultät für Informatik und Mathematik, Universität Passau, 94030 Passau, Germany
Received June 2014 Revised March 2015 Published August 2015
In this paper, we present a numerical algorithm for computing ISS Lyapunov functions for continuous-time systems which are input-to-state stable (ISS) on compact subsets of the state space. The algorithm relies on a linear programming problem and computes a continuous piecewise affine ISS Lyapunov function on a simplicial grid covering the given compact set excluding a small neighborhood of the origin. The objective of the linear programming problem is to minimize the gain. We show that for every ISS system with a locally Lipschitz right-hand side our algorithm is in principle able to deliver an ISS Lyapunov function. For $C^2$ right-hand sides a more efficient algorithm is proposed.
Citation: Huijuan Li, Robert Baier, Lars Grüne, Sigurdur F. Hafstein, Fabian R. Wirth. Computation of local ISS Lyapunov functions with low gains via linear programming. Discrete & Continuous Dynamical Systems - B, 2015, 20 (8) : 2477-2495. doi: 10.3934/dcdsb.2015.20.2477
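The entry above is only the abstract. As a very rough, hypothetical illustration of the flavour of such a linear program — a one-dimensional toy system with no input, ignoring the gains and the interpolation-error terms that the actual algorithm handles — one can ask scipy.optimize.linprog for a piecewise affine function that is positive and decreases along trajectories on a grid excluding the origin; every name and constraint below is an assumption made for this sketch, not the paper's construction:

```python
import numpy as np
from scipy.optimize import linprog

# Toy 1-D sketch: find a piecewise affine V on a grid for x' = f(x) = -x,
# excluding a small neighborhood of the origin.
N = 20
xs = np.linspace(0.05, 1.0, N)        # grid vertices
f = lambda x: -x                      # right-hand side (assumed stable)

A_ub, b_ub = [], []

# Positivity at the vertices: V(x_i) >= x_i   <=>   -V_i <= -x_i
for i, x in enumerate(xs):
    row = np.zeros(N); row[i] = -1.0
    A_ub.append(row); b_ub.append(-x)

# Decrease along trajectories: on [x_i, x_{i+1}] the slope is
# s_i = (V_{i+1} - V_i)/dx and we require s_i * f(x_j) <= -x_j at both ends
# (enough here, because s_i * f(x) + x is affine on the interval).
for i in range(N - 1):
    dx = xs[i + 1] - xs[i]
    for j in (i, i + 1):
        row = np.zeros(N)
        row[i + 1] += f(xs[j]) / dx
        row[i]     -= f(xs[j]) / dx
        A_ub.append(row); b_ub.append(-xs[j])

res = linprog(c=np.ones(N), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * N)
print(res.status)   # 0: a feasible piecewise affine function was found
print(res.x)        # its values at the grid vertices
```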
|
A bug starts out at rest, 3 m to the right of the origin. It then starts moving on a trip. After 3 s, the bug is seen at 6 m to the right of the origin, travelling at 4 m/s to the right. After 6 s (from the start of the trip), the bug is seen at 4 m to the left of the origin, travelling at 6 m/s to the right.
a.) What is the position vector of the bug 3 s after the start of the trip?
m
b.) What is the displacement vector of the bug for the entire trip? m
c.) What is the average velocity of the bug for the first 3 s of the trip?
m/s
d.) What is the average velocity of the bug for the entire trip?
m/s
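The original page leaves the answer blanks empty. Taking "right of the origin" as the positive direction, the values work out as follows (a quick computation added here, not part of the source):

```python
# Positions of the bug (metres, + to the right) at t = 0 s, 3 s and 6 s.
x0, x3, x6 = 3.0, 6.0, -4.0

position_at_3s   = x3                   # a) +6 m (6 m to the right)
displacement     = x6 - x0              # b) -7 m (7 m to the left)
avg_vel_first_3s = (x3 - x0) / 3.0      # c) +1 m/s
avg_vel_trip     = (x6 - x0) / 6.0      # d) -7/6 ≈ -1.17 m/s

print(position_at_3s, displacement, avg_vel_first_3s, avg_vel_trip)
```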
|
cAIC {cAIC4} R Documentation
## Conditional Akaike Information for 'lme4' and 'lme'
### Description
Estimates the conditional Akaike information for models that were fitted in 'lme4' or with 'lme'. Currently all distributions are supported for 'lme4' models, based on parametric conditional bootstrap. For the Gaussian distribution (from a lmer or lme call) and the Poisson distribution analytical estimators for the degrees of freedom are available, based on Stein type formulas. Also the conditional Akaike information for generalized additive models based on a fit via the 'gamm4' or gamm calls from the 'mgcv' package can be estimated. A hands-on tutorial for the package can be found at https://arxiv.org/abs/1803.05664.
### Usage
cAIC(object, method = NULL, B = NULL, sigma.penalty = 1, analytic = TRUE)
### Arguments
object: An object of class merMod, either fitted by lmer or glmer from the lme4 package, or an lme object from the nlme package. Objects returned from a gamm4 call are also possible.
method: Either "conditionalBootstrap" for estimating the degrees of freedom with the help of the conditional bootstrap, or "steinian" for analytical representations based on Stein-type formulas. The default is NULL; in this case the method is chosen automatically based on the family argument of the (g)lmer object. For "gaussian" and "poisson" this is the Steinian-type estimator; for all others it is the conditional bootstrap. For models from the nlme package, only lme objects, i.e. with Gaussian response, are supported.
B: Number of bootstrap replications. The default is NULL; then B is the minimum of 100 and the length of the response vector.
sigma.penalty: An integer value for additional penalization in the analytic Gaussian calculation to account for estimated variance components in the residual (co-)variance. By default sigma.penalty equals 1, corresponding to a diagonal error covariance matrix with only one estimated parameter (sigma). If all variance components are known, the value should be set to 0. For individual weights (individual variances), this value should be set to the number of estimated weights. For lme objects the penalty term is set automatically by extracting the number of estimated variance components.
analytic: FALSE if the numeric Hessian of the (restricted) marginal log-likelihood from the lmer optimization procedure should be used; otherwise (default) TRUE, i.e. use an analytical version that has to be computed. Only used for the analytical version for Gaussian responses.
### Details
For method = "steinian" and an object of class merMod, the function computes the analytic representation of the corrected conditional AIC in Greven and Kneib (2010). This is based on the Stein formula and uses implicit differentiation to calculate the derivative of the random-effects covariance parameters w.r.t. the data. The code is adapted from the one provided in the supplementary material of the paper by Greven and Kneib (2010). The supplied merMod model needs to be checked for whether a random-effects covariance parameter has an optimum on the boundary, i.e. is zero; if so, the model needs to be refitted with the corresponding random-effect terms omitted. This is also done by the function, and the refitted model is returned as well. Notice that the boundary.tol argument in lmerControl has an impact on whether a parameter is estimated to lie on the boundary of the parameter space. For an estimated error variance the degrees of freedom are increased by one by default. sigma.penalty can be set manually for merMod models if no (0) or more than one (>1) variance component has been estimated. For lme objects this value is defined automatically.
If the object is of class merMod and has family = "poisson" there is also an analytic representation of the conditional AIC based on the Chen-Stein formula, see for instance Saefken et al. (2014). For the calculation the model needs to be refitted once for each observation of the response, minus the number of observations that are exactly zero. The calculation therefore takes longer than for models with Gaussian responses. Due to the speed and stability of 'lme4' this is still feasible, also for larger datasets.
If the model has Bernoulli distributed responses and method = "steinian", cAIC calculates the degrees of freedom based on an estimator proposed by Efron (2004). This estimator is asymptotically unbiased if the estimated conditional mean is consistent. The calculation needs as many model refits as there are data points.
Another, more general method for the estimation of the degrees of freedom is the conditional bootstrap, proposed in Efron (2004). From the B bootstrap samples the degrees of freedom are estimated by
\frac{1}{B - 1}\sum_{i=1}^{n} \theta_i(z_i)\,(z_i-\bar{z}),
where θ_i(z_i) is the i-th element of the estimated natural parameter.
For models with no random effects, i.e. (g)lms, the cAIC function returns the AIC of the model with scale parameter estimated by REML.
### Value
A cAIC object, which is a list consisting of: 1. the conditional log likelihood, i.e. the log likelihood with the random effects as penalized parameters; 2. the estimated degrees of freedom; 3. a list element that is either NULL if no new model was fitted otherwise the new (reduced) model, see details; 4. a boolean variable indicating whether a new model was fitted or not; 5. the estimator of the conditional Akaike information, i.e. minus twice the log likelihood plus twice the degrees of freedom.
### WARNINGS
Currently the cAIC can only be estimated for family equal to "gaussian", "poisson" and "binomial". Neither negative binomial nor gamma distributed responses are available. Weighted Gaussian models are not yet implemented.
### Author(s)
Benjamin Saefken, David Ruegamer
### References
Saefken, B., Ruegamer, D., Kneib, T. and Greven, S. (2018): Conditional Model Selection in Mixed-Effects Models with cAIC4. https://arxiv.org/abs/1803.05664
Saefken, B., Kneib T., van Waveren C.-S. and Greven, S. (2014) A unifying approach to the estimation of the conditional Akaike information in generalized linear mixed models. Electronic Journal Statistics Vol. 8, 201-225.
Greven, S. and Kneib T. (2010) On the behaviour of marginal and conditional AIC in linear mixed models. Biometrika 97(4), 773-789.
Efron, B. (2004) The estimation of prediction error. J. Amer. Statist. Ass. 99(467), 619-632.
### See Also
lme4-package, lmer, glmer
### Examples
### Three application examples
b <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
cAIC(b)
b2 <- lmer(Reaction ~ (1 | Days) + (1 | Subject), sleepstudy)
cAIC(b2)
b2ML <- lmer(Reaction ~ (1 + Days | Subject), sleepstudy, REML = FALSE)
cAIC(b2ML)
### Demonstration of boundary case
## Not run:
set.seed(2017-1-1)
n <- 50
beta <- 2
x <- rnorm(n)
eta <- x*beta
id <- gl(5,10)
epsvar <- 1
data <- data.frame(x = x, id = id)
y_wo_bi <- eta + rnorm(n, 0, sd = epsvar)
# use a very small RE variance
ranvar <- 0.05
nrExperiments <- 100
sim <- sapply(1:nrExperiments, function(j){
b_i <- scale(rnorm(5, 0, ranvar), scale = FALSE)
y <- y_wo_bi + model.matrix(~ -1 + id) %*% b_i
data$y <- y
mixedmod <- lmer(y ~ x + (1 | id), data = data)
linmod <- lm(y ~ x, data = data)
c(cAIC(mixedmod)$caic, cAIC(linmod)$caic)
})
rownames(sim) <- c("mixed model", "linear model")
boxplot(t(sim))
## End(Not run)
[Package cAIC4 version 0.9 Index]
|
# All Questions
### Where do I securely store the key for a system where the source is visible?
I have a customer with an Access database (ugh!) in which credit cards are stored in plaintext (yikes!), so amongst other changes I'm doing in the app, I'm applying some encryption in there. I've ...
### Properties of PRNG / Hashes
There are a lot of quite elaborate PRNG's out there (e.g. Mersenne Twister et.al.), and they have some important properties, especially when it comes to crypto applications. So, I was wondering how ...
### Reverse engineering a hash?
I understand this may not be the best place to ask a question like this, but I believe that this community may be the best/only place I can ask such a question. I have inputs and outputs from an ...
### Reduction from signatures to encryption?
Is it possible to construct an (asymmetric) encryption scheme from a signature scheme? If the signature scheme is deterministic and allows existential forgery (e.g. RSA), then the answer is yes ...
### iSeries (AS/400) Database File: password encryption
I am helping with a project in which an old software system on an iSeries is having a brand new .NET UI applied to it. It's going well... except... In order to allow users to login and maintain ...
### Why is there an enormous difference between SAT solvers?
SAT solvers are very important in algebraic attacks, for example walksat and minisat. However, when solving the benchmark problems available here there is an enormous performance difference between ...
The Rijndael S-Box design generates a permutation cycle of type $2+27+59+81+87$. What effect would replacing that permutation with a cycle of type $256$ have on the security of AES?
|
# Prove the following $3\cos^{-1}x=\cos^{-1}(4x^3-3x), \;x\;\in\bigg[\frac{1}{2},1\bigg]$
Toolbox:
• $\cos 3A = 4\cos^3 A - 3\cos A$
• $\cos^{-1}(\cos x) = x$ if $x \in [0, \pi]$
Let $x = cosA$ $\Rightarrow A = cos^{-1}x$.
Given, R.H.S.: = $cos^{-1}(4x^3-3x)$, substituting for $x = cosA$, we get:
R.H.S.: = $cos^{-1} [ 4\: cos^3A-3\: cosA]$
Substituting $\cos 3A = 4\cos^3 A - 3\cos A$, we get:
R.H.S. $= \cos^{-1}(\cos 3A) = 3A$, which is valid because $x \in \big[\tfrac{1}{2},1\big]$ gives $A = \cos^{-1}x \in \big[0,\tfrac{\pi}{3}\big]$, hence $3A \in [0,\pi]$.
Substituting for $A = cos^{-1}x$, we get:
R.H.S. = $3 cos^{-1}x =$ L.H.S.
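A quick numerical sanity check of the identity on the given interval (not part of the original solution):

```python
import numpy as np

x = np.linspace(0.5, 1.0, 11)
lhs = 3 * np.arccos(x)
rhs = np.arccos(4 * x**3 - 3 * x)
print(np.allclose(lhs, rhs))   # True: both sides agree on [1/2, 1]
```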
|
# Help needed !!!.
Can this identity be proven, and if so how? π^2/6 = 1/6 (-i log(-1))^2 I thank you.
Guest Nov 27, 2017
Can this identity be proven, and if so how? π^2/6 = 1/6 (-i log(-1))^2
$$\begin{array}{rcl}
\log(-1) &=& \ln|-1| + i\arg(-1) \qquad\text{taking the branch with } \arg(-1)=-\pi\\
&=& 0 - i\pi \;=\; -i\pi\\[6pt]
\dfrac{1}{6}\Big(-i\log(-1)\Big)^2 &=& \dfrac{1}{6}\Big(-i\cdot(-i\pi)\Big)^2\\
&=& \dfrac{1}{6}\big(i^2\pi\big)^2 \qquad (i^2=-1)\\
&=& \dfrac{1}{6}(-\pi)^2 \;=\; \dfrac{\pi^2}{6}
\end{array}$$
(The principal branch $\arg(-1)=\pi$ gives $\log(-1)=i\pi$ and leads to the same value, since the sign disappears when squaring.)
heureka Nov 28, 2017
Brilliant heureka, as usual. Thank you very much.
Guest Nov 28, 2017
$$\because e^{i\pi} = -1\\ \text{We can immediately imply that} \log(-1) = i\pi\\ \dfrac{1}{6}(-i\log(-1))^2 \\ =\dfrac{1}{6}(-i\cdot i\pi)^2\quad\boxed{-i\cdot i = 1}\\ =\dfrac{1}{6}\pi^2\\$$
Hence, proved.
MaxWong Nov 28, 2017
Max: You are simply beyond words !! Bravo and thanks.
Guest Nov 28, 2017
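Both derivations can also be checked numerically with Python's cmath, which uses the principal branch log(-1) = iπ:

```python
import cmath

log_m1 = cmath.log(-1)                 # ≈ 3.14159...j (principal branch)
value = (-1j * log_m1) ** 2 / 6
print(log_m1)
print(value.real, cmath.pi ** 2 / 6)   # both ≈ 1.644934
```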
|
# The PO algorithm, a python illustration¶
This notebook is an illustration of the Perturbation-Optimisation (PO) algorithm. The PO algorithm simulates large non-centered, non-stationary Gaussian samples when the precision matrix $Q \in R^{N \times N}$ (the inverse covariance) can be written as $$Q = \sum_{k=1}^K H_k^t R_k^{-1} H_k$$ a form often found in inverse problems (deconvolution, super-resolution, tomography, ...). To do so, the algorithm supposes that
1. you can easily simulate centered gaussian law with covariance $R$ (when $R$ is diagonal for instance)
2. you can run a quadratic optimization algorithm (a conjugate gradient for instance) or a linear solver.
We suppose that the reader has sufficient knowledge of what is deconvolution, inverse problems or the problem of stochastic simulation of large and correlated Gaussian law.
This notebook illustrates this algorithm on a deconvolution problem with python and the numpy/scipy stack only (and matplotlib for display). No extra dependencies are required. We do not go into details and many aspects can be extended, detailed, optimized, etc. This is postponed to other work.
For more details on linear solver, optimization algorithm and MCMC algorithm we refer the reader to
• Iterative Methods for Sparse Linear Systems by Yousef Saad (free)
• Numerical Optimization by Stephen J. Wright and J. Nocedal
• Monte Carlo Statistical Methods by Christian Robert, Christian and George Casella
• Probabilistic Programming & Bayesian Methods for Hackers by Cam Davidson-Pilon (free)
This notebook has been written by F. Orieux, assistant professor at University Paris-Saclay, associated with the Laboratory of Signal and System (L2S, Univ. Paris-Saclay - CNRS - CentraleSupélec), with help from Olivier Féron (EDF Lab) and Jean-François Giovannelli (IMS - Univ. Bordeaux). If you use any part of this work, please mention it. Static html and pdf version are available in the same github repository.
## Publication¶
The PO algorithm has been published in
[1] François Orieux, Olivier Féron and Jean-François Giovannelli, Sampling High-Dimensional Gaussian Distributions for General Linear Inverse Problems, IEEE Signal Processing Letters, may, 2012. [doi]
@Article{orieux2012,
title = {Sampling high-dimensional Gaussian distributions
for general linear inverse problems},
author = {François Orieux and Olivier Féron and Jean-François Giovannelli},
journaltitle = {IEEE Signal Processing Letters},
year = {2012},
date = {05-2012},
number = {5},
pages = {251--254},
volume = {19},
doi = {10.1109/LSP.2012.2189104},
keywords = {Bayesian, high-dimensional sampling, inverse problem,
stochastic sampling, unsupervised}
}
It has, however, been published several times in the literature with different presentations, variations or extensions. We can mention
• [2] Lalanne, P.; Prévost, D. & Chavel, P. - Stochastic artificial retinas: algorithm, optoelectronic circuits, and implementation - Applied Optics, 2001, 40
• [3] Tan, X.; Li, J. & Stoica, P. - Efficient sparse Bayesian learning via Gibbs sampling - ICASSP, IEEE, 2010.
• [4] Bardsley et. al, Randomize-Then-Optimize : A Method for Sampling from Posterior Distributions in Nonlinear Inverse Problems, 2017
• [5] Gilavert, C.; Moussaoui, S. & Idier, J. - Efficient Gaussian Sampling for Solving Large-Scale Inverse Problems Using MCMC - IEEE Transactions on Signal Processing, 2015.
Nevertheless, this notebook illustrates the PO algorithm as described in [1] but on deconvolution, a simpler to code problem, instead of super-resolution.
## Deconvolution¶
The deconvolution imaging problem supposes a blurred image $y \in R^M$ from which you want a deblurred version $\hat{x} \in R^N$. The two images do not have the same size and we do not make the periodic hypothesis, so the Fourier transform is not usable here (this is not exactly true, and another notebook will explain that in more detail). The model is then linear but not stationary (because of the edges) and writes $$y = Hx + n$$ where $H$ is a convolution matrix in $R^{M\times N}$ and $n \in R^M$ is an unknown noise. Deconvolution is known to be an ill-posed inverse problem: naive inversion, like plain least-squares minimization, gives an unstable solution in which the noise is amplified and the result is unusable. A classical, well-known remedy is the introduction of additional information through a model, also called prior information.
We suppose here
• a centered white Gaussian prior model for the noise $n$ of precision $\gamma_n I$ and
• a centered circulant Gaussian prior model for $x$ of precision $\gamma_xD^tD$, where $D$ is a convolution operator with a Laplacian impulse response (second-order differences in line and column, or equivalently a high-pass filter).
This leads to the posterior law $$p(x \mid y) \propto \exp \left( -\frac{\gamma_n}{2} \|y - Hx \|_2^2 \right) \exp \left( -\frac{\gamma_x}{2} \|Dx \|_2^2 \right).$$ This posterior law is a gaussian law with mean $$\mu = \Sigma H^t y$$ and covariance $$\Sigma^{-1} = Q = \gamma_n H^tH + \gamma_x D^tD$$ that clearly fall under the condition of PO with $H_1 = H$, $R_1 = I$, $H_2 = D$ and $R_2 = I$.
The direct inversion of $Q$ is not feasible, but the mean $\mu$ can be computed thanks to an iterative algorithm, like the conjugate gradient, that uses $Q$ instead of $\Sigma$.
## Implementation¶
We use numpy to illustrate the algorithm with the following modules.
• partial simplifies the definition of single-argument functions from multi-argument functions.
• scipy.misc is used to load the true image.
• convolve2d (abbreviated as conv2) is the convolution function with zero padding.
• cg is the conjugate gradient algorithm and LinearOperator is a python object that mimics the linear operator needed by cg.
In [1]:
from functools import partial
import numpy as np
import numpy.random as npr
# For 'ascent' image
import scipy.misc
# The non-stationary convolution operator
from scipy.signal import convolve2d as conv2
from scipy.sparse.linalg import cg, LinearOperator
# For plotting
import matplotlib.pyplot as plt
We take the scipy ascent image as the true image, with a square (uniform) impulse response for the blur.
In [2]:
true = scipy.misc.ascent()[::2, ::2] # Decimation to reduce size and time computation
blur_size = 5
ir = np.ones((blur_size, blur_size)) / blur_size**2
# The Laplacian IR
reg = np.array([[0, -1, 0],
[-1, 4, -1],
[0, -1, 0]], dtype=np.float32)
im = plt.imshow(true)
plt.axis('off')
title = plt.title("Ground truth")
The noise precision is set to 10 ($\sigma \approx 0.3$).
In [3]:
true_gamma_n = 10
## Linear restoration¶
At this point the MAP estimator is $$\hat{x}_{MAP} = \text{arg max}_{x}\ p(x \mid y) = \text{arg min}_{x}\ \|y - Hx \|_2^2 + \lambda \|Dx \|_2^2$$ with $\lambda = \gamma_x / \gamma_n$. This is the classical regularized least-squares linear estimator, which is also the mean of the Gaussian law above $$\hat{x}_{MAP} = \mu = (H^tH + \lambda D^t D)^{-1} H^t y.$$ Since the inversion of the matrix is not feasible, a linear solver for the system $Qx = b$, with $Q = H^tH + \lambda D^t D$ and $b = H^t y$, is required. The solver only needs a way to apply $Q$, and therefore $H$ and $D$ as well as their adjoints.
In the case of convolution without the periodic hypothesis, these operators correspond to the convolve2d function (conv2 in Matlab) with the valid and full parameters.
In [4]:
def forward(ir, image):
"""Apply H operator"""
return conv2(image, ir, 'valid')
def backward(ir, image):
"""Apply H^t operator"""
return conv2(image, np.fliplr(np.flipud(ir)), 'full')
def forward_backward(ir, image):
"""Apply H^tH operator"""
return backward(ir, forward(ir, image))
# Abbreviation
H = partial(forward, ir)
Ht = partial(backward, ir)
HtH = partial(forward_backward, ir)
# Simulate noisy data
data = H(true)
data = data + np.random.standard_normal(data.shape) / np.sqrt(true_gamma_n)
# Check size
print("True shape:", true.shape)
print("Data shape:", data.shape)
print("Transpose shape:", Ht(data).shape)
im = plt.imshow(data)
plt.axis('off')
title = plt.title('Data')
True shape: (256, 256)
Data shape: (252, 252)
Transpose shape: (256, 256)
We must do the same thing for the regularization.
In [5]:
def reg_forward(reg, image):
"""Apply D operator"""
return conv2(image, reg, 'valid')
def reg_backward(reg, image):
"""Apply D^t operator"""
return conv2(image, np.fliplr(np.flipud(reg)), 'full')
def reg_forward_bacward(reg, image):
"""Apply D^tD operator"""
return reg_backward(reg, reg_forward(reg, image))
# Abbreviation
D = partial(reg_forward, reg)
Dt = partial(reg_backward, reg)
DtD = partial(reg_forward_bacward, reg)
The mean (which corresponds to the least-squares solution) can be computed with the conjugate gradient available in scipy. The API of this solver needs a LinearOperator python instance that applies the $Q$ operator. This API is not well suited for images since it must work on vectorized variables, so we wrap the code between unvectorization and vectorization operations.
In [6]:
def hess(HtH, DtD, hyper, image):
"""Apply Q = H^tH + lambda * D^tD on image"""
return HtH(image) + hyper * DtD(image)
def mv(HtH, DtD, hyper, shape, image):
"""Apply Q on vectorized image"""
# vector to image, apply Q = H^tH + lambda * D^tD, then vectorize
return hess(HtH, DtD, hyper, image.reshape(shape)).reshape((-1, 1))
# The Q operator on the vectorized unknown
# The matvec API is limited to function of one parameter so we use functools.partial
Q = LinearOperator(dtype=np.float32,
shape=(true.size, true.size),
matvec=partial(mv, HtH, DtD, 1e-10, true.shape))
# Vectorized $H^t y$
b = Ht(data).reshape((-1, 1))
The solver can be run now (with $b = H^ty$ as init).
In [7]:
sol, _ = cg(Q, b, x0=b)
sol = sol.reshape(true.shape)
In [8]:
fig, axes = plt.subplots(1, 3, figsize=(10, 10))
im = axes[0].imshow(true[100:300, 100:300], vmin=true.min(), vmax=true.max())
title = axes[0].set_title('Ground truth')
axes[0].axis('off')
im = axes[1].imshow(data[100:300, 100:300], vmin=true.min(), vmax=true.max())
title = axes[1].set_title('Data')
axes[1].axis('off')
im = axes[2].imshow(sol[100:300, 100:300], vmin=true.min(), vmax=true.max())
title = axes[2].set_title('RLS')
axes[2].axis('off')
plt.tight_layout()
Two questions arise.
1. In the figure above, the deconvolution seems under regularized. We may want to estimate the hyperparameter value $\lambda$.
2. We have no uncertainty about the estimation.
The Bayesian approach and algorithms are a possibility (among others) to answer these questions.
## Bayesian restoration¶
With the Bayesian approach we can state the problem as inference on the extended posterior law $$p(x, \gamma_n, \gamma_x \mid y) \propto \gamma_n^{\frac{M}{2} - 1} \gamma_x^{\frac{N-1}{2} - 1}\exp \left( -\frac{\gamma_n}{2} \|y - Hx \|_2^2 \right) \exp \left( -\frac{\gamma_x}{2} \|Dx \|_2^2 \right)$$ and to choose the posterior mean as estimator $$\hat{x}, \hat{\gamma_n}, \hat{\gamma_x} = \int [x, \gamma_n, \gamma_x]\ p(x, \gamma_n, \gamma_x \mid y)\ d x\ d \gamma_n\ d \gamma_x.$$
One way to compute this integral is stochastic sampling with MCMC and more specifically the Gibbs sampler that leads to the following algorithm.
## The Gibbs sampler algorithm¶
Initialize $k \gets 0$, $\gamma_n^{(0)}$ and $\gamma_x^{(0)}$ (for instance at $1$). Then
1. draw $x^{(k)} \sim p\left(x \mid \gamma_n^{(k)}, \gamma_x^{(k)}, y\right)$
2. draw $\gamma_n^{(k+1)} \sim p\left(\gamma_n \mid x^{(k)}, y\right)$
3. draw $\gamma_x^{(k+1)} \sim p\left(\gamma_x \mid x^{(k)}\right)$
4. $k \gets k + 1$ and return to step 1 except if stopping condition is reached.
Finally we estimate the posterior mean with the law of large numbers from the last $P$ samples $$\hat x = \frac{1}{P} \sum_{i=k-P}^{k-1} x^{(i)}.$$
With the above model, $p(\gamma_n \mid x, y)$ and $p(\gamma_x \mid x^{(k)})$ are Gamma laws that can easily be simulated with the numpy toolbox.
However, the conditional law $p(x \mid \gamma_n, \gamma_x, y)$ is a high-dimensional Gaussian law with non-stationary correlation. Classical approaches like the Cholesky factorization are unfeasible because of the size of the problem, whereas the PO algorithm is designed exactly for this situation.
## The PO sampler¶
The conditional law $p(x \mid \gamma_n, \gamma_x, y)$ is $$p(x \mid \gamma_n, \gamma_x, y) \propto \exp \left( -\frac{\gamma_n}{2} \|y - Hx \|_2^2 \right) \exp \left( -\frac{\gamma_x}{2} \|Dx \|_2^2 \right).$$ This posterior law is a Gaussian law with mean $$\mu = \gamma_n\Sigma H^t y$$ and covariance $$\Sigma^{-1} = Q = \gamma_n H^tH + \gamma_x D^tD.$$
With the above model, the PO sampler consists of a perturbation of the mean
1. $\tilde{y} \gets y + \epsilon_n$ with $\epsilon_n \sim \mathcal{N}\left(0, \gamma_n^{-1} I\right)$
2. $\tilde{x} \gets \epsilon_x$ with $\epsilon_x \sim \mathcal{N}\left(0, \gamma_x^{-1} I\right)$ and to solve the following optimization problem $$x^{(k)} = \text{arg min}_x\ \frac{\gamma_n}{2} \|\tilde{y} - Hx \|_2^2 + \frac{\gamma_x}{2} \|D(x - \tilde{x})\|_2^2$$
In conclusion, in comparison to supervised deconvolution, the changes are just
1. simulate the hyperparameter according to the model (scalar Gamma law here with the standard toolbox)
2. simulate a perturbation of the previous criterion and optimize it as before with a linear solver like the conjugate gradient.
## Implementation¶
Almost everything is already in place except the $Q$ operator that must be slightly modified to take two hyperparameters.
In [9]:
def mv(HtH, DtD, gamma_n, gamma_x, image):
# vector to image
image = image.reshape(true.shape)
# Apply H^tH + mu * D^tD
out = gamma_n * HtH(image) + gamma_x * DtD(image)
# then vectorize
return out.reshape((-1, 1))
We can now define our Gibbs sampler with for loops (I set the maximum number of iterations for the CG to 50 because it seems sufficient; the RJ-PO algorithm allows this value to be tuned automatically).
In [10]:
gamma_n, gamma_x = 1, 1
gamma_n_chain, gamma_x_chain = [gamma_n], [gamma_x]
burnin, max_iter, acc = 20, 100, 0
data_t = Ht(data)
mean = np.zeros(data_t.shape)
sample = np.zeros(data_t.shape)
cum2 = np.zeros(data_t.shape)
pshape = tuple(s - 2 for s in data_t.shape) # depends on the regularization IR
for iteration in range(max_iter):
# Perturbation
data_tilde = data + npr.standard_normal(data.shape) / np.sqrt(gamma_n)
x_tilde = npr.standard_normal(pshape) / np.sqrt(gamma_x)
b = gamma_n * Ht(data_tilde) + gamma_x * Dt(x_tilde)
Q = LinearOperator(dtype=np.float32, shape=(true.size, true.size),
matvec=partial(mv, HtH, DtD, gamma_n, gamma_x))
# Optimization
opt, _ = cg(Q, b.reshape((-1, 1)), x0=sample.reshape((-1, 1)), maxiter=50)
sample = opt.reshape(true.shape)
# Hyperparameter simulation
gamma_n = npr.gamma(data.size / 2, 2 / np.sum(abs(data - H(sample))**2))
gamma_x = npr.gamma((true.size - 1) / 2, 2 / np.sum(abs(D(sample))**2))
# Keep in memory all the values
gamma_n_chain.append(gamma_n)
gamma_x_chain.append(gamma_x)
# Keeping the full set of image samples in memory can take a lot of space,
# so we accumulate the samples to compute the mean and variance instead
if iteration >= burnin:
mean = mean + sample
cum2 = cum2 + sample**2
acc += 1
mean = mean / acc
std = np.sqrt(cum2 / acc - mean**2)
In [11]:
fig, axes = plt.subplots(2, 2, figsize=(10, 10))
axes[0][0].imshow(true[100:300, 100:300], vmin=true.min(), vmax=true.max())
axes[0][0].set_title('Ground truth $x$')
axes[0][0].axis('off')
axes[0][1].imshow(data[100:300, 100:300], vmin=true.min(), vmax=true.max())
axes[0][1].set_title('Data $y$')
axes[0][1].axis('off')
axes[1][0].imshow(mean[100:300, 100:300], vmin=true.min(), vmax=true.max())
axes[1][0].set_title('$\hat{x}$')
axes[1][0].axis('off')
im = axes[1][1].imshow(std[100:300, 100:300])
axes[1][1].set_title('$\hat{\sigma_x}$')
axes[1][1].axis('off')
plt.tight_layout()
In [12]:
gn_mean = np.mean(gamma_n_chain[burnin:])
gn_std = np.std(gamma_n_chain[burnin:])
gx_mean = np.mean(gamma_x_chain[burnin:])
gx_std = np.std(gamma_x_chain[burnin:])
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
axes[0].axhline(true_gamma_n, color='k', label='$\gamma_n$')
axes[0].axhline(gn_mean, color='g', label='$\hat{\gamma_n}$')
axes[0].axhline(gn_mean + 3 * gn_std, color='g', ls='dashed')
axes[0].axhline(gn_mean - 3 * gn_std, color='g', ls='dashed',
label='$\pm 3 \hat\sigma$')
axes[0].plot(gamma_n_chain)
axes[0].set_ylim([0, 11])
axes[0].set_title('$\gamma_n$ chain')
axes[0].set_xlabel('Iteration')
axes[0].legend()
axes[1].axhline(gx_mean, color='g', label='$\hat{\gamma_x}$')
axes[1].axhline(gx_mean + 3 * gx_std, color='g', ls='dashed')
axes[1].axhline(gx_mean - 3 * gx_std, color='g', ls='dashed',
label='$\pm 3 \hat\sigma$')
axes[1].plot(gamma_x_chain)
axes[1].set_ylim([0, 0.0008])
axes[1].set_title('$\gamma_x$ chain')
axes[1].legend()
xlabel = axes[1].set_xlabel('Iteration')
## Conclusion¶
We show in this notebook that the PO algorithm allows the simulation of large Gaussian laws whenever a linear solver (or a quadratic optimization algorithm) is available, which is a common case. This makes it possible to use Bayesian algorithms that can, for instance, estimate the hyperparameters or the uncertainty. We demonstrate on a deconvolution problem that the method is effective, does not require many extra steps, and is easily extendable to other problems. Other illustrations can be found in the references mentioned above.
|
# Need some help settling a bet
tl;dr: Is there any $i \in \mathbb{N}$ for which $2^{i} \bmod 3 = 0$?
Some friends and I made a bet recently. Basically, if you are 3 persons who are to share a pizza, and you start out by cutting it into 4 even sized pieces, can you ever keep doubling the number of slices so that everyone will be able to pick up the same amount of pieces and get the same amount of pizza?
We did some mathematics on this ourselves, but seeing as we're all IT-engineers, our math skills are quite rusty :) We did come up with a small application to check it out, and it seems that there is no solution for $i < 1000$ or so. However, we're not quite satisfied with this solution. Can anyone provide a proof that it will never happen (or the opposite)? :)
English is not my first language, so I might not have been able the formulate the question clear enough. If more information is needed, please ask :)
-
No. You are asking for a number which is a power of 2 and is divisible by 3. This is impossible by unique factorization of integers into prime numbers (The fundamental theorem of arithmetic). – KotelKanim Oct 14 '11 at 8:09
And they say number theory has no real world application. – Joel Cohen Oct 14 '11 at 14:58
So did you win the bet or lose it? ;) – Srivatsan Oct 14 '11 at 18:43
Sadly, I lost it :( – cwap Oct 15 '11 at 18:39
Also, in the future, please try to use titles for your posts that help people browsing the titles to decide whether they might be able to help in your question. Eg. here you could use the title “Is there a power of two divisible by three?” or “Splitting pizza to three people evenly by repeatedly halving slices”. – Zsbán Ambrus Mar 4 '12 at 12:17
No, you can't, because the decomposition of a number into primes is unique and a $3$ does not appear in the prime decomposition of $2^i$.
-
On the other hand, if you're willing to cut infinitely often, you cut the pizza into four equal pieces, then take one piece each, and repeat with the remaining piece. If you do this exponentially faster you can finish the pizza in a finite amount of time.
-
One might get hungry around step 5, however! – JavaMan Oct 14 '11 at 18:03
On the other other hand, if you're willing to allow an infinite amount of toppings on your pizza... (^^ ?!?!) – The Chaz 2.0 Oct 14 '11 at 18:09
For an argument without appealing to prime factorization, just note the following fact: the product of two odd numbers is odd.
Now suppose that we could write $2^i = 3n$ for some integer $n$. $2^i$ is clearly even, so $n$ must be even: $n = 2m$ for some integer $m$. Therefore, dividing both sides by 2, we have $2^{i-1} = 3m$, so $2^{i-1}$ can also be written as a multiple of 3. Repeating this process $i$ times, we find that $2$ can be written as a multiple of $3$, which is absurd.
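A quick check of the first few powers (the residues simply alternate, as the parity argument above shows):

```python
# 2**i % 3 never hits 0 -- it alternates 2, 1, 2, 1, ...
residues = [pow(2, i, 3) for i in range(1, 21)]
print(residues)
print(0 in residues)   # False
```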
-
|
# Can you complete a basis in polynomial time?
Here is the problem: we are given vectors $v_1, \ldots, v_k$ lying in $\mathbb{R}^n$ which are orthogonal. We assume that the entries of $v_i$ are rational, with numerator and denominator taking $K$ bits to describe. We would like to find vectors $v_{k+1}, \ldots, v_n$ such that $v_1, \ldots, v_n$ is an orthogonal basis for $\mathbb{R}^n$.
I would like to say this can be done in polynomial time in $n$ and $K$. However, I'm not sure this is the case. My question is to provide a proof that this can indeed be done in polynomial time.
Here is where I am stuck. Gram-Schmidt suggests to use the following iterative process. Suppose we currently have the collection $v_1, \ldots, v_l$. Take the basis vectors $e_1, \ldots, e_n$, go through them one by one, and if some $e_i$ is not in the span of the $v_1, \ldots, v_l$, then set $v_{l+1} = P_{{\rm span}(v_1, \ldots, v_l)^\perp} e_i$ (here $P$ is the projection operator). Repeat.
This works in the sense that the number of additions and multiplications is polynomial in $n$. But what happens to the bit-sizes of the entries? The issue is that the projection of $e_i$ onto, say, $v_1$ may have denominators which need $2K$ bits or more to describe - because $P_{v_1}(e_i)$ is $v_1$ times its $i$'th entry, divided by $\|v_1\|^2$. Just $v_1$ times its $i$'th entry may already need $2K$ bits to describe.
By a similar argument, it seems that each time I do this, the number of bits doubles. By the end, I may need $2^{\Omega(n)}$ bits to describe the entries of the vector. How do I prove this does not happen? Or perhaps should I be doing things differently to avoid this?
The result of the Gram-Schmidt can be expressed in determinantal form, see Wikipedia. This shows that the output of the Gram-Schmidt process is polynomial size. This suggests that if you run the classical Gram-Schmidt process, then all intermediate entries are also polynomial size (even in LLL, all intermediate entries are polynomial size). However, even if it is not the case, then using efficient algorithms for computing the determinant (see my other answer), you can compute the Gram-Schmidt orthogonalization in polynomial time.
Edit: This answer doesn't address the requirement of orthogonality. Perhaps the Bareiss algorithm still helps.
Let $V$ be a matrix whose rows are $v_1,\ldots,v_\ell$. Convert $V$ to row echelon form, say $PVP^{-1} = [I|A]$, using the Bareiss algorithm. Continue from there. (There are other alternatives using the same general idea.)
If you used Gaussian elimination instead, then intermediate entries could have exponential size, but the Bareiss algorithm avoids this somehow.
Edit: Here is a practical suggestion. Add random vectors - if you do it in a reasonable way, then it is very probably that the result will be a basis. You could even check whether you get a basis by computing the determinant. The latter can be done in many ways, see for example a survey by Rote, or you can compute the determinant modulo $p$ for "random" $p$ - if the result is non-zero (which it probably will), you know you got a basis.
• Can you elaborate on "continue from there?" What is the next step after the matrix is in row echelon form? – robinson Mar 30 '13 at 5:33
• Complete the echelon form into a basis (easy) and then apply $P$ in reverse. – Yuval Filmus Mar 30 '13 at 12:05
• But I need an orthogonal basis. It seems to me the resulting basis will not be orthogonal. – robinson Mar 30 '13 at 17:35
• Ah - I missed that. – Yuval Filmus Mar 30 '13 at 19:26
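For illustration only (not from the thread): a direct sketch of the projection-based completion described in the question, using exact rational arithmetic with Python's fractions.Fraction. The function name and the example are made up; the growth of the numerators and denominators under repeated projection is exactly what the question worries about.

```python
from fractions import Fraction

def complete_orthogonal_basis(vs, n):
    """Extend orthogonal rational vectors vs (each of length n) to an
    orthogonal basis of R^n by projecting the standard basis vectors."""
    basis = [list(v) for v in vs]
    for i in range(n):
        e = [Fraction(0)] * n
        e[i] = Fraction(1)
        # Subtract the projection of e onto every vector already in the basis.
        for v in basis:
            dot_ev = sum(a * b for a, b in zip(e, v))
            dot_vv = sum(a * a for a in v)
            e = [a - dot_ev * b / dot_vv for a, b in zip(e, v)]
        if any(a != 0 for a in e):
            basis.append(e)
        if len(basis) == n:
            break
    return basis

# Example: complete {(1, 1, 0)} to an orthogonal basis of R^3.
v1 = [Fraction(1), Fraction(1), Fraction(0)]
for v in complete_orthogonal_basis([v1], 3):
    print(v)
```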
|
# Probability and Mathematical Physics Seminar
#### Covering systems of congruences
Speaker: Bob Hough, Stony Brook
Location: Warren Weaver Hall 1302
Date: Friday, October 14, 2022, 3 p.m.
Synopsis:
A distinct covering system of congruences is a list of congruences $a_i \bmod m_i, \qquad i = 1, 2, ..., k$ whose union is the integers. Erdős asked if the least modulus $m_1$ of a distinct covering system of congruences can be arbitrarily large (the minimum modulus problem for covering systems, a \$1000 prize problem) and if there exist distinct covering systems of congruences all of whose moduli are odd (the odd problem for covering systems, a \$25 prize problem). I'll discuss my proof of a negative answer to the minimum modulus problem, and a quantitative refinement with Pace Nielsen that proves that any distinct covering system of congruences has a modulus divisible by either 2 or 3. The proofs use the probabilistic method and in particular use a sequence of pseudorandom probability measures adapted to the covering process. Time permitting, I may briefly discuss a reformulation of our method due to Balister, Bollobás, Morris, Sahasrabudhe and Tiba which solves a conjecture of Schinzel (any distinct covering system of congruences has one modulus that divides another) and gives a negative answer to the square-free version of the odd problem.
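For a concrete feel (this example is not in the abstract): the classic small covering system uses the moduli 2, 3, 4, 6 and 12, and a few lines of Python confirm that it covers every integer:

```python
# The classic covering system: every integer satisfies at least one congruence.
system = [(0, 2), (0, 3), (1, 4), (5, 6), (7, 12)]
covered = all(any(n % m == a for a, m in system) for n in range(12))  # lcm = 12
print(covered)   # True
```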
|
# How do you find the equation of the line that goes through P(3, -4), slope is undefined?
Aug 18, 2017
See a solution process below:
#### Explanation:
A line is a vertical line when the slope of the line is undefined.
This means for each and every value of $y$, the $x$ value is the same.
So, given the point $P \left(3 , - 4\right)$ we know the $x$ value is $3$.
Therefore, we can plug any value in for $y$ and $x$ will always be $3$.
Therefore the equation for this line is:
$x = 3$
Regardless of the value of $y$, $x$ will always be $3$
|
# regexp match within a log file, return dynamic content above and below match
I have some catchall log files in a format as follows:
timestamp event summary
foo details
account name: userA
bar more details
timestamp event summary
baz details
account name: userB
qux more details
timestamp etc.
I would like to search the log file for userB, and if found, echo from the preceding timestamp down to (but not including) the following timestamp. There will likely be several events matching my search. It would be nice to echo some sort of --- start --- and --- end --- surrounding each match.
This would be perfect for pcregrep -M, right? Problem is, GnuWin32's pcregrep crashes with multiline regexps searching large files, and these catch-all logs can be 100 megs or more.
What I've tried
My hackish workaround thus far involves using grep -B15 -A30 to find matching lines and print surrounding content, then piping the now more manageable chunk into pcregrep for polishing. Problem is that some events are less than ten lines, while others are 30 or more; and I'm getting some unexpected results where the shorter events are encountered.
:parselog <username> <logfile>
set silent=1
set count=0
set deez=20\d\d-\d\d-\d\d \d\d:\d\d:\d\d
echo Searching %~2 for records containing %~1...
for /f "delims=" %%I in (
'grep -P -i -B15 -A30 ":\s+\b%~1\b(@mydomain\.ext)?$" "%~2" ^| pcregrep -M -i "^%deez%(.|\n)+?\b%~1\b(@mydomain\.ext|\r?\n)(.|\n)+?\n%deez%" 2^>NUL'
) do (
    echo(%%I| findstr "^20[0-9][0-9]-[0-9][0-9]-[0-9][0-9].[0-9][0-9]:[0-9][0-9]:[0-9][0-9]" >NUL && (
        if defined silent (
            set silent=
            set found=1
            set /a "count+=1"
            echo;
            echo ---------------start of record !count!-------------
        ) else (
            set silent=1
            echo ----------------end of record !count!--------------
            echo;
        )
    )
    if not defined silent echo(%%I
)
goto :EOF
Is there a better way to do this? I've come across an awk command that looked interesting, something like:
awk "/start pattern/,/end pattern/" logfile
... but it would need to match a middle pattern as well. Unfortunately, I'm not that familiar with awk syntax. Any suggestions?
Ed Morton suggested that I supply some example logging and expected output.
Example catch-all
2013-03-25 08:02:32 Auth.Critical 169.254.8.110 Mar 25 08:02:32 dc3 MSWinEventLog 2 Security 11730158 Mon Mar 25 08:02:28 2013 529 Security NT AUTHORITY\SYSTEM N/A Audit Failure dc3 2 Logon Failure:
Reason: Unknown user name or bad password
User Name: user5f
Domain: MYDOMAIN
Logon Type: 3
Logon Process: Advapi
Authentication Package: Negotiate
Workstation Name: dc3
Caller User Name: dc3$
Caller Domain: MYDOMAIN
Caller Logon ID: (0x0,0x3E7)
Caller Process ID: 400
Transited Services: -
Source Port: 40838
2013-03-25 08:02:32 Auth.Critical 169.254.8.110 Mar 25 08:02:32 dc3 MSWinEventLog 2 Security 11730159 Mon Mar 25 08:02:29 2013 680 Security NT AUTHORITY\SYSTEM N/A Audit Failure dc3 9 Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
Logon account: USER6Q
Source Workstation: dc3
Error Code: 0xC0000234
2013-03-25 08:02:32 Auth.Critical 169.254.8.110 Mar 25 08:02:32 dc3 MSWinEventLog 2 Security 11730160 Mon Mar 25 08:02:29 2013 539 Security NT AUTHORITY\SYSTEM N/A Audit Failure dc3 2 Logon Failure:
Reason: Account locked out
User Name: [email protected]
Domain: MYDOMAIN
Logon Type: 3
Authentication Package: Negotiate
Workstation Name: dc3
Caller User Name: dc3$
Caller Domain: MYDOMAIN
Caller Logon ID: (0x0,0x3E7)
Caller Process ID: 400
Transited Services: -
Source Network Address: 169.254.7.89
Source Port: 55314
2013-03-25 08:02:32 Auth.Notice 169.254.5.62 Mar 25 08:36:38 DC4.mydomain.tld MSWinEventLog 5 Security 201326798 Mon Mar 25 08:36:37 2013 4624 Microsoft-Windows-Security-Auditing N/A Audit Success DC4.mydomain.tld 12544 An account was successfully logged on.
Subject:
Security ID: S-1-0-0
Account Name: -
Account Domain: -
Logon ID: 0x0
Logon Type: 3
New Logon:
Security ID: S-1-5-21-606747145-1409082233-725345543-160838
Account Name: DEPTACCT16$
Account Domain: MYDOMAIN
Logon ID: 0x1158e6012c
Logon GUID: {BCC72986-82A0-4EE9-3729-847BA6FA3A98}
Process Information:
Process ID: 0x0
Process Name: -
Network Information:
Workstation Name:
Source Port: 42183
Detailed Authentication Information:
Logon Process: Kerberos
Authentication Package: Kerberos
Transited Services: -
Package Name (NTLM only): -
Key Length: 0
This event is generated when a logon session is created. It is generated on the computer that was accessed.
The subject fields indicate...
2013-03-25 08:02:32 Auth.Critical 169.254.8.110 Mar 25 08:02:32 dc3 MSWinEventLog 2 Security 11730162 Mon Mar 25 08:02:30 2013 675 Security NT AUTHORITY\SYSTEM N/A Audit Failure dc3 9 Pre-authentication failed:
User Name: USER8Y
User ID: %{S-1-5-21-606747145-1409082233-725345543-3904}
Service Name: krbtgt/MYDOMAIN
Pre-Authentication Type: 0x0
Failure Code: 0x19
2013-03-25 08:02:32 Auth.Critical etc.
Example command
call :parselog user6q \\path\to\catch-all.log
Expected result
---------------start of record 1-------------
2013-03-25 08:02:32 Auth.Critical 169.254.8.110 Mar 25 08:02:32 dc3 MSWinEventLog 2 Security 11730159 Mon Mar 25 08:02:29 2013 680 Security NT AUTHORITY\SYSTEM N/A Audit Failure dc3 9 Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
Logon account: USER6Q
Source Workstation: dc3
Error Code: 0xC0000234
---------------end of record 1-------------
---------------start of record 2-------------
2013-03-25 08:02:32 Auth.Critical 169.254.8.110 Mar 25 08:02:32 dc3 MSWinEventLog 2 Security 11730160 Mon Mar 25 08:02:29 2013 539 Security NT AUTHORITY\SYSTEM N/A Audit Failure dc3 2 Logon Failure:
Reason: Account locked out
User Name: [email protected]
Domain: MYDOMAIN
Logon Type: 3
Authentication Package: Negotiate
Workstation Name: dc3
Caller User Name: dc3$
Caller Domain: MYDOMAIN
Caller Logon ID: (0x0,0x3E7)
Caller Process ID: 400
Transited Services: -
Source Network Address: 169.254.7.89
Source Port: 55314
---------------end of record 2-------------
Don't ever use awk "/start pattern/,/end pattern/" logfile. It makes trivial stuff slightly briefer but you can't expand it to work for non-trivial stuff. If you posted some sample input (I'm assuming you have actual timestamps in your file rather than the word "timestamp") and expected output that would help. There is a simple awk solution. – Ed Morton Mar 26 '13 at 2:17
## 4 Answers
This is all you need with GNU awk (for IGNORECASE):
$ cat tst.awk
function prtRecord() {
if (record ~ regexp) {
printf "-------- start of record %d --------%s", ++numRecords, ORS
printf "%s", record
printf "--------- end of record %d ---------%s%s", numRecords, ORS, ORS
}
record = ""
}
BEGIN{ IGNORECASE=1 }
/^[[:digit:]]+-[[:digit:]]+-[[:digit:]]+/ { prtRecord() }
{ record = record $0 ORS }
END { prtRecord() }
or with any awk:
$ cat tst.awk
function prtRecord() {
if (tolower(record) ~ tolower(regexp)) {
printf "-------- start of record %d --------%s", ++numRecords, ORS
printf "%s", record
printf "--------- end of record %d ---------%s%s", numRecords, ORS, ORS
}
record = ""
}
/^[[:digit:]]+-[[:digit:]]+-[[:digit:]]+/ { prtRecord() }
{ record = record $0 ORS }
END { prtRecord() }
Either way you'd run it on UNIX as:
$ awk -v regexp=user6q -f tst.awk file
I don't know the Windows syntax but I expect it's very similar if not identical.
Note the use of tolower() in the script to make both sides of the comparison lower case so the match is case-insensitive. If you can instead pass in a search regexp that's the correct case, then you don't need to call tolower() on either side of the comparison. nbd, it might just speed the script up slightly.
$ awk -v regexp=user6q -f tst.awk file
-------- start of record 1 --------
2013-03-25 08:02:32 Auth.Critical 169.254.8.110 Mar 25 08:02:32 dc3 MSWinEventLog 2 Security 11730159 Mon Mar 25 08:02:29 2013 680 Security NT AUTHORITY\SYSTEM N/A Audit Failure dc3 9 Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
Logon account: USER6Q
Source Workstation: dc3
Error Code: 0xC0000234
--------- end of record 1 ---------
-------- start of record 2 --------
2013-03-25 08:02:32 Auth.Critical 169.254.8.110 Mar 25 08:02:32 dc3 MSWinEventLog 2 Security 11730160 Mon Mar 25 08:02:29 2013 539 Security NT AUTHORITY\SYSTEM N/A Audit Failure dc3 2 Logon Failure:
Reason: Account locked out
User Name: [email protected]
Domain: MYDOMAIN
Logon Type: 3
Logon Process: Advapi
Authentication Package: Negotiate
Workstation Name: dc3
Caller User Name: dc3$
Caller Domain: MYDOMAIN
Caller Logon ID: (0x0,0x3E7)
Caller Process ID: 400
Transited Services: -
Source Port: 55314
--------- end of record 2 ---------
-
This is sort of what I was expecting, but in practice it's very slow. I think I'd like to stick with my idea of using grep with context, then trimming the fat from around the middle, as grep seems able to find all matches in 100 meg file in just a few seconds. However, awk seems to be echoing out everything it received via stdin from grep. I wonder whether gnuwin32 awk doesn't behave the same as POSIX. The only things I changed in tst.awk were adding the timestamp regexp (/^\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d/ instead of /timestamp/), and added IGNORECASE = 1 to prtRecord() – rojo Mar 26 '13 at 14:16
\d is a shorthand that only some tools will understand. You were probably building up a single record of the contents of the whole file and that's why it was slow (string concatenation is slower than input/output in awk and the bigger the strings the slower it is). Use POSIX character classes like [[:digit:]], see my script. I don't believe that the awk script I posted would be slow. – Ed Morton Mar 26 '13 at 14:18
IGNORECASE is a GNU awk extension, not available in POSIX awks. If you have GNU awk (awk --version will tell you) then you should assign IGNORECASE=1 in the BEGIN section rather than in the function and get rid of the tolower()s. I'll update my script to show that alternative. – Ed Morton Mar 26 '13 at 14:24
You're a pro. Look at me.... Pro. You're absolutely correct that, when I don't mess up the syntax, awk is fast. Thanks especially for the tip about POSIX bracket expressions. But I wonder whether there's any way to simulate \b around the search term? For example, we have a user account in our organization named A. That's it. Just A. With pcre I was able to match /:\s+\bA\b(@domain.tld)?$/ to ensure that I ended up with exactly the records for which I searched. Can that be reasonably translated to POSIX bracket notation? – rojo Mar 26 '13 at 14:49
\b is a backspace character in awk (I know, it's different in other tools). \B is the awk equivalent and GNU awk would support that but POSIX awks would not. There's also \< and \> and some other alternatives - see gnu.org/software/gawk/manual/gawk.html#Escape-Sequences. \s is [[:space:]] in POSIX but gawk would recognize \s too. There isn't a POSIX character class equivalent to \B, you'd have to consider using negated classes like [^[:alnum:]_]. If I were you I would use the gawk extended functionality when POSIX has no equivalent. – Ed Morton Mar 26 '13 at 15:01
Here's my effort:
@ECHO OFF
SETLOCAL
::
:: Target username
::
SET target=%1
CALL :zaplines
SET count=0
FOR /f "delims=" %%I IN (rojoslog.txt) DO (
 ECHO.%%I| findstr /r "^20[0-9][0-9]-[0-9][0-9]-[0-9][0-9].[0-9][0-9]:[0-9][0-9]:[0-9][0-9]" >NUL
 IF NOT ERRORLEVEL 1 (
  IF DEFINED founduser CALL :report
  CALL :zaplines
 )
 (SET stored=)
 FOR /l %%L IN (1000,1,1200) DO IF NOT DEFINED stored IF NOT DEFINED line%%L (
  SET line%%L=%%I
  SET stored=Y
 )
 ECHO.%%I|FINDSTR /b /e /i /c:"account name: %target%" >NUL
 IF NOT ERRORLEVEL 1 (SET founduser=Y)
)
IF DEFINED founduser CALL :report
GOTO :eof
::
:: remove all envvars starting 'line'
:: Set 'not found user' at same time
::
:zaplines
(SET founduser=)
FOR /f "delims==" %%L IN ('set line 2^>nul') DO (SET %%L=)
GOTO :eof
:report
IF NOT DEFINED line1000 GOTO :EOF
SET /a count+=1
ECHO.
ECHO.---------- START of record %count% ----------
FOR /l %%L IN (1000,1,1200) DO IF DEFINED line%%L CALL ECHO.%%line%%L%%
ECHO.----------- END of record %count% -----------
GOTO :eof
- Thanks Peter, and a good effort it is. Unfortunately, it's too slow for me to use. I initiated the script about 20 minutes ago. Since then, both cores of my CPU have been bouncing between 70 - 100%, but the script is still trying to loop through the first log file. I don't think I'll be able to use a pure batch solution. – rojo Mar 26 '13 at 12:55
Below there is a pure Batch solution that does not use grep. It locates timestamp lines via the "summary" word, which must not exist in other lines, but this word may be changed for another one if needed. EDIT: I changed the word that identifies timestamp lines to "Auth."; I also changed the FINDSTR search to ignore case. This is the new version:
@echo off
setlocal EnableDelayedExpansion
:parselog <username> <logfile>
echo Searching %~2 for records containing %~1...
set n=0
set previousMatch=Auth.
for /F "tokens=1* delims=:" %%a in ('findstr /I /N "Auth\. %~1" %2') do (
   set currentMatch=%%b
   if "!previousMatch:Auth.=!" neq "!previousMatch!" (
      if "!currentMatch:Auth.=!" equ "!currentMatch!" (
         set /A n+=1
         set /A skip[!n!]=!previousLine!-1
      )
   ) else (
      set /A end[!n!]=%%a-1
   )
   set previousLine=%%a
   set previousMatch=%%b
)
if %n% equ 0 (
   echo No records found
   goto :EOF
)
if not defined end[%n%] set end[%n%]=-1
set i=1
:nextRecord
echo/
echo ---------------start of record %i%-------------
if !skip[%i%]! equ 0 (
   set skip=
) else (
   set skip=skip=!skip[%i%]!
)
set end=!end[%i%]!
for /F "%skip% tokens=1* delims=:" %%a in ('findstr /N "^" %2') do (
   echo(%%b
   if %%a equ %end% goto endOfRecord
)
:endOfRecord
echo ---------------end of record %i%-------------
set /A i+=1
if %i% leq %n% goto nextRecord
Example command:
C:\>test user6q catch-all.log
Result:
Searching catch-all.log for records containing user6q...
---------------start of record 1-------------
2013-03-25 08:02:32 Auth.Critical 169.254.8.110 Mar 25 08:02:32 dc3 MSWinEventLog 2 Security 11730159 Mon Mar 25 08:02:29 2013 680 Security NT AUTHORITY\SYSTEM N/A Audit Failure dc3 9 Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
Logon account: USER6Q
Source Workstation: dc3
Error Code: 0xC0000234
---------------end of record 1-------------
---------------start of record 2-------------
2013-03-25 08:02:32 Auth.Critical 169.254.8.110 Mar 25 08:02:32 dc3 MSWinEventLog 2 Security 11730160 Mon Mar 25 08:02:29 2013 539 Security NT AUTHORITY\SYSTEM N/A Audit Failure dc3 2 Logon Failure:
Reason: Account locked out
User Name: [email protected]
Domain: MYDOMAIN
Logon Type: 3
Logon Process: Advapi
Authentication Package: Negotiate
Workstation Name: dc3
Caller User Name: dc3$
Caller Domain: MYDOMAIN
Caller Logon ID: (0x0,0x3E7)
Caller Process ID: 400
Transited Services: -
Source Port: 55314
---------------end of record 2-------------
This method uses just one execution of the findstr command to locate all matching records, and then one additional findstr command to show each record. Note that the first for /F ... command works over the findstr "Auth. user.." results, and the second for /F command has a "skip=N" option and a GOTO that breaks the loop as soon as the record has been displayed. This means that the FOR commands do not slow down the program; the speed of this program depends on the speed of the FINDSTR command.
However, it is possible that the second for /F "%skip% ... in ('findstr /N "^" %2') command takes too long because of the size of the FINDSTR output that must be produced before the FOR processes it. If this happens, we could replace the second FOR with another, faster method (an asynchronous pipe that is broken early, for example). Please report the result.
Antonio
-
+1, very nice work, very fast with empty lines from the records but no exclamation marks in the output ( DelayedExpansion). I like the 'skip' trick. – Endoro Mar 26 '13 at 9:08
I very much appreciate the work you put into this. The summary doesn't actually contain the word "summary" (which isn't that big a deal, as I can findstr /n "Auth."); but the timestamp / summary line will not include the account name, which is kind of a bigger deal. – rojo Mar 26 '13 at 12:28
I've been leaning toward using a binary such as awk or grep or similar to parse the files, as in my experience, batch for loops are drastically slower. At first, I also tried a JScript textfile.ReadAll(); but that took forever and a day on my 100+ meg, 2.5 million lines per hour log files as well. But I'm curious to see whether your method, if fixed, is more efficient than what I've attempted in the past. – rojo Mar 26 '13 at 12:29
@rojo: See the new version of my program. Please, be aware that there are still two or three modifications that could speed up my method with very large files! – Aacini Mar 26 '13 at 18:41
@Aacini - test file = 2,627,399 lines @ 94,331KB. Ed Morton's awk method = 10 seconds to find 25 records. Your scripted batch loop = 15 minutes and counting, but I'm going to have to ^C it. Thank you for satisfying my curiosity though! You're a scholar and a gentleman. – rojo Mar 26 '13 at 19:28
I think awk is all you need:
awk "/---start of record---/,/---end of record---/ {print}" logfile
That's all you need if the first line indicator is:
---start of record---
and the last is:
---end of record---
Notice that there is no middle-pattern matching, that "," is just a separator for both regexps.
-
A wise man once said that love is all you need. In this case, though, I think you mostly copied the awk line I had in my question, disregarding the rest. – rojo Mar 26 '13 at 13:03
|
# [OS X TeX] multido(?) error
Nitecki, Zbigniew H. Zbigniew.Nitecki at tufts.edu
Thu Apr 30 23:57:47 EDT 2020
I’m trying to draw some phase portraits of 2nd order systems of o.d.e. using the pst-ode package. I modelled my code after a more elaborate example
on stack exchange.
In the attached code, I get a “forgotten end group” error at line 48 (which reads \end{pspicture}), and if I persist (using r on the console) it tells me the
\begin{pspicture} on line 38 is ended by \end{document} on line 49. I know my use of ODEsolve is correct, since with parameters entered directly
it works.
Could this have to do with my putting the output of ODEsolve in the file line_ii_ij (inside the top multido)? I didn’t fully understand the first two
arguments in ODEsolve: the first is the output and the second apparently sets the mode of output (in the documentation the examples just give a name
to the output and use (0 1) for the mode of output).
Zbigniew Nitecki
Department of Mathematics
Tufts University
Medford, MA 02155
telephones:
Office (617)627-3843
Dept. (617)627-3234
Dept. fax (617)627-3966
http://www.tufts.edu/~znitecki/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://email.esm.psu.edu/pipermail/macosx-tex/attachments/20200501/fc7ed501/attachment.htm>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PstTest4.tex
Type: application/octet-stream
Size: 1151 bytes
Desc: PstTest4.tex
URL: <https://email.esm.psu.edu/pipermail/macosx-tex/attachments/20200501/fc7ed501/attachment.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PstTest4.log
Type: application/octet-stream
Size: 15349 bytes
Desc: PstTest4.log
URL: <https://email.esm.psu.edu/pipermail/macosx-tex/attachments/20200501/fc7ed501/attachment-0001.obj>
|
This article goes over VHDL, a hardware description language, and how it's structured when describing digital circuits. We'll also go over some introductory example circuit descriptions and touch on the difference between the "std_logic" and "bit" data types.
VHDL is one of the commonly used Hardware Description Languages (HDL) in digital circuit design. VHDL stands for VHSIC Hardware Description Language. In turn, VHSIC stands for Very-High-Speed Integrated Circuit.
VHDL was initiated by the US Department of Defense around 1981. The cooperation of companies such as IBM and Texas Instruments led to the release of VHDL’s first version in 1985. Xilinx, which invented the first FPGA in 1984, soon supported VHDL in its products. Since then, VHDL has evolved into a mature language in digital circuit design, simulation, and synthesis.
In this article, we will briefly discuss the general structure of the VHDL code in describing a given circuit. We will also become familiar with some commonly used data types, operators, etc. through some introductory examples.
### The General Structure of VHDL
Let’s consider a simple digital circuit as shown in Figure 1.
##### Figure 1. A simple digital circuit.
This figure shows that there are two input ports, a and b, and one output port, out1. The figure suggests that the input and output ports are one bit wide. The functionality of the circuit is to AND the two inputs and put the result on the output port.
VHDL uses a similar description; however, it has its own syntax. For example, it uses the following lines of code to describe the input and output ports of this circuit:
1 entity circuit_1 is
2 Port ( a : in STD_LOGIC;
3 b : in STD_LOGIC;
4 out1 : out STD_LOGIC);
5 end circuit_1;
Let's pull apart what this means, line by line.
Line 1: The first line of the code specifies an arbitrary name for the circuit to be described. The word “circuit_1”, which comes between the keywords “entity” and “is”, determines the name of this module.
Lines 2 to 4: These lines specify the input and output ports of the circuit. Comparing these lines to the circuit of Figure 1, we see that the ports of the circuit along with their features are listed after the keyword “port”. For example, line 3 says that we have a port called “b”. This port is an input, as indicated by the keyword “in” after the colon.
What does the keyword “std_logic” specify? As we will discuss later in this article, std_logic is a commonly used data type in VHDL. It can be used to describe a one-bit digital signal. Since all of the input/output ports in Figure 1 will transfer a one or a zero, we can use the std_logic data type for these ports.
Line 5: This line determines the end of the “entity” statement.
Hence, the entity part of the code specifies 1) the name of the circuit to be described and 2) the ports of the circuit along with their characteristics, namely, input/output and the data type to be transferred by these ports. The entity part of the code actually describes the interface of a module with its surrounding environment. The features of the above circuit which are specified by the discussed “entity” statement are shown in green in Figure 1.
In addition to the interface of a circuit with its environment, we need to describe the functionality of the circuit. In Figure 1, the functionality of the circuit is to AND the two inputs and put the result on the output port. To describe the operation of the circuit, VHDL adds an “architecture” section and relates it to circuit_1 defined by the entity statement. The VHDL code describing the architecture of this circuit will be
6 architecture Behavioral of circuit_1 is
8 begin
9 out1 <= ( a and b );
10 end Behavioral;
Line 6: This line gives a name, “Behavioral”, for the architecture that will be described in the next lines. This name comes between the keywords “architecture” and “of”. It also relates this architecture to "circuit_1". In other words, this architecture will describe the operation of “circuit_1”.
Line 8: This specifies the beginning of the architecture description.
Line 9: This line uses the syntax of VHDL to describe the circuit’s operation. The AND of the two inputs a and b is found within the parentheses, and the result is assigned to the output port using the assignment operator “<=”.
Line 10: This specifies the end of the architecture description. As mentioned above, these lines of code describe the circuit’s internal operation which, here, is a simple AND gate (shown in blue in Figure 1).
Putting together what we have discussed so far, we are almost done with describing “Circuit_1” in VHDL. We obtain the following code:
1 entity circuit_1 is
2 Port ( a : in STD_LOGIC;
3 b : in STD_LOGIC;
4 out1 : out STD_LOGIC);
5 end circuit_1;
-----------------------------------------------------
6 architecture Behavioral of circuit_1 is
8 begin
9 out1 <= ( a and b );
10 end Behavioral;
However, we still need to add a few more lines of code. These lines will add a library that contains some important definitions, including the definition of data types and operators. A library may consist of several packages (see Figure 2 below). We will have to make the required package(s) of a given library visible to the design.
Since the above example uses the data type “std_logic”, we need to add the package “std_logic_1164” from “ieee” library to the code. Note that the logical operators for the std_logic data type are also defined in the “std_logic_1164” package—otherwise we would have to make the corresponding package visible to the code. The final code will be
1 library ieee;
2 use ieee.std_logic_1164.all;
3 entity circuit_1 is
4 Port ( a : in STD_LOGIC;
5 b : in STD_LOGIC;
6 out1 : out STD_LOGIC);
7 end circuit_1;
-----------------------------------------------------
8 architecture Behavioral of circuit_1 is
9 begin
10 out1 <= ( a and b );
11 end Behavioral;
Here, we create two new lines to go above what we've already created. The first line adds the library “ieee” and the second line specifies that the package “std_logic_1164” from this library is required. Since “std_logic” is a commonly used data type, we almost always need to add the “ieee” library and the “std_logic_1164” package to the VHDL code.
##### Figure 2. A library may consist of several packages. Image courtesy of VHDL 101.
We can use the Xilinx ISE simulator to verify the operation of the above VHDL code. (For introductory information on ISE, see this tutorial.)
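If you want to exercise the design in a simulator yourself, a minimal testbench along the following lines is usually enough. This is only a sketch: the entity and architecture names tb_circuit_1 and sim are arbitrary, and it assumes circuit_1 has been compiled into the default “work” library.
library ieee;
use ieee.std_logic_1164.all;
entity tb_circuit_1 is
end tb_circuit_1;
architecture sim of tb_circuit_1 is
    signal a, b, out1 : std_logic;
begin
    -- instantiate the unit under test
    uut: entity work.circuit_1 port map (a => a, b => b, out1 => out1);
    -- drive all four input combinations, 10 ns apart
    stim: process
    begin
        a <= '0'; b <= '0'; wait for 10 ns;
        a <= '0'; b <= '1'; wait for 10 ns;
        a <= '1'; b <= '0'; wait for 10 ns;
        a <= '1'; b <= '1'; wait for 10 ns;
        wait;
    end process;
end sim;
The waveform viewer should then show out1 high only during the last 10 ns interval, when both inputs are '1'.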
Now that we are familiar with the fundamental units in VHDL code, let’s review one of the most important VHDL data types, i.e., the “std_logic” data type.
### The "std_logic" Data Type (vs. "bit")
As mentioned above, the “std_logic” data type can be used to represent a one-bit signal. Interestingly, there is another VHDL data type, “bit”, which can take on a logic one or a logic zero.
So why do we need the std_logic data type if the “bit” data type already covers the high and low states of a digital signal? Well, a digital signal is actually not limited to logic high and logic low. Consider a tri-state inverter, as shown in Figure 3.
##### Figure 3. The transistor-level schematic of a tri-state inverter.
When “enable” is high, “data_output” is connected to either Vdd or ground; however, when “enable” is low, “data_output” is floating, i.e., it does not have a low-impedance connection to Vdd or ground but instead presents a “high impedance” to the external circuitry. The “std_logic” data type allows us to describe a digital signal in high-impedance mode by assigning the value ‘Z’.
There is another state—i.e., in addition to logic high, logic low, and high impedance—that can be used in the design of digital circuits. Sometimes we don’t care about the value of a particular input. In this case, representing the value of the signal with a “don’t care” can lead to a more efficient design. The “std_logic” data type supports the “don’t care” state. This enables better hardware optimization for look-up tables.
The “std_logic” data type also allows us to represent an uninitialized signal by assigning the value ‘U’. This can be helpful when simulating a piece of code in VHDL. It turns out that the “std_logic” data type can actually take on nine values:
• ‘U’: Uninitialized
• ‘1’ : The usual indicator for a logic high, also known as ‘Forcing high’
• ‘0’: The usual indicator for a logic low, also known as ‘Forcing low’
• ‘Z’: High impedance
• ‘-’: Don’t care
• ‘W’: Weak unknown
• ‘X’: Forcing unknown
• ‘H’: Weak high
• ‘L’: Weak low
Among these values, we commonly use ‘0’, ‘1’, ‘Z’, and ‘-’.
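For instance, the tri-state behavior of Figure 3 could be written with a single conditional assignment that releases the output when “enable” is low. This is just a sketch: the input name data_input is an assumption (the figure discussion only names “enable” and “data_output”), and the surrounding entity and architecture are omitted.
-- drive the inverted input when enabled; otherwise release the line ('Z' = high impedance)
data_output <= not data_input when enable = '1' else 'Z';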
Let’s look at an example.
#### Example 1
Write the VHDL code for the circuit in Figure 4.
##### Figure 4.
The general procedure is almost the same as the previous example. The code will be as follows:
1 library IEEE;
2 use IEEE.STD_LOGIC_1164.ALL;
----------------------------------------------------
3 entity circuit_2 is
4 Port ( a : in STD_LOGIC;
5 b : in STD_LOGIC;
6 c : in STD_LOGIC;
7 d : in STD_LOGIC;
8 out1 : out STD_LOGIC;
9 out2 : out STD_LOGIC);
10 end circuit_2;
-----------------------------------------------------
11 architecture Behavioral of circuit_2 is
12 signal sig1: std_logic;
13 begin
14 sig1 <= ( a and b );
15 out1 <= ( sig1 or c );
16 out2 <= (not d);
17 end Behavioral;
Lines 1 and 2: These lines add the required library and package to the code. Since the “std_logic” data type is used, we have to add the “std_logic_1164” package.
Lines 3-10: These lines specify the name of the module along with its input/output ports. This part of the code corresponds to the parts of Figure 4 that are in green.
Lines 11-17: This part of the code describes the operation of the circuit (those parts of Figure 4 that are in blue). As you may have noticed, there is one internal node in Figure 4; it is labeled “sig1”. We use the “port” statement from “entity” to define the input/output ports, but how can we define the internal nodes of a circuit? For this, we use the “signal” keyword.
In line 12 of the above code, the “signal” keyword tells the synthesis software that there is a node in the circuit labeled “sig1”. Similar to the definition of the ports, we use the keyword “std_logic” after the colon to specify the required data type. Now we can assign a value to this node (line 14) or use its value (line 15).
#### Example 2
Write the VHDL code for the circuit in Figure 5.
##### Figure 5.
This circuit is a two-to-one multiplexer. When “sel” is high, the output of the lower AND gate will be low regardless of the value of “b”. We may say that the AND gate prevents “b” from propagating to “sig2”. On the other hand, since “sel” is high, the output of the upper AND gate will follow “a”. Or, equivalently, “a” will reach “sig3”. Since “sig2” is low in this case, the output of the OR gate will be the same as “sig3”. Hence, when “sel” is high, “out1” will be the same as “a”.
A similar discussion will reveal that, when “sel” is low, “out1” will take on the value of “b”. Hence, based on the value of “sel”, we can allow one input or the other one to reach the output. This is called multiplexing and the circuit is called a multiplexer.
We can describe the circuit of Figure 5 using the following code:
1 library IEEE;
2 use IEEE.STD_LOGIC_1164.ALL;
-----------------------------------------------------
3 entity circuit_3 is
4 Port ( a : in STD_LOGIC;
5 b : in STD_LOGIC;
6 sel : in STD_LOGIC;
7 out1 : out STD_LOGIC);
8 end circuit_3;
-----------------------------------------------------
9 architecture Behavioral of circuit_3 is
10 signal sig1, sig2, sig3: std_logic;
11 begin
12 sig1 <= ( not sel );
13 sig2 <= ( b and sig1 );
14 sig3 <= ( a and sel );
15 out1 <= ( sig2 or sig3 );
16 end Behavioral;
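As a side note, the same multiplexer is often described more compactly with a conditional signal assignment instead of explicit gates. Keeping the port names of circuit_3, the whole architecture body can be reduced to the single line below; this is a functionally equivalent sketch in which the intermediate signals are no longer needed.
-- select input a when sel is high, otherwise input b
out1 <= a when sel = '1' else b;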
### Summary
In this article, we've discussed what VHDL is, how it's structured, and introduced some examples of how it's used to describe digital circuits. You should now have a better understanding of the following points:
• The “entity” part of the code specifies 1) the name of the circuit to be described and 2) the ports of the circuit; it establishes the interface between a module and its surrounding environment.
• The “architecture” part of the code describes the circuit’s internal operation.
• VHDL libraries contain important definitions, including the definition of data types and operators. A library itself may consist of several packages.
• We almost always need to add the “ieee” library and the “std_logic_1164” package to our VHDL code.
• Among the possible values for the "std_logic" data type, we commonly use ‘0’, ‘1’, ‘Z’, and ‘-’.
Featured image courtesy of HuMANDATA LTD.
|
# Log and e problem
• April 8th 2013, 12:56 PM
Quixotic
Log and e problem
If I have ln(e·ln e), do I solve this as
ln e + ln e = 1 + 1 = 2
or
ln(ln(e)·e) = ln(e) = 1?
• April 8th 2013, 01:03 PM
emakarov
Re: Log and e problem
The first variant should say ln(e) + ln(ln(e)) = 1 + ln(1) = 1 + 0 = 1. The second variant is correct.
• April 20th 2013, 12:03 AM
$\ln(eq)$, where $q=\ln(e)$
$q=\ln(e)\Rightarrow q= 1$
$\ln(eq) \Rightarrow \ln(1\cdot e) \Rightarrow \ln(e) = 1$
|
1
JEE Advanced 2013 Paper 1 Offline
For $$a > b > c > 0,$$ the distance between $$(1, 1)$$ and the point of intersection of the lines $$ax + by + c = 0$$ and $$bx + ay + c = 0$$ is less than $$\left( {2\sqrt 2 } \right)$$. Then
A
$$a + b - c > 0$$
B
$$a - b + c < 0$$
C
$$a - b + c > 0$$
D
$$a + b - c < 0$$
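A quick sketch of one standard approach (not part of the original question bank): subtracting the two equations gives $$x = y$$, so the point of intersection is
$$\left( { - {c \over {a + b}},\; - {c \over {a + b}}} \right).$$
The distance condition then reads
$$\sqrt 2 \left| {1 + {c \over {a + b}}} \right| < 2\sqrt 2 \;\; \Rightarrow \;\; {{a + b + c} \over {a + b}} < 2 \;\; \Rightarrow \;\; a + b - c > 0,$$
where $$a > b > c > 0$$ lets us drop the absolute value.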
2
IIT-JEE 2011 Paper 1 Offline
A straight line $$L$$ through the point $$(3, -2)$$ is inclined at an angle $${60^ \circ }$$ to the line $$\sqrt 3 x + y = 1.$$ If $$L$$ also intersects the x-axis, then the equation of $$L$$ is
A
$$y + \sqrt 3 x + 2 - 3\sqrt 3 = 0$$
B
$$y - \sqrt 3 x + 2 + 3\sqrt 3 = 0$$
C
$$\sqrt 3 y - x + 3 + 2\sqrt 3 = 0$$
D
$$\sqrt 3 y + x - 3 + 2\sqrt 3 = 0$$
3
IIT-JEE 2007
The lines $${L_1}:y - x = 0$$ and $${L_2}:2x + y = 0$$ intersect the line $${L_3}:y + 2 = 0$$ at $$P$$ and $$Q$$ respectively. The bisector of the acute angle between $${L_1}$$ and $${L_2}$$ intersects $${L_3}$$ at $$R$$.
Statement-1: The ratio $$PR$$ : $$RQ$$ equals $$2\sqrt 2 :\sqrt 5$$. because
Statement-2: In any triangle, bisector of an angle divides the triangle into two similar triangles.
A
Statement-1 is True, Statement-2 is True; Statement-2 is a correct explanation for Statement-1.
B
Statement-1 is True, Statement-2 is True; Statement-2 is NOT a correct explanation for Statement-1.
C
Statement-1 is True, Statement-2 is False.
D
Statement-1 is False, Statement-2 is True.
4
IIT-JEE 2007
Let $$O\left( {0,0} \right),P\left( {3,4} \right),Q\left( {6,0} \right)$$ be the vertices of the triangle $$OPQ$$. The point $$R$$ inside the triangle $$OPQ$$ is such that the triangles $$OPR$$, $$PQR$$, $$OQR$$ are of equal area. The coordinates of $$R$$ are
A
$$\left( {{4 \over 3},3} \right)$$
B
$$\left( {3,{2 \over 3}} \right)$$
C
$$\left( {3,{4 \over 3}} \right)$$
D
$$\left( {{4 \over 3},{2 \over 3}} \right)$$
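Sketch (not from the original listing): the three equal areas force $$R$$ to be the centroid of the triangle, so
$$R = \left( {{0 + 3 + 6} \over 3},\;{{0 + 4 + 0} \over 3} \right) = \left( {3,{4 \over 3}} \right).$$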
|
# How do you write the balanced acid equation and the dissociation expression Ka for following compounds in water? a) H_3PO_4 b) HClO_2 c) CH_3COOH d) HCO_3^- e) HSO_4^-
Sep 15, 2017
$HA(aq) + H_2O(l) \rightleftharpoons H_3O^+ + A^-$
#### Explanation:
And ${K}_{a} = \frac{\left[{H}_{3} {O}^{+}\right] \left[{A}^{-}\right]}{\left[H A\right]}$
If ${K}_{a}$ is LARGE, then we gots a strong acid...if ${K}_{a}$ is small, then we got a weak acid, and the equilibrium LIES to the LEFT as written.
For $a .$ we gots a diprotic acid.....
$H_3PO_4(aq) + 2H_2O(l) \rightleftharpoons HPO_4^{2-} + 2H_3O^+$
${K}_{a} = \frac{\left[H P {O}_{4}^{2 -}\right] {\left[{H}_{3} {O}^{+}\right]}^{2}}{\left[{H}_{3} P {O}_{4}\right]}$
For $b .$ we gots a monoprotic weak acid.....
$HClO_2(aq) + H_2O(l) \rightleftharpoons ClO_2^- + H_3O^+$
$K_a = \frac{[ClO_2^-][H_3O^+]}{[HClO_2]}$
For $c .$ we gots a monoprotic weak acid.....
$HOAc(aq) + H_2O(l) \rightleftharpoons AcO^- + H_3O^+$
$K_a = ??$. Have a bash at the remaining expressions yourself.
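For instance, part $d.$ follows exactly the same pattern (a sketch only, treating bicarbonate as the species donating its remaining proton):
$HCO_3^-(aq) + H_2O(l) \rightleftharpoons CO_3^{2-} + H_3O^+$, with $K_a = \frac{[CO_3^{2-}][H_3O^+]}{[HCO_3^-]}$.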
|
is the conjugate of . = 19√(6)/2. Make sure to place their quotient under a new radical sign. 56. Dividing radicals and whole numbers? Lv 4. dividing by i complex numbers. A radical cannot be divided by a whole number unless the radical is not simplified. Convert the radical to it's equivalent decimal number and do the division. Dividing Radicals: When dividing radicals (with the same index), divide under the radical, and then divide in front of the radical (divide any values multiplied times the radicals). 3 risposte. Identities with complex numbers. In mathematics, an nth root of a number x is a number r which, when raised to the power n, yields x: =, where n is a positive integer, sometimes called the degree of the root. So not only is . Example: 2 6 / 2 3 = 2 6-3 = 2 3 = 2⋅2⋅2 = 8. Multiplying square roots is typically done one of two ways. Note in the last example above how I ended up with all whole numbers. Some of … Thank you for your support! We're asked to divide. Multiplying square roots is typically done one of two ways. Students use models to show their thinking on each of these problems. Divide Unit Fractions and Whole Numbers … I'll explain as we go. Solving quadratic equations by completing the squa.. Help with multiplying radicals, cannot seem to und.. What else can the atomic number equal to besides t.. What is the derivative of F(x) = 2x - 3/x by the D.. Why does sound increase by 6dBs when the amplitude.. Is the number: 4.1 rational or irrational? Algebra 2 Roots and Radicals. Note in the last example above how I ended up with all whole numbers. The two numbers inside the square roots can be combined as a fraction inside just one square root. Dividing variables in an algebra problem is fairly straightforward. Dividing exponents with different bases When the bases are different and the exponents of a and b are the same, we can divide a and b first: a n / b n = ( a / b ) n Dividing Whole Numbers. keywords: whole,by,radicals,number,Dividing,Dividing radicals by a whole number. Free Radicals Calculator - Simplify radical expressions using algebraic rules step-by-step This website uses cookies to ensure you get the best experience. What is its area? In this lesson, we are only going to deal with square roots only which is a specific type of radical expression with an index of \color{red}2.If you see a radical symbol without an index explicitly written, it is understood to have an index of \color{red}2.. Below are the basic rules in multiplying radical expressions. Dividing radicals with different index numbers, quadratics vb formula, hard negative exponent problem, problems of multiplication properties of exponents, convert galois field matlab decimal, When simplifying a rational expression, why do you need to factor the numerator and … Displaying top 8 worksheets found for - Unit 11 Radicals Homework 3 Addingsubtracting Radicals. The following rules can help with the operation of multiplication when radical terms are involved in a sum or when simplifying. Dividing Decimals by Whole Numbers. If you would like a lesson on solving radical equations, then please visit our lesson page . By using this website, you agree to our Cookie Policy. As you can see from this worked example - the skill to dividing radicals, is not the division process, but the process of identifying the rules of algebra, and being able to apply them to radical numbers - and also, knowing the rules of radicals, and how to simplify them.. Conjugates & Dividing by Radicals. 
The radical square root of 4 cannot be used to... 3 \sqrt {98} - 2 \sqrt {50} + 8 \sqrt{32} = ? Alissa Fong. 10 Questions Show answers. {/eq}). To divide two radicals, you can first rewrite the problem as one radical. That's a good thing when you're trying to get square roots out of the bottom of a fraction. After recording the essential understandings in their own words (on the bottom of the handout with practice problems) students work in pairs to complete the six problems (3 dividing whole numbers by fractions and 3 multiplying whole numbers by fractions). And we're dividing six plus three i by seven minus 5i. {/eq} power and simplifies to the value under the radical symbol. Question 1 5 Questions Show answers. freeonlinequizzestests.com. When dividing radical expressions, we use the quotient rule to help solve them. Then, using the greatest common factor, you divide the numbers and reduce. In the equation above, x = 2. H ERE IS THE RULE for multiplying radicals: It is the symmetrical version of the rule for simplifying radicals. When dividing radical expressions, use the quotient rule. Herein, can you divide by a radical? More worksheet. Dividing radical is based on rationalizing the denominator.Rationalizing is the process of starting with a fraction containing a radical in its denominator and determining fraction with no radical in its denominator. 56. dividing decimals by whole numbers worksheet with answers on dividing decimals worksheets 6th grade math aids. Im really confused.. Multiplying Radical Expressions. To play this quiz, please finish editing it. Dividing whole number by radical? Divide unit fractions and whole numbers math methods charts anchor chart teaching dividing how to by a number: 7 steps (with pictures) in 2020 answers printable worksheets play learn single digit divisors. Once you do this, you can simplify the fraction inside and then take the square root. Dividing square roots with exponents; Dividing exponents with same base. Which of the following is a third root of 64? {/eq} refers to the cube root of {eq}8 Now we must find the number by which the original index has been multiplied, so that the new index is 12 and we do it dividing this common index by the original index of each root: That is to say, the index of the first root has been multiplied by 4, that of the second root by 3 and that of the third root by 6. Dividing Radicals and Rationalizing the Denominator - Concept. Dividing Radicals and Rationalizing the Denominator - Concept. The number coefficients are reduced the same as in simple fractions. MULTIPLYING AND DIVIDING RADICALS. For example, 144 36 = 4 {\displaystyle {\frac {144}{36}}=4} , so 144 36 = 4 {\displaystyle {\sqrt {\frac {144}{36}}}={\sqrt {4}}} . When dividing variables, you write the problem as a fraction. The third is quite hard. Risposta preferita. Finding the determinant of a matrix by adding mult.. How to solve system by elimination method. In this case, 22 divided by 5 = 22/5 (Yep, sometimes you wind up with a fraction or a decimal; that’s why I’m giving an example like this.) Click to see full answer. I forgot how to do this, but I think you have to change the whole number into a radical first and then you can divide it. I multiplied two radical binomials together and got an answer that contained no radicals. This finds the largest even value that can equally take the square root of, and leaves a number under the square root symbol that does not come out to an even number. 
If no root is specified, it can be assumed the radical is representing the square root. In division problems, you're allowed to move the decimal points, but only if you move them by the same amount for each number. The first one is just straightforward arithmetic. We can use this property to obtain an analogous property for radicals: 1 1 1 (using the property of exponents given above) n n n n n n a a b b a b a b = ⎛⎞ =⎜⎟ ⎝⎠ = Quotient Rule for Radicals … Looking at how multiplication represents repeated addition, as well as special cases of multiplying and dividing whole numbers. Objective Learn how to apply the division algorithm to dividing decimals.. As is the case with addition, subtraction, and multiplication, dividing decimals requires only the use of the standard algorithm together with a method for placing the decimal point. Multiplying and Dividing Fractions: Dividing Decimals by Whole Numbers: Adding and Subtracting Radicals: Subtracting Fractions: Factoring Polynomials by Grouping: Slopes of Perpendicular Lines: Linear Equations: Roots - Radicals 1: Graph of a Line: Sum of the Roots of a Quadratic: Writing Linear Equations Using Slope and Point But we can find a fraction equivalent to by multiplying the numerator and denominator by .. Now if we need an approximate value, we divide . Dividing exponents with different bases. Now our problem is 30 ÷ 12. As you can see from this worked example - the skill to dividing radicals, is not the division process, but the process of identifying the rules of algebra, and being able to apply them to radical numbers - and also, knowing the rules of radicals, and how to simplify them.. Step 1: To divide complex numbers, you must multiply by the conjugate.To find the conjugate of a complex number all you have to do is change the sign between the two terms in the denominator. Date : 19 Aug, 2018. Since {eq}2^3=8 Assume that the expression is 19â(3/2). As well as being able to add and subtract radical terms, we can also perform the task of multiplying and dividing radicals when required. DIVIDING DECIMALS BY WHOLE NUMBERS WORKSHEET WITH ANSWERS. The end result is the same, . Dividing Radicals: When dividing radicals (with the same index), divide under the radical, and then divide in front of the radical (divide any values multiplied times the radicals).Divide out front and divide under the radicals.Then simplify the result. A worked example of simplifying an expression that is a sum of several radicals. Dividing Decimals by Whole Numbers. Multiplying and Dividing Radicals. Can you multiply radicals with whole numbers? The conjugate (KAHN-juh-ghitt) has the same numbers but the opposite sign in the middle. the conjugate of , but . In this lesson, we are only going to deal with square roots only which is a specific type of radical expression with an index of \color{red}2.If you see a radical symbol without an index explicitly written, it is understood to have an index of \color{red}2.. Below are the basic rules in multiplying radical expressions. {/eq} root of a value, which is the value that can be raised to the {eq}n^{th} Discover the skills to Dividing Radicals, Step by Step. 1 decennio fa. Sciences, Culinary Arts and Personal A radical is an expression containing a radical symbol ({eq}^n\sqrt{} Dividing fractions is somewhat difficult conceptually. In the radical below, the radicand is the number '5'.. 
Refresher on an important rule involving dividing square roots: The rule explained below is a critical part of how we are going to divide square roots so make sure you take a second to brush up on this. This finds the largest even value that can equally take the square root of, and leaves a number under the square root symbol that does not come out to an even number. This escape room is completely digital through the use of a Google Form. How would I do this: 19 * the square root of 3/ 2? To see the answer, pass your mouse over the colored area. Bookmark File PDF Dividing Radicals E2020 Quiz whole number by whole number and radical by radical to simplify first. So I want to get some real number plus some imaginary number, so some multiple of i's. Just like the method used to multiply, the quicker way of dividing is by dividing the component parts: $\frac{8 \sqrt{6}}{2 \sqrt{3}}$ Divide the whole numbers: Create your account. Alissa Fong. Quiz & Worksheet – Dividing Radical Expressions from Dividing Radicals Worksheet, source:guillermotull.com. ANSWER: Divide out front and divide under the radicals. How to rationalize the denominator when dealing with an imaginary number. 1 decade ago. Discover the skills to Dividing Radicals, Step by Step. 1. If you multiply two conjugates, your result is always an integer or a whole or a whole number. If any of the boxes do not require an answer, it may be left blank. When dividing radical expressions, the rules governing quotients are similar: . Prove the identity sinx + tanx = tan.. One is through the method described above. The radicand refers to the number under the radical sign. Alissa Fong. All rights reserved. This quiz is incomplete! True or False. We look to divide whole number by whole number and radical by radical to simplify first. In this example, the index is the 3 and it is indicating the cube root of 27. The quotient rule states that a radical involving a quotient is equal to the quotients of two radicals. © 2008-2010 http://www.science-mathematics.com . Jeffro. Example 1. keywords: whole,by,radicals,number,Dividing,Dividing radicals by a whole number. © copyright 2003-2020 Study.com. There is a box for the whole number, numerator, and denominator. Program by zplan cms. To read our review of the Math Way -- which is what fuels this page's calculator, please go here . How do you divide radicals by whole numbers? Begin by recalling how we think about the division of whole numbers. Radical Pre Algebra Order of Operations Factors & Primes Fractions Long Arithmetic Decimals Exponents & Radicals Ratios & Proportions Percent Modulo Mean, Median & Mode Scientific Notation Arithmetics This breakout escape room is a fun way for students to test their skills with dividing radicals without variables. In this example, we simplify √(2x²)+4√8+3√(2x²)+√8. Next lesson. Can you divide a radical by a whole number? Related. If n is even, and a ≥ 0, b > 0, then. For instance, if the answer is 6, then you would enter 6 in the first box and leave the numerator, and denominator box blank. Lean how to divide rational expressions with a radical in the denominator. For exponents with the same base, we should subtract the exponents: a n / a m = a n-m. Operations with cube roots, fourth roots, and other higher-index roots work similarly to square roots, though, in some spots, we'll need to extend our thinking a bit. Simplifying Higher-Index Terms. How exactly do you solve this equation by completi.. Vocabulary Refresher. 
This lets you turn the problem into whole numbers. jehowell2000. We factor, find things that are squares (or, which is the same thing, find factors that occur in pairs), and then we pull out one copy of whatever was squared (or of whatever we'd found a pair of). Set each term with √ to get: 19√(3)/√(2) Then, multiply the top and bottom by √2 to rationalize the denominator: 19√(3)/√(2) * √2/√2. Students learn to divide square roots by dividing the numbers that are inside the radicals. Dividing complex numbers review. Rispondi Salva. You use the rules of exponents to divide […] Become a Study.com member to unlock this Module 4: Dividing Radical Expressions Recall the property of exponents that states that m m m a a b b ⎛⎞ =⎜⎟ ⎝⎠. The same is true of roots: . Therefore, it is a good idea to first see the process used to divide a whole number by a fraction whose numerator is 1, and then use that discussion to motivate the concept of reciprocal. Note: If a +1 button is dark blue, you have already +1'd it. {/eq}, {eq}^3\sqrt{8}=2 Example: 18 / radical 3. Students also learn that if there is a square root in the denominator of a fraction, the problem can be simplified by multiplying both the numerator and denominator by the square root that is in the denominator. As you can see, simplifying radicals that contain variables works exactly the same way as simplifying radicals that contain only numbers. A radical cannot be divided by a whole number unless the radical is not simplified. Add, Subtract, Multiply and Simplify (old quiz): original and answer key; Divide Radicals. True or False. Multiplying Radical Expressions. (If you are not logged into your Google account (ex., gMail, Docs), a login window opens when you click on +1. Relevance. I hope this helps! Vocabulary Refresher. - Definition, Equations & Graphs, Parallelograms: Definition, Properties, and Proof Theorems, Addition Property of Equality: Definition & Example, Undefined Terms of Geometry: Concepts & Significance, Arithmetic Sequence: Formula & Definition, How to Solve 'And' & 'Or' Compound Inequalities, How to Divide Polynomials with Long Division, Deciding on a Method to Solve Quadratic Equations, High School Algebra I: Homework Help Resource, NY Regents Exam - Integrated Algebra: Help and Review, NY Regents Exam - Integrated Algebra: Tutoring Solution, Precalculus Algebra for Teachers: Professional Development, Algebra Connections: Online Textbook Help, McDougal Littell Algebra 1: Online Textbook Help, Prentice Hall Pre-Algebra: Online Textbook Help, OSAT Advanced Mathematics (CEOE) (111): Practice & Study Guide, AP EAMCET E & AM (Engineering, Agriculture & Medical) Study Guide, BITSAT Exam - Math: Study Guide & Test Prep, Math 99: Essentials of Algebra and Statistics, Biological and Biomedical Earn Transferable Credit & Get your Degree, Get access to this video and our entire Q&A library. The radical symbol denotes calculating the {eq}n^{th} See All. If the radical is not simplified, simplify the expression and... Our experts can answer your tough homework and study questions. First, consider these three practice questions. True or False. To play this quiz, please finish editing it. Triangle ABC is an equilateral triangle with an altitude of 6. Theme by wukong . A common way of dividing the radical expression is to have the denominator that contain no radicals. This quiz is incomplete! Each variable is considered separately. It is valid for a and b greater than or equal to 0. 
A root of degree 2 is called a square root and a root of degree 3, a cube root.Roots of higher degree are referred by using ordinal numbers, as in fourth root, twentieth root, etc.. (Okay, technically they're integers, but the point is that the terms do not include any radicals.) Step 2: Distribute (or FOIL) in both the numerator and denominator to remove the parenthesis. How do you divide: radical 12 by 2 ? To divide complex numbers. That's a mathematical symbols way of saying that when the index is even there can be no negative number in the radicand, but when the index is odd, there can be. The product of two similar radicals will become a whole number When multiplying radicals, numbers outside stay outside, numbers inside stay inside Ex: Simplify the following Video transcript. Example: To turn 3.0 ÷ 1.2 into whole numbers, move the decimal points one space to the right. Just like the method used to multiply, the quicker way of dividing is by dividing the component parts: $\frac{8 \sqrt{6}}{2 \sqrt{3}}$ Divide the whole numbers: For example, {eq}^3\sqrt{8} Need help two number verbal words problems for mat.. Finding the percentage of alloy by mass( Max point.. Dividing radical is based on rationalizing the denominator.Rationalizing is the process of starting with a fraction containing a radical in its denominator and determining fraction with no radical in its denominator. In the equation above, x = The second one throws in a little geometry. Divide dividend by number under the radical. Objective Learn how to apply the division algorithm to dividing decimals.. As is the case with addition, subtraction, and multiplication, dividing decimals requires only the use of the standard algorithm together with a method for placing the decimal point. Take the answer you get, 22/5, and multiply it by the radical. Answer Save. Why won't scientists admit that the record ra.. Where does a fraction go on a number line, Double mass, stopping distance increase by, Please pick 5 random numbers between 1-100. Conjugate pairs. {/eq}. Dividing - we want to have our answers with rational denominators (no radicals left in the bottom of the fraction). 22/5 x √5 = … The index is the superscript number to the left of the radical symbol, which indicates the degree of the radical. If you like this Site about Solving Math Problems, please let Google know by clicking the +1 button. Square roots of numbers that are not perfect squares are irrational numbers. Let's divide the following 2 complex numbers $\frac{5 + 2i}{7 + 4i}$ Step 1. Type any radical equation into calculator , and the Math Way app will solve it form there. Day 3: Divide Radicals SKILLS REVIEW . = 19√(6)/√4. When we rationalize the denominator, we write an equivalent fraction with a rational number in the denominator.. Let’s look at a numerical example. The Product Rule states that the product of two or more numbers raised to a power is equal to the product of each number raised to the same power. Problem 1. 3. MA, Stanford University ... your result is always an integer or a whole or a whole number. Just like the method used to multiply, the quicker way of dividing is by dividing the component parts: $\frac{8 \sqrt{6}}{2 \sqrt{3}}$ Divide the whole numbers: 1. keywords: whole,by,radicals,number,Dividing,Dividing radicals by a whole number. answer! Multiplying And Dividing Whole Numbers By All Powers Ten from Dividing Radicals Worksheet, source:koogra.com. Alissa Fong. Question 1 {/eq}. In the radical below, the radicand is the number '5'.. 
Refresher on an important rule involving dividing square roots: The rule explained below is a critical part of how we are going to divide square roots so make sure you take a second to brush up on this. It is the process of removing the root from the denominator. If you think of the radicand as a product of two factors (here, thinking about 64 as the product of 16 and 4), you can take the square root of each factor and then multiply the roots. For all real values, a and b, b ≠ 0. Fractions multiply add subtract with whole numbers ; convert deciamls into radicals ; simplify expressions worksheet ; factoring algebraic expression calculators ; GCSE maths worksheets ; absolute value of an algebraic quantity exercises to practice ; absolute value function in grade nine math ; trigonometric calculator ; math 6th grade pre test Pertinenza. : Step 3: Simplify the powers of i, specifically remember that i 2 = –1. Determine the conjugate of the denominator So would 18 be something like 6 radical 3? Please pick 5 random numbers between 1-100; New. Dividing Polynomials using Long Division Algebra 2 Polynomials. This breakout escape room is a fun way for students to test their skills with multiplying and dividing decimals.Important: (How to Make Completely Digital)This product normally requires the printing of the questions to accompany a digital form for students to input answers. 3.0 becomes 30, and 1.2 becomes 12. You may want to review the properties of the 30-60-90 Triangle and the Equilateral Triangleif those are unfamiliar. All other trademarks and copyrights are the property of their respective owners. Dividing Radical Expressions. If n is odd, and b ≠ 0, then. That's a good thing when you're trying to get square roots out of the bottom of a fraction. Dividing Radicals from Dividing Radicals Worksheet, source:printable-math-worksheets.com 4 Answers. A common way of dividing the radical expression is to have the denominator that contain no radicals. Dividing Radical Expressions. Addition and Subtraction Using Radical Notation, Rationalizing Denominators in Radical Expressions, Solving Radical Equations with Two Radical Terms, Simplifying Expressions with Rational Exponents, Radical Expression: Definition & Examples, Practice Adding and Subtracting Rational Expressions, Inverse Variation: Definition, Equation & Examples, Direct Variation: Definition, Formula & Examples, Solving Linear Inequalities: Practice Problems, How to Add, Subtract, Multiply and Divide Functions, What is a Radical Function? For any of these, it may well be that, even if you did all your multiplication and division correc… Services, Working Scholars® Bringing Tuition-Free College to the Community. One is through the method described above. Multiply. And in particular, when I divide this, I want to get another complex number. In the previous pages, we simplified square roots by taking out of the radical any factor which occurred in sets of two. Radicals - Math 20-2 Unit 11 Radicals Homework 3 Addingsubtracting Radicals. First, find the complex conjugate of the denominator, multiply the numerator and denominator by that conjugate and simplify. Finding the determinant of a matrix .. Divide the numbers as you would any whole number. The radicand refers to the number under the radical sign. If you like this Page, please click that +1 button, too.. I divide this, you can see, simplifying radicals that contain variables works exactly same. Would I do this: 19 * the square roots by dividing the radical divide a radical by a or. 
Inside the radicals. when dealing with an altitude of 6 radicals by whole numbers that contained no radicals )! Same numbers but the opposite sign in the last example above how I ended up all... Factor, you divide radicals by whole number by whole numbers got an answer pass. This lets you turn the problem as one radical equation into calculator, and the Triangleif. Equilateral triangle with an imaginary number our review of the denominator dividing complex numbers review the root the... Multiply it by the radical is not simplified respective owners click that +1 button is dark blue you! Radical by radical to simplify first Credit & get your Degree, get to! Breakout escape room is completely digital through the use of a matrix by adding mult.. how to rationalize denominator! = –1 contain only numbers worked example of simplifying an dividing radicals by whole numbers that is a root... Note: if a +1 button get square roots can be combined as a fraction solve... A worked example of simplifying an expression containing a radical can not be by... Dividing variables in an algebra problem is fairly straightforward get, 22/5, and a ≥ 0 then. Number plus some imaginary number, numerator, and multiply it by the radical any factor which occurred sets..., get access to this video and our entire Q & a.... Site about solving Math problems, please go here dividing whole numbers by dividing the radical.... Complex conjugate of the rule for simplifying radicals. is what fuels this page 's,... I do this, you have already +1 'd it tough Homework and study questions without variables equation... Be divided by a whole or a whole or a whole or a whole number,! Variables in an algebra problem is fairly straightforward root of 64 is valid a... Any radicals. following rules can help with the operation of multiplication when radical terms are involved in a geometry... Math aids or when simplifying divide a radical involving a quotient is equal to.. Removing the root from the denominator when dealing with an imaginary number be left blank is fun! Trademarks and copyrights are the property of their respective owners worksheets 6th grade Math aids 18 something... Way app will solve it form there radicand refers to the right Step 1 blue! This page 's calculator, please go here.. how to rationalize the denominator that contain no.. Dividing whole numbers division of whole numbers the +1 button, too =2 { /eq } {! ) has the same as in simple Fractions radicals left in the last example how. How I ended up with all whole numbers radicals, you write the problem into whole numbers determinant... Multiplication when radical terms are involved in a little geometry the two numbers inside the root. Numbers Worksheet with answers on dividing decimals by whole number by whole numbers numbers but the dividing radicals by whole numbers is the. Contain only numbers to test their skills with dividing radicals without variables 3 and it is the symmetrical of. Multiply and simplify ( old quiz ): original and answer key divide. Get square roots by taking out of the boxes do not require an answer that contained no radicals ). Worksheets found for - Unit 11 radicals Homework 3 Addingsubtracting radicals. Worksheet – dividing radical expressions algebraic! A good thing when you 're trying to get square roots out of the way. Expressions from dividing radicals, you agree to our Cookie Policy is dark blue you... Old quiz ): original and answer key ; divide radicals. last example above how I ended with... 
Indicating the cube root of 64 { /eq }, { eq } ^n\sqrt }... To have the denominator of dividing the radical sign way of dividing dividing radicals by whole numbers radical any factor occurred. = –1, simplifying radicals. radical can not be divided by a whole number root the! 2⋅2⋅2 = 8 a common way of dividing the radical sign greatest common factor, you have already +1 it! Following rules can help with the operation of multiplication when radical terms are involved a. Is representing the square root of 27 of these problems, it may be left blank is the. Representing the square root of 3/ 2 little geometry, when I this... Math way app will solve it form there discover the skills to dividing dividing radicals by whole numbers by whole numbers: if +1! Page, please finish editing it can simplify the fraction inside and then take the answer, it be. This quiz, please finish editing it will solve it form there ; divide.. Original and answer key ; divide radicals. get, 22/5, and b 0..., it may be left blank whole, by, radicals, Step by Step 3 Addingsubtracting radicals )! Entire Q dividing radicals by whole numbers a library number unless the radical expression is 19â ( 3/2 ) copyrights the! Box for the whole number and radical by radical to simplify first app will solve it form there I this... Simple Fractions 2x² ) +4√8+3√ ( 2x² ) +4√8+3√ ( 2x² ) +4√8+3√ ( 2x² ) +√8 decimal one. As special cases of multiplying and dividing whole numbers, move the decimal points one space the! Problem as one radical & amp ; Worksheet – dividing radical expressions use! Escape room is a third root of 64 radical to it 's decimal! The boxes do not require an answer that contained no radicals left in the previous pages, we the. Multiply two conjugates, your result is always an integer or a whole or a whole or a number... These problems 0, then keywords: whole, by, radicals, Step by.. A ≥ 0, then please visit our lesson page way of dividing radical...: divide out front and divide under the radicals. a library no. Of simplifying an expression containing a radical by radical to simplify first 3/2 ) =2 /eq. Denominator that contain no radicals. by mass ( Max point do this: 19 * the square.. Fractions and whole numbers how would I do this: 19 * the root... Equation into calculator, and b greater than or equal to the number the..., a and b, b ≠ 0, then = the second one throws in a little geometry this! Addingsubtracting radicals. dividing radicals by a whole number and radical by a whole number always an integer a... ) has the same base, we simplified square roots out of the 30-60-90 triangle and the Math way will... Conjugate ( KAHN-juh-ghitt ) has the same base, we simplify √ 2x²! With the same numbers but the opposite sign in the middle completely digital the. Some real number plus some imaginary number about solving Math problems, let., it may be left blank dividing radicals Worksheet, source: guillermotull.com exactly do you:! - Math 20-2 Unit 11 radicals Homework 3 Addingsubtracting radicals. let 's divide the is! Video and our entire Q & a library may want to have the denominator that contain variables works the... A common way of dividing the numbers as you can simplify the fraction inside and then take the answer pass. In this example, the rules governing quotients are similar: is odd, and it. N is even, and b, b > 0, then please visit our lesson.. Reduced the same as in simple Fractions denominator by that conjugate and simplify ( quiz. Divide radicals. Step 3: simplify the expression and... our can... 
Our answers with rational denominators ( no radicals left in the last example above how I up..., pass your mouse over the colored area is not simplified 11 radicals Homework 3 Addingsubtracting radicals. need two! Respective owners always an integer or a whole number a matrix by adding mult.. to! Real number plus some imaginary number with rational denominators ( no radicals. two radicals, you agree our. Agree to our Cookie Policy thing when you 're trying to get another complex number a! Numbers and reduce always an integer or a whole or a whole number by whole number divide rational expressions a!
|
# Second Order Circuit - Equation Decisions
Discussion in 'Homework Help' started by ghoti, Jun 13, 2010.
1. ### ghoti Thread Starter New Member
Jun 13, 2010
8
0
Hi,
I am having difficulty understanding the decisions of variable selection in a second order circuit. (Time Domain)
This is the problem I am working with.
As this will be solved by hand I need to ensure an efficient solution.
I am looking for a nodal equation with the cap and a mesh equation for the inductor.
If the horizontal resistor is R1 and the vertical resistor R2
My first equation is the nodal for the cap. node Vc.
Current through the cap + current through R1 and
the current through R1 is the current through R2 + current through the inductor
So
$C_1\frac{dv_c}{dt} + i_l + L_1\frac{di_l}{dt}\frac{1}{R_2} = 0$
Now I am stuck with producing a satisfactory mesh equation that still allows me to solve for vc.
The resultant characteristic equation will be m^2 + 2m + 2 = 0.
Any help would be MUCH appreciated.
Thanks,
Alex
2. ### t_n_k AAC Fanatic!
Mar 6, 2009
5,448
784
Why not go for nodal analysis?
You require a voltage answer so that would make more sense.
Make the top node of the inductor V2 and that will give you the two nodes for setting up your equations in V1 & V2.
3. ### ghoti Thread Starter New Member
Jun 13, 2010
8
0
Hi,
I am on the edge of my understanding here but.... if I describe V2 via nodal I end up with a solution for V2 which is fine, however to solve for V1 I need to fully solve the V2 DE before I can sub it back into V1. I cannot however solve V2 fully because V2(0-) and V2(0+) are not definable due to the discontinuous nature of the voltage across the inductor.
Essentially;
(1) $v_1' + 2v_1 = v_2$
(2) $v1 = 2v_2 + \frac{1}{L}\int{v_2}$
Am I completely up the wrong tree somewhere?
4. ### ghoti Thread Starter New Member
Jun 13, 2010
8
0
I now believe my characteristic equation is also incorrect.
LTspice confirms that ic(0+) is 1A (upward) so $\frac{dv_c}{dt}$ must be equal to 2.
5. ### t_n_k AAC Fanatic!
Mar 6, 2009
5,448
784
It's a little unclear what the current source value is - is it 1.u(t). In other words a constant 1A source from t=0+ ...?
6. ### ghoti Thread Starter New Member
Jun 13, 2010
8
0
on all t; it is 1 plus a negative step function at 0, i.e. 1 - u(t):
f(t) = 1 for t < 0
f(0) = undefined
f(t) = 0 for t > 0
7. ### ghoti Thread Starter New Member
Jun 13, 2010
8
0
Just to clarify the result is also defined on all t;
the 1+ outside the u(t) is the -t part
and then the (f(t) -1)u(t) is +t part.
8. ### Ghar Active Member
Mar 8, 2010
655
73
This problem has 2 nodes and 2 meshes so it doesn't really matter which method you use but do stick to one. Doing both is a waste of time.
V2(0-) is very well defined; it's 0.
V2(0+) is also well defined; it's also 0. The reason is that the capacitor fixes the voltage across the resistors and the inductor fixes the current through it.
Let's work through the initial conditions...
We're looking at the DC solution so the inductor is a short and the capacitor is open. This means R2 is shorted by the inductor, you can ignore it. That also means V2(0-) is zero.
If the capacitor is open all the current is going through R1, and since R2 has zero current all of it goes through L. That gives you IL(0-) = 1A.
Since V2(0-) is 0, then V1(0-) must be 1A * 1 Ohm = 1V. This is equal to Vc(0-).
Now the current source shuts off (t = 0+). You get that Vc(0+) = V1(0+) must still be 1V and IL(0+) must still be 1A.
Solve the equations at t = 0+, by superposition you get:
$V_2(0^+) = V_c(0^+)\frac{R_2}{R_1 + R_2} - (R_1||R_2)\,I_L(0^+) = 1\cdot\frac{1}{1+1} - (1||1)(1) = 0$
Edit:
I should note that the discontinuity is not a problem. It happens exactly at t = 0 and only at t = 0.
Discontinuities are only an issue if you try to differentiate them and you're not; in a capacitor you differentiate voltage which must be continuous and in an inductor it's current. The discontinuity is exactly how math says they must be continuous functions.
Last edited: Jun 14, 2010
9. ### t_n_k AAC Fanatic!
Mar 6, 2009
5,448
784
OK - understood.
So for the purposes of analysis one can remove the current source from t=0+ and treat the circuit as if the capacitor was initially charged to 1V. This can be justified with some simple reasoning.
As to the voltage on the inductor at t=0+ it would be a reasonable assumption that this is still zero - as it was at t=0-
10. ### ghoti Thread Starter New Member
Jun 13, 2010
8
0
Ghar,
Thanks for your reply. I follow your derivation of the initial conditions and your note on discontinuity however I am wondering what your method is for producing the final equation. Perhaps I am very lost right now but;
If I decide to select nodal and produce my two equations
$V_c' + \frac{1}{R_1C}V_c = \frac{1}{R_1C}V_l \\
V_l'(\frac{1}{R_1} + \frac{1}{R_2}) + \frac{1}{L}V_l = \frac{1}{R_1}V_c'$
Getting rid of some of the variables to keep things clean.
$\frac{1}{2}(V_c' + 2V_c) = V_l \\
2V_l' + 2V_l = V_c'$
Once sub one into the other I end up with a second order differential requiring I define v(0+) as well as v'(0+)
$V_c'' + V_c' + 2V_c = 0$
Now the only way I can think of to get Vc' is via $ic = C\frac{dV_c}{dt}$ ?
t_n_k:
The voltage across an inductor can change instantaneously. It has been suggested to me that this is infact, the problem with my solution.
In my initial nodal equation for VL, I included a $\int{v}$. As v is not a CTS function in an inductor this integral is in-fact undefined.
Whilst my gut understands the logic of this I am a little lost as to its ramifications for this problem and how to get around it.
Last edited: Jun 14, 2010
11. ### t_n_k AAC Fanatic!
Mar 6, 2009
5,448
784
I am able to find the following 2nd order DE with V1 as variable ....
$V_1^{''}+\left[\frac{1}{(R_1+R_2)C}+\frac{R_1R_2}{(R_1+R_2)L}\right]V_1^{'}+\frac{R_2}{LC(R_1+R_2)}V_1=0$
which leads to the same characteristic equation m^2+2m+2=0
12. ### ghoti Thread Starter New Member
Jun 13, 2010
8
0
That characteristic equation works but only if vc'(0) = 0 and vc(0) = 1
if ic = c dvc/dt
then for vc'(0) to be 0 ic must also be zero.
I built the circuit in LTSpice and confirmed that the current in C rises instantly to 1A!
To get the final result I am using:
$
\sqrt{C^2 + D^2}\,e^{-\alpha t} \cos(\omega t - \tan^{-1} \tfrac{D}{C})\\
e^{-\alpha t}[C\sin(\omega t) + D\cos(\omega t)]
$
I am so very lost! appreciate your help on this one!
Cheers,
13. ### t_n_k AAC Fanatic!
Mar 6, 2009
5,448
784
Consider what happens at t=0 for V1 (i.e. Vc)
At t=0- the capacitor current will be zero and the voltage V1 will be 1V.
At t=0 the current source jumps from 1A to zero, so the 1A current must then be supplied from the capacitor at that instant.
Hence, at t=0 we have .....
$V_1(0)=1V$
$i_c(0)=C\frac{dV_1}{dt}=-1A$
$\frac{dV_1}{dt}=V_1'=\frac{i_c(0)}{C}=\frac{-1}{0.5}=-2 \ Volt/sec$
ghoti likes this.
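(Not part of the original exchange: for anyone who wants to double-check these numbers outside LTspice, here is a minimal Python/SymPy sketch. It assumes the values discussed above — R1 = R2 = 1 Ω, C = 0.5 F, and L = 0.5 H inferred from the characteristic equation m^2 + 2m + 2 = 0 — together with the initial conditions V1(0) = 1 V and V1'(0) = -2 V/s.)

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
V1 = sp.Function('V1')

# Second-order ODE implied by the characteristic equation m^2 + 2m + 2 = 0
ode = sp.Eq(V1(t).diff(t, 2) + 2 * V1(t).diff(t) + 2 * V1(t), 0)

# Initial conditions worked out in the thread: V1(0) = 1 V, V1'(0) = -2 V/s
sol = sp.dsolve(ode, V1(t),
                ics={V1(0): 1, V1(t).diff(t).subs(t, 0): -2})
print(sol)                       # V1(t) = (cos(t) - sin(t))*exp(-t)

# Characteristic roots: m = -1 +/- j, i.e. alpha = 1, omega = 1
m = sp.symbols('m')
print(sp.solve(m**2 + 2 * m + 2, m))
```

The damped form e^{-t}(cos t - sin t) matches the response template quoted a few posts up with C = -1, D = 1, α = 1 and ω = 1.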
14. ### ghoti Thread Starter New Member
Jun 13, 2010
8
0
t_n_k,
Brilliant, I see it now and have reproduced it myself.
A massive thankyou to both yourself and Ghar.
I only just found this resource but plan on sticking around, maybe help out where I can as well.
Once again,
Thanks
/passes t_n_k a beer.
Mar 6, 2009
5,448
784
Cheers mate!
16. ### Ghar Active Member
Mar 8, 2010
655
73
My final equation is simply an instantaneous solution at t = 0+ and looks just like DC in terms of the math, since there's no more time dependence (because I made t = 0+)
Integrals do work for finite discontinuous functions... since integrals are linear you just break the range apart. The discontinuity doesn't contribute anything because it exists for 0 time.
i.e. if there's a discontinuity at 5 you can do the integral from 0 to 5- and from 5+ to +inf
I'm sure there's some exceptions where it doesn't work but if you can see the area under the curve then the integral shouldn't have a problem.
Last edited: Jun 14, 2010
|
Loren on the Art of MATLABTurn ideas into MATLAB
Note
Loren on the Art of MATLAB has been retired and will not be updated.
Considering Performance in Object-Oriented MATLAB Code
I’m pleased to have Dave Foti back for a look at objects and performance. Dave manages the group responsible for object-oriented programming features in MATLAB.
I often get questions along the lines of what performance penalty is paid for using objects or how fast will an object-oriented implementation perform compared with some other implementation. As with many performance questions, the answer is that it very much depends on what the program is trying to do and what parts of its work are most performance-intensive. I see many cases where most of the real work is inside methods of an object that do math on ordinary matrices. Such applications won’t really see much difference with or without objects. I also realize that many MATLAB users aren’t nearly as concerned with run-time performance as with how long it takes to write the program. However, for those applications where performance matters and objects might be used in performance critical parts of the application, let’s look at what we can say about how MATLAB works that might be helpful to consider. We’ll also look at what has changed in recent MATLAB versions.
How objects spend their time
Let’s start with some basics – some of the places where objects spend time and how to minimize it. Objects will spend time in four basic places – object construction, property access, method invocation, and object deletion.
Object Construction
Object construction time is mostly spent copying the default values of properties from the class definition to the object and then calling the object’s constructor function(s). Generally speaking, there isn’t much to consider about default values in terms of performance since the expressions that create the default values are executed once when the class is first used, but then each object is just given a copy of the values from the class. Generally more superclasses will mean more constructor function calls for each object creation so this is a factor to consider in performance critical code.
Property Access
Property access is one of the most important factors in object performance. While changes over the past several releases have made property performance in R2012a more uniform and closer to struct performance, there is still some additional overhead for properties and the potential for aliasing with handle objects means that handle objects don’t get the same level of optimization that structs and value objects get from the MATLAB JIT. The simpler a property can be, the faster it will be. For example, making a property observable by listeners (using SetObservable/GetObservable) will turn off many optimizations and make property access slower. Using a set or get function will turn off most optimizations and also introduce the extra time to call the set or get function which is generally much greater than the time to just access the property. MATLAB doesn’t currently inline functions including set/get functions and so these are always executed as function calls. MATLAB optimizes property reading separately from property writing so it is important not to add a get-function just because the property needs a set-function.
Consider the following class:
type SimpleCylinder
classdef SimpleCylinder
properties
R
Height
end
methods
function V = volume(C)
V = pi .* [C.R].^2 .* [C.Height];
end
end
end
We can measure the time to create 1000 cylinders and compute their volumes:
tic
C1 = SimpleCylinder;
for k = 1:1000,
C1(k).R = 1;
C1(k).Height = k;
end
V = volume(C1);
toc
Elapsed time is 0.112309 seconds.
Now consider a slightly different version of the above class where the class checks all the property values:
type SlowCylinder
classdef SlowCylinder
properties
R
Height
end
methods
function V = volume(C)
V = pi .* [C.R].^2 .* [C.Height];
end
function C = set.R(C, R)
checkValue(R);
C.R = R;
end
function C = set.Height(C, Height)
checkValue(Height);
C.Height = Height;
end
end
end
function checkValue(x)
if ~isa(x, 'double') || ~isscalar(x)
error('value must be a scalar double.');
end
end
We can measure the same operations on this class:
tic
C2 = SlowCylinder;
for k = 1:1000,
C2(k).R = 1;
C2(k).Height = k;
end
A = volume(C2);
toc
Elapsed time is 0.174094 seconds.
Optimizing for Property Usage Inside the Class
If much of the performance critical code is inside methods of the class, it might make sense to consider using two property definitions for properties accessed in such performance critical code. One property is private to the class and doesn’t define any set or get functions. A second dependent property is public and passes through to the private property but adds error checking in its set-function. This allows the class to check values coming from outside the class, but not check values inside the class. Set functions always execute except when setting the default value during object creation and this allows the class to use its public interface if it is more convenient to do so. Set functions may do convenient transformations or other work in addition to just checking that the input value is legal for the property. However, if a performance-critical method doesn’t need this work, it can be helpful to use two properties.
For example, consider a new version of the cylinder class that checks its inputs but is designed to keep loops inside the class methods and use unchecked properties inside those methods.
type NewCylinder
classdef NewCylinder
properties(Dependent)
R
Height
end
properties(Access=private)
R_
Height_
end
methods
function C = NewCylinder(R, Height)
if nargin > 0
if ~isa(R, 'double') || ~isa(Height, 'double')
error('R and Height must be double.');
end
if ~isequal(size(R), size(Height))
error('Dimensions of R and Height must match.');
end
for k = numel(R):-1:1
C(k).R_ = R(k);
C(k).Height_ = Height(k);
end
end
end
function V = volume(C)
V = pi .* [C.R_].^2 .* [C.Height_];
end
function C = set.R(C, R)
checkValue(R);
C.R_ = R;
end
function R = get.R(C)
R = C.R_;
end
function C = set.Height(C, Height)
checkValue(Height);
C.Height_ = Height;
end
function Height = get.Height(C)
Height = C.Height_;
end
end
end
function checkValue(x)
if ~isa(x, 'double') || ~isscalar(x)
error('value must be a scalar double.');
end
end
Here we measure the same operations as above.
tic
C3 = NewCylinder(ones(1,1000), 1:1000);
A = volume(C3);
toc
Elapsed time is 0.006654 seconds.
Method Invocation
Method invocation using function call notation e.g. f(obj, data) is generally faster than using obj.f(data). Method invocation, like function calls on structs, cells, and function handles will not benefit from JIT optimization of the function call and can be many times slower than function calls on purely numeric arguments. Because of the overhead for calling a method, it is always better to have a loop inside of a method rather than outside of a method. Inside the method, if there is a loop, it will be faster if the loop just does indexing operations on the object and makes calls to functions that are passed numbers and strings from the object rather than method or function calls that take the whole object. If function calls on the object can be factored outside of loops, that will generally improve performance.
Calling a method on an object:
C4 = NewCylinder(10, 20);
tic
for k = 1:1000
volume(C4);
end
toc
Elapsed time is 0.013509 seconds.
Calling a method on the object vector:
C5 = NewCylinder(ones(1,1000), 1:1000);
tic
volume(C5);
toc
Elapsed time is 0.001903 seconds.
Calling a function on a similar struct and struct array. First, calling the function inside a loop:
CS1 = struct('R', 10, 'Height', 20);
tic
for k = 1:1000
cylinderVolume(CS1);
end
toc
Elapsed time is 0.008510 seconds.
Next, we call the function on a struct array:
CS2 = struct('R', num2cell(ones(1,1000)), ...
'Height', num2cell(1:1000));
tic
cylinderVolume(CS2);
toc
Elapsed time is 0.000705 seconds.
Deleting Handle Objects
MATLAB automatically deletes handle objects when they are no longer in use. MATLAB doesn't use garbage collection to clean up objects periodically but instead destroys objects when they first become unreachable by any program. This means that MATLAB destructors (the delete method) are called more deterministically than in environments using garbage collection, but it also means that MATLAB has to do more work whenever a program potentially changes the reachability of a handle object. For example, when a variable that contains a handle goes out of scope, MATLAB has to determine whether or not that was the last reference to that variable. This is not as simple as checking a reference count since MATLAB has to account for cycles of objects. Changes in R2011b and R2012a have made this process much faster and more uniform. However, there is one aspect of object destruction that we are still working on and that has to do with recursive destruction. As of R2012a, if a MATLAB object is destroyed, any handle objects referenced by its properties will also be destroyed if no longer reachable and this can in turn lead to destroying objects in properties of those objects and so on. This can lead to very deep recursion for something like a very long linked list. Too much recursion can cause MATLAB to run out of system stack space and crash. To avoid such an issue, you can explicitly destroy elements in a list rather than letting MATLAB discover that the whole list can be destroyed.
Consider a doubly linked list of nodes using this node class:
type dlnode
classdef dlnode < handle
properties
Data
end
properties(SetAccess = private)
Next
Prev
end
methods
function node = dlnode(Data)
node.Data = Data;
end
function delete(node)
disconnect(node);
end
function disconnect(node)
prev = node.Prev;
next = node.Next;
if ~isempty(prev)
prev.Next = next;
end
if ~isempty(next)
next.Prev = prev;
end
node.Next = [];
node.Prev = [];
end
function insertAfter(newNode, nodeBefore)
disconnect(newNode);
newNode.Next = nodeBefore.Next;
newNode.Prev = nodeBefore;
if ~isempty(nodeBefore.Next)
nodeBefore.Next.Prev = newNode;
end
nodeBefore.Next = newNode;
end
function insertBefore(newNode, nodeAfter)
disconnect(newNode);
newNode.Next = nodeAfter;
newNode.Prev = nodeAfter.Prev;
if ~isempty(nodeAfter.Prev)
nodeAfter.Prev.Next = newNode;
end
nodeAfter.Prev = newNode;
end
end
end
Create a list of 1000 elements:
top = dlnode(0);
tic
for i = 1:1000
insertBefore(dlnode(i), top);
top = top.Prev;
end
toc
Elapsed time is 0.123879 seconds.
Destroy the list explicitly to avoid exhausting the system stack:
tic
while ~isempty(top)
oldTop = top;
top = top.Next;
disconnect(oldTop);
end
toc
Elapsed time is 0.113519 seconds.
Measure time for varying lengths of lists. We expect to see time vary linearly with the number of nodes.
N = [500 2000 5000 10000];
% Create a list of 10000 elements:
CreateTime = [];
TearDownTime = [];
for n = N
top = dlnode(0);
tic
for i = 1:n
insertBefore(dlnode(i), top);
top = top.Prev;
end
CreateTime = [CreateTime;toc];
tic
while ~isempty(top)
oldTop = top;
top = top.Next;
disconnect(oldTop);
end
TearDownTime = [TearDownTime; toc];
end
subplot(2,1,1);
plot(N, CreateTime);
title('List Creation Time vs. List Length');
subplot(2,1,2);
plot(N, TearDownTime);
title('List Destruction Time vs. List Length');
A Look to the Future
We continue to look for opportunities to improve MATLAB object performance and examples from you are very helpful for learning what changes will make an impact on real applications. If you have examples or scenarios you want us to look at, please let me know. Also, if you have your own ideas or best practices, it would be great to share them as well. You can post ideas and comments here.
Published with MATLAB® 7.14
|
|
# Interpreting FFT Phase - why phase of a cosine?
I understand how to interpret the magnitude result from the FFT, but why is the phase that we obtain, arctan(Im(x)/Re(x)) indicative of the phase shift of a cosine graph, and not a sine graph?
• I don't see how that's true – that gives you the phase of the complex sinusoid with frequency $x$, not of a cosine or sine. – Marcus Müller Jan 22 at 14:26
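One way to see the convention concretely is a quick numerical check (my own sketch, assuming NumPy's FFT; nothing here comes from the original question): put a cosine with a known phase offset exactly on a bin and look at the angle of that bin.

```python
import numpy as np

N = 64        # samples
k = 5         # integer number of cycles, so the tone sits exactly on bin k
phi = 0.7     # phase offset in radians

n = np.arange(N)
X = np.fft.fft(np.cos(2 * np.pi * k * n / N + phi))
print(np.angle(X[k]))    # ~ +0.7  (the cosine's phase comes out directly)

Y = np.fft.fft(np.sin(2 * np.pi * k * n / N + phi))
print(np.angle(Y[k]))    # ~ 0.7 - pi/2  (a sine reads 90 degrees behind)
```

For a pure on-bin cosine the positive-frequency bin is (N/2)·e^{jφ}, which is why arctan(Im/Re) reads off a cosine's phase; a sine of the same offset appears with an extra -π/2.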
|
# Why are fractions “multiplied across”? [duplicate]
Suppose we have two rationals, $\frac{a}b$ and $\frac{c}d$. I daresay anyone with some form of mathematical education would agree that our result would be $\frac{a}b\times \frac{c}d=\frac{ac}{bd}$. That is, we multiply our numerators to get the result numerator and we multiply the denominators to get the result denominator. However, mathematically speaking, why do we do this?
If we consider the canonical definition of multiplication (i.e. repeated addition) in the case where we have two integers, say $3$ and $4$, we would come up with $(3\times 4) = (4 + 4 + 4)$. If this is so (and hopefully it is) what does it mean mathematically to add $\frac{c}d$ to itself $\frac{a}b$ times?
After doing a bit of research, all I've managed to come up with is this image.
Unfortunately, I don't see how this has anything to do with multiplication and seems to me like little more than a tool for teaching grade schoolers.
This also led me to think about how we multiply decimal numbers. For example, $(0.2) \times (0.4) = 0.08$ (i.e. $2\times 4$ with the decimal moved over a number of places equal to the number of significant digits past the decimal).
My intuition tells me that the answer to one of these will provide the answer to the other.
Lastly, excuse me if this question seems silly, but I've pondered it quite a bit and can't come up with anything mathematically rigorous.
EDIT: I changed all literal values to be arbitrary values (as they should have been originally).
EDIT 2: Please note that I am PERFECTLY CAPABLE of multiplying fractions together in any way shape or form. This is NOT a post about how to multiply fractions. This post is about fundamentally understanding what it means to multiply a pair of fractions together.
## marked as duplicate by Bill Dubuque, Jack, Community♦Jul 28 '17 at 15:24
• Think of multiplication as the area of a rectangle with the factors as side lengths. – Akiva Weinberger Jul 28 '17 at 14:45
• @AkivaWeinberger Then we'd just be begging the question. The multiplication is necessary in order to find the area of a rectangle. – AldenB Jul 28 '17 at 14:50
• Not necessarily. We know that a 1x1 rectangle has an area of 1. To find the area of a $\frac13\times\frac14$ rectangle, note that we can put twelve $\frac13\times\frac14$ rectangles together to make a $1\times1$ rectangle; hence, they must each have an area of $\frac1{12}$. – Akiva Weinberger Jul 28 '17 at 14:51
• @AldenB We want to prove that 3977 copies of that rectangle make a rectangle of area 323. That can be done quite easily. – Akiva Weinberger Jul 28 '17 at 15:00
• @AldenB Alternatively, we can show using the previous method that each $\frac1{41}\times\frac1{97}$ rectangle has an area of $\frac1{3977}$, and note that 323 copies of that rectangle make one of size $\frac{19}{41}\times\frac{17}{97}$. Thus, it has area $\frac{323}{3977}$. – Akiva Weinberger Jul 28 '17 at 15:02
If this is so (and hopefully it is) what does it mean mathematically to add (1/4) to itself (1/3) times?
It means that whatever you get in the end, if you add it to itself 3 times, you'll get 1/4. So... Suppose $$\frac{1}{a} \cdot \frac{1}{b} = \frac{1}{c}$$ Then this is like saying that: $$\frac{1}{a} = \frac{1}{c} + \dots (b \text{ times}) \dots + \frac{1}{c}$$ which is $$\frac{1}{a} = \frac{b}{c}$$ then you can cross multiply (no division, here) and you'll get $c = ab$.
• Allow me to make this more general, suppose we have (a/b)*(c/d)=(e/f). Then what do we do? (a/b)=(e/f) + ... ((d/c) times) ... + (e/f) still doesn't make sense. – AldenB Jul 28 '17 at 14:41
• If you add (a/b) * (c/d) to itself d times, you'll get (a/b) * c: The main idea is that you can always "cancel" fractions (assuming an integer denominator) by adding to itself. – Kevin Jul 28 '17 at 14:45
• Unless I'm misunderstanding, we're getting away from the question at hand at this point. I'm perfectly capable of solving these sorts of problems algebraically; I've been doing it for at least a decade. What I'm talking about here is a fundamental understanding of what it means to multiply a pair of fractions together. – AldenB Jul 28 '17 at 15:04
• The problem is just this: The "canonical" definition of division is that it is the inverse of multiplication. So you can't get away from an algebraic explanation if you want a fundamentally rigorous answer. If you are looking for an intuitive answer (one amenable to grade schoolers), the picture you provided already does pretty well! – Kevin Jul 28 '17 at 15:10
• Perhaps you're right. – AldenB Jul 28 '17 at 15:20
Let me stick to purely nonnegative counting numbers $\mathbb N^+ = \{1,2,3,\dots\}.$ One way that we do this mathematically rigorously is to define fractions as,
$\frac ab$ is the set of all pairs $(c, d)$ of these numbers in $\mathbb N^+$, such that $a\cdot d = b \cdot c.$
Note that on this account $\frac12$ and $\frac 24$ are equal because of set equality: two sets are equal when they contain exactly the same elements; these two sets contain exactly the same pairs.
Why does this definition get to the heart of fractions? Because if you think about what this definition says $\frac 13$ is, it says that it's a relation between numbers and their thirds. So the numbers in the set are $\{(1, 3), (2, 6), (3, 9), \dots \}$ and we are expressing that the first number is one third of the second number, for all whole numbers where we can easily decide these things.
We also see that we can get a lot of understanding about how these work just by looking at the element of the set with the smallest first element; we can go from $(1, 3)$ back to $\frac13$ pretty easily. This is of course reducing a fraction to its simplest terms by removing common factors from both sides. Let's quickly prove that this works. Suppose $a = n \cdot x$ and $b = n\cdot y$ for the common factor $n\ne 1.$ We want to show that $\frac ab$ and $\frac xy$ are the same sets. The direction of proving that if $(c, d)$ is in the set $\frac xy$ then it must be in the set $\frac ab$ is very easy: $x\cdot d = y\cdot c$ implies that $n\cdot(x\cdot d) = n\cdot(y\cdot c),$ apply the associative rule to find that $(n\cdot x)\cdot d = (n\cdot y)\cdot c$, and thus $(c, d)$ is in $\frac ab,$ too. However if you pay careful attention this logic is almost entirely reversible. The only tricky bit is to prove that in $\mathbb N^+$ if $n\cdot a = n \cdot b$ then $a=b,$ this is the only manipulation that can't obviously be performed "backwards" because division is not well-defined on $\mathbb N^+$. But this can be easily seen from the simpler fact that if $a < b$ then $n\cdot a < n\cdot b,$ by the repeated addition definition and the property of addition that if $a < b$ and $c < d$ then $a+c < b + d$. So therefore if $n\cdot a = n \cdot b$ it must be the case that $a \ge b$ but also, turning around the equals sign, that $b \ge a.$ The only way both of these can be true is if $a=b.$
Now we also have an embedding of the numbers $\mathbb N^+$ in the fractions, that is as these fractions $\frac n1 = \{(n, 1), (2n, 2), (3n, 3), \dots \}.$ We look at the smallest elements and we see that when we add them we must perform $(n_1, 1) + (n_2, 1) = (n_1 + n_2, 1).$ The problem is, how do we make this well-defined so that it doesn't matter which pair we choose? Clearly if we chose $(2n_1, 2) + (3 n_2, 3)$ we would have to get $(k(n_1 + n_2), k)$ for some $k$. And the most obvious way to get this property is to say that $k=2\cdot 3$ so that we get $6(n_1 + n_2)$ for that first term, which we can do by multiplying the first item of the first term with the second item of the second term and the second item of the first term with the first item of the second term, so we have to generalize $(a, b) + (c, d) = (ad + bc, bd)$ to fully account for addition of the "natural number subset" of the rational numbers, in the case where we pick a representative element which isn't the smallest element.
Well then we are just stuck asking, "how do we extend this to non-natural numbers?" and the obvious answer is, just the most obvious way! Define that $\frac ab + \frac cd = \frac{ad+bc}{bd},$ and we get a valid expression for everything, which is the correct expression for the naturals. The proof that this is well-defined is simply that $\frac ab + \frac{nc}{nd} = \frac{n(ad+bc)}{n(bd)},$ so "by construction" this plays nice with our ability to reduce a fraction to lowest terms.
Exercise: also prove that this preserves the associative and commutative properties of addition.
# Multiplication as repeated addition on the fractions
Something similar happens when we want to multiply with this new formula. Suppose we want to multiply $\frac 25$ by repeatedly adding it to itself 3 times, we find that the above expression gives first $\frac25 + \frac25 = \frac {20}{25}$ and then $\frac 25 + \frac {20}{25} = \frac{50+100}{125}.$ What do we notice? The bottom-most number is clearly $5\cdot 5 \cdot 5 = 5^3,$ the multiplication by $3$ cubes this denominator.
The top-most number is also growing, though, because we had $5 \cdot 2 + 5 \cdot 2$ and then this became $5^2\cdot 2 + 5\cdot(5\cdot 2) + 5\cdot (5\cdot 2)$ and we see that we got $5^2\cdot(2+2+2).$ Trying $4\cdot$ as repetition we find the pattern $5^3\cdot(2+2+2+2)/5^4.$ It's not hard to prove that adding $\frac ab$ to itself $n$ times produces $\frac{na}{b}.$
But what did we do last time we wanted to generalize addition to fractions? We considered the other representations of the integers. We know that we need $(p~n, p) \cdot (q~a, q~b)$ to produce $(r~n~a, r~b)$ because that is the self-addition pattern for integers. The obvious choice is $r=p~q$ which we can get by multiplying the first two and then multiplying the second two.
This suggests a product rule as $\frac ab \cdot \frac cd = \frac{ac}{bd}.$ Again we can immediately see that it's well defined with respect to simplifying to lowest terms, it's even more obviously associative and commutative than addition was.
But what really makes this multiplication is the distributive rule. Recall the distributive rule: $a\cdot(b + c)$ needs to be exactly the same value as $a\cdot b + a \cdot c$ and vice versa. If these two new definitions for addition and multiplication do not "play nice together" then we will need to fix one or the other!
Well, long story short, they do. We find that $$\frac ab\cdot\left(\frac cd + \frac ef\right) = \frac ab\cdot \frac{cf+de}{df} = \frac{acf+ade}{bdf} = \frac{abcf+abde}{b^2df} = \frac{ac}{bd} + \frac{ae}{bf}.$$ Again, the logic is perfectly reversible and the surprise that these two guesses of "let's just do the simplest thing" yields this fundamental axiom of arithmetic again for the fractions, means that we were "really onto something here."
Now in what sense is $\frac13$ a repeated self-addition? Well, it's not quite! It's a "multiplicative inverse" which means it undoes a repeated self-addition. It's kind of like if we define $\mathbb N^+$ as repeated increment operations applied to a starting value $1$, when we get to negative numbers we must have repeated decrement operations that undo the repeated increment. So $2$ represents "increment twice," then $-2$ must represent "decrement twice."
Similarly $\frac17 \cdot \frac ab$ is the number which, if you repeatedly add it to itself seven times, gives you $\frac ab$. It's the exact undoing of a repeated self-addition.
So when we do $\frac13 \cdot \frac14$ we are un-adding $\frac14$ from itself 3 times, meaning that we're trying to find the number which when added to itself 3 times would give $\frac14$. This is precisely just $\frac1{12}.$
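A tiny sanity check of this "un-adding" picture, using Python's exact rational arithmetic (my own illustration, not part of the argument above):

```python
from fractions import Fraction

p = Fraction(1, 3) * Fraction(1, 4)
print(p)             # 1/12
print(p + p + p)     # 1/4 -- adding the product to itself 3 times recovers 1/4

# The general "multiply across" rule for arbitrary a/b and c/d:
a, b, c, d = 19, 41, 17, 97
print(Fraction(a, b) * Fraction(c, d) == Fraction(a * c, b * d))   # True
```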
note that integers $a$ and $b$ can be rewritten as ${a\over 1}$ and ${b\over1}$; their product still has to equal $ab=12$ to stay consistent. The simplest way to achieve that is to make it a rule that we multiply numerators and denominators separately and combine the results, so we have $ab=12\implies {ab\over1(1)}={ab\over1}=ab=12$. The rule, when followed for other fractions, produces the given results.
One could make use the fact that $\Bbb R$ is a field and use the field axioms (http://mathworld.wolfram.com/FieldAxioms.html) to prove these equalities rigorously.
For instance we could make use of the distributive property of $\Bbb R$ as follows, $$3\times4=3\times(1+1+1+1)=3+3+3+3=12$$
Next, $(1/a)$ is nothing but the inverse of $a$ $(a\neq0)$. So,
$$(1/3)\times(1/4)=3^{-1}\times4^{-1}=(3\times4)^{-1}=12^{-1}=(1/12)$$
Basically all the calculations stand on the shoulders of the Axioms.
• The important step here is $3^{-1}\times4^{-1}=(3\times4)^{-1}$, which should be proved. – Akiva Weinberger Jul 28 '17 at 14:52
• @AkivaWeinberger That's true. But one can prove it from the fact that "a field is a commutative ring with unity" – Naive Jul 28 '17 at 15:03
• And the proof is that $(4\times3)\times(3^{-1}\times4^{-1})={}$$(3^{-1}\times4^{-1})\times(4\times3)=1$ through the associative property, so $3^{-1}\times4^{-1}=(4\times3)^{-1}$. (And we may or may not want to point out how to prove that inverses are unique.) Right? – Akiva Weinberger Jul 28 '17 at 15:05
• It's probably a good idea to explicitly list the field axioms in your answer (or at least link to them), by the way – Akiva Weinberger Jul 28 '17 at 15:07
• @AkivaWeinberger Well again if at all one is so particular about the uniqueness of the inverses, we will have to prove that too! – Naive Jul 28 '17 at 15:11
|
# A “dual” universal coefficient theorem
Universal coefficient theorem allows us to calculate $H^*(X,M)$ from $H_*(X,Z)$. Do we have a "dual" universal coefficient theorem that allows us to calculate $H_*(X,M)$ from $H^*(X,Z)$?
Here $Z$ is the set of integers.
-
Thanks. It is nice to know. It appears that $H_*(X,Z)$ contained most info. This question is motivated by another question mathoverflow.net/questions/111087/… which is unfortunately closed :-( Any light on that question will be greatly appreciated. – Xiao-Gang Wen Nov 2 '12 at 1:42
Yes, there is such a universal coefficient theorem.
$$0 \to Ext(H^{q+1}(X,R), G) \to H_q(X, G) \to Hom(H^q(X, R), G) \to 0$$
see Theorem 6.5.12 in Spanier's textbook "Algebraic Topology". It's on page 248.
-
@Ryan Budney: Thanks. But it is a little confusing. Is the above $R$ the field of real numbers, or $R=Z$ the set of integers? – Xiao-Gang Wen Nov 2 '12 at 2:59
R is a principal ideal domain and G is an R-module. You also need $H_\ast(X;R)$ to be of finite type, meaning each $H_i(X;R)$ is a finitely generated $R$-module. – Greg Friedman Nov 3 '12 at 2:36
Thanks. That helps a lot. We should really include this result in Wiki. – Xiao-Gang Wen Nov 3 '12 at 4:43
|
Question
Conic sections
A latus rectum of a conic section is a chord through a focus parallel to the directrix. Find the area bounded by the parabola $$\displaystyle{y}={x}^{2}\text{/}{\left({4}{c}\right)}$$ and its latus rectum.
2021-03-06
Step 1
It is known that the area bounded by the curves $$\displaystyle{y}= f{{\left({x}\right)}}{\quad\text{and}\quad}{y}= g{{\left({x}\right)}}{o}{n}{\left[{a},{b}\right]}$$ is given by
$$\displaystyle{A}={\int_{{a}}^{{b}}}$$ (upper curve - lower curve) dx.
From the figure, the equation of latus rectum is $$\displaystyle{y}={c}$$ which is upper curve.
Note that the given graph is symmetric about the y-axis.
Step 2
Substitute $$\displaystyle{y}={c}$$ in $$\displaystyle{y}=\frac{{x}^{2}}{{{4}{c}}}$$
and obtain that $$\displaystyle{x}=\pm{2}{c}$$
Thus, the area bounded can be computed as follows.
$$\displaystyle{A}={2}{\int_{{0}}^{{{2}{c}}}}$$ (upper curve - lower curve) dx
$$\displaystyle={2}{\int_{{0}}^{{{2}{c}}}}{\left({c}-\frac{{x}^{2}}{{{4}{c}}}\right)}{\left.{d}{x}\right.}$$
$$\displaystyle={2}{{\left[{c}{x}-\frac{{x}^{3}}{{{12}{c}}}\right]}_{{0}}^{{{2}{c}}}}$$
$$\displaystyle={2}{\left[{2}{c}^{2}-\frac{2}{{3}}{c}^{2}\right]}$$
$$\displaystyle=\frac{8}{{3}}{c}^{2}$$
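As a quick independent check of this result (my own sketch, not part of the posted solution), a short SymPy computation of the area between the latus rectum y = c and the parabola over the full chord −2c ≤ x ≤ 2c:

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)

# Area between the latus rectum y = c and the parabola y = x**2/(4c),
# taken over the whole chord instead of doubling the half-area.
area = sp.integrate(c - x**2 / (4 * c), (x, -2 * c, 2 * c))
print(sp.simplify(area))       # 8*c**2/3
```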
|
# Problem while decrypting Hill cipher
I have a plaintext "monday" and ciphertext "IKTIWM" and $$m=2$$. I want to find the key of the Hill cipher.
I made a matrix $$\begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix}\begin{bmatrix} m \\ o \end{bmatrix} = \begin{bmatrix} I \\ K \end{bmatrix} \pmod{26}$$
$$X=\{\{m,o\},\{n,d\}\}$$, $$Y=\{\{I,K\},\{T,I\} \}$$, I want to find $$X \times K=Y$$.
I will multiply this equation with inverse($$X$$).
But for the modulo inverse you need $$gcd$$(determinant($$X), 26) =1$$ . Which is not happening here.
• I am making a matrix X={ {m,o}, {nd} },Y={ {I,K} ,{T,I} },I want to find X*K=Y; – Manoharsinh Rana Feb 1 at 11:20
• I edited it.I don't know how to write a matrix here. – Manoharsinh Rana Feb 1 at 11:23
• Hint: not all systems of 6 equations with 4 unknowns have a unique solution. Find them all. – fgrieu Feb 1 at 11:52
• these are the equations. 12a + 14b = 8 , 12c + 14d = 10 , 13a + 3b = 19 , 13c + 3d = 8 , 24b = 22 , 24d = 12. I have replaced a1 with a , a2 with b , a3 with c , a4 with d. Can we solve them? – Manoharsinh Rana Feb 1 at 12:33
• you are right. But can you help me solve it? – Manoharsinh Rana Feb 1 at 12:55
$$\begin{bmatrix}7&2\\ 10& 20\end{bmatrix}, \begin{bmatrix}7&2\\ 23& 7\end{bmatrix}, \begin{bmatrix}20&15\\ 10& 20\end{bmatrix}, \begin{bmatrix}20&15\\ 23& 7\end{bmatrix}$$
are all the $$2 \times 2$$ matrices over $$\mathbb{Z}_{26}$$ that would transform 'monday' to IKTIWM. The first and third have even determinant, so they are not invertible; hence the second or the fourth candidate encryption matrix is the correct one. Invert them and check against the rest of the text to see which one is actually correct.
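In case a machine check is useful, here is a short Python sketch (my own, assuming the usual a=0, …, z=25 encoding and the column-vector convention used in the question) that tests each candidate key against all three digraph pairs of monday/IKTIWM and reports whether it is invertible mod 26:

```python
from math import gcd

def to_nums(s):
    return [ord(ch) - ord('a') for ch in s.lower()]

plain = to_nums("monday")    # [12, 14, 13, 3, 0, 24]
cipher = to_nums("iktiwm")   # [8, 10, 19, 8, 22, 12]

candidates = [
    [[7, 2], [10, 20]],
    [[7, 2], [23, 7]],
    [[20, 15], [10, 20]],
    [[20, 15], [23, 7]],
]

for K in candidates:
    ok = all(
        ((K[0][0] * plain[i] + K[0][1] * plain[i + 1]) % 26,
         (K[1][0] * plain[i] + K[1][1] * plain[i + 1]) % 26)
        == (cipher[i], cipher[i + 1])
        for i in range(0, 6, 2)
    )
    det = (K[0][0] * K[1][1] - K[0][1] * K[1][0]) % 26
    print(K, "encrypts monday->IKTIWM:", ok,
          "| det mod 26:", det, "| invertible:", gcd(det, 26) == 1)
```

All four candidates reproduce the ciphertext; only the second and fourth have determinants coprime to 26, matching the answer above.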
|
In physics, specifically electromagnetism, the magnetic flux (often denoted Φ or ΦB) through a surface is the surface integral of the normal component of the magnetic field flux density B passing through that surface. The magnetic interaction is described in terms of a vector field, where each point in space is associated with a vector that determines what force a moving charge would experience at that point (see Lorentz force). Since a vector field is quite difficult to visualize at first, in elementary physics one may instead visualize this field with field lines. The magnetic flux through some surface, in this simplified picture, is proportional to the number of field lines passing through that surface (in some contexts, the flux may be defined to be precisely the number of field lines passing through that surface; although technically misleading, this distinction is not important). More precisely, the magnetic flux is the net number of field lines passing through that surface; that is, the number passing through in one direction minus the number passing through in the other direction (see below for deciding in which direction the field lines carry a positive sign and in which they carry a negative sign). In more advanced physics, the field line analogy is dropped and the magnetic flux is properly defined as the surface integral of the normal component of the magnetic field passing through a surface; each point on the surface is associated with a direction, called the surface normal, and the total flux is then a formal summation over these surface elements.
If the magnetic field is constant, the magnetic flux passing through a surface of vector area S is $\Phi_B = B S \cos\theta$, where B is the magnitude of the magnetic field (the magnetic flux density) having the unit of Wb/m2 (tesla), S is the area of the surface, and θ is the angle between the magnetic field lines and the normal (perpendicular) to S. For a varying magnetic field, we first consider the magnetic flux through an infinitesimal area element dS, where we may consider the field to be constant; a generic surface S can then be broken into infinitesimal elements and the total magnetic flux through the surface is the surface integral of B over those elements.
The SI unit of magnetic flux is the weber (Wb; in derived units, volt–seconds), and the CGS unit is the maxwell. The weber is named after the German physicist Wilhelm Eduard Weber (1804–1891). The flux density B has the tesla (T) as its SI unit, defined by $B=\frac{\Phi}{A}$, so one tesla equals one weber per square metre (Wb/m2): the magnetic flux density is the amount of flux per unit area perpendicular to the magnetic field.
Gauss's law for magnetism, which is one of the four Maxwell's equations, states that the total magnetic flux through a closed surface is equal to zero. By way of contrast, in Gauss's law for electric fields, another of Maxwell's equations, the flux of E through a closed surface is not always zero; this indicates the presence of "electric monopoles", that is, free positive or negative charges. While the magnetic flux through a closed surface is always zero, the magnetic flux through an open surface need not be zero and is an important quantity in electromagnetism. For example, a change in the magnetic flux passing through a loop of conductive wire will cause an electromotive force, and therefore an electric current, in the loop; this is the principle behind an electrical generator. Magnetic flux is usually measured with a fluxmeter, which contains measuring coils and electronics that evaluate the change of voltage in the measuring coils to calculate the magnetic flux.
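As a small worked illustration of the Φ = B·S·cos θ and B = Φ/A relations above (my own example with made-up values, not taken from the source):

```python
import math

B = 0.2                       # flux density in tesla (Wb/m^2), assumed uniform
S = 0.05                      # surface area in m^2
theta = math.radians(30)      # angle between the field and the surface normal

phi = B * S * math.cos(theta)         # magnetic flux through the tilted surface
print(f"flux = {phi:.4e} Wb")         # ~8.66e-03 Wb

# With the field perpendicular to the surface (theta = 0), B = Phi / A:
phi_perp = B * S
print(f"B recovered from flux: {phi_perp / S:.2f} T")   # 0.20 T
```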
|
# Calc III, Determining the coordinates of P
Hello everyone. I'm stuck on the last part of this problem. It says: Draw two distinct nonzero position vectors a = <x1,y1> and b = <x2,y2> such that the angle between them is less than pi/2. Draw the line L perpendicular to the line determined by the vector a such that L passes through the terminal point of b. Let P = (s,t) be the point where L intersects a. Determine the length of the line segment connecting the origin to the point P. Determine the coordinates of P. I drew a picture and I found that L = mx+b. The line determined by a is y = (y1/x1) x. m = -x1/y1 through (x2,y2). So now I have 2 lines, and I need to find the intersection of these 2 lines to determine the coordinates of P = (s,t). Any ideas on how I can do this? I'm also confused about what my 2 lines are exactly. Thanks.
Last edited by a moderator:
Related Introductory Physics Homework Help News on Phys.org
Fermat
Homework Helper
thanks for the reply but i'm lost.
Quote:
You can work out the slope of OA (=m, say) using the coords of A and you know that BP is perpindicular to OA, so you can write the slope of BP in terms of m. You can also work out the slope of BP using the coords of P and B. Now solve for λ, and you can then work out the coords of P and the length of OP.
Where is the point OA coming from? by OA do u mean the vector O standing for origin and A for the a vector? So your saying set BP, B is a vector <x2,y2> and P is a point with coords (s,t) = m. Am i allowed to do this? BP = (y2-t)/(x2-s); or are u not talking about m = (y2-y1)/(x2-x1)?
You can think of the problem as one in vectors or one in coordinate geometry.
OA is a vector starting at the origin O and ending at the point A. The vector OA has the position vector a<x1,y1>.
OB is a vector starting at the origin O and ending at the point B. The vector OB has the position vector b<x2,y2>.
OP is a vector starting at the origin O and ending at the point P. The vector OP has the position vector p<s,t>.
Or,
treat it as a problem in coordinate geometry.
With an origin O and the points A(x1,y1), B(x2,y2), P(s,t).
Let m be the slope of OA.
You have already worked this out as m = (y1 - 0)/(x1 - 0) = y1/x1
So, the slope of the line OA is m = y1/x1
Then the eqn of the line OA is y = mx, or
y = (y1/x1)x ---------------------------------------------------(1)
=========
BP is perpindicular to OA, so you can write the slope of BP in terms of m.
So, slope of BP = m' = -1/m = -x1/y1, which you have.
Since you have the coords of B as (x2,y2), and the slope of BP, then you can write the eqn of BP as,
(y - y2)/(x - x2) = m'
y = m'(x-x2) + y2
y = -(x1/y1)(x-x2) + y2 ----------------------------------------(2)
==================
Eqn (1) is the eqn of the line OA.
Eqn (2) is the eqn of the line BP.
You can find out where these lines intersect by equating (1) with (2) and solving for x and y, which will be s and t respectively, the coords of P.
Thanks alot for your explanation, it was very helpful! but i'm stuck I think. When you said set equation 1 and 2 together and solve for x and y, did you mean x1 and y1 or did u mean x and y? When I set the two equations equal to each other, y isn't there, y1 is though. I tried solving for x and got a really messy equation. Here is my work:
http://img143.imageshack.us/img143/5492/gsfdg5pa.jpg [Broken]
Thanks.
Last edited by a moderator:
Fermat
Homework Helper
mr_coffee said:
Thanks alot for your explanation,it was very helpful! but i'm stuck I think. When you said set equation 1 and 2 together and solve for x and y, did you mean x1 and y1 or did u mean x and y? When I set the two equations eqqual to eachother, y isn't there, y1 is though. I tried solving for x and got a really messy equation. Here is my work:
http://img143.imageshack.us/img143/5492/gsfdg5pa.jpg [Broken]
Thanks.
You're almost there.
Yes, you have to solve for x and y. You will then get expressions for x and y in terms of x1, y1, x2, y2. The final expression you got can be rearranged to give,
$$x = x_1\frac{(x_1x_2 + y_1y_2)}{(x_1^2 + y_1^2)}$$
Substitute this value for x into eqn (1) and get the value of y.
These values of x and y are the x- and y-coords of the intersection of the line-eqns (1) and (2). And these coords are the coords of the point P. ie x = s and y = t.
Last edited by a moderator:
Awesome, thanks so much for the help!! :) I was wondering how u got $$x = x_1\frac{(x_1x_2 + y_1y_2)}{(x_1^2 + y_1^2)}$$ I posted how far I got with rearranging but I didn't see how you did that exactly. http://img143.imageshack.us/img143/650/pointsss7gn.jpg [Broken]
Last edited by a moderator:
Fermat
Homework Helper
mr_coffee said:
Awesome, thanks so much for the help!! :) I was wondering how u got $$x = x_1\frac{(x_1x_2 + y_1y_2)}{(x_1^2 + y_1^2)}$$ I posted how far I got with rearranging but I didn't see how you did that exactly. http://img143.imageshack.us/img143/650/pointsss7gn.jpg [Broken]
Staring from here,
$$\left (\frac{y_1}{x_1}\right )x + \frac{x_1(x)}{y_1} = \frac{x_1x_2}{y_1} + y_2$$
$$x\left (\frac{y_1^2 + x_1^2}{x_1y_1}\right ) =\frac{x_1x_2 + y_1y_2}{y_1}$$
$$x = x_1\left (\frac{x_1x_2 + y_1y_2}{x_1^2 + y_1^2}\right )$$
Last edited by a moderator:
awesome, thanks alot for the help!
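For anyone reading along later, here is a quick numerical check of that result (my own sketch with made-up vectors, not part of the thread): the closed-form coordinates agree with the orthogonal projection of b onto a.

```python
import numpy as np

# Example position vectors with an angle between them less than pi/2 (made up)
x1, y1 = 3.0, 1.0        # vector a
x2, y2 = 2.0, 2.0        # vector b

# Closed-form coordinates of P derived above
s = x1 * (x1 * x2 + y1 * y2) / (x1**2 + y1**2)
t = (y1 / x1) * s        # P lies on the line y = (y1/x1) x

# Same point obtained as the orthogonal projection of b onto a
a, b = np.array([x1, y1]), np.array([x2, y2])
P = (a @ b) / (a @ a) * a

print(s, t)              # 2.4 0.8
print(P)                 # [2.4 0.8]
print(np.hypot(s, t))    # length of OP = (a.b)/|a| ~ 2.5298
```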
|
+0
# math
0
29
1
R3 = sqrt(4hp) solve for P
Guest Mar 8, 2018
#1
+29
0
$$R^3=\sqrt{4hp}\\ \text{Square both sides to remove square root}\\ (R^3)^2=(\sqrt{4hp})^2\\ R^6=4hp\\ \text{Divide 4h to isolate p}\\ \frac{R^6}{4h}=p$$
For line 3, one of the law of exponents state:
$$(x^a)^b=x^{ab}$$
ChowMein Mar 8, 2018
|
# Resistor circuit
#### Dren
Joined Feb 28, 2008
2
Hello, I have some physics homework but I can't solve this:
I know that the resulting resistance from point A to B is 4.3Ω (because my teacher told me) but I don't know how to solve it.
Thanks, Dren.
|
# Is it possible to calculate logarithm without calculator and without logarithm table?
Printable View
• August 26th 2010, 09:19 AM
dariyoosh
Is it possible to calculate logarithm without calculator and without logarithm table?
Dear all,
I would like to ask a question about calculating logarithm in base 10. Currently I'm reading an interesting book entitled: "ESSENTIALS OF PLANE TRIGONOMETRY AND ANALYTIC GEOMETRY".
The first chapter of the book is dedicated to the logarithm function, its fundamental formulas, etc. However, I have always used a calculator in order to find the logarithm of a given number (base 10). In the book, there was some method (not really clear to me) presenting how to find the logarithm value of a given number based on the logarithm table.
I never managed to master the logarithm table (very complicated and ambiguous for me). Besides, after a bit googling I saw that there are different type of logarithm table (for base 10) with different number of columns and the method is not always the same.
Is there any simple method, allowing to calculate the logarithm of a number (base 10) without using a calculator and without using the logarithm table (with desired number of decimals)?
Thanks in advance,
Dariyoosh
• August 26th 2010, 11:22 AM
HallsofIvy
There are a number of "simple" ways to calculate lograrithms- but they are all extremely tedious.
One method is to use the inverse function to the logarithm. The inverse function to $log_a(x)$, the logarithm base a, is the exponential function $a^x$.
In particular, the inverse to the "common logarithm", base 10, is $10^x$ while the inverse to the "natural logarithm", base e, about 2.718..., is $e^x$.
Calculating $x= \log_a(y)$ is the same as solving the equation $a^x= y$ for x. And there are a number of different, but tedious and repetitive, ways to do that.
The simplest is the "midpoint method". First, by trial and error, find two numbers, $x_0$ and $x_1$, such that $a^{x_0}< y$ and $a^{x_1}> y$. Since the exponential is a "continuous function", there must be some x between $x_0$ and $x_1$ such that $a^{x}= y$. We don't know exactly where that number is, so just try the midpoint: $x_2= (x_0+ x_1)/2$. If $a^{x_2}= y$ we are done! If not, either $a^{x_2}> y$ or $a^{x_2}< y$. If the first, then we know that there must be a solution between $x_0$ and $x_2$; if the second, between $x_2$ and $x_1$. So we just do the same thing again, choosing a new value for $x_3$ between our current $x_2$ and either $x_0$ or $x_1$. At each step, we have reduced the interval, in which we know the value must be, by 1/2.
A slightly faster, slightly more complicated method, is the "secant method". Given two values, $x_0$ and $x_1$, such that $e^{x_0}< y$ and $e^{x_1}> y$, instead of the midpoint, construct the equation of the line through the two points $(x_0, e^{x_0})$ and $(x_1, e^{x_1})$. There is a standard formula for the line between two points: $y= \frac{y_1- y_0}{x_1- x_0}(x- x_0)+ y_0$ is the line between $(x_0, y_0)$ and $(x_1, y_1)$. Solve that equation for $x_2$, determine whether $e^{x_2}$ is larger than or less than y, and repeat.
A still faster but more sophisticated method is "Newton's method" for solving equations.
The derivative of $a^x$ is $ln(a) a^x$ where "ln(a)" is the natural logarithm, base "e", of a. To solve the equation f(x)= 0 by Newton's method, choose any starting value, $x_0$ (so you don't have to find two starting values on either side of the true solution) and form $x_1=x_0- \frac{f(x_0)}{f'(x_0)}$. Basically, the idea is that we construct the straight line tangent to y= f(x) at the point $(x_0, f(x_0))$ and solve that linear equation to get a better approximation. Using that new value of $x_1$, repeat to get a still better approximation. Newton's method has the property that it tends to double the number of correct decimal places in the answer on every step.
For $f(x)= a^x- y$, for fixed y, so that f(x)= 0 is equivalent to $a^x- y= 0$ or $a^x=y$, $f'(x)= ln(a) a^x$ so the formula becomes $x_1= x_0- \frac{a^{x_0}- y}{ln(a) a^{x_0}}$. In particular, for the natural logarithm, a= e, ln(a)= ln(e)= 1 so that is just $x_1= x_0- \frac{e^{x_0}- y}{e^{x_0}}$. For the common logarithm, a= 10 and $ln(a)= ln(10)$ so the formula is $x_1= x_0- \frac{10^{x_0}- y}{ln(10) 10^{x_0}}$. Of course, once you have found $x_1$ you continue in exactly the same way to find $x_2$, $x_3$, etc. to whatever accuracy you want.
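To make the iteration concrete, here is a minimal Python sketch of Newton's method for the common logarithm (my own illustration of the formula above; it treats ln 10 ≈ 2.302585… as a known constant, which is the same assumption the formula makes):

```python
def log10_newton(y, x0=1.0, iters=8):
    """Approximate log10(y) by Newton's method on f(x) = 10**x - y."""
    LN10 = 2.302585092994046          # ln(10), taken as known
    x = x0
    for _ in range(iters):
        x = x - (10**x - y) / (LN10 * 10**x)
    return x

print(log10_newton(2.0))   # ~0.30102999566...
print(log10_newton(7.0))   # ~0.84509804001...
```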
Another straightforward method of finding approximate values of difficult functions is to use the "Taylor's series" about x= 0 (also called the "MacLaurin series") for the function. The Taylor's series for ln(1+ x) (since log(0) is not defined, we cannot directly find the series for ln(x) about x= 0) is $x- \frac{x^2}{2}+ \frac{x^3}{3}- \frac{x^4}{4}+ \cdot\cdot\cdot$. As an infinite series, that is exactly $ln(1+x)$. Taking a finite number of terms gives an approximation with the more terms taken, the better the approximation.
Finally, I am informed that modern calculators and computers use the "CORDIC" algorithm, about which I admit I know nothing! You might want to google on "CORDIC".
• July 18th 2011, 07:34 AM
RADHAKRISHNAN.J.
Re: Is it possible to calculate logarithm without calculator and without logarithm ta
• July 18th 2011, 08:04 AM
chisigma
Re: Is it possible to calculate logarithm without calculator and without logarithm ta
The following procedure is valid for the computation of the 'natural logaritm' of a number, i.e. the logarithm in base e. The logarithm in any other base is the 'natural logaritm' multiplied by a constant. The computation is based on the series expansion...
$\ln (1+x)= x -\frac{x^{2}}{2}+ \frac{x^{3}}{3}- \frac{x^{4}}{4} + ...$ (1)
... which converges 'quickly enough' for $-.25 < x < .5$ . Now if You have to compute $\ln r$ with $r>1.5$ the procedure is...
a) divide r by 2 k times until you obtain $\rho= \frac{r}{2^{k}}\ , \ .75< \rho < 1.5$ ...
b) set $x= \rho-1$ and compute with (1) $\ln (1+x)$ ...
c) compute $\ln r = \ln (1+x) + k\ \ln 2$...
Kind regards
$\chi$ $\sigma$
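A direct Python transcription of this procedure (my own sketch; the term count, and the extra doubling step for r < 0.75 that the post leaves implicit, are my additions):

```python
def ln1p_series(x, terms=40):
    """Formula (1): ln(1+x) = x - x^2/2 + x^3/3 - ..."""
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

# ln 2 itself, via ln 2 = -ln(1/2) = -ln(1 + (-1/2)); x = -1/2 converges fast.
LN2 = -ln1p_series(-0.5)

def natural_log(r):
    """ln(r) for r > 0: halve or double r into [0.75, 1.5), then use the series."""
    k = 0
    while r >= 1.5:
        r, k = r / 2.0, k + 1
    while r < 0.75:
        r, k = r * 2.0, k - 1
    return ln1p_series(r - 1.0) + k * LN2

print(natural_log(10.0))   # ~2.302585...
print(natural_log(0.1))    # ~-2.302585...
```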
|
# Suppose S and T are mutually exclusive events. Find P(S or T) if P(S) = 1/3 and P(T) = 5/12?
## 3/4 7/18 5/36
Apr 20, 2017
In an OR situation you may ADD
#### Explanation:
$P \left(S \text{ or } T\right) = P \left(S\right) + P \left(T\right) = \frac{1}{3} + \frac{5}{12} = \frac{4}{12} + \frac{5}{12} = \frac{9}{12} = \frac{3}{4}$
Apr 24, 2017
$P \left(S \cup T\right) = \frac{3}{4}$
#### Explanation:
We use the following fundamental definition from Probability and Set Theory:
$P \left(A \cup B\right) = P \left(A\right) + P \left(B\right) - P \left(A \cap B\right)$
And if we apply this to our problem we have:
$P \left(S \cup T\right) = P \left(S\right) + P \left(T\right) - P \left(S \cap T\right)$
$\text{ } = \frac{1}{3} + \frac{5}{12} - P \left(S \cap T\right)$
$\text{ } = \frac{3}{4} - P \left(S \cap T\right)$
Now we are told that $S$ and $T$ are mutually exclusive, and so:
$P \left(S \cap T\right) = 0$
Hence,
$P \left(S \cup T\right) = \frac{3}{4}$
|
#1161
the opposition between them is illusory
#1162
chickeon posted:
the opposition between them is illusory
#1163
vimingok posted:
Liberalism provides the material basis for fascism (during capitalist crises) but the latter is historically a revolution against liberalism right? Or at least trying to subsume it under a non- or anti- liberal fascist order?
Adolf Hitler didn't invent social darwinism, or white supremacy, or eugenics, or concentration camps, or the idea that human excellence was measured in wealth accumulation.
#1164
vimingok posted:
My half-baked theory is that much of what is called fascism today is a decadent kind of liberalism and not larval fascism.
clipuuuuuuu
you could think of fascism as something that emerges from within liberalism as an attempt to resolve an impasse that it finds itself in when faced with the rapid pace of capitalist development, its tendency towards crisis... basically you can't pick and choose this shit like a video game where it just appears in a cloud of vapor suddenly on its own, divorced from material conditions.
also on ICE guards as the material manifestation of present-day fascism:
#1165
i guess we could talk about the idea that the latest coronavirus was created in the U.S. and unleashed on China as economic warfare, but here’s an alternative for the epic win: Foreign Policy floating the idea of suspending election campaigns this year and having people vote by pushing their ballots through the gaps between the boards over their windows.
#1166
Voting is now done by using the same app that was used in Iowa
#1167
@trackfractri, thanks for responding and the links. I only read that Sakai essay and mostly agree with it. The Woodley book sounds too academic for me.
I think my definition of fascism seems pretty close to Sakai's - emerging out of defunct liberalism, attempting to enact revolutionary changes in them, subordinating everyone else under the PB which now comprises a romanticised new bourgeoisie+warrior caste. But if we accept that definition Trump and his fans still represent liberalism, albeit a version in the final stage/s of decline. You might say fascism is those final stages and maybe that is a more useful way of looking at it, but I'm not convinced.
Domenico Losurdo makes an extensive case that liberalism by its very nature relies on the demarcation of sacred vs profane spaces, hence as genocidal and oppressive if not more than fascism, in practice. But he also says that precisely because of that reliance liberals also need to adjust those spaces to changing conditions, and maintain a certain balance between them. So bearing that in mind Trump et al seem to represent a contraction of the liberal sacred spaces, ie make them more white supremacist like in the 60s or whatever. The overall effect is of delaying crisis not reacting to it (obviously, since it hasn't happened yet for white PB).
Again the question I'm thinking about is what fascism will look like if/when it does happen. Predicting an extreme version of current liberalism isn't very helpful because liberalism taken to extremes is a fantasy akin to neoreaction and the like, because current liberal ideology is itself a mix of fantastic and pragmatic excuses for stable bourgeois rule. Sorry if all this sounds hackneyed and disconnected. Many thoughts but precious little time to organise them!
#1168
vimingok posted:
Again the question I'm thinking about is what fascism will look like if/when it does happen
#1169
#1170
in all the discussions about what fascism looks like and will look like the only thing we ever managed to agree on is that it will look like tacky shit
#1171
vimingok posted:
@trackfractri, thanks for responding and the links. I only read that Sakai essay and mostly agree with it. The Woodley book sounds too academic for me.
I think my definition of fascism seems pretty close to Sakai's - emerging out of defunct liberalism, attempting to enact revolutionary changes in them, subordinating everyone else under the PB which now comprises a romanticised new bourgeoisie+warrior caste. But if we accept that definition Trump and his fans still represent liberalism, albeit a version in the final stage/s of decline. You might say fascism is those final stages and maybe that is a more useful way of looking at it, but I'm not convinced.
Domenico Losurdo makes an extensive case that liberalism by its very nature relies on the demarcation of sacred vs profane spaces, hence as genocidal and oppressive if not more than fascism, in practice. But he also says that precisely because of that reliance liberals also need to adjust those spaces to changing conditions, and maintain a certain balance between them. So bearing that in mind Trump et al seem to represent a contraction of the liberal sacred spaces, ie make them more white supremacist like in the 60s or whatever. The overall effect is of delaying crisis not reacting to it (obviously, since it hasn't happened yet for white PB).
Again the question I'm thinking about is what fascism will look like if/when it does happen. Predicting an extreme version of current liberalism isn't very helpful because liberalism taken to extremes is a fantasy akin to neoreaction and the like, because current liberal ideology is itself a mix of fantastic and pragmatic excuses for stable bourgeois rule. Sorry if all this sounds hackneyed and disconnected. Many thoughts but precious little time to organise them!
i think trump and his administration, which is essentially a continuation of the same old american liberal settler institutions with the mask off, needs to be distinguished from many of the rank and file 'alt right' types who really do want to violently overthrow liberalism. like are you gonna claim that the dude that goes and shoots up a mosque to incite a race war is a liberal? these people are quite happy to support or be members of racist institutions within liberalism, like the police or the tsa, while also wanting to install actual fascism if they had the capability to do so.
#1172
Given the readiness with which people admit that they aren't certain what fascism 'would' look like, what is the origin of the reticence toward recognizing the Amerikkkan Empire as fascist? It used to be taken for granted, before things were even this bad, and often still is discursively. Extremely powerful ideological self-concealment or disavowal seems to be the main distinguishing factor at this point. This is it, this is what it looks like. The disguise isn't even very good if you look long enough, it really only works for the people wearing it.
#1173
chickeon posted:
Given the readiness with which people admit that they aren't certain what fascism 'would' look like, what is the origin of the reticence toward recognizing the Amerikkkan Empire as fascist?
Fear.
#1174
I think Griffin's definition of fascism as palingenetic ultranationalism is as good a place to start as any, and it's hard to argue it in a country where the president campaigns for reelection by lecturing a bunch of people in Make _______ Great Again ballcaps about how today's dishwashers just aren't as dishwasher-y as the ones your mom and dad had, you know, and I haven't gotten around to making apple pies taste more like they used to but I sure will next time! while the so-called opposition, the preferred collaborator for the state's assassins abroad, sinks everything into a strategy that argues the other guy only won office because of nefarious racial enemies brainwashing the citizenry from remote foreign strongholds, taking advantage of supporters' genuine fears of new social and technological developments while filling party coffers with quid-pro-quo "donations" from the slice of the bourgeoisie that manages them.
#1175
No opposition will be entertained to the doctrine that people get sick because of the Red Chinese and the way to fix that is to pummel nearby countries until they accept United $nakkke$ puppets as their leaders. Protest and the state's white-supremacist paramilitary will invade your home in body armor and march you away at the wrong end of the country's prized export, its grotesquely overdesigned longarms. That's liberal democracy and we should be very careful not to confuse it with fascism, which speaks a foreign language and is the same thing as Communism.
#1176
cars posted:
I think Griffin's definition of fascism as palingenetic ultranationalism is as good a place to start as any, and it's hard to argue it in a country where the president campaigns for reelection by lecturing a bunch of people in Make _______ Great Again ballcaps about how today's dishwashers just aren't as dishwasher-y as the ones your mom and dad had, you know, and I haven't gotten around to making apple pies taste more like they used to but I sure will next time! while the so-called opposition, the preferred collaborator for the state's assassins abroad, sinks everything into a strategy that argues the other guy only won office because of nefarious racial enemies brainwashing the citizenry from remote foreign strongholds, taking advantage of supporters' genuine fears of new social and technological developments while filling party coffers with quid-pro-quo "donations" from the slice of the bourgeoisie that manages them.
Griffin is where I started. It's funny you mention that... because I agree with it but I don't know if Griffin would accept his own conclusions here, since he comes from the liberal side of academic "fascist studies" which tends to stick fascism in its own special box, distinct from liberalism. So there's a lot of explanation of how fascists see the world and what is motivating them, as they understand it, but I've just about read enough psychology at this point at the expense of, like, a better understanding of the class character of fascist rule. And it makes me wonder how useful the "palingenetic ultranationalism" thing is when applied to, say, Vichy France where loyalty to the regime implied loyalty to the Third Reich. Now the true believers might really have thought France was standing on its own two feet again under Vichy, but objectively the country was just being looted for everything that was bolted down. Or neo-fascist groups in Italy in the 1970s who subjectively might have thought their actions were about all kinds of things but were objectively acting to solidify a stable, center-right, liberal political order -- and being used for that specific purpose.
I liked what I've read of Griffin's stuff on modern terrorism where he goes into the psychology of it, which I think is very eerie. Individual terrorists / mass shooter types engage in what he calls "heroic doubling;" i.e. it's like they see themselves as like superheroes, or Neo from The Matrix. And they're so bummed out and alienated, adopting this persona transforms them into their own personal Neo, so they feel really calm and focused on their little quest -- which is really just building up to a pathetic and gruesome killing spree at some point in the future. As is often the case after one of these shootings, the news will publish photos the perp took himself posing with guns and glowering at the camera. It looks ridiculous, but for them they're acting like Robert De Niro in Taxi Driver: "you talkin' to me?"
The hardcore Nazi groups seem to try to focus this psychology in their way, where it's about dressing up in tights and marching down the street with a lot of flash and pomp. Like when Batman goes flying past members of the public with his grappling hook, people turn and look: "mommy, it's Batman!" So provoking reactions is a form of power for the sadsacks who join these groups. But what is interesting to me now is questioning how that relates to the production and reproduction of identity under capitalism -- or identity as a branded, recognizable commodity.
Edited by trakfactri ()
#1177
Vichy France used proto-Europeanist ideas to justify its subordination to Germany btw, perhaps not too dissimilar to the EU. There's a movie with a bunch of Vichy propaganda called Eye of Vichy if you want to check that out
#1178
Can’t stress enough how much a whole bunch of aristos, booj & petit-booj in pre-WWII Europe loved Hitler and the Nazis.
They mostly didn’t look like what a lot of people today think of, even people on “the left”, when they imagine “fascists”.
The Nazis just didn’t give a fuck about rewarding their existing diehard boosters outside of Austria & Germany where and when that might have been feasible, by which I mean not their base of support, but the sort of people who emulated the NSDAP in other countries and thoroughly expected to be installed in their country’s seats of power when the Nazis swarmed in.
The Nazis instead tended to set up puppet governments made up of crusty monarchists, stuffy industrialists, weak-kneed opportunists, etc., because
1) the Nazi-cargo-cult types in other countries had nationalist platforms that involved things like taking chunks out of neighboring countries to make “Greater” versions of their own, that is, they were all at odds from the very beginning with their counterparts nearby and with Berlin’s plans for managing Europe;
2) The Nazi leadership looked at their imitators in other countries and saw a lesser version of the SA, a bunch of unstable and dangerous losers who needed to be cleared out as soon as they outlived their limited usefulness.
Even in countries where the Nazis integrated these garish Nazi-oid movements into the puppet government, they were usually junior partners to other domestic elements, the Nazis removed their respective mini-Hitlers and installed more pliable types in their place, etc.
So who really acted as Hitler’s practical, powerful and lasting support during the war, outside Germany & Austria and the Germans living nearby within territory the Germans claimed? Who were the real fascist elite among the conquered?
It was all the people I mentioned all the way at the top.
They were not waving crazy new flags and howling at the top of their lungs about how Hitler was the greatest, not until the Nazis told them it was time. They were quiet supporters, “normal” Red-Scare-mongers, “respectable” antisemites, people in “liberal” governments and their constituencies, industrialists and landlords with wide-ranging influence over the lives and livelihoods of others, and all the little twerps beneath them sharing their values and acting in imitation of them.
I agree it would be real nice, very reassuring, if history would allow the dividing line a lot of people desperately want to find, where “liberal democracy” stops and “fascism” begins. We've learned that history does not afford us that convenience, in ways not even the old Bolshevik luminaries could have possibly known in the years before the war.
#1179
cars posted:
The Nazi leadership looked at their imitators in other countries and saw a lesser version of the SA, a bunch of unstable and dangerous losers who needed to be cleared out as soon as they outlived their limited usefulness.
Good posts. Yeah awhile ago I was reading about how the Nazis came into Denmark and cleaned out the local imitators and installed their own people. One young woman who was in a family of these Danish lesser-SA types got out of there bounced around Klanada and the U.\$. in the post-war years dealing heroin and promoting spooky magic sparkle Nazi / Renaissance Faire magic stuff as a white supremacist "folkmother" for convicts in the prison system, and then died.
#1180
leading online pseudo-marxist nutso theory: the virus is just like the common cold and all of this is a hoax to implant tracking mechanisms for organ theft by the wealthy
#1181
this is real ripe conspiracy theory time. I've read and thought so many in the past few days I don't even know which ones to post
#1182
i'll give them points for consistency, never pay attention to communists, always align with far-right blogs about lizard people
#1183
drwhat posted:
this is real ripe conspiracy theory time. I've read and thought so many in the past few days I don't even know which ones to post
post 'em up. conspiracy blitz, conspiracy gauntlet, conspiracy last man standing
#1184
If I may make a suggestion - we should perhaps keep this thread for actual plausible kkkonspiracy posting, and relegate the stuff we want to laugh at to the burning trash heap of a subforum named after me, an Flying horse in saudi arabia.
#1185
so going back to plausible kkkonspiracy posting, one of the "former" nazis being shopped around by the zionist-imperialist clarion project is still running a neo-nazi organization. it couldn't be!!! but it be as revealed in the relevant footnotes in the court documents
#1186
Plausible KKKonspiracy: Qanon is a psyop. The poor racist souls firmly caught within the jaws of Qanon could be very easily mobilized into a nationwide Fascist militia. I'm sure a large majority of them are already armed, all it takes is a single Q post. And everyone here knows the feds wouldn't do a damn thing to stop them either, even if it developed independently of them. I believe """Q""" and co. would not do this randomly, though. Only if it became a possibility that Trump could be deposed or in times of dire national crisis, such as the situation we currently find ourselves in.
#1187
that seems unlikely to me, because after Q disappeared, which was a long time ago, Qanon became this senile suckerfish school of conspiracy theory, they were a bunch of aging brains with no direction who latched on to the absolute weirdest dumbest most irrelevant stuff, one thing after another with less and less relevance each time. Like they all decided to become obsessed with JFK Jr.'s death for a while a couple years back and march around waving signs with his picture on them, and most people do not even remember who the fuck that is. Mobilizing them for some purpose related to the real world later on doesn't seem likely, either, because even if they hadn't drifted into the ether, their original mission statement was "Wait around until someone else buys us ponies for Christmas."
#1188
Q was probably your run-of-the-mill online hobbyist, and as soon as the date he called for the Trump coup against Trump's own government came and went, he stopped posting. Other people claimed to be him later, unconvincingly, or his successor or accomplice or whatever. But my guess is the guy either had a panic attack over his confabulation's make-or-break point, like those people who claim terminal illness online when they've given themselves one too many extensions on their fake prognosis, or he just figured it was a good time to stop before someone handed out his personal information. I doubt he knew the whole thing would take on a life of its own in spite of doomsday coming and going, and seeing the machine keep chugging away without him was probably satisfying enough.
There was this proto-Q who showed up before Q, during election season, in the exact same place online, and someone brought him up here and we all made fun of him. He claimed he was deep-cover FBI or something, and that Trump was going to release information that would send Hillary Clinton and "the entire government" to jail for treason that summer. He also claimed that the U.S. culture industry was a secret federal program to promote mixed-race babies. People ate it up for whatever reason, demanding more and more of this guy and giving him tons of attention, and Q is probably either that guy or, more likely, someone who saw what that guy managed to do just riffing off the top of his head and thought, Hell,
#1189
i'm inclined to agree with cars, although we do know that FBI spooks were actively disseminating propaganda and disinfo around the same places. the thing is though, if Q was a psyop it wasn't a grandiose scheme to build a ravening paramilitary horde, it was a basic propaganda project from a couple desk jockeys at an alphabet soup agency saying "we have this problem where people reading conspiracy stuff from these sites distrust the govt. what if we could leverage that distrust... to foster trust."
so they got a self perpetuating little blob of people to believe that no matter how bad things are now, the messiah will fix America next Tuesday as long as they keep the faith, and that was it. good job project complete, nothing more complicated to it. any greater ambitions would have too many moving parts to be feasible.
#1190
shriekingviolet posted:
i'm inclined to agree with cars, although we do know that FBI spooks were actively disseminating propaganda and disinfo around the same places.
LOL yeah that FBI guy who went on 4chan and tried to convince them they should hate Russia because it fixed the election for Trump. Beefy cyber strats
#1191
whoever started Qanon it's almost certainly just run by whoever the big guns are in selling the t-shirts & bumper stickers at this point imo
#1192
#1193
now that's ^ the content i come here for
#1194
Yeah Cars you're probably right. Just when there was that 'digital influencers' summit or whatever at the White House and like half of em' were major Q folks it really seemed like Trump knew what he was doing by inviting them.
#1195
Pretty sure I've posted this before but it's still insane to think about https://www.newsweek.com/2018/01/19/boston-marathon-bomb-maker-loose-776742.html
"FBI officials also have yet to explain why bureau agents were in the Tsarnaevs' neighborhood, which is roughly a mile from MIT, the night Dzhokhar killed Collier."
"The FBI maintains to this day that the bombers were not known to the bureau before those photos were made public, despite the fact that federal agents interviewed Tamerlan and his family multiple times in 2011"
As Topsfield cops and state police continued their search of Morley's bedroom and a shed in the backyard, the FBI suddenly showed up, leading one trooper to say, "Who called the feebs?"
"Morley was arrested by Topsfield police that day, charged with two counts of assault and battery against his mother and her companion, and with making a bomb threat. And then, nothing. The FBI showed up at Topsfield police headquarters and seized much of the evidence taken from Morley's home after Hayward executed a search warrant. Morley was never formally arraigned in connection with the charges that Hayward swore out in a criminal complaint, and those charges were abruptly dropped without explanation by the Essex County district attorney."
#1196
that was definitely some fuck shit
#1197
More from the Clarion Project:
#1198
So what the fuck happened with the Mandalay Bay shooting/shooter? It's mind-blowing that nobody has a fucking clue why the largest and deadliest mass shooting in US history even happened in the first place.
#1199
i think we call all agree it was just a tragic accident, there is nothing more to it, case closed
#1200
we all make mistakes :(
|
On the derivative of a $G$-function whose argument is a power of the variable
Compositio Mathematica, Tome 17 (1965-1966), p. 286-290
@article{CM_1965-1966__17__286_0,
author = {Sundararajan, P. K.},
title = {On the derivative of a $G$-function whose argument is a power of the variable},
journal = {Compositio Mathematica},
publisher = {Kraus Reprint},
volume = {17},
year = {1965-1966},
pages = {286-290},
zbl = {0151.08001},
mrnumber = {206346},
language = {en},
url = {http://www.numdam.org/item/CM_1965-1966__17__286_0}
}
Sundararajan, P. K. On the derivative of a $G$-function whose argument is a power of the variable. Compositio Mathematica, Tome 17 (1965-1966) pp. 286-290. http://www.numdam.org/item/CM_1965-1966__17__286_0/
Bhise, V.M., [1] Proc. Nat. Acad. of Sc. (Ind.) 82,A 349-354 (1962). | MR 158103 | Zbl 0144.06904
Erdelyi, A., [2] Higher Trans. Functions, Vol. I (McGraw-Hill) (1953). | Zbl 0051.30303
|
CNN: It's McCain and Palin
mheslep
Gold Member
Palin’s Pipeline Is Years From Being a Reality
http://www.nytimes.com/2008/09/11/us/politics/11pipeline.html
Hmmmmm. So Palin claims she engineered the deal that jump-started a long-delayed gas pipeline project - but there is no pipeline project - well, only on paper, where it's been since before she took office. ...
Whatever its faults, there was no approved plan at all before Palin, now there is. Previously the pipeline was completely stalled, dead, as the legislature killed Murkowski's deal w/ the North Slope companies. Also, the Alaskan share under the dead Murkowski deal to the oil co's would have been 20X as much, $10B, per the NYT piece. http://dwb.adn.com/money/industries/oil/pipeline/story/8591458p-8484351c.html [Broken]
Last edited by a moderator:
Evo
Mentor
Seems like the Trans Canada deal is a bit shady, from Astro's link.
The proposal that TransCanada negotiated with the Murkowski administration was structured differently from the current one and had no provision for a $500 million state subsidy, said two people who reviewed it and who spoke on condition of anonymity because the proposal remains confidential.
Of the Palin aides familiar with TransCanada from those earlier negotiations, Ms. Rutherford had an unusually close connection. For 10 months in 2003, she was a partner in a consulting and lobbying firm whose clients included Foothills Pipe Lines Ltd., a subsidiary of TransCanada.
Ms. Rutherford said in an interview that after TransCanada submitted its pipeline proposal to the Palin administration, she and the governor never discussed whether her role on the team might be viewed as improper or give the appearance of a conflict of interest.
<snip>committed the state to paying the winning bidder up to \$500 million in matching money to offset costs of obtaining regulatory approvals and other expenses. Ms. Rutherford, whose team recommended the subsidy, <snip>
Ms. Rutherford, who said she had not lobbied for Foothills but had done research and analysis, stated that she was not one of the pipeline team members who recommended a developer to Ms. Palin. That was done by Mr. Irwin and Patrick S. Galvin, the commissioner of the Department of Revenue, she said.
“At the end of the day, I was not a decider,” said Ms. Rutherford, who acknowledged reading the proposals and discussing them with others on the team.
Mr. McAllister, the spokesman for Ms. Palin, said that Ms. Rutherford was not in a position to gain anything from her past association with TransCanada and that her role posed no conflict.
"Not a decider", in the business those are called "influencers", sometimes responsible for the decision although they don't actually sign contracts.
She's not in a position to have legal, direct benefit. Oh, then that means it's not possible that she profited in any way.
baywax
Gold Member
"Not a decider", in the business those are called "influencers", sometimes responsible for the decision although they don't actually sign contracts.
She's not in a position to have legal, direct benefit. Oh, then that means it's not possible that she profited in any way.
We've put legislation together where we will only accept IVAN's ALGAE OIL being transported by pipeline across Canadian land. Is there algae in Alaska?
Evo
Mentor
We've put legislation together where we will only accept IVAN's ALGAE OIL being transported by pipeline across Canadian land. Is there algae in Alaska?
Does lichen count?
BobG
Homework Helper
Will Palin be kicked off the ticket? (Er, withdraw for personal reasons...) She's being investigated for firing Alaska's Public Safety Director, because, it is said, he refused to fire her ex brother-in-law (a state trooper). It has come out that when she was mayor of her little fiefdom she insisted that each of the town's managers submit their resignations. The head librarian refused, but eventually relented. The police chief refused, so she fired him.
http://www.washingtonindependent.com/3767/palin-involved-in-ousting-scandals-from-the-start
How much more stuff needs to dribble out before Palin regretfully withdraws from the rigors of a national campaign to spend more time with her special-needs infant? Will she need to spend time with her pregnant daughter, who will certainly need some guidance and hand-holding if she is going to weather the heavy scrutiny she's been subjected to, and start a new life as a mother and wife?
McCain's choice of Palin has buried the issues that the GOP needs to define to differentiate McCain from Bush. Her constant presence in the national news (even over a holiday weekend dominated by a hurricane) does not seem like such a good thing for the McCain campaign. Is she on the way out?
Is Joe Biden on the way out? http://www.iht.com/articles/2008/09/11/america/biden.php
"Hillary Clinton is as qualified or more qualified than I am to be vice president of the United States of America," Biden said Wednesday in Nashua, New Hampshire. "Quite frankly it might have been a better pick than me."
Actually, the problem is that the 'debate' has become between the Democratic Presidential nominee and the Republican Vice Presidential nominee. In that, I guess you could say Biden hasn't held up his end of the boat.
Instead of attacking Palin, he's been busy trying to heal the crippled:
"Chuck, stand up, let the people see you," Biden shouted to State Senator Chuck Graham, before realizing, to his horror, that Graham uses a wheelchair. "Oh, God love ya," Biden said. "What am I talking about?"
Joe really better step up his game a little.
Last edited by a moderator:
Evo
Mentor
Love it! He looks like a used car salesman but his gaffes make him seem like a real person. I like him now.
His saying Hillary would have been as good or a better choice for VP, IMO, will endear more women to him. The acknowledgement will be well received by women. (I am a woman btw, so I should know).
He will need to at least get his facts straight for the VP debate though.
LowlyPion
Homework Helper
Is Joe Biden on the way out?
Actually, the problem is that the 'debate' has become between the Democratic Presidential nominee and the Republican Vice Presidential nominee. In that, I guess you could say Biden hasn't held up his end of the boat.
Instead of attacking Palin, he's been busy trying to heal the crippled:
Joe really better step up his game a little.
I can agree with that. To that extent I think the Republicans count success every day they can keep a squabble going between Palin's right wing nut spinmeisters and Obama. Though I would say that lately they have come out on the short end of the stick trying their smears.
Biden would do well to start a fight with McCain - call him to task for engaging in politics of mudslinging, for reneging on his earlier vows to wage a clean campaign on the issues.
Like where is McCain on the issues? I'd say his smarmy news bites and remembrances of imprisonments past are getting a bit worn at the edges.
chemisttree
Homework Helper
Gold Member
Is Joe Biden on the way out? http://www.iht.com/articles/2008/09/11/america/biden.php
Actually, the problem is that the 'debate' has become between the Democratic Presidential nominee and the Republican Vice Presidential nominee. In that, I guess you could say Biden hasn't held up his end of the boat.
Instead of attacking Palin, he's been busy trying to heal the crippled:
Joe really better step up his game a little.
Obama knew what he was getting when he picked Biden as his running mate: A veteran of six terms in the Senate, chairman of the Foreign Relations Committee and former chairman of the Judiciary Committee, an Irish Catholic with working-class roots, a guy who had twice been tested in the arena of presidential politics.
And a human verbal wrecking crew. This is the fellow who nearly derailed his nascent presidential campaign last year by calling Obama bright and clean and articulate and who noted that you needed a slight Indian accent to walk into a Dunkin' Donuts or 7-11 in Delaware.
The guy who, reading his vice-presidential acceptance speech from a TelePrompter, bungled McCain's name, calling him "George" ("Freudian slip, folks, Freudian slip," he explained).
The guy who, on the day Obama announced him as his running mate, referred to his party's presidential nominee as "Barack America" and noted that his own wife, Jill, a college professor, was "drop-dead gorgeous" but who, problematically, possessed a doctorate.
The guy who has said he is running for president (not vice president) and who confused army brigades with battalions. Who referred to his Republican vice-presidential opponent as the lieutenant governor of Alaska.
It's going to be fun watching the verbal wrecking crew in action! (or was that inaction?)
BobG
Homework Helper
I can agree with that. To that extent I think the Republicans count success every day they can keep a squabble going between Palin's right wing nut spinmeisters and Obama. Though I would say that lately they have come out on the short end of the stick trying their smears.
Biden would do well to start a fight with McCain - call him to task for engaging in politics of mudslinging, for reneging on his earlier vows to wage a clean campaign on the issues.
Like where is McCain on the issues? I'd say his smarmy news bites and remembrances of imprisonments past are getting a bit worn at the edges.
Yes, this is what Biden should be doing. Biden is very entertaining to listen to. He's a mix of serious forcefulness and wit. He may be prone to talking a bit too much, but so is McCain. Arguing with McCain is the job Biden was hired for.
It should be Clinton making the attacks on Palin. Her attacks have to avoid the 'working mom' and abortion conflicts, though. The main point is to pit the white female voters' old hero against the new hero. If Palin is lacking in experience or substance, then Clinton is the one who can point it out without raising the gender issue.
All in all, I have to say I'm disappointed how this has turned out. I thought Palin would negate Obama's aura and bring the campaign back down to a level one based on the issues. Instead, the issues have been pushed to the background as trivial.
We seemed primed for one of the stupidest campaign fights ever. Putting lipstick on pigs is now worthy of debate? Sheep, maybe, but lipstick on pigs is just a stupid issue.
LowlyPion
Homework Helper
... Instead, the issues have been pushed to the background as trivial.
We seemed primed for one of the stupidest campaign fights ever. Putting lipstick on pigs is now worthy of debate? Sheep, maybe, but lipstick on pigs is just a stupid issue.
And this trophy can clearly be laid at the feet of McCain, and his total sellout to the Right Wing - the same Wing that did the very thing to him while forwarding their hand-operated Bush Puppet ... er I mean inaction figure ... back in 2000.
He knows how they operate. And he has embraced their strategies. He must know in his heart there is no way for him to ever win a policy debate.
BobG
Homework Helper
And this trophy can clearly be laid at the feet of McCain, and his total sellout to the Right Wing - the same Wing that did the very thing to him while forwarding their hand-operated Bush Puppet ... er I mean inaction figure ... back in 2000.
He knows how they operate. And he has embraced their strategies. He must know in his heart there is no way for him to ever win a policy debate.
Not completely. I have no idea whether Obama saw any connection between his comment and Palin ahead of time, but the crowd listening to Obama definitely saw a connection. It was worth a responding comment, but I just can't believe it was a 'big story'. It was a stupid thing that should have dropped out of the picture almost immediately.
baywax
Gold Member
I run my snowmobile on lichen and permafrost.
Did anyone see Mr. Obama on Letterman?
Pretty darn good American you got there.
LowlyPion
Homework Helper
Not completely. I have no idea whether Obama saw any connection between his comment and Palin ahead of time, but the crowd listening to Obama definitely saw a connection. It was worth a responding comment, but I just can't believe it was a 'big story'. It was a stupid thing that should have dropped out of the picture almost immediately.
You may well be right that it was intentional or maybe even a subliminal nod to Palin's smarmy self characterization of herself as a pit bull. But whatever the motivation, were it intentional in any way, it was clearly a subtle jab, delivered within the context of contrasting McCain's voting consistently for Bush agenda bills. It is a common metaphor, used widely in the vernacular after all.
If there was any artifice, I'd suggest that Palin calling herself a kind of dog, in a widely broadcast speech, is the provocative act, with Republican attack Kamikazes, apparently at the ready forearmed, to blow away any references to female dogs and act hypocritically self-righteous.
I'd say on the whole the McCain/Palin handlers are the ones that came off less than Presidential in how it was handled regardless of Obama's intent.
mheslep
Gold Member
...McCain's voting consistently for Bush agenda bills. ...
Simplistic. By the same measure Obama voted with the President 40% of the time, and Democratic lawmakers on average voted with the President more than half the time.
Last edited by a moderator:
Astronuc
Staff Emeritus
Palin leaves open option of war with Russia
http://www.npr.org/templates/story/story.php?storyId=94534529 [Broken]
Alaska Gov. Sarah Palin left open the option Thursday of waging war with Russia if it were to invade neighboring Georgia and the former Soviet republic were a NATO ally. "We will not repeat a Cold War," Palin said in her first television interview since becoming Republican John McCain's vice presidential running mate two weeks ago.
Well considering Russia already did invade Georgia and has slowly been withdrawing. And yes - those tensions from the Cold War have returned if only mildly.
This woman needs to get a grip on reality.
Last edited by a moderator:
LowlyPion
Homework Helper
Simplistic. By the same measure Obama voted with the President 40% of the time, and Democratic lawmakers on average voted with the President more than half the time.
|
# why are WAW and WAR hazards not possible in mips architecture
I have read about data hazards and then came across the claim that the MIPS architecture doesn't allow WAR and WAW hazards. Can someone please help me understand it? The reason is not given in the book. The MIPS pipeline is divided into:
1. IF (instruction fetch)
2. ID (decode the instruction)
3. EX (execute instruction)
4. MEM (write or read from the memory)
5. WB (write back to the register file)
For example, in the case of WAW hazards:
I1: |IF|ID|EX|MEM|WB |
I2 |IF|ID|EX |MEM|WB|
The above is the expected way in which the instructions execute without data hazards,
but here the second instruction has to wait until the WB phase of instruction I1 to get the value of R1, hence it will stall until the value of R1 is available in the register file, i.e. until the WB phase of I1. My doubt is this: if I2 takes fewer clock cycles than I1 to complete, can I2 access the register file by going to the WB phase directly (in case it has nothing to do in the MEM phase), and will this give rise to a hazard?
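A small sketch (not from the original question) of why the classic in-order five-stage pipeline cannot produce WAW or WAR hazards: instructions enter the stages in issue order, all register writes happen in WB and all register reads in ID, so a later instruction's write can never come before an earlier instruction's write or read. The one-instruction-per-cycle, no-stall timing below is an assumption for illustration:

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def stage_cycle(instr_index, stage):
    """Cycle in which instruction `instr_index` (0-based, issued in order,
    one issue per cycle, no stalls) occupies `stage`."""
    return instr_index + STAGES.index(stage)

for i in range(3):
    print(f"I{i + 1}: " + ", ".join(f"{s}@{stage_cycle(i, s)}" for s in STAGES))

# For any later instruction j > i:
#   WB of I_j = j + 4 > i + 4 = WB of I_i  -> writes stay in program order, no WAW
#   WB of I_j = j + 4 > i + 1 = ID of I_i  -> a write never precedes an earlier read, no WAR
```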
• Where have you read that? What are your thoughts about the matter? – Raphael Jan 4 '18 at 12:23
• I find your post very hard to understand. Please take some care splitting your narrative into proper English sentences. Also, use Markdown formatting for lists and code. (You're still missing a reference: which book?) – Raphael Jan 4 '18 at 17:50
• I have read the concept from my class notes but saw this in a video course youtube.com/watch?v=9mpOG9YtSLc&t=1242s not i dont know which standard book that professor referred to – venkat Jan 4 '18 at 18:05
• please see this video at 17:58 youtube.com/watch?v=9mpOG9YtSLc&t=1242s – venkat Jan 4 '18 at 18:06
• Please include those references in the questions, comments may vanish. :) (If your prof quotes a book without saying why a) check the syllabus or similar, and if you find nothing b) ask them to please credit their sources. – Raphael Jan 4 '18 at 19:22
|
Zbl 1123.34044
Kobayashi, Yoshikazu; Matsumoto, Toshitaka; Tanaka, Naoki
Semigroups of locally Lipschitz operators associated with semilinear evolution equations.
(English)
[J] J. Math. Anal. Appl. 330, No. 2, 1042-1067 (2007). ISSN 0022-247X
Let $A$ be the generator of a $C_0$ semigroup on a Banach space $X$ and $B$ a nonlinear operator from a subset $D$ of $X$ into $X$. This paper concerns the semigroup of locally Lipschitz operators on $D$ with respect to a given vector-valued functional $\varphi$, which presents a mild solution to the Cauchy problem for the semilinear evolution equation $$u'(t)= (A+B)u(t)\quad (t\geq 0),\quad u(0)=u_0\quad (u_0\in D).$$ Under some assumptions, the authors obtain a characterization of such a semigroup in terms of a sub-tangential condition, a growth condition and a semilinear stability condition indicated by a family of metric-like functionals on $X\times X$. An application to the complex Ginzburg-Landau equation is given.
[Jin Liang (Hefei)]
MSC 2000:
*34G20 Nonlinear ODE in abstract spaces
47H20 Semigroups of nonlinear operators
Keywords: Semigroup of locally Lipschitz operators; semilinear evolution equation; semilinear stability condition; sub-tangential condition; growth condition
|
# inverse using modulo congruence
• October 25th 2006, 06:28 AM
inverse using modulo congruence
Hi ,
Please can you help me with this problem .
I need to find the inverse of 2 modulo 11 using gcd(11,2) and modulo congruence.
I know I can start like this:
gcd(11,2)
11 = 2*5 + 1
2 = 1*2 + 0
then I used modulo congruence:
1=11 - 2*5 (mod 11)
....
I know the answer is 6 mod 11. because I used another method.
I want to know how to find it using modulo congruence.
Please can you help me ?
B
• October 25th 2006, 08:33 AM
Soroban
I don't know if this what you want, but . . .
Quote:
I need to find the inverse of 2 (mod 11)
We want to find $x$ so that: . $2x\:=\:1 \pmod{11}$
Then: . $2x - 1 \:= \:11k$ . . . for some integer $k.$
And we have: . $x \:=\:\frac{11k+1}{2}$
The 'first' value of $k$ which produces an integral $x$ is: . $k = 1$
. . which gives us: . $x = 6$
Therefore, $6$ is the multiplicative inverse of $2 \pmod{11}.$
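A small sketch (not part of the thread) of the route the original poster started: running the Euclidean algorithm and back-substituting to write the gcd as a combination of 11 and 2. The function names are just illustrative:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    g, x, _ = extended_gcd(a, n)
    if g != 1:
        raise ValueError(f"{a} has no inverse modulo {n}")
    return x % n

print(mod_inverse(2, 11))  # 6, since 2*6 = 12 = 1 (mod 11)
```

The back-substitution reproduces the original poster's line $1 = 11 - 2\cdot 5$, i.e. $2\cdot(-5) \equiv 1 \pmod{11}$, and $-5 \equiv 6 \pmod{11}$.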
• October 25th 2006, 08:54 AM
topsquark
Just a quick note.
The reason the inverse HAD to exist in your case is that 11 is a prime. If you were, for instance, using mod 10 (a composite number), a number need not have an inverse; it has one exactly when it is coprime to the modulus. You can prove these statements using Soroban's method.
-Dan
• October 25th 2006, 11:13 AM
|
# What's New in HIPE 5.0
## Core system
### Plotting
* Title and label can accept LaTeX commands \textrm \textit \textbf \mathrm \mathit \mathbf. For example, V$_{\textrm{LSR}}$
### Numeric routines
#### Statistics functions
##### New functions
• GeoMean which calculates the geometric mean of a array of numeric data types with 1 to 5 dimensions.
• Mode which returns the mode(s), or the most common element(s), of an array of numeric data with 1 to 5 dimensions.
• Covariance which yields the covariance between two random variables/vectors x and y with finite second moments. If one vector is longer than the other, only the values up to the length of the shorter vector will be taken into account.
• CovarianceMatrix which returns the covariance matrix of the input M x N matrix. The result is a N x N matrix with each i, j value equal to the covariance of the ith and jth columns of the original matrix.
### Images
#### Display
• Images can be opened as RgbImage directly from the Navigator in HIPE.
• Regular image files (jpg, gif, png, ...) are shown as a preview in the outline when clicked on in the Navigator view of HIPE.
• 2 images can be compared by setting the opacity of the image. This can be done using the setOpacity(float) method or using a slider in the Image Display.
#### Analysis
• Added the methods containsNorthCelestialPole and containsSouthCelestialPole on the Wcs.
• Regridding an image on the grid of another image using RegridTask
• Update of URM documentation
• Cropping an image by drawing a rectangle on it
• Correction of the calculation of the dimensions of a mosaic
## SPIRE
### Common Pipeline
• Only minor changes in logs and documentation.
### Photometer Pipeline
• Pipeline scripts:
• Comment out optical and electrical crosstalk corrections
• Removed plotting blocks
|
## Proof of the Riemannian Penrose inequality using the positive mass theorem. (English) Zbl 1039.53034
An asymptotically flat 3-manifold is a Riemannian manifold $$(M^3, g)$$ which, outside a compact set, is a disjoint union of one or more regions (called ends) diffeomorphic to $$({\mathbb R}^3\setminus B_1(0), \delta)$$, where the metric $$g$$ in each of the $${\mathbb R}^3$$ coordinate charts approaches the standard metric $$\delta$$ on $${\mathbb R}^3$$ at infinity. The positive mass theorem and the Penrose conjecture are both statements which refer to a particular chosen end of $$(M^3, g)$$. The total mass of $$(M^3, g)$$ is a parameter related to how fast this chosen end of $$(M^3, g)$$ becomes flat at infinity. The main result of the paper is the proof of the following geometric statement – the Riemannian Penrose conjecture: Let $$(M^3, g)$$ be a complete, smooth, asymptotically flat 3-manifold with nonnegative scalar curvature and total mass $$m$$ whose outermost minimal spheres have total surface area $$A$$. Then $$m\geq\sqrt{\frac{A}{16\pi}}$$ with equality if and only if $$(M^3, g)$$ is isometric to the Schwarzschild metric $$({\mathbb R}^3\setminus\{0\}, s)$$ of mass $$m$$ outside their respective horizons.
### MSC:
53C21 Methods of global Riemannian geometry, including PDE methods; curvature restrictions
53C44 Geometric evolution equations (mean curvature flow, Ricci flow, etc.) (MSC2010)
53Z05 Applications of differential geometry to physics
|
# Mean curvature is the divergence of the normal
As a definition, I was told that for a surface in 3D,
$2H = -\nabla \cdot \nu$
where $H$ is the mean curvature and $\nu$ is the normal unit vector. In some results that I am studying, the factor 2 always disappears... Is this normal? Can we ignore the factor 2 and consider the definition "up to a constant"?
$$H = -\nabla \cdot \nu, \ \ \ H =-\frac{1}{2} \nabla \cdot \nu$$
are used in the literature. The former is more convenient (don't need to remind ourselves that there is a two) while the latter has the advantage that $H=1$ when the surface is the unit sphere. Just pick one and be careful.
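A quick check of the factor of two (not from the original answer), for the unit sphere with the outward normal extended off the surface as $\nu = x/|x|$:
$$\nabla \cdot \nu = \nabla \cdot \frac{x}{|x|} = \frac{3}{|x|} - \frac{x \cdot x}{|x|^{3}} = \frac{2}{|x|} = 2 \quad \text{on } |x| = 1,$$
so the convention $H = -\tfrac{1}{2}\nabla\cdot\nu$ gives $|H| = 1$ for the unit sphere, while $H = -\nabla\cdot\nu$ gives $|H| = 2$.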
|
# Batch Files Check Date
## Recommended Posts
Hi all, Can anyone help me.. I need to write up a batch file that will check the date modified of the file in G drive and if it is the same at the one at U drive then it will do nothing. But if it is different than it will copy the G drive file to U drive. Help Needed.. Thanks a million..
##### Share on other sites
Assuming windows (does "batch file" apply to other OS?), you can use:
xcopy source destination /D
This should copy the source to the destination, only if source is more recent than destination. Type:
help xcopy
at the dos prompt to see a list of options and their explanations.
##### Share on other sites
xcopy /d <source> <destination>
##### Share on other sites
i tried both the codes u guys gave. but it keep giving me insufficient parameters.
##### Share on other sites
I think you need to specify the parameters after the source and dest names. Also, you want to use quotes around your paths if they have spaces. So try:
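xcopy "G:\your source folder\your file" "U:\your destination folder\your file" /D
(the paths above are placeholders, not the poster's actual paths; put your own source and destination in quotes, with the /D switch at the end)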
##### Share on other sites
Thank you so much.
it works now. my mistake for not adding the ""
i have another question. if my file in G drive is for example FILED and FILEM. but they actually contain the same information just that during month end they will provide me with FILEM. whereas daily they will provide me with FILED. is there a way for me to copy the files and name in under a generic name under U drive.
##### Share on other sites
You can copy a whole directory to another directory... not sure how you would selectively sometimes copy FILED and other times FILEM, without just copying the whole directory (getting both files effectively)... at least not without getting fancier in your batch file, or switching to python or C# to do the job.
##### Share on other sites
hmmm i'll see what i can do..
thanks =D
##### Share on other sites
I think this would do the trick - destination will contain the newer of fileD.xls and fileM.xls, renamed to file.xls:
xcopy "G:\Documents\fileD.xls" "U:\Upload\Daily Update\file.xls" /Dxcopy "G:\Documents\fileM.xls" "U:\Upload\Daily Update\file.xls" /D
|
# Semiconductors/What is a Semiconductor
Semiconductors are materials that have properties in between normal conductors (materials that allow electric current to pass, e.g. aluminium) and insulators (which block electric current, e.g. sulphur).
Semiconductors fall into two broad categories. First, there are intrinsic semiconductors. These are composed of only one kind of material. Silicon and germanium are two examples. They are also called "undoped semiconductors" or "i-type semiconductors".
Extrinsic semiconductors are made of intrinsic semiconductors that have had other substances added to them to alter their properties.
## Intrinsic SemiconductorsEdit
Every atom consists of a nucleus surrounded by a number of electrons. Only the electrons are involved in electronic processes. The electrons can exist only in certain electron shells around the atom. There are many shells in each atom.
It requires energy to get an electron from a shell close to the nucleus to one further away, and if an atom's electrons are in a position which is not the position with least energy (i.e. they are in a higher (further from nucleus) shell and there is space in a lower shell), energy is given up so the electrons "fall" into the inner shells. Thereby, the shells closest to the nucleus are filled first, and then the next closest and so on. It costs an electron more energy to occupy an outer shell than an inner one, so the inner shells are filled first.
We will consider our material to be arranged in a lattice, which is a regular arrangement, like a crystal. This helps to describe and explain the principles. In the lattice, each electron can "see" every atom in the entire lattice, and therefore is not just affected by the presence of electrons in its own atom, but by all the other atoms in the material. The huge number of atoms (usually greater than one thousand billion billion in a cube 1mm on a side) means that the number of electrons in each shell of each atom is not important - the shells "merge" into bands. All that matters is whether that band is filled, partially filled or empty. The size of bands and the gaps between them is determined by the nature of the material.
Figure 2: Electronic band structure of an insulator or semiconductor.
Figure 3: Comparison of the band gaps for a metal, a semiconductor and an insulator.
In a lattice, there will be a set of filled bands, with a full complement of electrons and unfilled bands which have no electrons (because they are in the lower-energy filled bands). The highest energy band with electrons in it is called the valence band, from the chemists' term "valence electrons" which are the electrons on the outermost shell of the atom which are responsible for chemical reactions. The conduction band is the band above the valence band. Electrons in the conduction band are free to move about in the lattice, and can therefore conduct current. The energy gap between the valence and conduction band is called the band gap.
Every material has associated with it a Fermi energy. Imagine the bands "filling up" from the bottom up, like water poured into a container. The continuous nature of the filling arises from the fact that there is such a large number of electrons that they are essentially infinite in number. This behavior does not happen in a single atom, as the small number of electrons means that the amount of energy is heavily quantized. The Fermi energy is the level of the top of the "sea" that is formed. This is defined at absolute zero, when there is no thermal energy to allow the electrons to form "ripples" on the sea.
In insulators, the Fermi level lies between the valence and conduction bands, in one of the "forbidden zones" where electrons cannot exist. Thus all electrons in the lattice are in the valence band or a band under that. To get to the conduction band, the electron has to gain enough energy to jump the band gap. Once this is done, it can conduct. However, the band gap for insulators is large (over 3 eV) so very few electrons can jump the gap. Therefore, current does not flow easily in insulators.
In metals, the conduction band and the valence band overlap or the valence band is only partially full, both with the Fermi energy somewhere inside. This means that the metal always has electrons that can move freely and so can always carry current.
In semiconductors, the Fermi energy is between the valence and conduction band, but the band gap is smaller, allowing electrons to jump the gap fairly easily, given the energy to do it. At absolute zero, semiconductors are perfect insulators, but at room temperature, there is enough thermal energy to allow occasional electron jumps, giving the semiconductor limited conductivity, even though, by rights, it should be an insulator.
If there are no electrons in the conduction band of a semi-conductor it won't conduct. To move electrons out of the valence band and into the conduction band, one needs to give them energy. This may be through heat, incident light or high electric field. As most semiconductors operate at non-zero temperature, there are generally some electrons in the conduction band. This also means that if the semi-conductor gets too hot (125°C for silicon), excess electrons will exist in the conduction band, hence the semi-conductor will act more like a conductor.
Because intrinsic semiconductors contain no "extra" electrons from impurities like extrinsic semiconductors do, every time an electron jumps the band gap, it leaves a hole behind. This hole represents a positive charge as it is the lack of an electron. Intrinsic semiconductors have exactly equal numbers of holes and electrons, so, where n is the number of electrons and p is the number of holes, ${\displaystyle n=p}$ .
## Direct and Indirect SemiconductorsEdit
Figure 4: Band-gap for silicon
The total energy of an electron is given by its momentum and its potential energy. To move an electron from the conduction band to the valence band, it may need to undergo a change in potential energy and a change in momentum. There are two basic material types, in-direct and direct band gap materials. In an indirect band gap material, such as silicon, shown in figure 4, to move into the valence band, the electron must undergo a change in momentum and energy[1]. The chance of this event is small. Typically this process is achieved in several steps. The electron will first move to a trap site in the forbidden band before moving into the valence band. A change in potential energy will result in the release of a photon, while a change in momentum will produce a phonon (a phonon being a mechanical vibration which heats the crystal lattice).
Figure 5: Band-gap for GaAs, a direct semiconductor
In a direct band-gap material such as GaAs, only a change in energy is required, as seen in figure 5. As such GaAs is very efficient at producing light, although in the infrared spectrum.
## Extrinsic SemiconductorsEdit
One may also dope the semiconductor material. Semi-conductor materials are doped with impurities chosen to give the material special characteristics. One may want to add extra electrons or remove electrons.
Figure 6: N-type silicon, doped with phosphorous
Figure 7: P-type silicon, doped with boron
Doping atoms are chosen from elements in group III or V of the periodic table[1] which are similar in size to silicon atoms. Thus individual intrinsic semiconductor atoms may be replaced with dopant atoms to form an extrinsic semi-conductor.
The binding energy of the outer electron added by the impurity is weak. This is represented by placing the excess electrons just below the conduction band. Thus very little energy is required to move these electrons into the conduction band. Thus an extrinsic semi-conductor operating at room temperature will have most of these "extra" electrons existing in the conduction band. Thus at normal operating temperature,
${\displaystyle n_{c}\approx N_{d}}$ ,
where ${\displaystyle n_{c}}$ is the number of conduction electrons and ${\displaystyle N_{d}}$ is the number of dopant atoms.
## Fermi Level
The effective density of available states in the conduction band is given by[2] ${\displaystyle N_{c}}$ :
${\displaystyle N_{c}\approx 2{\bigg (}{\frac {2m_{e}\pi kT}{h^{2}}}{\bigg )}^{\frac {3}{2}}}$
where ${\displaystyle m_{e}}$ is the effective mass of an electron.
The effective density of available states in the valence band is given by ${\displaystyle N_{v}}$ :
${\displaystyle N_{v}\approx 2{\bigg (}{\frac {2m_{h}\pi kT}{h^{2}}}{\bigg )}^{\frac {3}{2}}}$
where ${\displaystyle m_{h}}$ is the effective mass of a hole.
For an intrinsic semiconductor, the number of conduction electrons must equal the number of conduction holes, such that
${\displaystyle n_{c}=n_{v}}$
where ${\displaystyle n_{c}}$ is the number of electrons in the conduction band and ${\displaystyle n_{v}}$ is the number of conduction holes in the valence band, given by:
${\displaystyle n_{c}=N_{c}e^{\frac {E_{f}-E_{c}}{kT}}}$
${\displaystyle n_{v}=N_{v}e^{\frac {E_{v}-E_{f}}{kT}}}$
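To make these expressions concrete, the short Python sketch below evaluates ${\displaystyle N_{c}}$ and ${\displaystyle N_{v}}$ from the formulas above and then uses the intrinsic condition ${\displaystyle n_{c}=n_{v}}$ to locate the Fermi level and the intrinsic carrier concentration. The silicon effective masses and band gap used here are assumed, typical textbook values, not values taken from this text.

```python
import math

# Physical constants (SI units)
k  = 1.380649e-23      # Boltzmann constant, J/K
h  = 6.62607015e-34    # Planck constant, J s
q  = 1.602176634e-19   # elementary charge, C
m0 = 9.1093837015e-31  # free-electron mass, kg

# Assumed silicon parameters at T = 300 K (typical textbook values)
T  = 300.0
me = 1.08 * m0         # density-of-states effective mass of an electron
mh = 0.81 * m0         # density-of-states effective mass of a hole
Eg = 1.12 * q          # band gap E_c - E_v, in joules

def effective_density_of_states(m_eff, T):
    """N = 2 (2 pi m_eff k T / h^2)^(3/2), states per cubic metre."""
    return 2.0 * (2.0 * math.pi * m_eff * k * T / h ** 2) ** 1.5

Nc = effective_density_of_states(me, T)
Nv = effective_density_of_states(mh, T)

# Intrinsic case: setting n_c = n_v in the expressions above gives
#   E_f - E_v = Eg/2 + (kT/2) ln(Nv/Nc)      (Fermi level near mid-gap)
#   n_i       = sqrt(Nc * Nv) * exp(-Eg / (2 k T))
Ef_above_Ev = 0.5 * Eg + 0.5 * k * T * math.log(Nv / Nc)
ni = math.sqrt(Nc * Nv) * math.exp(-Eg / (2.0 * k * T))

print(f"Nc ~ {Nc:.2e} m^-3, Nv ~ {Nv:.2e} m^-3")
print(f"intrinsic Fermi level ~ {Ef_above_Ev / q:.3f} eV above the valence band edge")
print(f"intrinsic carrier concentration n_i ~ {ni:.1e} m^-3")
```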
For an n-type extrinsic semiconductor, the number of conduction electrons ${\displaystyle n_{c}}$ must equal the number of conduction holes plus the number of ionized donor atoms, ${\displaystyle n_{d}}$ :
${\displaystyle n_{c}=n_{v}+n_{d}}$
where:
${\displaystyle n_{d}\approx N_{d}e^{\frac {E_{f}-E_{d}}{kT}}}$
Figure 8: Carrier density for doped semiconductor
Figure 8 shows this relationship against temperature. At the normal operating temperature, the number of electrons available for conduction is relatively constant, as most donor electrons already sit in the conduction band. At high temperatures, electrons from the valence band begin to populate the conduction band, significantly increasing the carrier density; the conduction-band electrons are then dominated by carriers excited across the gap and the material is said to be intrinsic. At very low temperatures, the donor electrons no longer populate the conduction band and the semiconductor is said to freeze out.
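The crossover into the intrinsic regime sketched in figure 8 can be estimated by comparing the intrinsic carrier concentration with the doping level: once ${\displaystyle n_{i}(T)}$ becomes comparable to ${\displaystyle N_{d}}$, thermally generated carriers dominate. The rough Python sketch below does this for an assumed donor density of ${\displaystyle 10^{22}\,\mathrm{m^{-3}}}$, ignoring freeze-out and the temperature dependence of the band gap (both simplifying assumptions).

```python
import math

k, h, q, m0 = 1.380649e-23, 6.62607015e-34, 1.602176634e-19, 9.1093837015e-31
me, mh, Eg  = 1.08 * m0, 0.81 * m0, 1.12 * q   # assumed silicon parameters
Nd = 1.0e22                                    # assumed donor density, m^-3

def intrinsic_concentration(T):
    """n_i(T) from the effective densities of states and the band gap."""
    Nc = 2.0 * (2.0 * math.pi * me * k * T / h ** 2) ** 1.5
    Nv = 2.0 * (2.0 * math.pi * mh * k * T / h ** 2) ** 1.5
    return math.sqrt(Nc * Nv) * math.exp(-Eg / (2.0 * k * T))

for T in (150, 300, 450, 600, 750):
    ni = intrinsic_concentration(T)
    regime = "intrinsic" if ni > Nd else "extrinsic (n_c ~ Nd)"
    print(f"T = {T:3d} K: n_i ~ {ni:9.2e} m^-3  -> {regime}")
```

With these assumed numbers the sample stays extrinsic up to several hundred kelvin, consistent with the plateau in figure 8.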
## Conduction
Electron mobility: when an electric field is applied to a semiconductor, the electrons experience a force and are accelerated in the direction opposite to the electric field. This acceleration is interrupted by what we term 'collisions'[1]. When a collision occurs, the velocity of the electron drops to zero and it is accelerated again. The average time between collisions is ${\displaystyle \tau _{c}}$ .
The net effect is a constant drift velocity ${\displaystyle V_{n}}$ for electrons in an n-type semiconductor, given by:
${\displaystyle V_{n}=-\mu \cdot \xi }$
${\displaystyle \mu ={\frac {q\tau _{c}}{m_{e}}}}$
where ${\displaystyle \mu }$ is the mobility. Its full derivation is more involved because the electron velocities follow a Maxwellian distribution.
The current density ${\displaystyle J_{n}}$ is given by:
${\displaystyle J_{n}={\frac {I}{A}}=-qnV_{n}}$
where ${\displaystyle n}$ is the number of electrons per unit volume, ${\displaystyle A}$ is the cross-sectional area, and ${\displaystyle q}$ is the electronic charge. One may also express the current density in terms of the conductivity ${\displaystyle \sigma }$ :
${\displaystyle J_{n}=\sigma \xi }$
${\displaystyle \sigma =qn\mu }$
where ${\displaystyle \sigma }$ is the conductivity in siemens per meter and ${\displaystyle \xi }$ the electric field.
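A minimal numerical illustration of these relations is given below, assuming a mean collision time, a carrier density and a field strength (all illustrative values, not measurements).

```python
# Drift transport in an n-type semiconductor (illustrative values only)
q   = 1.602e-19          # elementary charge, C
me  = 0.26 * 9.109e-31   # assumed conductivity effective mass of an electron, kg
tau = 2.0e-13            # assumed mean time between collisions tau_c, s
n   = 1.0e22             # assumed conduction-electron density, m^-3
E   = 1.0e3              # assumed applied electric field xi, V/m

mu    = q * tau / me     # mobility, m^2 V^-1 s^-1
v_n   = -mu * E          # drift velocity, opposite to the field, m/s
sigma = q * n * mu       # conductivity, S/m
J_n   = sigma * E        # drift current density, A/m^2

print(f"mobility        mu    = {mu:.3f} m^2/(V s)")
print(f"drift velocity  V_n   = {v_n:.2e} m/s")
print(f"conductivity    sigma = {sigma:.1f} S/m")
print(f"current density J_n   = {J_n:.2e} A/m^2")
```

With these assumed values the mobility works out to roughly 0.14 m²/(V s), about 1400 cm²/(V s), which is the right order of magnitude for electrons in silicon.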
Conduction is further complicated by the diffusion of carriers. The voltage drop across the semiconductor is gradual and therefore sets up an electron density gradient. Electrons in regions of higher density tend to move towards regions of lower density. A diffusion coefficient ${\displaystyle D_{n}}$ is therefore defined, along with the electron density gradient ${\displaystyle \nabla n}$ .
${\displaystyle J_{n}=qn\mu _{n}\xi +qD_{n}\nabla n}$
where
${\displaystyle D_{n}={\frac {kT}{q}}\mu _{n}}$
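Numerically, the Einstein relation links the two transport coefficients through the thermal voltage ${\displaystyle kT/q}$ (about 26 mV at room temperature). A brief sketch, using an assumed mobility and an assumed one-dimensional density gradient:

```python
k, q, T = 1.380649e-23, 1.602e-19, 300.0   # SI units
mu_n    = 0.135      # assumed electron mobility, m^2/(V s)
grad_n  = 1.0e26     # assumed electron density gradient dn/dx, m^-4

Vt  = k * T / q                  # thermal voltage, V
D_n = Vt * mu_n                  # Einstein relation D_n = (kT/q) mu_n, m^2/s
J_diffusion = q * D_n * grad_n   # diffusion part of J_n, A/m^2

print(f"thermal voltage ~ {Vt * 1e3:.1f} mV")
print(f"D_n ~ {D_n * 1e4:.1f} cm^2/s")
print(f"diffusion current density ~ {J_diffusion:.2e} A/m^2")
```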
The same equations also apply to a p-type semiconductor, with a few minor differences (the hole mobility is used and the sign of the diffusion term is reversed).
## References
1. John Allison. Electronic Engineering Semiconductors and devices. McGraw-Hill Book Company, Shoppenhangers Rd Maidenhead Berkshire England, 1971.
2. S. M. Sze. Physics of Semiconductor Devices. Wiley-Interscience, New York, 1969.
|
# Simulation dynamique et applications robotiques
1 SHARP - Automatic Programming and Decisional Systems in Robotics
GRAVIR - IMAG - Laboratoire d'informatique GRAphique, VIsion et Robotique de Grenoble, Inria Grenoble - Rhône-Alpes
Abstract : We describe models and algorithms designed to produce efficient and physically consistent dynamic simulations. These models and algorithms have been implemented within the $Robot\Phi$ system [RAP95], which can potentially be configured for a large variety of intervention-style tasks such as dextrous manipulation with a robot hand, manipulation of non-rigid objects, tele-programming of the motions of an all-terrain vehicle, and some robot-assisted surgery tasks (e.g. positioning of an artificial ligament in knee surgery). The approach uses a novel physically based modeling technique to produce dynamic simulations which are both efficient and consistent with the laws of physics. The main advances over previous work in the robotics and computer graphics fields are twofold: the development of a single framework for simultaneously processing motions, deformations, and physical interactions; and the incorporation of appropriate models and algorithms for obtaining efficient processing times while ensuring consistent physical behaviors.
Document type : Theses
Cited literature [101 references]
https://tel.archives-ouvertes.fr/tel-00004948
Contributor : Thèses Imag
Submitted on : Friday, February 20, 2004 - 4:59:23 PM
Last modification on : Monday, December 28, 2020 - 3:44:01 PM
Long-term archiving on : Friday, September 14, 2012 - 10:25:24 AM
### Identifiers
• HAL Id : tel-00004948, version 1
### Citation
Ammar Joukhadar. Simulation dynamique et applications robotiques. Autre [cs.OH]. Institut National Polytechnique de Grenoble - INPG, 1997. Français. ⟨tel-00004948⟩
|
# Help please: On January 1, 2018, the Mason Manufacturing Company began construction of a building to be...
###### Question:
On January 1, 2018, the Mason Manufacturing Company began construction of a building to be used as its office headquarters. The building was completed on September 30, 2019. Expenditures on the project were as follows:

- January 1, 2018: $1,070,000
- March 1, 2018: $340,000
- June 10, 2018: $380,000
- October 1, 2018: $710,000
- January 31, 2019: $1,170,000
- April 2019: $1,485,000
- August 31, 2019: $2,700,000

On January 1, 2018, the company obtained a $3 million construction loan with a 14% interest rate. The loan was outstanding all of 2018 and 2019. The company's other interest-bearing debt included two long-term notes of $6,000,000 and $8,000,000 with interest rates of 8% and 10%, respectively. Both notes were outstanding during all of 2018 and 2019. Interest is paid annually on all debt. The company's fiscal year-end is December 31. Assume the $3 million loan is not specifically tied to construction of the building.

Required:
1. Calculate the amount of interest that Mason should capitalize in 2018 and 2019 using the weighted-average method.
2. What is the total cost of the building?
3. Calculate the amount of interest expense that will appear in the 2018 and 2019 income statements. (Do not round intermediate calculations. Round answers to the nearest whole dollar.)
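Under the weighted-average method, each period's expenditures are weighted by the fraction of the year (or construction period) they were outstanding, and the result is multiplied by the weighted-average rate on all of the company's debt, because the $3 million loan is not specifically tied to the building. The Python sketch below shows the mechanics only: the month weights and dollar figures are assumptions read from the scanned problem, it covers 2018 only, and it is not a worked answer. For 2019 the accumulated balance brought forward (2018 expenditures plus 2018 capitalized interest) would also be weighted, through the September 30 completion date.

```python
def waae(weighted_expenditures, months_in_year=12):
    """Weighted-average accumulated expenditures:
    sum of amount * (months outstanding during the period / months in the year)."""
    return sum(amount * months / months_in_year
               for amount, months in weighted_expenditures)

# Weighted-average borrowing rate over all debt (amounts and rates as read
# from the problem; some figures are hard to make out in the scan).
debt = [(3_000_000, 0.14), (6_000_000, 0.08), (8_000_000, 0.10)]
actual_interest = sum(principal * rate for principal, rate in debt)
avg_rate = actual_interest / sum(principal for principal, _ in debt)

# 2018 expenditures: (amount, months outstanding to December 31, 2018).
# Month counts are illustrative; the June expenditure is treated as mid-year.
expenditures_2018 = [
    (1_070_000, 12),   # January 1
    (340_000, 10),     # March 1
    (380_000, 6),      # June
    (710_000, 3),      # October 1
]

waae_2018 = waae(expenditures_2018)
capitalized_2018 = min(waae_2018 * avg_rate, actual_interest)  # capped at interest incurred
expensed_2018 = actual_interest - capitalized_2018

print(f"average rate {avg_rate:.1%}, WAAE {waae_2018:,.0f}")
print(f"capitalized {capitalized_2018:,.0f}, expensed {expensed_2018:,.0f}")
```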
|
# Sodium chloride + sulfuric acid [closed]
Why does sulfuric acid displace more volatile acids from salts? My textbook says that sulfuric acid can displace more volatile acids from metal salts. How is $\ce{HCl}$, which is not even a reactant, 'displaced' from $\ce{NaCl}$, as there is no $\ce{HCl}$ to be displaced?
## closed as unclear what you're asking by Todd Minehardt, bon, M.A.R. ಠ_ಠ, Geoff Hutchison, Jan Sep 29 '15 at 17:25
• Please avoid using Latex in titles due to searching issues – bon Sep 29 '15 at 16:35
How is $\ce{HCl}$, which is not even a reactant, 'displaced' from $\ce{NaCl}$, as there is no $\ce{HCl}$ to be displaced?
Well, there will be some $\ce{HCl}$, because of a well-known chemical reaction that is used both in the lab and in industry to produce hydrogen chloride:
$$\ce{NaCl(s) + H2SO4(l) → NaHSO4(s) + HCl(g)}$$
$$\ce{NaCl(s) + NaHSO4(s) → HCl(g) + Na2SO4(s)}$$
|