# in R : How to check RT's for ex-gaussian fit with the fitdistrplus package
I have several response times from participants in a mental rotation task. I'm puzzled about how to find the best distribution model for my data. I have checked several with the fitdistrplus::fitdist function and have two good candidates (log-normal & gamma).
I would like to check the fit of an ex-Gaussian distribution (GAMLSS package), but the fitdist function asks for "start" parameters and I have no idea how to get them: any help would be much appreciated.
I did a quick Google search and found the mexgauss function in the retimes package. Here is a simple example of estimating ex-Gaussian parameters from sample data.
library(retimes)
rtdata <- c(.5, .7, .3, 1, 2, .5, .55)
retimes::mexgauss(rtdata)
This returns:
mu sigma tau
0.2838158 0.2668972 0.5090413
Regarding starting values, typically these just need to be ball-park estimates of what the parameters might be.
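Here is a sketch of feeding those estimates to fitdistrplus::fitdist as start values. It assumes the gamlss.dist package is installed, since that provides the dexGAUS/pexGAUS functions that fitdist looks up; their third parameter nu plays the role of mexgauss's tau.
library(fitdistrplus)
library(gamlss.dist)                    # provides dexGAUS and pexGAUS
start <- as.list(retimes::mexgauss(rtdata))
names(start) <- c("mu", "sigma", "nu")  # rename tau to nu for exGAUS
fit <- fitdist(rtdata, "exGAUS", start = start)
summary(fit)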
• so simple but I'm a newbie in R. exactly what I wanted. I have a better fit with the ex-gauss distribution than with the gamma and log-normal ones, and this is what I expected. Thx. – Frédéric Feb 4 '17 at 7:47
• any way to plot separately the gaussian and the exponential components of the fitted distribution ? – Frédéric Feb 4 '17 at 14:41
To answer the question relating to start values for the parameters for use with fitdist:
I would like to check the fit of an ex-Gaussian distribution (GAMLSS package), but the fitdist function asks for "start" parameters and I have no idea how to get them: any help would be much appreciated.
Where possible, knowledge of the behavior of similar reaction times can be used. This might be particularly helpful when the sample skewness (see below) is not suitable for obtaining starting values, since skewness in particular might be anticipated to be reasonably similar across different but related circumstances.
As Jeromy says, start values generally just need to be rough ballpark estimates, so if you have some experience of fits you may be able to supply suitable guesses. But if you don't have any suitable idea, there's still something you can do.
Firstly, the Wikipedia article on the exponentially modified Gaussian distribution gives method-of-moments-related estimators based on the sample mean, variance and skewness (labelled $m, s^2$, and $\gamma_1$ there):
$${\displaystyle {\hat {\mu }}=m-s\left({\frac {\gamma _{1}}{2}}\right)^{1/3},}$$
$${\displaystyle {\hat {\sigma }^{2}}=s^{2}\left[1-\left({\frac {\gamma _{1}}{2}}\right)^{2/3}\right],}$$
$${\displaystyle {\hat {\tau }}=s\left({\frac {\gamma _{1}}{2}}\right)^{1/3}.}$$
The main thing to note there is that the $\tau$ (scale) parameter there is the same as the $\nu$ parameter in the GAMLSS function exGAUS (and is the inverse of the $\lambda$ rate-parameter in the definition of the distribution at the top of the Wikipedia article). The easy way to compute these estimates is to start with $\hat{\tau}$:
$${\displaystyle {\hat {\tau }}=s\left({\frac {\gamma _{1}}{2}}\right)^{1/3}.}$$
then
$$\hat {\mu }=m-\hat{\tau}$$
$$\hat {\sigma}^{2}=s^{2}-\hat{\tau}^2\,.$$
However, you need to check that you don't end up with implausible values for the parameters. This can happen if the sample skewness is negative (leading to an impossible value for $\tau$) or very large (perhaps leading to issues with the other two parameters, such as a negative variance).
Problems here should also be taken as a warning: if those things happen with the method of moments, there's strong potential for the likelihood function to not behave nicely as well.
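In R, a rough helper along these lines (a sketch only; the checks mirror the caveats just discussed) might look like:
exgauss_start <- function(x) {
  m  <- mean(x)
  s2 <- var(x)
  g1 <- mean((x - m)^3) / s2^1.5        # rough sample skewness
  if (g1 <= 0) stop("non-positive skewness: tau estimate infeasible")
  tau    <- sqrt(s2) * (g1 / 2)^(1/3)
  sigma2 <- s2 - tau^2
  if (sigma2 <= 0) stop("skewness too large: implied variance is negative")
  list(mu = m - tau, sigma = sqrt(sigma2), nu = tau)  # nu = tau in exGAUS
}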
The article also mentions the range of the nonparametric skew (a skewness measure based on the second Pearson skewness), which relates the mean, median and standard deviation. This might be used in place of the moment-skewness (with some suitable modification) to obtain an estimate of the scale parameter of the exponential component.
An alternative would be to look at quantile-matching (such as a calculation based on a high, low and middling quantile) or at estimates from quantile-based measures of location, scale and skewness, but the quantiles are not in general simple functions of the parameters; the existence of quantile and cdf functions in GAMLSS make this potentially viable as a strategy, however. This would be more complex but more likely to lead to feasible values of the parameters, at least in the case where the skewness is very large.
The notion of finding a "best" distribution -- and in particular searching for one with a given set of data -- is not especially useful. No simple distribution will be a perfect fit (no simple model will be exactly correct). Models are useful representations rather than perfect descriptions. If one has a simple model which usually fits fairly well and has some theoretical basis by which its parameters and relationships between distributions based on different estimates can be understood, that's a good reason to use it. We should not be overly concerned about a "best" model for a particular data set so much as an adequate model across a variety of data of similar kinds.
In relation to the title -- fitdist (from package fitdistrplus, or fitdistr in MASS) will give parameter estimates for a fitted distribution but doesn't really tell you whether the fit is good enough for some particular purpose or other (i.e. it doesn't really check the quality of the fit in a practical sense).
• Thanks for this thorough answer. I checked several candidate distributions for my data, and the ex-Gaussian seems to fit best. The problem is that afterwards there's insufficient theoretical background for dealing with the two components of the distribution (Gaussian and exponential): what psychological sense do we attribute to them (two cognitive processes?) and how do we analyse the ex-Gaussian parameters in order to test hypotheses? – Frédéric Apr 28 '17 at 16:37
• I can't speak to your own work, but the use of the ExGaussian arose as a theoretical model (but one informed by experiment) for reaction times, as a two-stage model for latency (an exponential stage and a Gaussian stage). See, for example, Hohle (1965), who describes his reason for including an exponential stage, citing McGill (1963), who concluded on the basis of observation that there was an exponential component to reaction times and that (because of it barely being impacted by stimulus intensity) "...*this component is the time required to make the required motor response*". ... ctd – Glen_b Apr 28 '17 at 18:20
• ctd... (contrasting Christie and Luce (1956) who used an exponential decision time). Hohle concludes on the basis of his experiments that there are two components associated with different processes, and finds that the ExGaussian fits 32 different distributions of reaction times very well. ... other authors have nominated a particular component for the exponential – Glen_b Apr 28 '17 at 18:22
• e.g. Rohrer & Wixted 1994 "latency distributions were fit well by the ex-Gaussian, suggesting that retrieval includes a brief normally distributed initiation stage followed by a longer exponentially distributed search stage". On the other hand, Sternberg warns against trying to interpret the exponential as the time for a mental process that accomplishes something unless it can be accomplished "in an instant" – Glen_b Apr 28 '17 at 18:31
• Though that particular criticism is at least slightly misplaced; since it wouldn't need to be actually instant, just to have a hard minimum for a given individual (potentially resulting in what's actually shifted exponential even though it wouldn't be possible to estimate that shift -- it would show up in the mean of the other component); the characteristics of the component after that unmeasurable minimum latency would still be interpretable. – Glen_b Apr 28 '17 at 22:27
|
## heat capacity of activation, $$\Delta ^{\ddagger}C_{p}^{\,\unicode{x26ac}}$$
https://doi.org/10.1351/goldbook.H02754
A quantity related to the temperature coefficient of $$\Delta ^{\ddagger}H$$ (enthalpy of activation) and $$\Delta ^{\ddagger}S$$ (entropy of activation) according to the equations: $\Delta ^{\ddagger}C_{p} = \left(\frac{\partial \Delta ^{\ddagger }H}{\partial T}\right)_{p}=T\left(\frac{\partial \Delta ^{\ddagger }S}{\partial T}\right)_{p}$ If the rate constant is expressible in the form $\ln k = \frac{a}{T}+b+c\ln T + dT,$ then: $\Delta ^{\ddagger}C_{p} = (c - 1)R+2dRT$ SI unit: $$\text{J mol}^{-1}\ \text{K}^{-1}$$.
Sources:
PAC, 1994, 66, 1077. (Glossary of terms used in physical organic chemistry (IUPAC Recommendations 1994)) on page 1120 [Terms] [Paper]
PAC, 1996, 68, 149. (A glossary of terms used in chemical kinetics, including reaction dynamics (IUPAC Recommendations 1996)) on page 168 [Terms] [Paper]
|
# Problem with Visual Studio 2008 when rewriting engine
## Recommended Posts
Hi all, I've been fairly quiet the last few months, just been busy with work and things, and anyway it's come time for me to get back into OpenGL. I want to start mostly from scratch because I've redesigned how I want my engine to be; however, I'm running into a problem and feeling like a complete noob.
Basically, I want a solution in Visual Studio that contains two projects. The first is the DLL I'm developing for the engine itself. The second is a simple test app that references the DLL and can call the classes/functions within it.
However, I'm stuck. I've created the solution, but regardless of what settings I change (References, Project Dependencies, Additional Libs, etc.) I just can't seem to get this to work. When I'm in the demo app project, IntelliSense will show me all the classes that are in the DLL, but when I try to compile, it just returns "undeclared identifier" when I declare the class in the demo. What am I doing wrong?!
Thanks,
##### Share on other sites
Surely for a DLL you don't have to include a header, isn't that just for .lib (static libs)?
##### Share on other sites
Quote:
Surely for a DLL you don't have to include a header, isn't that just for .lib (static libs)?
DLL and libraries mean nothing to a C++ compiler (it's the linker that would deal with them). The compiler needs to know about the declarations and definitions of various things exposed by the external code. That typically means including the header files that declare or define those entities.
If you're not including those headers, that's certainly going to be a problem. You might have others as well, though -- but that will definitely be a problem. You should read this if you have not already.
If it still doesn't compile after you add the appropriate includes, why don't you paste your code that doesn't compile and detail the errors?
The nature of your issue suggests you don't know much about DLLs and how they operate, which begs the question "why are you even using one?"
##### Share on other sites
Not OpenGL related; moving to FB.
##### Share on other sites
Maybe missing something like this:
#ifdef MYENGINEDLL_EXPORTS
    #define MYENGINEDLL_API __declspec(dllexport)
#else
    #ifdef MYENGINEDLL_STATIC_LIB
        #define MYENGINEDLL_API
    #else
        #define MYENGINEDLL_API __declspec(dllimport)
    #endif // MYENGINEDLL_STATIC_LIB
#endif // MYENGINEDLL_EXPORTS

class MYENGINEDLL_API MyClass : public Node
{
    // ...
};
Just add MYENGINEDLL_EXPORTS to the preprocessor definitions in your DLL project and it should be OK.
|
© 2015-2020 Jacob Ström, Kalle Åström, and Tomas Akenine-Möller
# Chapter 1: Introduction
This chapter is intentionally as short as possible, so that you (the reader) can start with the real content (beginning in Chapter 2) swiftly. The math presented in this chapter is meant as a refresher for what is necessary to follow the rest of the book, not as educational material; hence, there will be no explanations of the formulae in this chapter. For those, we recommend you turn to books on trigonometry. To navigate in this book, you can use the menu button in the upper left corner, and click on links for chapters, sections, figures, etc.
In the next section, we will introduce our book's basic notation; linear algebra specific notation, e.g., for vectors, points, and matrices, will be introduced throughout the book. Finally, this introductory chapter ends with a recap of some trigonometry (Section 1.2), which may be good to review before starting with the rest of the book.
This section describes the notation used in this book. The reader may skip this now, and check back when needed, but it is a quick and simple read, so it may be preferable to do it before diving into the other chapters.
Note that everything is numbered in this book. This makes it simpler to discuss the different equations, expressions, etc., with another person. That is, you can always refer to Equation 3.12, which means that the equation/expression is in Chapter 3, and it is the 12th one there.
A number, in the common sense, may be positive or negative, it may have decimals, and it may be rational or irrational. The set of all such numbers is denoted by $\R$, and the numbers are said to be real numbers. We often use the term a scalar or a scalar value instead of saying a real number, and those are always denoted by lower-case letters, e.g., $j, s, t, a, m$, in this book. They may also have subscripts, which means that $k_1$, $k_2$ and $k_3$ are different numbers. One may also write $\sum_{i=1}^3k_i$ in order to add $k_1$, $k_2$ and $k_3$. The notation $k\in \R$ means that $k$ is a real number, or more precisely, that $k$ belongs to the set of all real numbers.
The absolute value of a scalar, $k$, is denoted $\abs{k}$, and is defined as
$$\abs{k} = \left\{ \begin{matrix} k, & \mathrm{if\ } k \geq 0\\ -k, & \mathrm{if\ } k < 0 \end{matrix} \right. ,$$ (1.1)
i.e., if the number is negative, the minus sign is removed.
A set is a collection of, for example, integers. A set with the numbers $1$, $2$, and $4$ is denoted: $\{1,2,4\}$. When it is desirable to have a variable, $i$, for example, that can take on any of the members in a set, then we write: $i\in\{1,2,4\}$.
Real numbers can often take on values in a certain range. As an example if we know $x$ can take on any value from zero to one, including zero and one, we can write $x \in [0, 1]$. A soft parenthesis marks that the end point is not included. As examples,
\begin{gather} x \in [-1, 2) \ \ \ \mathrm{denotes} \ \ \ -1 \leq x < 2 \\ \theta \in (-\frac{\pi}{2}, \frac{\pi}{2}] \ \ \ \mathrm{denotes} \ \ \ -\frac{\pi}{2} < \theta \leq \frac{\pi}{2}. \end{gather} (1.2)
This section provides a summary of some useful trigonometry concepts that will be used in this book; that said, it is by no means a thorough treatment of the subject.
The cosine, sine, and tangent functions are always good to have fresh in mind. Refer to Figure 1.1.
Figure 1.1: A right triangle with angle $\theta$, adjacent side $a$, opposite side $b$, and hypotenuse $c$.
For right triangles, the following holds:
\begin{align} \cos \theta &= \frac{a}{c}, \\ \sin \theta &= \frac{b}{c}, \\ \tan \theta &= \frac{\sin \theta}{\cos \theta} = \frac{b/c}{a/c} = \frac{b}{a}. \end{align} (1.3)
The Pythagorean theorem for Figure 1.1 is $c^2 = a^2 + b^2$.
Next, a set of useful trigonometric identities will be presented. The fundamental trigonometric identity is
$$\cos^2\theta + \sin^2\theta = 1.$$ (1.4)
The law of cosines is a very useful formula to know, and the geometrical setup is shown in Figure 1.2.
Figure 1.2: A triangle with sides $a$, $b$, and $c$, where $\gamma$ is the angle opposite side $c$.
The law of cosines is then
$$c^2 = a^2 + b^2 - 2ab\cos \gamma.$$ (1.5)
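As a quick check, setting $\gamma = \pi/2$ makes the cosine term vanish, so the law of cosines reduces to the Pythagorean theorem:
$$c^2 = a^2 + b^2 - 2ab\cos\frac{\pi}{2} = a^2 + b^2.$$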
The law of sines is related, and its geometrical setup is shown in Figure 1.3.
Figure 1.3: A triangle with angles $\alpha$, $\beta$, and $\gamma$ opposite sides $a$, $b$, and $c$, respectively.
The law of sines is simply
$$\frac{\sin \alpha}{a} = \frac{\sin \beta}{b} = \frac{\sin \gamma}{c}.$$ (1.6)
We omit the proofs of the trigonometric formulas in this section; some of them will be proved in later chapters, though.
The inverse functions for the cosine and sine are also important to know about. For example, $\theta = \arcsin a$ means that $\sin \theta = a$. The function $\arcsin a$ can take on values in the range $[-\frac{\pi}{2}, \frac{\pi}{2}]$. Likewise $\theta = \arccos a$ means that $\cos \theta = a$, and $\arccos a \in [0, \pi]$.
Note however that just because $\sin \theta = a$ we cannot conclude that $\theta = \arcsin a$. As an example, $\sin 4\pi = 0$, but $\arcsin 0 = 0$, not $4\pi$. In fact $\sin \theta = a$ has an infinite number of solutions, and $\arcsin$ is only the inverse to $\sin$ within the range $[-\frac{\pi}{2}, \frac{\pi}{2}]$. However, we can list all the solutions to $\sin \theta = a$ as $\theta = \arcsin(a) + 2k\pi$ or $\theta = \pi - \arcsin(a) + 2k\pi$, where $k$ is any integer.
At this point, it is time to start with the first real chapter of this book. Click here to go to Chapter 2, or navigate using the menu at the left top corner.
Chapter 0: Preface (previous) Chapter 2: Vectors (next)
|
# Question
Lan, who is self-employed, contributes \$2000 a month into a defined benefit retirement account earning interest at the rate of $r$/year compounded monthly. At the end of 25 years, his account will be worth
$$S=\frac{24{,}000\left[\left(1+\frac{r}{12}\right)^{300}-1\right]}{r}$$
dollars.
a. Find the differential of $S$.
b. Approximately how much more would Lan's account be worth at the end of 25 years if his account earned 4.1%/year instead of 4%/year? 4.2%/year instead of 4%/year? 4.3%/year instead of 4%/year?
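For part (a), the differential is $dS = S'(r)\,dr$. Below is a numerical sketch in R of the part (b) approximations; the closed form for $S'(r)$ in the comment follows from the quotient rule.
S <- function(r) 24000 * ((1 + r/12)^300 - 1) / r
# Quotient rule: S'(r) = 24000 * (25*r*(1 + r/12)^299 - (1 + r/12)^300 + 1) / r^2
dS <- function(r, dr) 24000 * (25 * r * (1 + r/12)^299 - (1 + r/12)^300 + 1) / r^2 * dr
dS(0.04, c(0.001, 0.002, 0.003))  # approximate extra dollars at 4.1%, 4.2%, 4.3%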
|
To produce (horizontal) blank space within a paragraph in LaTeX, the standard spacing commands apply; if those do not achieve the desired results, post a separate question.

Some unit background: 1 inch = 2.54 cm (you can get this, and many other unit conversions, from Google by searching for "1 inch in cm"). LaTeX recognizes several length units:

pt — point (1 in = 72.27 pt)
pc — pica (1 pc = 12 pt)
in — inch (1 in = 25.4 mm)
bp — big point (1 in = 72 bp)
cm — centimetre (1 cm = 10 mm)
mm — millimetre

| Abbreviation | Definition | Value in points (pt) | Value in micrometers (µm) |
|---|---|---|---|
| pt | a point is 1/72.27 inch, about 0.0138 inch | 1 | 351.46 |
| mm | a millimeter | 2.845 | 1000 |

Quite often online you will see this code cited as a way to get 1-inch margins:

\usepackage[margins=1in]{geometry}

but note that the documented option name is margin (singular):

\usepackage[margin=1in]{geometry}

If you want to set the text block size explicitly instead, geometry also accepts, for example, \usepackage[total={6.5in,8.75in}]{geometry}. The fullpage package ("Sets all 4 margins to be either 1 inch or 1.5 cm") does something similar, but is not recommendable since the package can cause trouble [1]. Pandoc passes variables through to its LaTeX template, so you can change the page margins of pandoc output with:

pandoc -V geometry:margin=1in -o output.pdf input.md

You can specify multiple variable values too.

To change the margins of only a few pages in a document, adjusting \voffset in place, as in

\addtolength{\voffset}{-4cm}
% insert images here
\addtolength{\voffset}{4cm}

does not generally work. The chngpage package is recommended for temporary adjustment of side margins; keep the change within a group (either inside a brace group {}, or within a \begingroup, \endgroup pair).

In RTF files, margins are set with the control words \margl, \margr, \margt and \margb ('l' = left, 'r' = right, 't' = top, 'b' = bottom), each followed by a value in twips (1 twip = 1/1440 inch). For example, \margl1440\margr1440\margt1440\margb1440 gives 1-inch margins all around.

In word processors, the standard margin size of a new document in MS Word and Google Docs is 1 inch all around, and the usual advice for resume margins is exactly the same; for short documents, wider margins improve the appearance on the page. Word automatically sets page margins at 1 inch from every page edge. To change them, select Layout > Margins and pick a predefined margin setting, or choose "Custom Margins," found at the bottom of the Margins list, and enter the measurements you want; if your document contains multiple sections, the new margins apply only to the selected sections. In Word 2011, click Format > Document, adjust the margins, and click the Default button at the bottom-left corner of the window to make them the new default. You can also change the size of your margins at any time while the document is open by dragging the rulers: place the mouse cursor over a ruler at the left, right, top or bottom edge of the document until it changes to a line with an arrow on each end, then click and drag to adjust the margin size.
|
Stock
In finance, stock (also capital stock) consists of all of the shares into which ownership of a corporation or company is divided.[1] (Especially in American English, the word "stocks" is also used to refer to shares.)[1][2] A single share of the stock means fractional ownership of the corporation in proportion to the total number of shares. This typically entitles the shareholder (stockholder) to that fraction of the company's earnings, proceeds from liquidation of assets (after discharge of all senior claims such as secured and unsecured debt),[3] or voting power, often dividing these up in proportion to the amount of money each stockholder has invested. Not all stock is necessarily equal, as certain classes of stock may be issued for example without voting rights, with enhanced voting rights, or with a certain priority to receive profits or liquidation proceeds before or after other classes of shareholders.
Stock can be bought and sold privately or on stock exchanges, and such transactions are typically heavily regulated by governments to prevent fraud, protect investors, and benefit the larger economy. Stock can also be held in electronic form with a depository, in what is known as a demat account. As new shares are issued by a company, the ownership and rights of existing shareholders are diluted in return for cash to sustain or grow the business. Companies can also buy back stock, which often lets investors recoup the initial investment plus capital gains from subsequent rises in stock price. Stock options issued by many companies as part of employee compensation do not represent ownership, but represent the right to buy ownership at a future time at a specified price. This would represent a windfall to the employees if the option is exercised when the market price is higher than the promised price, since if they immediately sold the stock they would keep the difference (minus taxes).
Stocks are a function of capitalism, and therefore the stock market operates by the price mechanism: a stock cannot be classified as an investment unless it pays a dividend (a standard dividend yield being 2%); otherwise, it must be classified as a speculation (gambling). However, if one decides to reinvest the dividends, it is not speculation, and assuming ceteris paribus, this will lead to exponential growth of ${\displaystyle FV=P\left(1+r/m\right)^{mt}}$, where $P$ is the initial investment, $r$ is the yield, $m$ is the number of dividends per year, and $t$ is the number of years. A "dividend king" is a stock which has had an increasing or constant dividend for over 50 successive years.
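As a small numeric illustration of the growth formula in R (the values here are arbitrary):
P <- 1000; r <- 0.05; m <- 4; t <- 10  # $1000 at a 5% yield, 4 dividends/year, 10 years
P * (1 + r/m)^(m * t)                  # about 1643.62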
|
# Homomorphic cryptosystems in RSA
Hopefully Crypto can help me understand homomorphic cryptosystems.
I'm designing a high-score server for a game I made. Because of facets of the language I'm using, a player would be able to look through the code and execute functions in my game, so I'm trying to encrypt the score when it is sent to the server. The suggestion was made on Stack Overflow that I try a homomorphic cryptosystem, which would allow the client's game to add to or change the value given by the server, after which the server retrieves that value and decrypts it as a high score. Anyway, what I'm having trouble with is the "homomorphic property", or as Wikipedia describes it:
If the RSA public key is modulus $m$ and exponent $e$, then the encryption of a message $x$ is given by $\epsilon(x)={x}^{e} \bmod m$. The homomorphic property is then $$\epsilon(x_1)\cdot\epsilon(x_2) = x_1^ex_2^e \bmod m=(x_1x_2)^e \bmod m=\epsilon(x_1\cdot x_2)$$
Now I understand the arithmetic fine, but don't understand this. Is the homomorphic property showing that the encryption is malleable?
That you can perform operations on two unknown (encrypted) ciphertexts, $\epsilon(x_1)$ and $\epsilon(x_2)$, and receive the result, $\epsilon(x_1\cdot x_2)$, by performing $x_1^ex_2^e\bmod m$? Or is it just showing that $\epsilon(x_1)\cdot \epsilon(x_2)$ is the same as $x_1^ex_2^e\bmod m$?
And subsequently, to decrypt it, you use the RSA private key?
-
Yes, homomorphic encryption operations are malleable by definition. The definition of "malleable" is something along the lines of "can be intelligently modified", which is what homomorphic encryption allows you to do.
That you can perform operations on two unknown (encrypted) ciphertexts, $\epsilon(x_1)$ and $\epsilon(x_2)$, and receive the result, $\epsilon(x_1\cdot x_2)$, by performing $x_1^ex_2^e\bmod m$? Or is it just showing that $\epsilon(x_1)\cdot \epsilon(x_2)$ is the same as $x_1^ex_2^e\bmod m$?
Those are the same thing by definition. The encryption operation will allow for the multiplication of two ciphertexts to equal the encryption of the multiplication of the two plaintexts (mod $m$, in your case).
And yes, you would use the RSA private key for decryption. The public key would contain the exponent e in your equation.
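A toy demonstration in R, with textbook-sized numbers ($m = 61 \cdot 53 = 3233$, $e = 17$, $d = 2753$); nothing about these sizes is secure:
modpow <- function(base, exp, mod) {   # modular exponentiation by squaring
  result <- 1
  base <- base %% mod
  while (exp > 0) {
    if (exp %% 2 == 1) result <- (result * base) %% mod
    base <- (base * base) %% mod
    exp  <- exp %/% 2
  }
  result
}
m <- 3233; e <- 17; d <- 2753          # toy RSA key
x1 <- 7; x2 <- 11
c1 <- modpow(x1, e, m)                 # epsilon(x1)
c2 <- modpow(x2, e, m)                 # epsilon(x2)
modpow((c1 * c2) %% m, d, m)           # decrypts to x1 * x2 = 77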
Edit, for completeness:
Note that textbook RSA usage (i.e., simply raising the plaintext to an exponent) is not secure, for several reasons. When security is necessary, we apply a padding scheme to the plaintext before encrypting it (i.e., $x$ becomes $p(x)$ for some padding function $p$). But doing so causes the homomorphic usage to multiply the padded plaintexts instead of the actual original plaintexts (i.e., you wind up encrypting $p(x_1) \cdot p(x_2)$ instead of $x_1 \cdot x_2$), which won't yield the desired results. The padding schemes will completely mess up the ability to use homomorphic properties, since multiplying padded texts yields garbage.
You mention this is for a game, so security may not be of utmost importance (it depends on what the stakes are and what data is involved), but take care before using RSA without any padding; if you do use it, investigate ways to mitigate at least some of the weaknesses of textbook RSA.
-
awesome! thanks for clearing that up. – SomekidwithHTML Aug 14 '12 at 17:28
It wasn't directly asked in the question, but I felt obliged to edit my reply and add a quick caveat about security. – B-Con Aug 14 '12 at 17:57
|
A. Flea travel
time limit per test
1 second
memory limit per test
256 megabytes
input
standard input
output
standard output
A flea is sitting at one of the n hassocks, arranged in a circle, at the moment. After minute number k the flea jumps through k - 1 hassocks (clockwise). For example, after the first minute the flea jumps to the neighboring hassock. You should answer: will the flea visit all the hassocks or not? We assume that the flea has infinitely much time for this jumping.
Input
The only line contains a single integer n (1 ≤ n ≤ 1000) — the number of hassocks.
Output
Output "YES" if all the hassocks will be visited and "NO" otherwise.
Examples
Input
1
Output
YES
Input
3
Output
NO
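A brute-force sketch in R: the flea's position after minute $k$ is $1 + 2 + \dots + k = k(k+1)/2 \pmod n$, and that sequence is periodic with period dividing $2n$, so simulating $2n$ minutes reveals every hassock the flea will ever reach. (A known closed-form characterization is that the answer is YES exactly when $n$ is a power of two.)
flea_visits_all <- function(n) {
  visited <- rep(FALSE, n)
  pos <- 0                    # starting hassock
  visited[pos + 1] <- TRUE
  for (k in 1:(2 * n)) {      # minute k: jump through k - 1 hassocks, landing k away
    pos <- (pos + k) %% n
    visited[pos + 1] <- TRUE
  }
  all(visited)
}
ifelse(flea_visits_all(1), "YES", "NO")  # "YES"
ifelse(flea_visits_all(3), "YES", "NO")  # "NO", matching the examples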
|
# What direction does the parabola y=-2(x+3)^2 - 5 open?
$y = \textcolor{red}{-} 2 {\left(x + 3\right)}^{2} - 5$ has a negative leading coefficient ($\textcolor{red}{-}2$), so the parabola opens downward.
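The equation is in vertex form, $y = a(x-h)^2 + k$, so the sign of $a$ alone settles the direction:
$$a = -2 < 0 \;\Rightarrow\; \text{opens downward}, \qquad \text{vertex at } (h,k) = (-3,-5).$$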
|
# Good codes in practice for correcting combination of errors and erasures
In practice, both errors and erasures might be introduced in the channel. Could you point me to some good codes for correcting such combinations? Also, what are their correction capabilities?
-
The answer depends on several things.
If your channel (or receiver) produces erasures and errors, then the relevant metric is the Hamming metric, as a code with minimum distance $d$ can correct a combination of $t$ errors and $e$ erasures, iff $d>2t+e$. Therefore a code with good Hamming distance may be the way to go (if an efficient decoding algorithm is known for it). I say "may", because other considerations may be more pressing. For example, you may want to use longer blocks (often a good idea, because the errors are then averaged out better).
If the errors/erasures affect individual bits more or less independently, then you need a binary code. If OTOH the errors/erasures come in lumps (or "bursts"), then it is better to view them as byte-errors (or symbol errors; pick a symbol size that gives the best results), and RS-codes are your friend, because RS-decoders don't care how many bits in a symbol are incorrect. This may also be the case when the input to your decoder is the output of another code (think: the microcode in a CD-ROM drive that tries to interpret a single byte from the disk). If RS-codes have too short block lengths for your purposes, you can try an algebraic-geometry code instead, but sadly I have never seen an application where the savings would have been significant.
A generic erasure decoding algorithm for a binary linear code that has an error-correcting-algorithm is to do two error-correction attempts: one with all erasures replaced with ones, and another with all erasures replaced with zeros. At least one of those decoding attempts will succeed, if the inequality $2t+e\lt d$ holds. If both succeed, then you need to compare the two outputs to find the better match. This does not work with byte-errors, but the RS-codes (as well as the AG-codes) have errors-and-erasures decoding algorithms.
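A sketch of that two-attempt idea in R, using the length-5 binary repetition code (minimum distance $d = 5$) with erasures marked as NA; this is illustrative only:
decode_rep <- function(r) as.integer(sum(r) > length(r) / 2)  # majority bit
decode_with_erasures <- function(r) {
  known <- !
  attempts <- lapply(c(0L, 1L), function(fill) {
    bit <- decode_rep(ifelse(known, r, fill))  # fill erasures, then decode
    # score by Hamming distance on the non-erased positions only
    list(bit = bit, dist = sum(rep(bit, length(r))[known] != r[known]))
  })
  attempts[[which.min(sapply(attempts, function(a) a$dist))]]$bit
}
# sent 1 1 1 1 1; one error plus two erasures satisfies 2*1 + 2 < 5
decode_with_erasures(c(1L, 0L, NA, NA, 1L))  # returns 1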
If your channel (and receiver) actually produce soft errors (= full continuum of likelihoods for a given bit to be 0 or 1), then you should try either a convolutional code, a turbo code, or an LDPC-code depending on the block length. If your blocks are long (over 10000 bits or thereabouts) I would try using an LDPC-code anyway, but I don't have any experience using an LDPC-code with errors-and-erasures only. Surely somebody has tried it, and can give rules of thumb on "how to treat the output of a hard decision receiver in a belief propagation algorithm".
-
Your question is vague and open-ended. Here is a good book that you might start with (full text online):
-
+1 for the link. It's a great book! – Rodrigo A. Pérez May 14 '13 at 3:40
Since you asked a reference, the early access pre-edit version of a paper that addresses exactly this problem you're considering just appeared in the IEEE Transactions on Information Theory:
K. A. S. Abdel-Ghaffar, J. H. Weber, Parity-check matrices separating erasures from errors, IEEE Trans. Inf. Theory, in press.
The word "practical" can imply way too many things and doesn't only apply to considering both errors and erasures. But if we restrict ourselves to the special meaning you used, the introduction of the above paper explains the problem very well.
Here are the two general approaches to handling a combination of errors and erasures that are applicable to any code:
1. you temporarily treat the erasures as errors by assigning a random alphabet, decode normally with different assignments, and compare the results (i.e., what Jyrki explained) or
2. you ignore the positions suffering erasures for the moment, correct errors in the received message by using the corresponding punctured code of the original code, and correct erasures normally at the last step.
The paper considers linear codes and explains how to turn your parity-check matrix into a new one suitable for the second approach without changing the basic code parameters such as the minimum distance.
Reading the first section of the paper will give you a good answer to your specific question in terms of the approach Jyrki explained (and provides a reference to the specific decoding algorithm for Reed-Solomon codes Jyrki mentioned). And learning about the other approach will complement your knowledge on this problem.
In any case, basically you can use a good linear code to handle a channel that may produce both errors and erasures in some way or another with pretty much the same idea as the standard error correction method because you can take advantage of the minimum distance regardless of whether you're dealing with errors or erasures.
A little more formally, let $d$ and $e$ be the minimum distance of a code of your choice and the number of erasures in a received message respectively. Assume that $e < d$. Then there exists a pair of nonnegative integers $e'$, $r$ satisfying $e+2e'+r < d$ such that if the number of errors does not exceed $e'$, then all errors and erasures can be corrected. The situation is similar for error detection. (The paper refers to R. M. Roth, Introduction to Coding Theory. Cambridge, UK: Cambridge University Press, 2006 for this fact if you would like a textbook.)
So, the question is not really which code you should use. It's about which decoding algorithm is good for this type of channel.
With that said, strictly speaking it does matter which code you use, because excellent decoding algorithms are often applicable only to limited classes of codes. But all else being equal, what makes a code suitable for your purpose is the existence of practical encoding/decoding algorithms that work for your code. The correction capability in an ideal situation is already determined by the minimum distance alone.
|
In linguistic morphology, the term bracketing paradox refers to morphologically complex words which apparently have more than one incompatible analysis, or bracketing, simultaneously.
One type of bracketing paradox found in English is exemplified by words like unhappier or uneasier.[1] The synthetic comparative suffix -er generally occurs with monosyllabic adjectives and a small class of disyllabic adjectives with primary (and only) stress on the first syllable. Other adjectives take the analytic comparative more. Thus, we have older and grumpier, but more correct and more restrictive. This suggests that a word like uneasier must be formed by combining the suffix -er with the adjective easy, since uneasy is a three-syllable word:
$\Big[\mbox{un-}\Big] \Big[ \big[\mbox{easi}\big] \big[\mbox{-er}\big] \Big]$
However, uneasier means "more uneasy", not "more difficult". Thus, from a semantic perspective, uneasier must be a combination of er with the adjective uneasy:
$\Big [ \big[\mbox{un-}\big] \big[\mbox{easi}\big] \Big ] \Big[\mbox{-er}\Big]$
This bracketing, however, violates the morphophonological rules for the suffix -er. Phenomena such as this have been argued to represent a mismatch between different levels of grammatical structure.[2]
Another type of English bracketing paradox is found in compound words that are a name for a professional of a particular discipline, preceded by a modifier that narrows that discipline: nuclear physicist, historical linguist, political scientist, etc.[3][4] Taking nuclear physicist as an example, we see that there are at least two reasonable ways that the compound word can be bracketed (ignoring the fact that nuclear itself is morphologically complex):
1. $\Big [ \mbox{nuclear} \Big ] \Big [ \big [ \mbox{physic(s)} \big ] \big [\mbox{-ist} \big ] \Big ]$ - one who studies physics, and who happens also to be nuclear
2. $\Big[ \big [\mbox{nuclear} \big] \big [\mbox{physic(s)} \big] \Big] \Big [\mbox{-ist} \Big]$ - one who studies nuclear physics, a subfield of physics that deals with nuclear phenomena
What is interesting to many morphologists about this type of bracketing paradox in English is that the correct bracketing 2 (correct in the sense that this is the way that a native speaker would understand it) does not follow the usual bracketing pattern 1 typical for most compound words in English.
|
Session 26. Deep Surveys
Display, Wednesday, November 8, 2000, 8:00am-6:00pm, Bora Bora Ballroom
## [26.22] Smooth Energy: Cosmological Constant or Quintessence?
S. Bludman (DESY, Hamburg & University of Pennsylvania), M. Roos (University of Helsinki, Finland)
For a flat universe presently dominated by smooth energy, either a cosmological constant (LCDM) or quintessence (QCDM), we calculate the asymptotic collapsed mass fraction as a function of the present ratio of smooth energy to matter energy $\mathcal{R}$. Identifying the normalized collapsed fraction as a conditional probability for habitable galaxies, we observe that the observed present ratio $\mathcal{R} \sim 2$ is likely in LCDM, but more likely in QCDM. Inverse application of Bayes' Theorem makes the Anthropic Principle a predictive scientific principle: the data imply that the prior probability for $\mathcal{R}$ must be essentially flat over the anthropically allowed range. Interpreting this prior as a distribution over theories lets us predict that any future theory of initial conditions must be indifferent to $\mathcal{R}$. This application of the Anthropic Principle does not demand the existence of other universes.
|
# Thread: Fraction Question - Urgent
1. ## Fraction Question - Urgent
Linda is 9/10 as tall as Patrick and 3/4 as tall as Ali. If the total height of the 3 pupils is 418.5 cm, how much taller is Ali than Patrick? Please help me and tell me how you get the answer step by step. Thanks
2. Originally Posted by aznmartinjai
Linda is 9/10 as tall as Patrick and 3/4 as tall as Ali. If the total height of the 3 pupils is 418.5 cm, how much taller is Ali than Patrick? Please help me and tell me how you get the answer step by step. Thanks
We have three people, with three unknown heights. Call the respective heights of Linda, Patrick, and Ali $h_1,\,h_2,\text{ and }h_3$.
Since Linda's height is $\frac9{10}$ that of Patrick, we have
$h_1 = \frac9{10}\cdot h_2.$
Similarly, for Linda and Ali we have
$h_1 = \frac34\cdot h_3.$
But we further know that the sum of all three heights is 418.5 cm. This gives:
$h_1 + h_2 + h_3 = 418.5\text{ cm}.$
You now have three equations with three unknowns. Can you solve this system?
3. Let $L$ = Linda, $P$ = Patrick, $A$ = Ali.
$L=\frac{9}{10}P, L = \frac{3}{4}A$
we have
$P=\frac{10}{9}L, \quad A=\frac{4}{3}L$
$P+A+L=418.5$
now just replace the $P$ and $A$ as expressed with $L$ above, and you'll find the height of Linda first, then the others.
4. i still don't know how to get h2? can you tell me please?
5. Originally Posted by aznmartinjai
i still don't know how to get h2? can you tell me please?
If you were able to find $h_1$ and $h_3$, then you can simply substitute them into the third equation and solve for $h_2$.
6. so h1 h3 are the same number?
7. Originally Posted by aznmartinjai
so h1 h3 are the same number?
No, they should be distinct. What values did you get? Show some of your work.
8. i dont know how you get h1....
9. h1 = 9/10 x 50 = 45 and h2 = 3/4 x 60 = 45, and yet you say they're distinct. I wonder if you can tell me so I can study it and better understand the problem.
10. Originally Posted by aznmartinjai
i dont know how you get h1....
Substitute equation (2) into equation (1) to get h1 in terms of h3.
Substitute this expression for h1 and the expression for h2 into equation (3) and solve for h3.
Use the value of h3 to get the value of h2.
Use the value of h2 to get the value of h1.
11. Originally Posted by aznmartinjai
h1 = 9/10 x 50 = 45 and h2 = 3/4 x 60 = 45, and yet you say they're distinct.
Hold on, I just realized that I misread the problem. I thought it said Patrick (not Linda) was $\frac34$ as tall as Ali. Let me correct my post.
Edit: Alright, the values should still be distinct. Where did you get the 50 and the 60 in your two equations above?
12. is this right?
L = 121.5 cm
P = 135 cm
A = 162 cm
A - P = 27 cm
13. Originally Posted by aznmartinjai
is this right?
L = 121.5 cm
P = 135 cm
A = 162 cm
A - P = 27 cm
Correct!
14. ty recknor, and also what was your step-by-step to the answer? I also want to see how you did it
15. Originally Posted by aznmartinjai
ty recknor, and also what was your step-by-step to the answer? I also want to see how you did it
From the three equations in my first post (now corrected), I just used simple substitution to solve the system: I solved the first two equations for $h_2$ and $h_3,$ respectively, and then substituted into the third equation, which could then be solved for $h_1$.
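For completeness, here is that substitution written out (just a recap of the solution above, using the thread's equations):

$h_2 = \frac{10}{9}h_1, \qquad h_3 = \frac{4}{3}h_1$

$h_1 + \frac{10}{9}h_1 + \frac{4}{3}h_1 = \frac{31}{9}h_1 = 418.5 \implies h_1 = 121.5$

so $h_2 = 135$, $h_3 = 162$, and $h_3 - h_2 = 162 - 135 = 27$ cm, matching post 12.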
|
# Catalogue Display
## Complex Variables: An Introduction
Catalogue Information
Dewey Class: 515
Title: Complex Variables [EBook]: An Introduction / by Carlos A. Berenstein, Roger Gay
Author: Berenstein, Carlos A.
Added Personal Name: Gay, Roger, author
Other Name(s): SpringerLink (Online service)
Publication: New York, NY: Springer New York, 1991
Physical Details: XII, 652 p.: online resource
Series: Graduate Texts in Mathematics, ISSN 0072-5285; 125
ISBN: 9781461230243
Summary Note: Textbooks, even excellent ones, are a reflection of their times. Form and content of books depend on what the students know already, what they are expected to learn, how the subject matter is regarded in relation to other divisions of mathematics, and even how fashionable the subject matter is. It is thus not surprising that we no longer use such masterpieces as Hurwitz and Courant's Funktionentheorie or Jordan's Cours d'Analyse in our courses. The last two decades have seen a significant change in the techniques used in the theory of functions of one complex variable. The important role played by the inhomogeneous Cauchy-Riemann equation in current research has led to the reunification, at least in their spirit, of complex analysis in one and in several variables. We say reunification since we think that Weierstrass, Poincaré, and others (in contrast to many of our students) did not consider them to be entirely separate subjects. Indeed, not only complex analysis in several variables, but also number theory, harmonic analysis, and other branches of mathematics, both pure and applied, have required a reconsideration of analytic continuation, ordinary differential equations in the complex domain, asymptotic analysis, iteration of holomorphic functions, and many other subjects from the classic theory of functions of one complex variable. This ongoing reconsideration led us to think that a textbook incorporating some of these new perspectives and techniques had to be written.
Contents Note: 1 Topology of the Complex Plane and Holomorphic Functions -- 1.1. Some Linear Algebra and Differential Calculus -- 1.2. Differential Forms on an Open Subset Ω of ℂ -- 1.3. Partitions of Unity -- 1.4. Regular Boundaries -- 1.5. Integration of Differential Forms of Degree 2. The Stokes Formula -- 1.6. Homotopy. Fundamental Group -- 1.7. Integration of Closed 1-Forms Along Continuous Paths -- 1.8. Index of a Loop -- 1.9. Homology -- 1.10. Residues -- 1.11. Holomorphic Functions -- 2 Analytic Properties of Holomorphic Functions -- 2.1. Integral Representation Formulas -- 2.2. The Fréchet Space ?(?) -- 2.3. Holomorphic Maps -- 2.4. Isolated Singularities and Residues -- 2.5. Residues and the Computation of Definite Integrals -- 2.6. Other Applications of the Residue Theorem -- 2.7. The Area Theorem -- 2.8. Conformal Mappings -- 3 The $\bar\partial$-Equation -- 3.1. Runge's Theorem -- 3.2. Mittag-Leffler's Theorem -- 3.3. The Weierstrass Theorem -- 3.4. An Interpolation Theorem -- 3.5. Closed Ideals in ?(?) -- 3.6. The Operator $\frac{\partial}{\partial\bar z}$ Acting on Distributions -- 3.7. Mergelyan's Theorem -- 3.8. Short Survey of the Theory of Distributions. Their Relation to the Theory of Residues -- 4 Harmonic and Subharmonic Functions -- 4.1. Introduction -- 4.2. A Remark on the Theory of Integration -- 4.3. Harmonic Functions -- 4.4. Subharmonic Functions -- 4.5. Order and Type of Subharmonic Functions in ℂ -- 4.6. Integral Representations -- 4.7. Green Functions and Harmonic Measure -- 4.8. Smoothness up to the Boundary of Biholomorphic Mappings -- 4.9. Introduction to Potential Theory -- 5 Analytic Continuation and Singularities -- 5.1. Introduction -- 5.2. Elementary Study of Singularities and Dirichlet Series -- 5.3. A Brief Study of the Functions Γ and ζ -- 5.4. Covering Spaces -- 5.5. Riemann Surfaces -- 5.6. The Sheaf of Germs of Holomorphic Functions -- 5.7. Cocycles -- 5.8. Group Actions and Covering Spaces -- 5.9. Galois Coverings -- 5.10. The Exact Sequence of a Galois Covering -- 5.11. Universal Covering Space -- 5.12. Algebraic Functions, I -- 5.13. Algebraic Functions, II -- 5.14. The Periods of a Differential Form -- 5.15. Linear Differential Equations -- 5.16. The Index of Differential Operators -- References -- Notation and Selected Terminology.
System Details Note: Online access to this digital book is restricted to subscription institutions through IP address (only for SISSA internal users)
Internet Site: http://dx.doi.org/10.1007/978-1-4612-3024-3
|
# Solve RMQ (Range Minimum Query) by finding LCA (Lowest Common Ancestor)
Given an array A[0..N-1], for each query of the form [L, R] we want to find the minimum in the array A from position L to position R. We will assume that the array A doesn't change in the process, i.e. this article describes a solution to the static RMQ problem.
Here is a description of an asymptotically optimal solution. It stands apart from the other solutions to the RMQ problem, since it takes a very different route: it reduces the RMQ problem to the LCA problem, and then uses the Farach-Colton and Bender algorithm, which reduces the LCA problem back to a specialized RMQ problem and solves that.
## 1. Algorithm
We construct a Cartesian tree from the array A. A Cartesian tree of an array A is a binary tree with the min-heap property (the value of a parent node is less than or equal to the values of its children) such that the in-order traversal of the tree visits the nodes in the same order as they appear in the array A.
In other words, a Cartesian tree is a recursive data structure. The array A will be partitioned into 3 parts: the prefix of the array up to the minimum, the minimum, and the remaining suffix. The root of the tree will be a node corresponding to the minimum element of the array A, the left subtree will be the Cartesian tree of the prefix, and the right subtree will be a Cartesian tree of the suffix.
In the following image you can see one array of length 10 and the corresponding Cartesian tree.
The range minimum query [l, r] is equivalent to the lowest common ancestor query [l', r'], where l' is the node corresponding to the element A[l] and r' the node corresponding to the element A[r]. Indeed, the node corresponding to the smallest element in the range has to be an ancestor of all nodes in the range, therefore also of l' and r'; this follows automatically from the min-heap property. It also has to be the lowest common ancestor, because otherwise l' and r' would both be in the left or in the right subtree, which gives a contradiction, since in such a case the minimum wouldn't even be in the range.
In the following image you can see the LCA queries for the RMQ queries [1, 3] and [5, 9]. In the first query the LCA of the nodes A[1] and A[3] is the node corresponding to A[2] which has the value 2, and in the second query the LCA of A[5] and A[9] is the node corresponding to A[8] which has the value 3.
Such a tree can be built in $O(N)$ time, and the Farach-Colton and Bender algorithm can preprocess the tree in $O(N)$ and answer each LCA query in $O(1)$.
## 2. Construction of a Cartesian tree
We will build the Cartesian tree by adding the elements one after another. In each step we maintain a valid Cartesian tree of all the processed elements. It is easy to see that adding an element A[i] can only change the nodes on the rightmost path – starting at the root and repeatedly taking the right child – of the tree. The subtree of the node with the smallest value that is greater than or equal to A[i] becomes the left subtree of A[i], and the tree rooted at A[i] becomes the new right subtree of the node with the biggest value smaller than A[i].
This can be implemented by using a stack to store the indices of the nodes on the rightmost path.
vector<int> parent(n, -1);  // parent[i] = parent of node i in the Cartesian tree
stack<int> s;               // indices of the current rightmost path, top = deepest
for (int i = 0; i < n; i++) {
    int last = -1;
    // pop every node whose value is >= A[i]; they will end up below A[i]
    while (!s.empty() && A[s.top()] >= A[i]) {
        last = s.top();
        s.pop();
    }
    // the remaining top (value < A[i]) becomes the parent of i
    if (!s.empty())
        parent[i] = s.top();
    // the last popped node roots the left subtree of i
    if (last >= 0)
        parent[last] = i;
    s.push(i);  // i is now the deepest node on the rightmost path
}
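To sanity-check the reduction, the LCA, and hence the range minimum, can also be found by a naive walk up the parent array (O(depth) per query; for testing only, not part of the algorithm above):

int lcaNaive(const vector<int>& parent, int u, int v) {
    // depth of a node = number of parent steps to the root
    auto depth = [&](int x) {
        int d = 0;
        while (parent[x] != -1) { x = parent[x]; d++; }
        return d;
    };
    int du = depth(u), dv = depth(v);
    // lift the deeper node to the same depth, then climb together
    while (du > dv) { u = parent[u]; du--; }
    while (dv > du) { v = parent[v]; dv--; }
    while (u != v) { u = parent[u]; v = parent[v]; }
    return u;  // A[lcaNaive(parent, l, r)] == minimum of A[l..r]
}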
|
# Iron Mapper: Trial 1
Status
Not open for further replies.
#### Arma
##### A Martian Hyena
Member
Can you handle the cutthroat and perilous trials set forth by the judges?
This is IRON MAPPER: Relic Castle's mapping competition!
Trial 1. City Life.
Deadline: Wednesday, May 22nd
SEASON 4 DISCUSSION THREAD
Iron Mapper
Iron Mapper is a mapping competition on Relic Castle. Each mapping competition, otherwise known as a "trial", will have a deadline clearly stated, and you will have this time to complete and submit a map according to the theme and criteria of the trial. Once all submitted maps are judged after the deadline (this depends on the volume of entries, usually a few days), the judges will post the results, and prepare for the next trial.
For more information (overall rules, how judging works, a list of trials) please visit this thread.
Trial 1. City Map
Deadline: Wednesday, May 22nd 2019
To start off season 4 in a big fashion, let's map a city. Lots of people, skyscrapers, and maybe a mall. It might even contain a park or an industrial district. There are many different cities, and your job is to map whichever one you want to see.
Have fun mapping!
The following criteria must be met:
• Your map can be made with any tileset you want provided the tiles are public. You must include credits for the tiles with your submission if they aren't your own.
• Your map cannot exceed 120x120 tiles.
Your judges for this trial are:
-Arma
-Leilou
A tip from the judge:
"A city map is not defined by its size but by its feel! If the map is too big it’s usually not much fun to play!"
Code:
[b]Trial:[/b] I: City
[b]Map Name:[/b]
[b]Map:[/b]
[b]Critique Requested:[/b]
[b]Credits:[/b]
[b]Notes:[/b]
Last edited:
#### Sturdy Ghost
##### Rookie
Member
Trial 1: City Life
Map Name: Greenville City
Map:
Critique Requested: Sure, I can always get better! :)
Credits:
Calis Project Tileset - Alucus, Dewitty, EVoLiNa X, Minorthreat and nSora
Other Misc. Tiles - Magiscarf, Alistair, Trike Phalon,kymotonian,speedialga, Spacemotion, alucus, pokemon-diamond, kizemaru-kurunosuke, picday, thurpok, ulti remospriter, dewitty, minorthreat0987, tyranitardark, princelegendario, kaitodesign, wesleyfg, boomxbig, takaiofthefire, ThatsSoWitty, RedKnightX
Notes: It is intended as a school-focused area. The big dark blue building is the main administration building for the school, the two lighter towers to the north are intended as the various classrooms and such, the towers to the right make up the residential area with a large shopping tower/mall and recreational park nearby!
Last edited:
#### WittyWhiscash
##### Rookie
Member
Trial: I: City
Map Name: Oakmont
Map:
Critique Requested: Always happy to receive criticism on my work. Helps me make better maps.
Credits:
Tiles: Blackvolution, Magiscarf (a little modification by me for transparent shadows)
Notes: This was a fun map to make! Made in Tiled, Oakmont is a hedge city in the forest of Oaksworth. A few people call home to this quaint place, including a few shops as well. Unfortunately, they are quite far out from civilization in a county, and the county isn't being funded well due to political tensions, so they don't have a Pokecenter. Thankfully, there is a healer who takes residence and heals weary travelers and trainers.
#### kingpls
##### Rookie
Member
Trial: I: City
Map Name: Briston City
Map:
Critique Requested: It would be appreciated!
Credits:
Kyle-Dove, SailorVicious, WesleyFG, Dewitty, Zetavares852, XDinky, Newtiteuf, Alucus, Erma96, Hydrargirium, Poison-Master, Thedeadheroalistair, Shutwig, Asdsimone, Xxdevil, Steinnaples, Hek-el-grande, sylver1984, NikNak93, TeaAddiction, Cuddlesthefatcat, Magiscarf, Gigatom, The-Red-eX, ChaoticCherryCake, Phyromatical.
Notes:
This took quite the time to make, but was fun nonetheless!
Last edited:
#### Poq
##### Cooltrainer
Member
Trial: I: City
Map Name: Mercandon
Map:
Critique Requested: Certainly!
Credits:
Made using base Essentials tiles and OWs.
Notes: Mercandon is a city with a little bit of everything: a thriving commerce center, a residential district near a hillside park, a thriving manufacturing and shipping region, an arts and leisure district, a boardwalk beside the beach, and the headquarters of a business whose profit margins are just a little too profitable to be believed (or legal...).
This was a fun map to work on. I really wanted to push myself, both by using just default tiles and by packing as much as possible into one map. From the original games, I took inspiration from the likes of Lilycove and Slateport with a little bit of Castelia and Malie City for flavor.
Last edited:
#### Scyl
##### Ironically Unironic Edgelord Extraordinaire
Member
Trial: I: City
Map Name: Arkadiya Port Island.
Map:
Critique Requested: Dex Number 360, Wynaut?
Credits:
Ame and the Pokemon Reborn Team, Jan and the Pokemon Rejuvenation team, Leilou's Emerald Outside Tileset, Base Essentials Tileset(s).
And then for the Pokemon OWs:
-Sparta.
-princess-pheonix
-LunarDusk
-Neo-Spriteman
-MissingLukey
-help-14
-Kymoyonian
-cSc-A7X
-2and2makes5
-Pokegirl4ever
-Fernandojl
-Silver-Skies
-TyranitarDark
-Getsuei-H
-Kid1513
-Milomilotic11
-Kyt666
-kdiamo11
-Chocosrawlooid
-Syledude
-Gallanty
-Gizamimi-Pichu
-Zyon17
Notes: Arkadiya Port Island, a bustling metropolis where humans and Pokémon have come to enjoy the finer things in life, like overcrowded cities, Murkrow flying around everywhere, a vicious, blinding fog, unconscious old men, and, the pinnacle of luxury itself, standing on benches. So, like, I started this a week back or so and stopped for reasons I do not know. I got bored on this fateful night...morning, when I'm uploading this, and decided to finish it. I would have made the island a bit more complex in layout, but like, shore tiles hate me and I hate them, so like, rip my island. The darker color scheme for the tiles was selected to give the map more of that modern urban city feel. Anyway, gonna go pass out now, laters.
Last edited:
#### AngelusMortis
##### Novice
Member
Trial: I: City
Map Name: Gardenia City
Map:
Critique Requested: Any/all critique allowed/wanted.
Credits: Made using the Base Tileset and OWs in Essentials.
Notes: Decided to support this event and map the next city in a fangame I am working on. Gardenia City is the hometown of the Professor of my region, as well as the region's main police branch office and the 3rd gym in the game. Y'all can tear me apart as you see fit. Cheers!
#### TheLightSword
##### does anyone read this?
Member
Trial: I: City
Map Name: East Doneun city
Map:
Critique Requested: yes please!
Credits:
BoOmxBiG: https://www.deviantart.com/boomxbig/art/Custom-Tileset-288009217
ChaoticCherryCake: https://www.deviantart.com/chaoticcherrycake/art/Pokemon-Tileset-From-Public-Tiles-358379026
Notes: For a while I've been thinking of creating some custom tiles in a GS kind of style but never got to it. This trial motivated me to do so, but because of the short time we've got I didn't make that many diverse buildings and such.
Here is the tileset i used:
#### NoodlesButt
##### Addicted to Jams
Member
Trial 1: City
Map Name: Nerium City
Map:
Critique Requested: Yes
Credits: Spherical Ice and Alistair for tiles
Notes: Nerium City is home to the region's Poison-type gym and Experimental Potion Laboratory. The city sits high up on a cliff that looms over the sea. To the north lies a forest that houses poisonous wildlife, like Pokemon and mushrooms. The darker shade of trees seen around Nerium gets more and more common the deeper you go into the forest.
Tiles:
#### Arma
##### A Martian Hyena
Member
Alright, since we're well past the deadline, I'll be closing this thread for now! We're going to review all these lovely maps, and post the results in a few days!
#### Arma
##### A Martian Hyena
Member
Trial 1. City Life
RESULTS
Submissions: 8
Deadline: Wednesday, May 22nd
SEASON 4 DISCUSSION THREAD
Thoughts
Thanks for your patience, and sorry for the wait. This never should have taken as long as it did, but I'm glad to finally be able to present you the results.
Even with a smaller number of entries, I'm really happy with the variation of maps you all posted! Cities are usually among the more difficult maps to make, and so I hope our feedback is useful to you all!
Trial Champion: kingpls!
This city absolutely blew us away! Despite using a simple layout with little room for exploration, you managed to make the city look really interesting. The river does a great job breaking up the monotony of the snow covered map. It looks amazing too.
Congratulations, we hope to see more from you in the future!
Critiques
-The layout of the city looks really nice!
-The right side of the city could use a little more decoration though, maybe add some flowers between the buildings there.
-The park on the bottom left looks really messy, it’d look a lot better if you’d focus on one thing there. Like either flowers or the lake. There’s also too many random items lying around.
-The layout of the city is quite good; however, it feels like you could compress it a little and remove empty spaces.
-While the parks are nicely decorated, the streets look quite barren, especially the area around the well. Try keeping a consistent density of decorations throughout the map.
-The houses on the pavement look a bit off; the added contrast from the grass would make them look a lot better.
-While we don’t judge maps for the tiles used, it does feel weird that the special buildings have zero snow on top of them.
-The different districts of the city look amazing! Especially the skyscrapers on the left side.
-The route on the top right looks a bit empty, there’s a long path seemingly without any events.
-While there's more than enough decoration on the map, we feel that compressing the map would make things much clearer to the player; after all, there's only so much that fits on the screen at once.
-The one-tile-wide paths look really awkward.
-While we understand you’re using the default tiles, we think you should consider recoloring some of the buildings so they’d fit better together.
-NPCs and Pokémon, and also the little squares, are placed in a way that tells individual stories of life in the city, contrasting with the overall layout of a dark, boring city.
-Every generic building being exactly the same makes the map a bit monotonous; even little things like making some of them a bit taller, or removing the ventilation unit from some of the roofs, would make it look more diverse.
-While we get that the Pokecenter is placed close to the gym, it’s quite a walk from the map’s exit on the left and the interesting buildings near the top. Perhaps consider swapping it with one of the generic buildings.
-The main segment of the city looks really cluttered. If you were to have more well-defined streets, you'd be able to both remove all of the clutter and get rid of the empty spaces between all the buildings.
-It’s nice how you can reach the park pretty easily from all 3 of the map’s exits.
-The lack of structure really makes this feel more like a maze than a city.
-There’s too many types of special buildings. There’s only so many features you can pack in your cities, perhaps it would be a better idea to distribute them more equally throughout the region?
-Nice usage of buildings as borders and the organised roads to sell the big city image.
-The map is too big for what it offers. There are about 4-5 entrances in each quarter of the map. Ending the map at the right of the park and moving the entrances to the other parts of the map would have made it easier to traverse, harder to get lost in, and a lot livelier.
-The city is really well decorated!
-However, the three horizontal streets really make the map a bit boring. If there was a bit more variety in the vertical placement of buildings, the map would look a lot better.
-The top street looks quite empty; we feel you could decrease the width of the city a little and add some features to that street instead.
Status
Not open for further replies.
|
# zbMATH — the first resource for mathematics
Pseudocompactness and the cozero part of a frame. (English) Zbl 0881.54018
Summary: A characterization of the cozero elements of a frame, without reference to the reals, is given and is used to obtain a characterization of pseudocompactness also independent of the reals. Applications are made to the congruence frame of a $\sigma$-frame and to Alexandroff spaces.
##### MSC:
54C50 Topology of special sets defined by functions
06B10 Lattice ideals, congruence relations
54D20 Noncompact covering properties (paracompact, Lindelöf, etc.)
|
It looks like the House of Representatives has endorsed former Edo State Governor, Comrade Adams Oshiomole, for National Chairman of the All Progressives Congress.
A tweet by the House Majority Leader, Femi Gbajabiamila, showed the lawmakers' position on Oshiomole's ambition.
Gbajabiamila tweeted: "We sat down for over two hours with Comrade Oshiomole and had a heart-to-heart discussion and we are all on the same page.
“As a result of that solicitation, we as the @OfficialAPCNg caucus in the House of Representatives believe in him. We have more or less endorsed him”.
|
# Cartesian coordinates problem
1. Jul 8, 2009
### jeff1evesque
1. The problem statement, all variables and given/known data
Solid horn obtained by rotating the region $x=0$, $0 \leq y \leq 4$, $0 \leq z \leq \frac{1}{8}y^{2}$ around the y-axis, forming circles of radius $\frac{1}{8}y^2$. Set up the integral $dz\,dx\,dy$.
2. Relevant equations
Cartesian coordinates.
3. The attempt at a solution
I don't understand how the z-limits are $$\pm \sqrt{\frac{y^4}{64} - x^2}$$? I understand that the z limits must involve x and y, but cannot come up with the latter conclusion.
Thanks
Last edited: Jul 8, 2009
2. Jul 8, 2009
### Ja4Coltrane
Re: Integrating
I think you had some typos in the original post; it wasn't clear what you meant. As written, it read as "0 is less than or equal to $y^2/8$", which gives no information, since $y^2$ is greater than or equal to zero for all real numbers y.
3. Jul 8, 2009
### jeff1evesque
Re: Integrating
Well I suppose without looking at a picture, that may be a reasonable opinion- and hard to interpret. I've attached a picture, if that helps any.
Thanks a lot
JL
Attached Files: pic.bmp (297 KB, 64 views)
Last edited: Jul 8, 2009
4. Jul 8, 2009
### jeff1evesque
Re: Integrating
I guess I am having difficulty determining the limits of this particular integration (in cartesian) along the z-axis.
5. Jul 8, 2009
### Staff: Mentor
Re: Integrating
It sometimes takes several hours for an attachment file to be approved, so if you could describe the solid in words, that would be helpful.
My guess as to how you have described the solid so far is that the curve in the x-y plane, $x = y^2/8$, is revolved around the y-axis to form a solid. And you want the portion of this solid between the planes y = 0 and y = 4.
Is this a reasonable description?
6. Jul 8, 2009
### jeff1evesque
Re: Integrating
Yup, that sounds reasonable. Basically, if you could picture the end of a horn [perhaps a trumpet, beginning as a point at the origin and expanding out along the y-axis] with the y-axis going through the center, that's what this image looks like. At y = 4, the "horn" has a height of z = 2, which obviously rotates around the y-axis.
7. Jul 9, 2009
### jeff1evesque
Re: Integrating
I actually solved this problem this morning with some help from others.
Thanks,
JL
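For anyone finding this thread later, a sketch of where those limits come from (consistent with the description above): rotating the region $x=0$, $0 \le y \le 4$, $0 \le z \le \frac{1}{8}y^2$ around the y-axis sweeps out, for each fixed $y$, a disk of radius $\frac{1}{8}y^2$:

$x^2 + z^2 \le \left(\frac{y^2}{8}\right)^2 = \frac{y^4}{64}$

Solving for $z$ at fixed $x$ and $y$ gives $-\sqrt{\frac{y^4}{64} - x^2} \le z \le \sqrt{\frac{y^4}{64} - x^2}$, which are exactly the z-limits asked about, and the full integral is

$V = \int_0^4 \int_{-y^2/8}^{y^2/8} \int_{-\sqrt{y^4/64 - x^2}}^{\sqrt{y^4/64 - x^2}} dz\, dx\, dy$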
|
Transactions of the Indian Institute of Metals: hybrid journal (may contain Open Access articles). SJR: 0.361; CiteScore: 1. ISSN (Print) 0972-2815; ISSN (Online) 0975-1645. Published by Springer-Verlag.
• Characterization of the Welding Zone of Automotive Sheets of Different
Thickness (DP600 and DP800) Joined by Resistance Spot Welding
Abstract: The automotive industry has gained great importance in recent years. In line with this, DP600 and DP800 dual-phase steels and the electrical resistance spot welding method were used in this study. During the experimental trials, different welding currents (6, 7, and 8 kA) were selected and all other welding parameters were kept constant. The effects of the welding parameters on microstructure, hardness, tensile-shear, and cross-tensile strength were analyzed. In the phase measurements, 27.06–29.97% martensite and 70.73–73.85% ferrite phases were found. When the hardness values in the HAZ regions were examined, it was seen that the highest hardness values were 356 ± 5 HV in the DP600 and 451 ± 5 HV in the DP800 with a current intensity of 6 kA. Consequently, it was determined that tensile-shear and cross-tensile strengths had increased in parallel with the increase in the welding current, and the highest values were determined as 8 kA–15.91 kN in tensile shear and 8 kA–4.91 kN in cross-tensile strength.
PubDate: 2022-01-21
• Optimization of Process Parameters in Double-Pulse MIG Welding of Inconel
617-SS 304 H
Abstract: Objectives: In this study, an effort is made to predict the optimized parameter combination in double-pulse MIG welding of stainless steel (SS 304H) and Inconel 617 (IN 617). Methods: Butt joints were made between IN 617 and SS 304H using ER308H filler wire with a diameter of 1.2 mm. Welding trials were done based on a Taguchi L9 array with wire feed speed, frequency and amplitude as input parameters. The weld quality was evaluated by measuring bead width, depth of penetration, tensile strength and impact strength. The optimized parameter combination was identified using two multiobjective techniques, viz. grey relational analysis (GRA) and the technique for order of preference by similarity to ideal solution (TOPSIS). Conclusions: GRA and TOPSIS gave similar results with regard to optimized parameter combinations. Analysis of variance (ANOVA) was applied to find the most influential parameter on weld quality; from ANOVA, the amplitude was identified as the most influential parameter. The presence of M23C6 and M6C precipitates in the fusion zone helped in strengthening the weld. At higher wire feed speed, lack of side-wall fusion was noted on the SS 304H side of the weld.
PubDate: 2022-01-21
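For readers new to these methods, the grey relational coefficient at the heart of GRA is conventionally defined as (a textbook definition, not quoted from the paper):

$\xi_i(k) = \frac{\Delta_{\min} + \zeta\,\Delta_{\max}}{\Delta_i(k) + \zeta\,\Delta_{\max}}$

where $\Delta_i(k)$ is the absolute deviation of the normalized $i$-th alternative from the reference sequence on response $k$, $\Delta_{\min}$ and $\Delta_{\max}$ are the global minimum and maximum deviations, and $\zeta \in (0,1]$ (commonly 0.5) is the distinguishing coefficient; the grey relational grade used for ranking is the (weighted) average of $\xi_i(k)$ over all responses.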
• Optimal Control of Surface Crack in Microalloyed Steel with Big Stroke
Liquid Core Reduction Process
Abstract: In this paper, a new idea of optimizing and improving the surface crack of microalloyed steel slab by using big stroke liquid core reduction (B-LCR) process was studied and analyzed by combining numerical simulation with industrial test. The results show that the stress–strain distribution is different in different parts of the slab during liquid core reduction; the stress at the corner of the slab is the largest, and the stress at the wide surface is greater than that at the narrow surface. When the total reduction is increased to 35 mm by using B-LCR process, the equivalent strain on the interior of the slab is 0.566%. It is consistent with the maximum additional strain value (calculated value) of 0.552% on the center of the slab, and no pressure crack will occur. The industrial test results show that B-LCR process can improve the internal and surface quality of low content alloying element steel obviously. But with the gradual increase in microalloying element content, B-LCR process cannot completely solve the surface crack defect. It can only play a role in reducing defects.
PubDate: 2022-01-19
• Optimization of Column Flotation for Fine Coal Using Taguchi Method
Abstract: The efficacy of column flotation in producing low-ash clean coal for the metallurgical industries is explored in this work. The effect of four critical operating parameters, collector dosage (A), frother dosage (B), and the superficial velocities of air (C) and wash water (D), on the combustibles recovery (CR) and ash rejection (AR) was studied. Experiments were carried out based on the Taguchi orthogonal array of experimental design ($L_9$ OA), and the signal-to-noise (S/N) ratios were calculated for both responses. In this study, the principal parameters, their rank of influence on the responses, and the optimum conditions to maximize the responses were also identified. Column flotation produced clean coal with combustibles recovery in the range of 66.94–80.04%, while keeping the ash rejection in the range of 55.91–73.83%. Regression models were developed to predict the separation efficiency, ash rejection and combustibles recovery in the column flotation, and the average error in prediction of the separation efficiency was ± 6%.
PubDate: 2022-01-19
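As background, the larger-the-better signal-to-noise ratio conventionally used in Taguchi analysis for responses to be maximized, such as combustibles recovery and ash rejection here, is (standard definition, not quoted from the paper):

$S/N = -10\log_{10}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^2}\right)$

where $y_1, \dots, y_n$ are the repeated observations of the response for a given run; the parameter level with the highest mean S/N is taken as optimal.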
• The Improvement in Mechanical Properties and Strengthening Mechanism of
The New Type of Cast Aluminum Alloy with Low Silicon Content for
Automotive Purposes
Abstract: The effects of different Si and Cr contents on the microstructure and mechanical properties of a new type of cast aluminum alloy with low silicon content were studied by means of hardness testing, tensile testing, metallographic preparation, SEM, EDS and DSC. The results showed that as the Si content increased gradually, the hardness and tensile strength first increased and then decreased, reaching their maximum at a Si content of 3.5%. Microstructure observation showed that at 3.5% Si the microstructure was the finest and densest, and the secondary dendrite arm spacing (SDAS) of the alloy reached its minimum value. As the Cr content increased gradually, the tensile strength and elongation showed a gradually increasing trend, with the maximum obtained at 0.5%. The best mechanical properties were obtained at 3.5% Si and 0.5% Cr; at this composition the microstructure of the alloy was fine, dense and uniform, and the Mg2Si strengthening phase precipitated during heat treatment. Cr contributes solid-solution, grain-refinement and dispersion strengthening. The tensile strength, yield strength and elongation of the new aluminum alloy were 350.7 MPa, 303.0 MPa and 8.56%, respectively, which gives it obvious advantages over traditional cast Al-Si alloys.
PubDate: 2022-01-18
• Characterization for Identification of Possible Beneficiation Strategies
for Low-Grade PGE Ores from Mesoarchean Boula-Nuasahi Igneous Complex,
Singhbhum Craton, India
Abstract: Comprehensive characterization studies are carried out to develop a suitable beneficiation process for treating low-grade platinum group element (PGE) ore from the Mesoarchean Boula-Nuasahi Igneous Complex in the southern part of the Singhbhum Craton, eastern India. The PGE mineralization is magmatic and hydrothermal in origin and is found in association with NNW–SSE-trending mafic–ultramafic igneous rocks in the tectonic brecciated zone toward the eastern part of the complex. The PGE-bearing minerals are very fine grained (<1–50 µm) and exhibit heterogeneous distribution, with a total grade of 2.094 g/t. Chalcopyrite, pyrrhotite, pentlandite, pyrite, etc. are found as discrete grains, while the altered silicates (tremolite, riebeckite, enstatite, etc.) are the main gangue minerals present in this ore. The common PGE minerals, such as sperrylite, sudburyite, braggite, laurite, and testibiopalladite, are identified, and individual elemental distribution is analyzed by electron probe micro-analyzer mapping. Based on the studies, different flow schemes are proposed and discussed.
PubDate: 2022-01-17
• CO2 Emission of CO2 Injection into Blast Furnace
Abstract: As an energy-intensive industry, the iron and steel industry has been facing the challenges associated with reducing CO2 emissions. Therefore, metallurgical workers have been examining whether the steel industry can absorb some of the CO2 emissions. At high temperatures, CO2 acts as an oxidizing agent, reacting with the carbon in the tuyere raceway of the blast furnace to generate twice the volume of CO, improving the degree of indirect reduction and increasing the CO concentration in the top gas. In this study, metallurgical thermodynamics is used as the basis for constructing mathematical models of the mass and energy balance of a blast furnace and of the heat balance of a hot blast stove. Based on these models, the CO2 emission of CO2 injection into the blast furnace is analyzed using the blast furnace CO2 emission model. Because of the endothermic reaction between CO2 and carbon, thermal compensation for the increases in the fuel ratio and oxygen enrichment is required. As the CO2 enrichment rate increases, the input of CO2 emission increases. However, as the CO concentration in the top gas increases and the top gas required by the hot blast stove decreases, the CO2 emission reduction at the output increases. When the CO2 enrichment limit is reached, the CO2 emission at the input increases by 530.97 kg/(tHM), the CO2 emission reduction at the output increases by 544.65 kg/(tHM), and the net CO2 emission decreases by 13.68 kg/(tHM). The high-quality top gas can replace a portion of the role of gas producers and reduce the CO2 emission of gas producers by 38.84 kg/(tHM). It can also prevent too much low-quality top gas from being released.
PubDate: 2022-01-17
• Effects of Metal and Inorganic Additives on the Tribological Performances
of Nickel-Based Composites with Rice Husk Ceramic Particles
Abstract: Rice husk ceramic (RHC) can be used as a reinforcing agent in metal matrix composites (MMCs) to improve their tribological behavior because of its remarkable mechanical and tribological properties. This research scrutinizes the role of RHC in enhancing the tribological properties of Ni-MMC (nickel metal matrix composite). Ni-MMCs with RHC, Al, Cu, MoS2, and graphite additives in different contents were prepared by the powder metallurgy method. The wear and friction mechanism of Ni-MMC was investigated using a ball-on-disk tribometer at room temperature. SEM/EDS and optical microscopy were used to examine the wear properties, the distribution of reinforcement, and the microstructural phases, respectively. Results indicated that the addition of RHC particles can strengthen the anti-wear and friction-reduction properties of Ni-MMC. The addition of 3 wt% RHC resulted in the lowest values for the coefficient of friction and wear rate. With further addition of MoS2 and graphite alongside 3 wt% RHC, both the wear rate and the coefficient of friction decrease further. At the same time, the values depend on the content of elements like aluminum and copper in the MMCs.
PubDate: 2022-01-17
• A Study on the Suitability of Mahanadi Riverbed Sand as an Alternative to
Silica Sand for Indian Foundry Industries
Abstract: The availability of silica sand is diminishing and, as a result, the price of silica sand is increasing. Therefore, there is a need to evaluate the suitability of local riverbed sand for nonferrous alloy casting in Indian foundries. The present investigation focuses on the suitability of Mahanadi Riverbed Sand (MRS) for nonferrous alloy casting. The sand particle size, chemical composition, density, and fusion point are evaluated and found suitable for nonferrous alloy casting. The mold ingredients are designed and optimized through design of experiments and response surface methodology, respectively. It is found that 11.5 wt.% of bentonite clay, 5 wt.% of moisture, and 3.5 wt.% of coal dust are suitable to achieve the desired sand mold properties. The fusion point of MRS shows that it is not suitable for ferrous casting. Therefore, aluminum alloy (A356) casting is performed using MRS and silica sand molds. The as-cast surface roughness (Ra), hardness, and microstructures of the A356 alloy casting are evaluated and compared between MRS and silica sand mold castings. It is observed that the mechanical properties of MRS mold casting are better than those of silica sand mold casting, and MRS may be used as an alternative to silica sand for aluminum alloy casting in Indian foundry industries.
PubDate: 2022-01-16
• Effect of Aging Temperature on Microstructure and Tensile Properties of
Inconel 718 Fabricated by Selective Laser Melting
Abstract: Inconel 718 superalloy was prepared by selective laser melting. The microstructure, precipitated phases and tensile properties of Inconel 718 at different aging temperatures were tested by optical microscopy, scanning electron microscopy and a tensile testing machine. The results show that the Inconel 718 alloy has fewer defects and higher density. With the increase of aging temperature, the δ phase gradually dissolves and the grains coarsen gradually. Compared with 650 °C, the strain hardening rate of the material increases when the aging temperature reaches 750 °C, the tensile strength can reach 1410.94 MPa, and the precipitation strengthening of the γ" phase plays a dominant role. When the aging temperature exceeds 750 °C, the strain hardening rate of the material is further increased and the solid solution strengthening effect is gradually enhanced.
PubDate: 2022-01-16
• Effect of Nano-ZrO2 Additions on Fabrication of ZrO2/ZE41 Surface Composites by Friction Stir Processing
Abstract: The friction stir processing (FSP) technique was used to fabricate surface metal matrix composites of the newly commercialized ZE41 rare-earth magnesium alloy (in cast form) using nano-ZrO2 reinforcement particles. The volume percentage of reinforcement particles was varied as 4N% (where N ∈ {1, 2, 3, 4}) in the matrix. The results of microstructural examinations revealed the refinement of grains from 113 to 0.7 µm. The intermetallic compound (Mg7Zn3RE) present at the grain boundaries was seen to be distributed as particles after FSP, suggesting the formation of fine grains. The presence of reinforcement particles and the evolution of fine grains in the stir zone lead to an increase in the mechanical properties of the fabricated composites. The microhardness increased from 70 HV (ZE41) to 119 HV (ZE41-16% ZrO2 MMC). The tensile strength was observed to increase from 154.5 MPa (ZE41) to 195.5 MPa (ZE41-12% ZrO2 MMC). From the present study, it was learnt that the addition of nano-ZrO2 particles coupled with optimized FSP parameters led to the formation of defect-free nanocomposites with increased mechanical properties.
PubDate: 2022-01-16
• Thermoluminescence Characterization of Rare Earth-Doped Yttrium Stannate
Phosphors Deposited by Friction Stir Processing
Abstract: The goal of this study is to examine the thermoluminescence (TL) properties of materials obtained by depositing rare-earth-ion-doped yttrium stannate (Y2Sn2O7) phosphor powders on metallic surfaces using the friction stir technique. Y2Sn2O7 phosphors doped with Tb, Eu and Dy rare-earth ions were produced by the solid-state reaction synthesis method, sintering at 1450 °C. TL properties of Y2Sn2O7:Eu, Dy and Tb were examined under X-ray irradiation, UV radiation (254 nm) and beta radiation. A thermoluminescence dosimeter (TLD) reader was used for recording the TL glow curves, and a linear heating rate of 2 K/s was selected. The thermoluminescence glow curves of the deposited phosphors showed prominent glow peaks at 225 °C for Y2Sn2O7:Tb, 185 °C and 295 °C for Y2Sn2O7:Eu, and 150 and 260 °C for Y2Sn2O7:Dy after irradiation with X-rays. This is a pioneering study of the TL properties of phosphor composites deposited by friction stir processing.
PubDate: 2022-01-15
• Electrowinning of Nickel and Cobalt from Non-circulated Sulfate
Electrolyte
Abstract: A laboratory investigation of a nickel and cobalt electrowinning procedure from dilute synthetic sulfate-based electrolytes (with concentrations less than 40 g/l) in a non-circulating apparatus has been carried out in the presence of boric acid as a buffering agent. From previous studies on the same subject, five levels were identified for the parameters, including initial pH, temperature, and current density. Experiments were carried out in two time groups. Deposit morphology, current efficiency, and specific energy consumption were discussed as the outcomes of the process. The results produced evidence that favorable conditions (low power consumption, appropriate deposit quality, and high efficiencies) were reached at 400 A/m2, 55 °C and initial pH 5 for nickel and at 600 A/m2, 70 °C, and initial pH 5 for cobalt electrolysis. In addition, higher current efficiencies (lower power consumptions) occurred widely above 400 A/m2 in nickel electrowinning, while for cobalt the high-efficiency regions were confined to 200 A/m2 and 600 A/m2.
PubDate: 2022-01-15
• Synthesis and Stability of Higher-Order Superstructure of Cubic Laves
Phase in an Al-Cu-Ta alloy
Abstract: In this work, we investigated the synthesis and stability of a ternary complex metallic alloy in the Al-Cu-Ta system, close to the composition Al56.6Cu3.9Ta39.5. The Rietveld refinement of X-ray diffraction data established its structure to be FCC (space group $F\overline{4}3m$; lattice parameter a = 45.339(7) Å), which has been correlated with the seventh-order superstructure (7×7×7) of an imaginary cubic Laves phase (whose lattice parameter is close to 6.5 Å). After annealing at 1400 K for 24 h, the as-cast alloy exhibited the same type of intermetallic phase with a slightly reduced lattice parameter (a = 45.2908(9) Å). However, no other major phases could be detected from the X-ray diffraction data in either the as-cast or annealed alloys. The microhardness tests showed a variation of hardness from 8.8 GPa to 7.2 GPa for annealed and 7.6 GPa to 4.8 GPa for as-cast samples. The variation of hardness with load, known as the indentation size effect, was found to be relatively less in the annealed sample compared to the as-cast one. No cracking is observed even at a load of 1000 g, implying the possibility of limited toughness of this complex phase. Due to high hardness and high-temperature stability, this phase appears to have potential for applications as a hard and tough coating material.
PubDate: 2022-01-15
• A Review on Processing of Electric Arc Furnace Dust (EAFD) by
Pyro-Metallurgical Processes
Abstract: In recent years, the recovery of valuable metals from iron-bearing solid waste from steel plants has been one of the most intensive research areas. Dumping of electric arc furnace dust is an environmental concern, and the recovery of valuable metals like iron, zinc and lead from EAFD, together with the safe disposal of the residue, has received considerable attention. The evolution of improved and new processes has motivated industries to engage actively in new and efficient methods to recycle EAFD. The presence of valuable elements and the increasing cost of waste incorporation are the motivating factors for the recycling of EAFD. In this article, the technologies that are in use to process EAFD are discussed, and their advantages and disadvantages are also highlighted.
PubDate: 2022-01-14
• Finite Element Analysis of Residual Stress Induced by Single-Pass
Ultrasonic Surface Rolling Process
Abstract: The formation mechanism of the residual stress field (RSF) and the coupling mechanisms of RSFs after a single-pass ultrasonic surface rolling process (USRP) are revealed. Based on simulation, the transient evolution and steady distribution of the compressive RSF are analyzed, and the influences of process parameters (i.e., the movement distance of the ball in a single period L, ball radius R, static force F and ultrasonic amplitude A) on residual stress (RS) after single-pass USRP are investigated. The results show that the plastic strain, which is induced by the extrusion of the target surface by the ball applying static force and ultrasonic vibration, mainly causes the compressive RS layer. A new RSF is induced each time the ball vibrates for a period; the RSF induced by each period then couples with several adjacent RSFs, and a compressive RS layer finally forms on the target surface. Furthermore, the surface compressive RS, the depth of maximum compressive RS and the depth of the compressive RS layer increase effectively with the increase in F. Decreasing R can increase the surface compressive RS while slightly decreasing the depth of the compressive RS layer. Increasing A can clearly increase the depth of the compressive RS layer when F is small, while it has less influence on the compressive RSF when F is large. For a relatively small L, the surface RS is compressive and achieves a uniform distribution.
PubDate: 2022-01-13
• Revisiting Quasicrystals for the Synthesis of 2D Metals
Abstract: Quasicrystals (QCs) are intermetallic materials with long-range ordering but lacking periodicity. They have attracted much interest due to their interesting structural complexity, unusual physical properties, and varied potential applications. The last four decades of research have demonstrated the existence of different forms of QC composed of several metallic and non-metallic systems, which have already been exploited in several applications. Recently, with the experimental realization of 2D (atomically thin) metals, the potential applications of these structures have significantly increased (such as flexible electronics, optoelectronics, electrocatalysis, strain sensors, nano-generators, innovative nano-electromechanical systems, and biomedical applications). As a result, high-quality 2D metals and alloys with engineered and tunable properties are in great demand. This review summarizes the recent advances in the synthesis of 2D single- and few-layered metals and alloys using quasicrystals. These structures present a large number of active sites for hydrogen evolution catalysis and other functional properties. In this review, we also highlight the possibility of using QCs to synthesize other 2D metals and to explore their physical and chemical properties.
PubDate: 2022-01-13
• Influence of Shielding Gas Composition and Pulsed Current Frequency on the
Microstructure of Austenitic Stainless Steel Welded by Pulsed Current GTAW
Abstract: In the current research, an attempt was made to study the effect of shielding gas composition and operational parameters on the microstructure of austenitic stainless steel. To fulfil this purpose, a 4 mm austenitic stainless steel sheet was provided. Pulsed current gas tungsten arc welding was carried out using nitrogen gas at volume percentages of 0, 0.5, 1, 2, 5, and 10 in addition to argon as the shielding gas, and under pulsed current with frequencies of 40, 80, 120, 160, and 200 Hz. After welding, samples were cut, and a metallographic study was done on the weld metal by optical microscope and scanning electron microscope. Furthermore, a ferrite-scope test was performed on the weld metal, and the results were evaluated. Mechanical properties were investigated, and the fracture surface was studied. Results showed that increasing the frequency of the pulsed current leads to a decrease in the ferrite amount in the microstructure, and the area fraction of ferrite decreased to 23% with increasing frequency. Moreover, it was proven that the addition of nitrogen to the shielding gas resulted in an ascending change in the heat input to the weld pool. Also, with the increase in weld-metal nitrogen, ferrite at frequencies of 40 and 200 Hz decreased to 56% and 62%, respectively. In addition, the morphology of the remaining ferrite transformed from mixed lacy-vermicular to completely vermicular. The hardness of the weld metal increased to 66% and 37% at the frequencies of 40 and 200 Hz, respectively. Similarly, the yield strength increased to 11% and 10% at the same frequencies.
PubDate: 2022-01-13
• Microstructure and Mechanical and Oxidation Properties of Multilayer
Aluminide Coatings Formed Over P91 Steel
Abstract: Iron aluminide-based oxidation-resistant and permeation barrier coatings were formed on P91 steel using pack cementation and a post-coating annealing treatment. Fe2Al5 layers of variable thickness were formed at coating temperatures in the range of 600–700 °C. The as-coated alloy annealed at 625–750 °C for different durations revealed the formation of a multilayered coating structure that consisted of FeAl2, FeAl and Fe(Al) phases of varying thicknesses. The Knoop hardness of the Fe2Al5 phase was found to be 700 HK. High-resolution nanoindentation mappings were carried out on heat-treated specimens to establish the structure–property interlink at the micron length scale. Hardness and modulus maps were generated using large arrays of nanoindentations at intervals of 2 µm. Scratch tests performed at varying loads revealed superior scratch resistance of FeAl as compared to the Fe2Al5 phase. Oxidation studies of the as-coated alloy under a pure O2 environment in the temperature range of 700–1000 °C for 8 h revealed parabolic growth of the alumina scale.
PubDate: 2022-01-12
• Friction and Wear Characterization of Nanocomposites Based on Si3N4
Reinforced with SiC, Mo, MoSi2 Nanoparticles
Abstract: The friction and wear behaviors of nanocomposites based on silicon nitride ceramics reinforced by silicon carbide, molybdenum or molybdenum disilicide were studied. The nanocomposite materials were prepared via spark plasma sintering. Tribological tests were carried out against silicon carbide balls under dry conditions by means of a reciprocating ball-on-flat tribometer. The aim of this study is to determine the most efficient ceramic material for sliding applications while optimizing tribological parameters such as sliding velocity and normal load. Experimental results have shown that the tribological behavior of nanocomposite materials based on silicon nitride ceramics depends on the reinforcement nanoparticles, normal load and sliding velocity. Average friction coefficients of 0.4–0.9 were obtained for the three composite materials at frequencies of 0.25–1 Hz; moreover, the specific wear rate decreased as the frequency increased. The lowest friction coefficient, 0.28, is noted for Si3N4-MoSi2 at a normal load of 17 N, while the highest, 0.56, is noted for Si3N4-SiC at a normal load of 57 N.
PubDate: 2022-01-11
|
# Vertical space before and after equations
The vertical space before and after equation (1) is greater than the vertical space before and after equation (2).
Which is the rule/reason for that?
\documentclass{book}
\usepackage{amsmath,amsfonts}
\usepackage{setspace}
\usepackage{mwe}
\begin{document}
\doublespacing
Is it normal that the vertical space before and after this equation:
\begin{equation}
a = b + c
\end{equation}
is more than the space before and after this other equation?
\begin{equation}
d=\sum_{i=1}^{k}e_i
\end{equation}
\blindtext
\end{document}
• Yes, it is normal. Measure the distance between the baselines. – egreg Aug 14 '19 at 22:00
• The spaces are exactly the same -- compare the gaps between the surrounding text, not the distance from the highest/lowest elements of the displayed material. The difference you observe is the fact that there is a large operator with limits in the second display; that is not part of the defined measurement, which is correctly described by @egreg. – barbara beeton Aug 14 '19 at 22:22
• @egreg Thank you! Would you like to add an answer? – CarLaTeX Aug 15 '19 at 4:44
• @barbarabeeton idem ^^^ – CarLaTeX Aug 15 '19 at 4:45
|
• # In each of the following questions, insert the letter that completes the first word and begins the second word: RISE (?) HOUT A) N B) S C) T D) U
Solution: The letter is S, giving RISE(S)HOUT, i.e., RISES and SHOUT.
|
# A Fun(ny) Treasure Hunt
I'll send you on a treasure hunt,
Hopefully you don't mind.
The clues may not be right up front,
Maybe they are behind.
The clues you need are down below,
And hidden in plain sight.
I do not wish contention to sow,
So solve this by your might.
Tell me the secret message
Here at the end of May.
\begin{array}{|l|c|c|c|} \hline 1. & 1 & 2 & 1\\ \hline 2. & 2 & 13 & 2 \\ \hline 3. & 1 & 19 & 1 \\ \hline 4. & 1 & 8 & 2 \\ \hline 5. & 3 & 4 & 1 \\ \hline 6. & 1 & 5 & 1 \\ \hline \end{array}
• Very nice rhymes! May 29 '15 at 14:55
• I think I found the clues, but no idea where to go from there... May 29 '15 at 14:59
• I have a hunch on what the first and second columns (not counting the one that goes 1, 2, 3, 4, 5, 6) are referring to, but I can't seem to get the third column to make sense in that context... May 29 '15 at 15:36
• @GlenO I'll post a clue a little later. If you can figure out everything else, I think the last column will come to you. May 29 '15 at 15:46
My imaginary friends think you have mental problems
(Gee, thanks)
Reasoning:
Looking in the source code, we can find links pointing to 6 xkcd comics:
1. http://xkcd.com/539/
2. http://xkcd.com/410/
3. http://xkcd.com/610/
4. http://xkcd.com/308/
5. http://xkcd.com/706/
6. http://xkcd.com/1171/
Then we can use the grid:
The first column tells you which comic to look at
The second column tells you which panel to look at
The third column tells you which word to start at
The fourth column tells you how many words to read
So...
The first comic gives up the single word "MY"
The second comic gives up the words "IMAGINARY FRIENDS" (counting hyphens as two words)
The third comic gives up the word "THINK"
The fourth comic gives up the words "YOU HAVE"
The fifth comic gives up the word "MENTAL"
And finally the sixth comic gives up the word "PROBLEMS"
Giving the final message as...
MY IMAGINARY FRIENDS THINK YOU HAVE MENTAL PROBLEMS
• Darn, I can't get the links to be spoilered... any help? May 29 '15 at 15:59
• Nice, but I don't understand where you got the links. Are they somewhere hidden in the text? May 29 '15 at 16:01
• @leoll2 Yep, if you click on the "edit" button in the bottom left you can see them in the plaintext May 29 '15 at 16:03
• @luxmi12 Very nicely done. May 29 '15 at 16:03
• @luxmi12 The links may only be visible to those that have already visited them: meta.stackexchange.com/questions/256941/… May 29 '15 at 16:13
|
# Laplace Transform of Constant Multiple/Examples/Example 1
## Examples of Use of Laplace Transform of Constant Multiple
Let $\laptrans f$ denote the Laplace transform of the real function $f$.
$\laptrans {\sin 3 t} = \dfrac 3 {s^2 + 9}$
## Proof
$\ds \laptrans {\sin 3 t} = \dfrac 1 3 \cdot \dfrac 1 {\paren {s / 3}^2 + 1}$ by Laplace Transform of Constant Multiple and Laplace Transform of Sine
$\ds = \dfrac 3 {s^2 + 9}$ by simplification
$\blacksquare$
|
# Binary search for inserting in array
I have written a method that uses binary search to insert a value into an array.
It is working, but I would like to get a second opinion to see if I didn't write too much code for it, e.g. doing the same thing twice.
The code looks for the right index for insertion and is called by an insert method that uses that index value to insert.
Here is the code:
public class OrdArray {
final private long[] a; // ref to array
int nElems; // number of dataitems
int curIn;
//----------------------------------------------------------------------
public OrdArray(int max) { // constructor
a = new long[max]; // create array
nElems = 0;
}
public int binaryInsert(long insertKey) {
int lowerBound = 0;
int upperBound = nElems - 1;
while (true) {
curIn = (upperBound + lowerBound) / 2;
if (nElems == 0) {
return curIn = 0;
}
if (lowerBound == curIn) {
if (a[curIn] > insertKey) {
return curIn;
}
}
if (a[curIn] < insertKey) {
lowerBound = curIn + 1; // its in the upper
if (lowerBound > upperBound) {
return curIn += 1;
}
} else if (lowerBound > upperBound) {
return curIn;
} else {
upperBound = curIn - 1; // its in the lower
}
}
}
public void display() { // display array contents
for (int j = 0; j < nElems; j++) { // for each element,
System.out.print(a[j] + " "); // display it
}
System.out.println("");
}
public void insert(long value) { // put element into array
binaryInsert(value);
int j = curIn;
int k;
for (k = nElems; k > j; k--) { // move bigger ones one up.
a[k] = a[k - 1];
}
a[j] = value; // insert value
nElems++; // increment size.
}
public static void main(String[] args) {
// TODO code application logic here
int maxSize = 100; // array size
OrdArray arr; // reference to array
arr = new OrdArray(maxSize); // create array
arr.insert(77); // insert 10 items
arr.insert(99);
arr.insert(44);
arr.insert(55);
arr.insert(22);
arr.insert(88);
arr.insert(11);
arr.insert(00);
arr.insert(66);
arr.insert(33);
arr.display();
}
}
Feedback appreciated.
• Can you also post the other variables that aren't mentioned in this function? It will help with the code review :) nElems, data type of curIn, declaration of a[] and whatever else you think is necessary. Also, this function is just supposed to return the index where the element should be inserted, correct? – Sanchit Nov 27 '13 at 16:05
• Yes ofc. Hold on. :) – WonderWorld Nov 27 '13 at 16:06
• Yes that is correct. It's for returning the index to insert. It's a modified one for searching for a particular value, from the book I'm reading. – WonderWorld Nov 27 '13 at 16:26
Here is my version of your code.
1. You never use "max" so I used it by throwing an exception if you are trying to insert too many elements.
2. You should make everything that shouldn't be public, private.
3. In your binaryInsert() you should move your base case to the beginning.
4. Your binaryInsert() is a bit wonky but it works. I think this would do but I haven't checked it. Looks a bit neater and has one unnecessary if removed.
public int binaryInsert(long insertKey) {
if (nElems == 0)
return 0;
int lowerBound = 0;
int upperBound = nElems - 1;
int curIn = 0;
while (true) {
curIn = (upperBound + lowerBound) / 2;
if (a[curIn] == insertKey) {
return curIn;
} else if (a[curIn] < insertKey) {
lowerBound = curIn + 1; // its in the upper
if (lowerBound > upperBound)
return curIn + 1;
} else {
upperBound = curIn - 1; // its in the lower
if (lowerBound > upperBound)
return curIn;
}
}
}
5. Just use methods that return things directly; you don't need to store the result in a temporary variable first. I'm talking about curIn here in your insert function.
6. If you want to return something (like an object) as a String or print something out, you should override the toString() method as I have done. Then you can just call System.out.println(arr.toString()) whenever you want to print the object.
7. The whole point of doing a binary insert would be to quickly find out where to insert an element. Your implementation does this; however, it isn't super useful because you have to move each and every element forward by one. A doubly linked list (as usually taught in C++ classes) is ideal for your implementation of this better version of insertion sort. The Java equivalent of a doubly linked list is a LinkedList, which will give you much better performance as you will not need to move elements forward by one.
public class OrdArray {
final private long[] a; // ref to array
private int nElems; // number of dataitems
private final int MAX;
// ----------------------------------------------------------------------
public OrdArray(int max) { // constructor
this.MAX = max;
a = new long[MAX]; // create array
nElems = 0;
}
private int binaryInsert(long insertKey) {
if (nElems == 0) {
return 0;
}
int lowerBound = 0;
int upperBound = nElems - 1;
while (true) {
int curIn = (upperBound + lowerBound) / 2;
if (lowerBound == curIn) {
if (a[curIn] > insertKey) {
return curIn;
}
}
if (a[curIn] < insertKey) {
lowerBound = curIn + 1; // its in the upper
if (lowerBound > upperBound) {
return curIn += 1;
}
} else if (lowerBound > upperBound) {
return curIn;
} else {
upperBound = curIn - 1; // its in the lower
}
}
}
@Override
public String toString() { // display array contents
StringBuffer sb = new StringBuffer();
for (int j = 0; j < nElems; j++) { // for each element,
sb.append(a[j] + " "); // display it
}
sb.append(System.lineSeparator());
return sb.toString();
}
public void insert(long value) throws Exception { // put element into array
if (nElems == MAX)
throw new Exception("Can not add more elements.");
int j = binaryInsert(value);
int k;
for (k = nElems; k > j; k--) { // move bigger ones one up.
a[k] = a[k - 1];
}
a[j] = value; // insert value
nElems++; // increment size.
}
}
I'm sure I didn't get everything you need to improve on, but hopefully that is at least one step forward in the right direction. :)
• It looked a lot wonkier when I saw it this morning. :) Your corrections work fine, except when there are no elements in the array: it starts inserting at index 1. Not sure how to fix that without an extra if statement saying if nElems == 0 --> return curIn; What do you mean in point 3 about moving the base case to the beginning? – WonderWorld Nov 27 '13 at 18:43
• Base case sort of means stuff that you can use to immediately exit the function. It's stuff you know to be true / the basis for the rest of your function to work. Like for Fibonacci numbers the base case is fib(0) = 0 and fib(1) = 1. You hardcode this in so that the rest of your function works. – Sanchit Nov 27 '13 at 19:22
There are a couple of things I think you should consider, on top of what Sanchit has pointed out.
The two things I can see are:
• why are you 'reinventing the wheel'
• if you have to do your own binary search, there are some things you should do right
# Not-Reinventing-the-wheel
The 'right' way to do this is (and throw away your binary search code):
public void insert(long value) throws Exception { // put element into array
if (nElems == MAX)
throw new IllegalStateException("Can not add more elements.");
int j = Arrays.binarySearch(a, 0, nElems, value);
if (j < 0) {
// this is a new value to insert (not a duplicate).
j = - j - 1;
}
System.arraycopy(a, j, a, j+1, nElems - j);
a[j] = value;
nElems++;
}
# Re-inventing the wheel
If you have to reinvent this wheel (it's homework, or something), then consider this:
• curIn is redundant, and should be the return value of your binarySearch function.
• the 'standard' in Java is to return the position of the value in the array, or, if the value does not exist in the array, return - ip - 1 where 'ip' is the 'insertion point' or where the new value should be. i.e. in the array [1, 2, 4, 5] a binary search for '4' should return '2' (4 is at position 2). A search for '3' should return '-3' because - (-3) - 1 is +3 - 1 or 2 which is the place the 3 should go if it is inserted. This allows a single binary search to produce the answer to two questions: is it in the array, and if it is, where is it? and also if it isn't, where should it go?
• technically your code has a slight bug for large values of MAX.... your binary search should do (upperBound + lowerBound) >>> 1 because that will give the right values if/when upperBound + lowerBound overflows.
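To see why, here is a tiny, hypothetical demonstration of the overflow and the unsigned-shift fix (the class name MidpointOverflow is mine, not from the answer):
public class MidpointOverflow {
    public static void main(String[] args) {
        int lo = 2_000_000_000, hi = 2_100_000_000;
        // lo + hi exceeds Integer.MAX_VALUE and wraps to a negative int,
        // so signed division produces a bogus midpoint
        System.out.println((lo + hi) / 2);    // prints -97483648
        // unsigned right shift reinterprets the wrapped sum as unsigned,
        // recovering the correct midpoint
        System.out.println((lo + hi) >>> 1);  // prints 2050000000
    }
}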
• It's about re-inventing the wheel. :) The book I am reading states: "The techniques used in this chapter, while unsophisticated and comparatively slow, are nevertheless worth examining." The code I have written is just an exercise. Regarding "curIn is redundant and should be the return value": I thought it is the return value of my function? I don't understand how a search for 3 can return -3. – WonderWorld Nov 27 '13 at 19:16
• @Xbit This link has a pretty good description of why Arrays.binarySearch is good: htmlgoodies.com/beyond/java/… – rolfl Nov 27 '13 at 19:25
• A note, I'm not sure how relevant it is, but it is better practice to throw an IllegalStateException instead of declaring throws Exception. You should be as specific as possible in "throws" declarations – Simon Forsberg Nov 27 '13 at 19:45
• Copy-paste problem.... hmmm. – rolfl Nov 27 '13 at 20:21
This is a simpler way.
EDIT: The original way works fine. My solution just uses less code: the algorithm itself covers all the cases, so we don't need extra if-else conditions to catch edge cases.
fun searchInsert(array: IntArray, num: Int): Int {
    var head = 0
    var tail = array.lastIndex
    while (head <= tail) {
        val mid = (head + tail) / 2
        if (num == array[mid]) return mid
        else if (num > array[mid]) head = mid + 1
        else tail = mid - 1
    }
    return head // head is the insertion point when num is not present
}
|
# Merging based on Chromosome coordinates
I have a simple task which seems very complicated now. I have two data frames: one contains the chromosome coordinates of distal enhancer elements, the other the rlog values of accessibility which I want to map.
The distal coordinates data
dput(head(a))
structure(list(Chr = c("chr6", "chr11", "chr20", "chr20", "chr10",
"chrX"), Start = c(154986726, 70314780, 15895696, 40706992, 29132014,
129525939), End = c(154987062, 70315059, 15895987, 40707264,
29132266, 129526265)), class = c("tbl_df", "tbl", "data.frame"
), row.names = c(NA, -6L))
The Rlog_ATAC_location data
Chr Start End CMP1 CMP2 CMP3 CMP4 CMP5 CMP6 GMP1 GMP2 GMP3 GMP4 GMP5 GMP6 HSC1 HSC2 HSC3 HSC4 HSC5 HSC6
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
chr1 16103 16348 2.70 2.99 2.87 1.88 2.06 1.94 2.63 2.37 2.61 1.81 1.31 1.21 1.89 1.67 1.95 1.09 2.17 0.861
chr1 96493 96796 0.976 0.325 0.943 0.216 1.33 1.77 1.66 1.33 1.47 2.39 1.76 2.37 0.939 0.198 1.37 0.439 0.868 0.756
chr1 271114 271432 1.95 2.62 2.04 2.29 2.49 2.86 2.12 2.89 2.30 2.25 3.16 3.50 0.695 1.03 0.491 0.785 1.15 1.00
chr1 273026 273333 0.740 1.88 1.48 1.71 2.50 2.88 2.81 2.66 2.89 2.07 3.02 3.61 1.24 1.40 0.984 1.35 0.651 1.53
chr1 274265 274599 0.954 1.59 2.06 0.227 2.05 1.19 0.906 1.60 1.62 1.99 2.05 2.47 0.920 0.211 1.06 0.426 1.99 0.529
chr1 402478 402771 2.98 3.26 3.01 3.07 2.51 2.76 3.29 2.65 2.34 3.03 3.00 2.99 2.54 3.04 2.88 2.96 2.94 2.68
I tried inner join, merge, etc. I can map the chromosome coordinates, but the expression values come up empty:
tmp <- merge(a,Rlog_ATAC_location, by=c("Chr","Start" ,"End"), all.x=TRUE)
Chr Start End CMP1 CMP2 CMP3 CMP4 CMP5 CMP6 GMP1 GMP2 GMP3 GMP4 GMP5 GMP6 HSC1 HSC2 HSC3 HSC4 HSC5 HSC6 Mono1 Mono2 Mono3
chr1 16104 16348 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
chr1 96494 96796 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
chr1 271115 271432 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
chr1 273027 273333 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
chr1 274266 274599 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
chr1 402479 402771 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
I tried a left join as well and get the same result.
I want to map the coordinates as well as the sample columns which contain the expression values.
Any suggestion or help would be really appreciated.
• Do your two data frames have the same coordinates? For instance, does chr1 16103 16348 exist in the other data frame as well? If not, then you cannot use left_join or similar functions. If this is the case, maybe a more precise description of your problem is that you want to annotate distal elements with your ATAC locations? Mar 29, 2020 at 19:54
• I have updated my question with the actual data I'm using. If I do a left join I can map all the coordinates, but the values are not coming through. "Do your two data frames have the same coordinates?" Yes, one is a subset of the other.
– kcm
Mar 30, 2020 at 4:57
• In the example you've given us, there is no overlap in Start positions between the two data frames, so the behaviour of something like join is to return NA. You should go back and check with any(a$Start %in% Rlog$Start) whether there is any overlap between the two data frames Mar 30, 2020 at 13:05
• I think I have messed up somewhere; I checked and it's not matching
– kcm
Mar 30, 2020 at 15:09
In your ATAC data frame, you have chr1 16103 16348, but in your merged data frame you have chr1 16104 16348. Is it possible that your a is 1-based whereas your ATAC data frame is 0-based? This can happen in bioinformatics and can be annoying, and it might be why you have a lot of NA in your merged data frame. You should subtract 1 from the start position of your a data frame while keeping the end position unchanged. Then try to merge again.
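A minimal sketch of that fix in R, using the object names from the question and assuming a really is 1-based:
a$Start <- a$Start - 1   # shift Start to the 0-based convention; End stays unchanged
tmp <- merge(a, Rlog_ATAC_location, by = c("Chr", "Start", "End"), all.x = TRUE)
head(tmp)                # the rlog columns should now be filled where coordinates match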
|
### Days and Dates
Investigate how you can work out what day of the week your birthday will be on next year, and the year after...
### Summing Consecutive Numbers
Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way?
### Always the Same
Arrange the numbers 1 to 16 into a 4 by 4 array. Choose a number. Cross out the numbers on the same row and column. Repeat this process. Add up your four numbers. Why do they always add up to 34?
# Quick Times
##### Stage: 3 Challenge Level:
$32 \times 38 = 30 \times 40 + 2 \times 8$
$34 \times 36 = 30 \times 40 + 4 \times 6$
$56 \times 54 = 50 \times 60 + 6 \times 4$
$73 \times77 = 70 \times80 + 3 \times 7$
And so on?
Verify and generalise if possible.
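One way to verify and generalise (a sketch): each product pairs two numbers with the same tens digit $a$ and units digits $b$ and $c$ with $b + c = 10$:
$(10a+b)(10a+c) = 100a^2 + 10a(b+c) + bc = 100a^2 + 100a + bc = 10a \times 10(a+1) + bc$
With $a=3$, $b=2$, $c=8$ this gives $32 \times 38 = 30 \times 40 + 2 \times 8$, as claimed.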
|
## Calculus Problem
For the vector field H = -yi + xj, find the line integral along the curve C from the origin along the x-axis to the point (14, 0) and then counterclockwise around the circumference of the circle $$x^2 + y^2 = 196$$ to the point $$(14/\sqrt{2},\ 14/\sqrt{2})$$. Thanks
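A sketch of the computation: along the x-axis, $$y = 0$$ and $$dy = 0$$, so that leg contributes nothing. On the circle of radius 14, parametrise $$x = 14\cos\theta$$, $$y = 14\sin\theta$$ with $$\theta$$ running from $$0$$ to $$\pi/4$$:
$$\int_C -y\,dx + x\,dy = \int_0^{\pi/4} \left(196\sin^2\theta + 196\cos^2\theta\right)d\theta = 196 \cdot \frac{\pi}{4} = 49\pi$$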
|
# Focus breathe: how to climb fast with a recumbent
Humanity seems convinced recumbents and mountains don’t go together. This might be the case for some recumbent designs and some recumbent cyclists, but it is not a law of nature. On the contrary, with the right design and the right technique, a recumbent can be a good and even excellent climber. Probably better than the best traditional bicycles.
The laws of physics are clear on this one. Climbing speed equals effective power divided by total weight. The losses that make up the difference between the power of the cyclist and the effective power ending up on the road are generally not that large, and do not differ much between bicycle concepts. Exceptions are aerodynamic drag in the case of strong headwind, and rolling resistance on bad roads. However, this discussion deals with gaining height, not overcoming cobblestones or wind.
If we focus on factors that can be influenced by the design of the bike, and ignore energy losses which do not really differ between various designs, the whole physics of climbing boils down to this simple formula:
$\text{climbing speed} = \frac{\text{breathing volume}}{\text{total weight}}$
With SL technology, the weight difference between a recumbent and a traditional lightweight bicycle is now within a few percent of the total weight. This difference is so small it could in the long run be offset by the better comfort alone, which saves a lot of energy. In any case, it is too small to denounce the recumbent as a bad climber.
However, it is the numerator which holds the secret to recumbent climbing. The body position has a large influence on the amount of air a cyclist can pump through his or her lungs. More oxygen in, more CO2 out means more power and faster recovery. On a traditional bicycle breathing is hampered by the position of shoulders and upper body. The recumbent concept leaves more freedom to optimize for breathing. I think this effect makes the recumbent potentially the best climber.
So how to exploit this benefit of the recumbent? Take a singing course and quit halfway through the first lesson. You'll know enough by then. To breathe well, one needs to stand upright, shoulders back in a relaxed way, head up, and use both chest and belly.
## Rules of thumb
This translates to a few rules of thumb for both the design of the bike and the riding technique.
1. The body should be relatively stretched out. The recumbent equivalent of standing upright is to lie stretched out. This means a relatively low bracket and a laid-back seat. This goes against the popular belief that the seat angle should be steeper while climbing! It also means neither a full-blooded low racer nor a comfort-orientated touring recumbent will ever be a really fast climber. A real climbing recumbent has the seat of the former and the bracket of the latter. The Fujin SL is a good example.
2. The shoulders should be held a little backwards. This means the design and adjustment of the tiller should be such that the arms are relaxed and leave a lot of room to move the shoulders. As a bonus, the relaxed arm muscles enhance the breakdown of lactic acid. For people with short arms, under-seat or open-cockpit steering can be a bad choice because of this. Also, the seat should not push the shoulders towards each other.
3. The head should be held in line with the upper body. This might take a little practice, but is important to open up the chest and throat as much as possible. Users of a head rest might want to adjust it differently or even remove it before going into the mountains. It should also be taken into account when choosing a top pannier. Fine tuning of the seat helps too. Add or remove small pieces of padding until the bulge in the middle of the seat matches your body perfectly.
4. The upper body should be used for breathing, not for pushing. Push only with your legs and counter these forces with your lower back. Never brace yourself with the shoulders. Relax your abs.
Climbing position on a recumbent.
So why do so many people, including recumbent riders, still think the traditional bike is the better climber? The problem is, good recumbent climbing feels unnatural in the beginning. When riding gets tough, we are inclined to push harder. And to be able to push harder, we tend to sit more upright and use the upper body to apply even more force. But force is not what gets you up. It is power. Watts. It is the amount of fat and carbohydrates burnt per second that counts. And for this, you need as much breath as possible.
So it does not only take a well-designed and adjusted recumbent to climb fast. It also takes practice. Go out riding and focus on breathing. You'll soon feel the difference. After a while the craving for breath will make a "breathing" position feel more natural than a "brute force" position.
This sounds like good news for recumbent-riders, but is it based in fact? Some research has been done on the influence of body position on aerobic power output. One experiment showed a significant influence of the angle between upper and lower body. Some 10% of power is lost when this angle decreases from 130~140 degrees to 100 degrees (1). There is however no difference between an upright and laid-back position (2). Only the angle counts. I suspect an angle of over 140 degrees is even better. On my climbing recumbent, a Challenge Fujin SL, it approaches 150 degrees.
Further, research has been done with traditional time trialists. An optimal aerodynamic position is not the fastest one, because the breathing is hampered by the position of arms and shoulders. A time trialist needs to accept more aerodynamic drag to give the lungs enough room (3).
The most important thing for me, however, is my own experience in the mountains. It is not scientific, I know, but I am pretty sure I’m not fooling myself. During my second journey in Scandinavia I have been experimenting with different climbing positions. I did not change the seat of the bike; I took different body positions by holding my head and shoulders to the front or to the back, and by shifting a little in my seat.
On climbs where I rode in a more upright position I had to open up the zipper of my jersey. But when I took a more laid-back position with shoulders and head to the back, I had to close my zipper. It was too cold otherwise. But the cold came from the inside. It was the cooling effect of the extra air entering my chest. This air was quite cold, as I was riding at some 70 degrees latitude. So I clearly noticed the difference.
A year later, in France, I had the chance to see how big the difference can be. I was riding with a friend on a traditional racing bicycle. In the low countries he gives many recumbent riders a hard time. During this trip my bike was carrying most of the luggage: a tent, cooking stuff, both sleeping bags, mattresses and locks, and also most of his clothes. He had just a small day pack.
I had no ambition to keep his pace on the hills, and with a regular body position it was indeed impossible. However, any time I took the breathing-position, it was he who couldn’t follow. Even with the luggage, I climbed faster and recovered faster. For us it was clear the big difference came from better breathing.
1 Raoul F. Reiser II, Michael L. Peterson, Jeffrey P. Broker: Anaerobic Cycling Power Output With Variations in Recumbent Body Configuration, Journal of Applied Biomechanics, Vol. 17, No. 3, 2001.
2 Steven R. Bussolari and Ethan R. Nadel: The physiological limits of long-duration human power production-lessons learned from the Daedalus project, Human Power Vol. 7 No. 4, 1989.
## 8 responses to "Focus breathe: how to climb fast with a recumbent"
1. Great article! Somehow it got tweeted to me a short while ago, and I’m “spreading the word” in various ‘bent forums. I think all ‘bent riders should try to learn this technique for better climbing power!
2. Great article. I'm a novice recumbent rider, so I really need texts like this to demystify the bad-climbing reputation of recumbents. Cheers and Happy New Year!
3. Great article, I used the same approach by myself in the last year and I confirm everything: good breathing is the key.
I’m not really interested in “comparison” with traditional bikes, but until 1 year ago I was sure not being able to afford with a recumbent a particular hill climb (10% average with peaks up to 18%.) near my city where I was used to go 5 years ago with a traditional road bike: now, after 8 month of light training (2-4 h/week) with a Slyway Endorphin 700c (12 kg with my setup + 63 kg me), I measure the same climbing time without particular effort… and I’m 5 years older now (36 in March) and with 2+half years of almost-zero training in the meanwhile (= 2 times daddy).
Since the end of 2012 I have been experimenting with shorter cranks (135 mm, I'm 173 cm) and higher cadence (110-120 rpm instead of 90-95 with 170 mm cranks): the initial feeling is that, while I need some time to adapt myself to the new cadence, I'm climbing even better! Lower HR at the same speed… but it's too early to confirm; let's see after 10-15 rides.
Next 2nd of March I will attempt the Granfondo of Florence mid-track… not looking for a podium, just having fun in a beautiful place, but if nothing goes wrong I can finish in the first half (at least in my age category).
Ride! Breathe! Enjoi! 😀
4. Great analysis, it also confirms what I experienced this year, my body telling me to stretch out and be more reclined in order to breathe better.
The more open position gave me the instant turbo boost I knew I was missing and have always enjoyed in my other sports, cross country skiing, swimming and running.
Thank you for clarifying the crucial aspects of head extension and bottom bracket height among other things, it really helped me understand what my intuition told me.
Eventually more recumbent riders will recognize and experience the validity of your findings.
Thank you for the remarkable English translation and beautiful web site.
Best regards,
Michel
5. Indeed an interesting analysis! Thanks for sharing.
I will adjust the backrest orientation to enable better breathing.
Already the Kervelo Bike has shown very promising capabilities and sure it will become even better.
Thanks
Marc
|
Since Python is call-by-object(*1), a function that mutates a mutable argument changes that object in the caller's scope. Some code to illustrate:
>>> mobject = {'magic': True}
>>> id(mobject)
140330485577440
>>>
>>> def toggle(mdata):
... '''flip the state of magic'''
... mdata['magic'] = not mdata['magic']
...
>>>
>>> toggle(mobject)
>>> mobject
{'magic': False}
>>> id(mobject)
140330485577440
So hopefully this does not surprise you. If it does, please see the two links in the footnotes(*1), as they explain it quite well.
My question deals with the implicit nature of the mutation. Not that Python is somehow wrong; it is that the function usage does not convey the mutation to the reader as pointedly as I want. In other languages that are call by value, a function that wants to mutate an argument and get the result back into the caller's scope has to return the mutated value.
>>> def toggle_explicit(mdata):
... '''flip the state of magic and return'''
... mdata['magic'] = not mdata['magic']
... return mdata
...
>>> mobject = toggle_explicit(mobject)
>>> mobject
{'magic': True}
>>> id(mobject)
140330485577440
>>>
Now I know that the above code is most definitely NOT call by value, but I do feel that it is more explicit about my intention, even though the assignment is not needed for the actual effect, i.e.:
>>> toggle_explicit(mobject)
{'magic': False}
>>> mobject
{'magic': False}
So why does toggle_explicit() give me the warm fuzzies, whereas toggle() requires the reader to know what is going on? Is it just me shaking the cruft of call-by-value languages off? What do you do when you mutate state within a function? Is the toggle_explicit() form not/less Pythonic?
— Footnotes —
(*1) Raymond Hettinger in this SO article references a post by Fredrik Lundh on "Call By Object".
### 4 Responses to “Mutant Registration: Implicit or Explicit”
1. Part of the problem here might be the use of the term “scope”. The object is NOT “changed in the caller’s scope”, because in Python it is the *names* that are in the caller’s scope, not the objects. An object in Python has no scope. It exists separately, can be shared by everyone (i.e. bound to names that exist in any scope), and is automatically destroyed (ignoring circular references) when no scope contains any references to it.
• @Peter I concede the mutable object itself is not in a scope; however, the identifier in the caller's scope is attached to the same object that is being modified in the function's scope by another identifier. Hence the side effect results, and my question as to what would be the more Pythonic way.
2. I find toggle_explicit a little bit discomforting.
I would expect a function either to be a 'procedure' that works via a side effect, or to work as a pure function (i.e. return a new dictionary, leaving the old one alone, no side effects).
Making it look functional, but have unexpected side-effects, is dangerous. It means I can’t (for example) memoize/cache the result.
So, one approach would be to make it pure, and return a new copy of the object, leaving the original alone.
Another approach to consider would be see if mobject could be made an instance of a class with a toggle() method.
• @Julian – you raise an interesting point about not being able to memoize/cache the result. I had not thought about that. Doing a copy.deepcopy would certainly be a performance penalty and another knock against the toggle_explicit form.
As to using an instance of an MObject class with a toggle method, I had purposely left that out. I had hoped to focus on whether or not the "side-effect" form was considered acceptable.
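For reference, a minimal sketch of the two alternatives discussed in this thread (a pure function and a class with a toggle method); the name toggled is mine:
import copy

def toggled(mdata):
    '''pure version: build and return a new dict, leaving the argument untouched'''
    new = copy.deepcopy(mdata)
    new['magic'] = not new['magic']
    return new

class MObject:
    '''side-effect version made explicit: the mutation lives behind a named method'''
    def __init__(self, magic=True):
        self.magic = magic
    def toggle(self):
        self.magic = not self.magic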
|
getfem-users
## [Getfem-users] Non convex problems and Getfem solver
From: David Danan
Subject: [Getfem-users] Non convex problems and Getfem solver
Date: Wed, 13 Jan 2016 03:38:27 -0500
Dear users,
I would like to know whether the Getfem solver can handle non-convex problems, say for instance a Signorini problem with a non-monotone friction law where the coefficient of friction is described by
$\mu(\|\dot{\mathbf{u}}_\tau\|) = (a-b)\, e^{-\alpha \|\dot{\mathbf{u}}_\tau\|} + b$, where $a, b, \alpha > 0$ and $a \geq b$.
If it can, what kind of procedure is used with the Newton iteration? A convexification iteration (fixing the coefficient of friction, in that case, at each "convexification" iteration to a value depending on the tangential slip rate found in the previous iteration)? Something else?
|
# Tutor profile: Kevin B.
## Questions
### Subject:Computer Science (General)
Question:
Write a pseudocode function, fib(n), to find the $$n^{th}$$ number in the fibonacci sequence, $$F_{n} = F_{n-1} + F_{n-2}$$, starting with $$F_{0} = 0, F_{1} = 1$$ and using recursion.
Kevin B.
fib(n):
    if (n == 0):
        return 0
    else if (n == 1):
        return 1
    else:
        return fib(n-1) + fib(n-2)
### Subject:Calculus
Question:
Find the area bounded by the curves $$y=x^{2}$$ and $$y=x$$.
Kevin B.
First, find the points at which the two curves intersect. $$x^{2}=x \Rightarrow x=0,1$$ Next, take the integral of the upper bound minus the lower bound on the interval. $$Area = \int_{0}^{1}(x-x^{2})dx = \left[ \frac{x^{2}}{2} - \frac{x^3}{3} \right]_{0}^{1} = \left( \frac{1}{2} - \frac{1}{3} \right) - 0 = \frac {1}{6}$$
### Subject:Physics
Question:
A ball is thrown horizontally at 10 m/s off a roof that is 120 m tall. What horizontal distance does the ball travel before hitting the ground? (Use g = 10 m/s^2 and neglect air resistance)
Kevin B.
First, find the time it will take for the ball to hit the ground. $$V_{0y} = 0 m/s$$ $$y_{0} = 120 m$$ $$y_{f} = 0 m$$ $$\left( y_{f} - y_{0}\right) = v_{0y} * t - \frac{1}{2} * g * t^{2}$$ $$\Rightarrow \left( 0 - 120 \right) = 0 - \frac{1}{2} * 10 * t^{2}$$ $$\Rightarrow -120 = -5 * t^{2}$$ $$\Rightarrow t^{2} = 24$$ $$\Rightarrow t=\sqrt{24} = 2\sqrt{6} s$$ Then, find the horizontal distance travelled in this time. $$V_{x} = 10m/s$$ $$x_0 = 0 m/s$$ $$\left( x_{f} - x_{0} \right) = v_{x} * t$$ $$\Rightarrow \left( x_{f} - 0 \right) = 10 * 2\sqrt{6}$$ $$\Rightarrow x_{f} = 20\sqrt{6} \approx 48.99m$$ Therefore, the ball will travel approximately 48.99 m before hitting the ground.
|
# Forces, density and pressure – CIE AS Level Physics 9702
Contents:
• Types of force
1. describe the force on a mass in a uniform gravitational field and on a charge in a uniform electric field
2. understand the origin of the upthrust acting on a body in a fluid
3. show a qualitative understanding of frictional forces and viscous forces including air resistance (no treatment of the coefficients of friction and viscosity is required)
4. understand that the weight of a body may be taken as acting at a single point known as its centre of gravity
• Turning effects of forces
1. define and apply the moment of a force
2. understand that a couple is a pair of forces that tends to produce rotation only
3. define and apply the torque of a couple
• Equilibrium of forces
1. state and apply the principle of moments
2. understand that, when there is no resultant force and no resultant torque, a system is in equilibrium
3. use a vector triangle to represent coplanar forces in equilibrium
• Density and pressure
1. define and use density
2. define and use pressure
3. derive, from the definitions of pressure and density, the equation Δp = ρgΔh (a sketch of this derivation follows the list)
4. use the equation Δp = ρgΔh
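A sketch of the derivation in point 3 above: consider a fluid column of height Δh and cross-sectional area A. The weight of the column is mg = ρVg = ρ(AΔh)g, so the extra pressure it exerts on its base is
Δp = F/A = ρ(AΔh)g/A = ρgΔh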
|
## Communications on Pure and Applied Analysis
Short Title: Commun. Pure Appl. Anal.
Publisher: American Institute of Mathematical Sciences (AIMS), Springfield, MO; Shanghai Jiao Tong University, Shanghai
ISSN: 1534-0392; 1553-5258/e
Online: https://www.aimsciences.org/journal/1534-0392
Comments: Indexed cover-to-cover
Documents Indexed: 2,327 Publications (since 2002)
References Indexed: 1,292 Publications with 35,568 References
### Latest Issues
21, No. 6 (2022) 21, No. 5 (2022) 21, No. 4 (2022) 21, No. 3 (2022) 21, No. 2 (2022) 21, No. 1 (2022) 20, No. 12 (2021) 20, No. 11 (2021) 20, No. 10 (2021) 20, No. 9 (2021) 20, No. 6 (2021) 20, No. 5 (2021) 20, No. 4 (2021) 20, No. 3 (2021) 20, No. 2 (2021) 20, No. 1 (2021) 19, No. 12 (2020) 19, No. 11 (2020) 19, No. 10 (2020) 19, No. 9 (2020) 19, No. 8 (2020) 19, No. 7 (2020) 19, No. 6 (2020) 19, No. 5 (2020) 19, No. 4 (2020) 19, No. 3 (2020) 19, No. 2 (2020) 19, No. 1 (2020) 18, No. 6 (2019) 18, No. 5 (2019) 18, No. 4 (2019) 18, No. 3 (2019) 18, No. 2 (2019) 18, No. 1 (2019) 17, No. 6 (2018) 17, No. 5 (2018) 17, No. 4 (2018) 17, No. 3 (2018) 17, No. 2 (2018) 17, No. 1 (2018) 16, No. 6 (2017) 16, No. 5 (2017) 16, No. 4 (2017) 16, No. 3 (2017) 16, No. 2 (2017) 16, No. 1 (2017) 15, No. 6 (2016) 15, No. 5 (2016) 15, No. 4 (2016) 15, No. 3 (2016) 15, No. 2 (2016) 15, No. 1 (2016) 14, No. 6 (2015) 14, No. 5 (2015) 14, No. 4 (2015) 14, No. 3 (2015) 14, No. 2 (2015) 14, No. 1 (2015) 13, No. 6 (2014) 13, No. 5 (2014) 13, No. 4 (2014) 13, No. 3 (2014) 13, No. 2 (2014) 13, No. 1 (2014) 12, No. 6 (2013) 12, No. 5 (2013) 12, No. 4 (2013) 12, No. 3 (2013) 12, No. 2 (2013) 12, No. 1 (2013) 11, No. 6 (2012) 11, No. 5 (2012) 11, No. 4 (2012) 11, No. 3 (2012) 11, No. 2 (2012) 11, No. 1 (2012) 10, No. 6 (2011) 10, No. 5 (2011) 10, No. 4 (2011) 10, No. 3 (2011) 10, No. 2 (2011) 10, No. 1 (2011) 9, No. 6 (2010) 9, No. 5 (2010) 9, No. 4 (2010) 9, No. 3 (2010) 9, No. 2 (2010) 9, No. 1 (2010) 8, No. 6 (2009) 8, No. 5 (2009) 8, No. 4 (2009) 8, No. 3 (2009) 8, No. 2 (2009) 8, No. 1 (2009) 7, No. 6 (2008) 7, No. 5 (2008) 7, No. 4 (2008) 7, No. 3 (2008) 7, No. 2 (2008) 7, No. 1 (2008) ...and 22 more Volumes
### Authors
26 Papageorgiou, Nikolaos S. 13 Tang, Xianhua 13 Zelik, Sergey V. 11 Goubet, Olivier 11 Kloeden, Peter Eris 10 Alves, Claudianor Oliveira 10 Miranville, Alain M. 10 Pata, Vittorino 10 Rossi, Julio Daniel 10 Strichartz, Robert S. 9 Caraballo Garrido, Tomás 9 Feng, Zhaosheng 9 Tachim Medjo, Theodore 8 Grasselli, Maurizio 7 Fan, Jishan 7 Ha, Seung-Yeal 7 Hu, Shouchuan 7 Li, Wan-Tong 7 Mu, Chunlai 7 Ozawa, Tohru 7 Pecher, Hartmut 7 Shi, Junping 7 Wang, Wenjun 7 Yao, Zhengan 7 Zhou, Jun 6 Fang, Daoyuan 6 Giné, Jaume 6 Li, Congming 6 Miyagaki, Olimpio Hiroshi 6 Rădulescu, Vicenţiu D. 6 Valdinoci, Enrico 6 Valls Anglés, Cláudia 6 Wang, Lihe 6 Yotsutani, Shoji 6 Zajączkowski, Wojciech M. 5 Brunner, Hermann 5 Chen, Chiun-Chuan 5 Chueshov, Igor’ Dmitrievich 5 Conti, Monica C. 5 Gasiński, Leszek 5 Guo, Zongming 5 Ju, Hongjie 5 Li, Yachun 5 Liu, Chein-Shan 5 Ma, Li 5 Messaoudi, Salim A. 5 Nolasco de Carvalho, Alexandre 5 Ogawa, Takayoshi 5 Peral Alonso, Ireneo 5 Qin, Yuming 5 Reyes, Guillermo 5 Shen, Wenxian 5 Sheng, Wancheng 5 Shivaji, Ratnasingham 5 Sommen, Franciscus 5 Teng, Kaimin 5 Valero, José 5 Wang, Shu 5 Wei, Juncheng 5 Wu, Jianhua 5 Wu, Tsungfang 5 Yanagida, Eiji 5 Yang, Dachun 5 Yin, Jingxue 5 Zahrouni, Ezzeddine 5 Zhang, Jihui 5 Zhang, Yimin 4 Abdellaoui, Boumediene 4 Amirat, Youcef Aït 4 Aouaoui, Sami 4 Chen, Jianqing 4 Chepyzhov, Vladimir V. 4 Cho, Yonggeun 4 Colliander, James E. 4 Cortázar, Carmen 4 Deugoue, Gabriel 4 Ding, Yanheng 4 Do Ó, João M. Bezerra 4 Ferreira, Lucas Catão de Freitas 4 García-Huidobro, Marta 4 Guibé, Olivier 4 Guo, Jong-Shenq 4 Guo, Yuxia 4 Guo, Zhiming 4 Han, Maoan 4 Hwang, Gyeongha 4 Jian, Huaiyu 4 Karlsen, Kenneth Hvistendahl 4 Kawohl, Bernd 4 Kurata, Kazuhiro 4 Langa, Jose’ Antonio 4 Li, Fucai 4 Li, Tong 4 Llibre, Jaume 4 Lu, Guozhen 4 Marín-Rubio, Pedro 4 Mercaldo, Anna 4 Naumkin, Pavel Ivanovich 4 Pawlow, Irena 4 Pražák, Dalibor ...and 3,252 more Authors
### Fields
1,793 Partial differential equations (35-XX) 292 Fluid mechanics (76-XX) 259 Ordinary differential equations (34-XX) 252 Dynamical systems and ergodic theory (37-XX) 166 Biology and other natural sciences (92-XX) 165 Operator theory (47-XX) 122 Global analysis, analysis on manifolds (58-XX) 99 Mechanics of deformable solids (74-XX) 98 Calculus of variations and optimal control; optimization (49-XX) 95 Numerical analysis (65-XX) 86 Functional analysis (46-XX) 82 Integral equations (45-XX) 77 Harmonic analysis on Euclidean spaces (42-XX) 71 Probability theory and stochastic processes (60-XX) 54 Systems theory; control (93-XX) 50 Differential geometry (53-XX) 48 Real functions (26-XX) 45 Statistical mechanics, structure of matter (82-XX) 36 Classical thermodynamics, heat transfer (80-XX) 30 Potential theory (31-XX) 30 Quantum theory (81-XX) 27 Measure and integration (28-XX) 26 Difference and functional equations (39-XX) 23 Mechanics of particles and systems (70-XX) 23 Optics, electromagnetic theory (78-XX) 18 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 16 Operations research, mathematical programming (90-XX) 15 Computer science (68-XX) 14 Functions of a complex variable (30-XX) 14 Approximations and expansions (41-XX) 14 Statistics (62-XX) 13 Special functions (33-XX) 12 Geophysics (86-XX) 10 General and overarching topics; collections (00-XX) 8 Integral transforms, operational calculus (44-XX) 8 Information and communication theory, circuits (94-XX) 7 Abstract harmonic analysis (43-XX) 6 Several complex variables and analytic spaces (32-XX) 5 Relativity and gravitational theory (83-XX) 5 Astronomy and astrophysics (85-XX) 4 History and biography (01-XX) 4 Combinatorics (05-XX) 4 Number theory (11-XX) 4 Sequences, series, summability (40-XX) 4 Convex and discrete geometry (52-XX) 3 Topological groups, Lie groups (22-XX) 3 General topology (54-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Associative rings and algebras (16-XX) 2 Group theory and generalizations (20-XX) 1 Field theory and polynomials (12-XX) 1 Commutative algebra (13-XX) 1 Algebraic geometry (14-XX) 1 Algebraic topology (55-XX) 1 Manifolds and cell complexes (57-XX) 1 Mathematics education (97-XX)
### Citations contained in zbMATH Open
1,641 Publications have been cited 11,218 times in 9,602 Documents.
A Brezis-Nirenberg result for non-local critical equations in low dimension. Zbl 1302.35413
2013
On the stationary solutions of generalized reaction diffusion equations with $$p$$&$$q$$-Laplacian. Zbl 1210.35090
Cherfils, L.; Il’yasov, Y.
2005
Convergence of generalized proximal point algorithms. Zbl 1095.90115
Marino, Giuseppe; Xu, Hong-Kun
2004
Uniqueness of positive solutions to some coupled nonlinear Schrödinger equations. Zbl 1264.35237
Wei, Juncheng; Yao, Wei
2012
On the Cahn-Hilliard equation with irregular potentials and dynamic boundary conditions. Zbl 1172.35417
Gilardi, Gianni; Miranville, A.; Schimperna, Giulio
2009
Quasilinear Schrödinger equations involving concave and convex nonlinearities. Zbl 1171.35118
do Ó, João Marcos; Severo, Uberlandio
2009
A general multipurpose interpolation procedure: The magic points. Zbl 1184.65020
Maday, Yvon; Nguyen, Ngoc Cuong; Patera, Anthony T.; Pau, S. H.
2009
A Liouville type theorem for an integral system. Zbl 1134.45007
Ma, Li; Chen, Dezhong
2006
The two-dimensional Riemann problem for isentropic Chaplygin gas dynamic system. Zbl 1197.35164
Guo, Lihui; Sheng, Wancheng; Zhang, Tong
2010
Reduced symmetry elements in linear elasticity. Zbl 1154.74041
Boffi, Daniele; Brezzi, Franco; Fortin, Michel
2009
Periodic solutions of nonlinear periodic differential systems with a small parameter. Zbl 1139.34036
Buică, Adriana; Françoise, Jean-Pierre; Llibre, Jaume
2007
On fractional Schrödinger equations in Sobolev spaces. Zbl 1338.35466
Hong, Younghun; Sire, Yannick
2015
Persistent regional null controllability for a class of degenerate parabolic equations. Zbl 1063.35092
Cannarsa, Piermarco; Martinez, Patrick; Vancostenoble, Judith
2004
Asymptotic regularity of solutions of a nonautonomous damped wave equation with a critical growth exponent. Zbl 1197.35168
Zelik, Sergey
2004
Uniqueness and boundary behavior of large solutions to elliptic problems with singular weights. Zbl 1174.35386
Chuaqui, M.; Cortazar, C.; Elgueta, M.; Garcia-Melian, J.
2004
Concentration phenomenon for fractional nonlinear Schrödinger equations. Zbl 1304.35634
Chen, Guoyuan; Zheng, Youquan
2014
Eigenvalue, maximum principle and regularity for fully non linear homogeneous operators. Zbl 1132.35032
Birindelli, I.; Demengel, F.
2007
Weakly dissipative semilinear equations of viscoelasticity. Zbl 1101.35016
Conti, Monica; Pata, Vittorino
2005
Some observations on the Green function for the ball in the fractional Laplace framework. Zbl 1334.35383
Bucur, Claudia
2016
A result on the existence of global attractors for semigroups of closed operators. Zbl 1152.47046
Pata, Vittorino; Zelik, Sergey
2007
Global weak solutions for a viscous liquid-gas model with singular pressure law. Zbl 1175.76143
Evje, Steinar; Karlsen, Kenneth Hvistendahl
2009
A blowup alternative result for fractional nonautonomous evolution equation of Volterra type. Zbl 1397.35331
Chen, Pengyu; Zhang, Xuping; Li, Yongxiang
2018
On quasilinear elliptic equations related to some Caffarelli-Kohn-Nirenberg inequalities. Zbl 1148.35324
Abdellaoui, B.; Peral, I.
2003
Existence and uniqueness of solutions to an aggregation equation with degenerate diffusion. Zbl 1213.35266
Bertozzi, Andrea L.; Slepčev, Dejan
2010
Renormalized solutions of an anisotropic reaction-diffusion-advection system with $$L^1$$ data. Zbl 1134.35371
Bendahmane, Mostafa; Karlsen, Kenneth H.
2006
Regularity of solutions for a system of integral equations. Zbl 1073.45004
Chen, Wenxiong; Li, Congming
2005
Blowup in higher dimensional two species chemotactic systems. Zbl 1278.35250
Biler, Piotr; Espejo, Elio E.; Guerra, Ignacio
2013
Positive solutions to a Dirichlet problem with $$p$$-Laplacian and concave-convex nonlinearity depending on a parameter. Zbl 1270.35219
Marano, Salvatore A.; Papageorgiou, Nikolaos S.
2013
Existence and multiplicity of solutions for Kirchhoff type problem with critical exponent. Zbl 1264.65206
Xie, Qi-Lin; Wu, Xing-Ping; Tang, Chun-Lei
2013
Quasilinear anisotropic degenerate parabolic equations with time-space dependent diffusion coefficients. Zbl 1084.35034
Chen, Gui-Qiang; Karlsen, Kenneth H.
2005
Asymptotic behavior of a parabolic-hyperbolic system. Zbl 1079.35022
Grasselli, Maurizio; Pata, Vittorino
2004
The Nehari manifold for fractional systems involving critical nonlinearities. Zbl 06636876
He, Xiaoming; Squassina, Marco; Zou, Wenming
2016
On existence and concentration behavior of ground state solutions for a class of problems with critical growth. Zbl 1183.35121
Alves, C. O.; Souto, M. A. S.
2002
A new approach to study the Vlasov-Maxwell system. Zbl 1037.35088
Klainerman, Sergiu; Staffilani, Gigliola
2002
Asymptotic behavior of solutions to the phase-field equations with Neumann boundary conditions. Zbl 1082.35033
Zhang, Zhenhua
2005
An evolution equation involving the normalized $$p$$-Laplacian. Zbl 1229.35132
Does, Kerstin
2011
Stability result for the Timoshenko system with a time-varying delay term in the internal feedbacks. Zbl 1228.35242
Kirane, Mokhtar; Said-Houari, Belkacem; Anwar, Mohamed Naim
2011
Twelve limit cycles in a cubic order planar system with $$Z_2$$ symmetry. Zbl 1085.34028
Yu, P.; Han, M.
2004
The extremal solution of a boundary reaction problem. Zbl 1156.35039
Dávila, Juan; Dupaigne, Louis; Montenegro, Marcelo
2008
Localized BMO and BLO spaces on RD-spaces and applications to Schrödinger operators. Zbl 1188.42008
Yang, Dachun; Yang, Dongyong; Zhou, Yuan
2010
A logistic equation with refuge and nonlocal diffusion. Zbl 1180.45002
García-Melián, J.; Rossi, Julio D.
2009
Time-frequency analysis of Fourier integral operators. Zbl 1196.42027
Cordero, Elena; Nicola, Fabio; Rodino, Luigi
2010
Friedlander’s eigenvalue inequalities and the Dirichlet-to-Neumann semigroup. Zbl 1267.35139
Arendt, Wolfgang; Mazzeo, Rafe
2012
Super polyharmonic property of solutions for PDE systems and its applications. Zbl 1270.35224
Chen, Wenxiong; Li, Congming
2013
A remark on the damped wave equation. Zbl 1140.35533
Pata, Vittorino; Zelik, Sergey
2006
Traveling waves of a delayed diffusive SIR epidemic model. Zbl 1422.35106
Li, Yan; Li, Wan-Tong; Lin, Guo
2015
Homoclinic solutions of discrete $$\phi$$-Laplacian equations with mixed nonlinearities. Zbl 1396.39008
Lin, Genghong; Zhou, Zhan
2018
Remarks on some dispersive estimates. Zbl 1232.35136
Cho, Yonggeun; Ozawa, Tohru; Xia, Suxia
2011
Eventual compactness for semiflows generated by nonlinear age-structured models. Zbl 1083.47061
Magal, P.; Thieme, H. R.
2004
Multicomponent reactive flows: global-in-time existence for large data. Zbl 1323.76091
Feireisl, Eduard; Petzeltová, Hana; Trivisa, Konstantina
2008
Qualitative analysis and travelling wave solutions for the SI model with vertical transmission. Zbl 1264.35254
Ducrot, Arnaud; Langlais, Michel; Magal, Pierre
2012
Smooth attractors for the Brinkman-Forchheimer equations with fast growing nonlinearities. Zbl 1264.35054
Kalantarov, Varga K.; Zelik, Sergey
2012
A fixed point result with applications in the study of viscoplastic frictionless contact problems. Zbl 1171.47047
Sofonea, Mircea; Avramescu, Cezar; Matei, Andaluzia
2008
On the orbital stability of fractional Schrödinger equations. Zbl 1282.35319
Cho, Yonggeun; Hajaiej, Hichem; Hwang, Gyeongha; Ozawa, Tohru
2014
Nonsimultaneous blow-up for a quasilinear parabolic system with reaction at the boundary. Zbl 1093.35031
Brändle, C.; Quirós, F.; Rossi, Julio D.
2005
A sixth order Cahn-Hilliard type equation arising in oil-water-surfactant mixtures. Zbl 1229.35108
Pawłow, Irena; Zajączkowski, Wojciech M.
2011
High order product integration methods for a Volterra integral equation with logarithmic singular kernel. Zbl 1066.65162
Diogo, T.; Franco, N. B.; Lima, P.
2004
Multiple positive solutions for Kirchhoff type problems with singularity. Zbl 1270.35242
Liu, Xing; Sun, Yijing
2013
Compactness of discrete approximate solutions to parabolic PDEs – application to a turbulence model. Zbl 1308.35208
Gallouët, T.; Latché, J.-C.
2012
Modeling the spontaneous curvature effects in static cell membrane deformations by a phase field formulation. Zbl 1078.92004
Du, Q.; Liu, C.; Ryham, R.; Wang, X.
2005
Pullback exponential attractors for evolution processes in Banach spaces: theoretical results. Zbl 1390.37126
De Carvalho, Alexandre Nolasco; Sonner, Stefanie
2013
High multiplicity and complexity of the bifurcation diagrams of large solutions for a class of superlinear indefinite problems. Zbl 1281.34027
López-Gómez, Julián; Tellini, Andrea; Zanolin, Fabio
2014
Comparison of numerical methods for fractional differential equations. Zbl 1133.65115
Ford, Neville J.; Connolly, Joseph A.
2006
Smoothing transformation and piecewise polynomial projection methods for weakly singular Fredholm integral equations. Zbl 1133.65118
Pedas, A.; Vainikko, G.
2006
Error analysis of a conservative finite-element approximation for the Keller-Segel system of chemotaxis. Zbl 1264.65148
Saito, Norikazu
2012
Factor-analysis of nonlinear mappings: $$p$$-regularity theory. Zbl 1053.58004
Tret’yakov, Alexey; Marsden, Jerrold E.
2003
Solutions of a pure critical exponent problem involving the half-Laplacian in annular-shaped domains. Zbl 1236.35073
Capella, Antonio
2011
Null controllability and stabilization of the linear Kuramoto-Sivashinsky equation. Zbl 1188.93018
Cerpa, Eduardo
2010
Distributional chaos for strongly continuous semigroups of operators. Zbl 1287.47007
Albanese, Angela A.; Barrachina, Xavier; Mangino, Elisabetta M.; Peris, Alfredo
2013
Existence of nontrivial steady states for populations structured with respect to space and a continuous trait. Zbl 1303.92091
Arnold, Anton; Desvillettes, Laurent; Prévost, Céline
2012
Coexistence and extinction in the Volterra-Lotka competition model with nonlocal dispersal. Zbl 1264.35113
Hetzer, Georg; Nguyen, Tung; Shen, Wenxian
2012
The singularity analysis of solutions to some integral equations. Zbl 1137.45002
Li, Congming; Lim, Jisun
2007
Semi-positone nonlocal boundary value problems of arbitrary order. Zbl 1200.34025
Webb, J. R. L.; Infante, Gennaro
2010
The vanishing pressure limits of Riemann solutions to the Chaplygin gas equations with a source term. Zbl 1357.35221
Guo, Lihui; Li, Tong; Yin, Gan
2017
The geometry of a vorticity model equation. Zbl 1372.58005
Escher, Joachim; Kolev, Boris; Wunsch, Marcus
2012
Vorticity and regularity for flows under the Navier boundary condition. Zbl 1132.35067
Beirão da Veiga, H.
2006
Completely monotonic functions involving divided differences of the di- and tri-gamma functions and some applications. Zbl 1181.26024
Qi, Feng; Guo, Bai-Ni
2009
Kirchhoff systems with nonlinear source and boundary damping terms. Zbl 1223.35082
Autuori, Giuseppina; Pucci, Patrizia
2010
The inverse Fueter mapping theorem. Zbl 1258.30022
Colombo, Fabrizio; Sabadini, Irene; Sommen, Frank
2011
Remarks on the strong maximum principle for viscosity solutions to fully nonlinear parabolic equations. Zbl 1065.35084
Da Lio, Francesca
2004
The Poisson problem for the exterior derivative operator with Dirichlet boundary condition in nonsmooth domains. Zbl 1157.58007
Reyes, Guillermo; Vázquez, Juan-Luis
2008
Global gradient estimates in elliptic problems under minimal data and domain regularity. Zbl 1325.35027
2015
Elliptic equations having a singular quadratic gradient term and a changing sign datum. Zbl 1266.35103
Giachetti, Daniela; Petitta, Francesco; Segura De León, Sergio
2012
Existence of solutions for singularly perturbed Schrödinger equations with nonlocal part. Zbl 1270.35218
Yang, Minbo; Ding, Yanheng
2013
On semilinear elliptic equations involving critical Sobolev exponents and sign-changing weight function. Zbl 1156.35046
Wu, Tsung-Fang
2008
Bounds on Sobolev norms for the defocusing nonlinear Schrödinger equation on general flat tori. Zbl 1189.35301
Catoire, F.; Wang, W. M.
2010
Homoclinic orbits for a class of Hamiltonian systems with superquadratic or asymptotically quadratic potentials. Zbl 1231.37033
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2011
Long time behavior for the inhomogeneous PME in a medium with slowly decaying density. Zbl 1169.35313
Reyes, Guillermo; Vázquez, Juan-Luis
2009
Convergence to equilibrium for the backward Euler scheme and applications. Zbl 1215.65132
Merlet, Benoît; Pierre, Morgan
2010
A Serrin-type regularity criterion for the Navier-Stokes equations via one velocity component. Zbl 1278.35184
Zhang, Zujin
2013
The effect of delay on a diffusive predator-prey system with Holling type-II predator functional response. Zbl 1264.35120
Chen, Shanshan; Shi, Junping; Wei, Junjie
2013
Stability of rarefaction wave and boundary layer for outflow problem on the two-fluid Navier-Stokes-Poisson equations. Zbl 1264.76132
Duan, Renjun; Yang, Xiongfeng
2013
Equivalence of invariant measures and stationary statistical solutions for the autonomous globally modified Navier-Stokes equations. Zbl 1168.35412
Kloeden, P. E.; Marín-Rubio, Pedro; Real, José
2009
Initial-boundary value problems for the coupled modified Korteweg-de Vries equation on the interval. Zbl 1397.35262
Tian, Shou-Fu
2018
Homoclinic orbits for discrete Hamiltonian systems with local super-quadratic conditions. Zbl 1401.39017
Zhang, Qinqin
2019
Existence and regularity results for the primitive equations in two space dimensions. Zbl 1060.35033
Petcu, M.; Temam, R.; Wirosoetisno, D.
2004
Ground state solutions for fractional Schrödinger equations with critical Sobolev exponent. Zbl 1334.35402
Teng, Kaimin; He, Xiumei
2016
On the blow-up boundary solutions of the Monge-Ampère equation with singular weights. Zbl 1266.35018
Yang, Haitao; Chang, Yibin
2012
Dynamics of non-autonomous nonclassical diffusion equations on $$\mathbb R^n$$. Zbl 1264.35052
Anh, Cung The; Bao, Tang Quoc
2012
Long-time dynamics of the parabolic $$p$$-Laplacian equation. Zbl 1267.35043
Geredeli, Pelin G.; Khanmamedov, Azer
2013
Radial quasilinear elliptic problems with singular or vanishing potentials. Zbl 1481.35234
Badiale, Marino; Guida, Michela; Rolando, Sergio
2022
Subharmonic solutions of bounded coupled Hamiltonian systems with sublinear growth. Zbl 07461711
Chen, Fanfan; Qian, Dingbian; Sun, Xiying; Wu, Yinyin
2022
A convergent finite difference method for computing minimal Lagrangian graphs. Zbl 07474340
Hamfeldt, Brittany Froese; Lesniewski, Jacob
2022
Analytical and numerical monotonicity results for discrete fractional sequential differences with negative lower bound. Zbl 1469.39003
Goodrich, Christopher S.; Lyons, Benjamin; Velcsov, Mihaela T.
2021
Semilinear Caputo time-fractional pseudo-parabolic equations. Zbl 1460.35381
Tuan, Nguyen Huy; Au, Vo Van; Xu, Runzhang
2021
A note on Riemann-Liouville fractional Sobolev spaces. Zbl 1472.46035
Carbotti, Alessandro; Comi, Giovanni E.
2021
Limit cycle bifurcations in a class of piecewise smooth cubic systems with multiple parameters. Zbl 1465.34036
Cai, Meilan; Han, Maoan
2021
Scattering of the focusing energy-critical NLS with inverse square potential in the radial case. Zbl 1460.35335
Yang, Kai
2021
A stability result for the Steklov Laplacian eigenvalue problem with a spherical obstacle. Zbl 1460.35086
Paoli, Gloria; Piscitelli, Gianpaolo; Sannipoli, Rossanno
2021
High-order Wong-Zakai approximations for non-autonomous stochastic $$p$$-Laplacian equations on $$\mathbb{R}^N$$. Zbl 1460.35404
Zhao, Wenqiang; Zhang, Yijin
2021
Dimension estimate of attractors for complex networks of reaction-diffusion systems applied to an ecological model. Zbl 1460.35045
Cantin, Guillaume; Aziz-Alaoui, M. A.
2021
Random data theory for the cubic fourth-order nonlinear Schrödinger equation. Zbl 1460.35323
Dinh, Van Duong
2021
Elliptic problems with rough boundary data in generalized Sobolev spaces. Zbl 1460.35116
Anop, Anna; Denk, Robert; Murach, Aleksandr
2021
Further regularity and uniqueness results for a non-isothermal Cahn-Hilliard equation. Zbl 1460.35289
Ipocoana, Erica; Zafferi, Andrea
2021
Periodic solutions of $$p$$-Laplacian equations via rotation numbers. Zbl 1471.34081
Wang, Shuang; Qian, Dingbian
2021
A note on the nonexistence of global solutions to the semilinear wave equation with nonlinearity of derivative-type in the generalized Einstein-de Sitter spacetime. Zbl 1479.35138
Hamouda, Makram; Hamza, Mohamed Ali; Palmieri, Alessandro
2021
Asymptotic expansion of the ground state energy for nonlinear Schrödinger system with three wave interaction. Zbl 1479.35799
2021
On the Calderon-Zygmund property of Riesz-transform type operators arising in nonlocal equations. Zbl 1477.45007
Yeepo, Sasikarn; Lewkeeratiyutkul, Wicharn; Khomrutai, Sujin; Schikorra, Armin
2021
Stability results of a singular local interaction elastic/viscoelastic coupled wave equations with time delay. Zbl 1476.93129
2021
On principal eigenvalues of biharmonic systems. Zbl 1460.35247
Kong, Lingju; Nichols, Roger
2021
Blow-up solutions and strong instability of ground states for the inhomogeneous nonlinear Schrödinger equation. Zbl 1460.35315
Ardila, Alex H.; Cardoso, Mykael
2021
Blow-up for the 1D nonlinear Schrödinger equation with point nonlinearity. II: Supercritical blow-up profiles. Zbl 1460.35328
Holmer, Justin; Liu, Chang
2021
On optimal autocorrelation inequalities on the real line. Zbl 1460.42002
Madrid, José; Ramos, João P. G.
2021
Weighted multipolar Hardy inequalities and evolution problems with Kolmogorov operators perturbed by singular potentials. Zbl 1470.35008
Canale, Anna; Pappalardo, Francesco; Tarantino, Ciro
2021
On the Cahn-Hilliard equation with mass source for biological applications. Zbl 1460.35351
Fakih, Hussein; Mghames, Ragheb; Nasreddine, Noura
2021
The anisotropic fractional isoperimetric problem with respect to unconditional unit balls. Zbl 1460.49035
Kreuml, Andreas
2021
Ground state and nodal solutions for fractional Schrödinger-Maxwell-Kirchhoff systems with pure critical growth nonlinearity. Zbl 1460.35142
Liu, Chungen; Zhang, Huabo
2021
Multiple positive solutions for coupled Schrödinger equations with perturbations. Zbl 1460.35129
Li, Haoyu; Wang, Zhi-Qiang
2021
Global existence and uniform estimates of solutions to reaction diffusion systems with mass transport type boundary conditions. Zbl 1472.35220
Sharma, Vandana
2021
Large-time behavior of solutions to unipolar Euler-Poisson equations with time-dependent damping. Zbl 1471.35242
Wu, Qiwei; Luan, Liping
2021
An overdetermined problem associated to the Finsler Laplacian. Zbl 1466.35270
Ciraolo, Giulio; Greco, Antonio
2021
Polynomial growth of high Sobolev norms of solutions to the Zakharov-Kuznetsov equation. Zbl 1467.35280
Côte, Raphaël; Valet, Frédéric
2021
Well-posedness and attractor for a strongly damped wave equation with supercritical nonlinearity on $$\mathbb{R}^N$$. Zbl 1466.35032
Ding, Pengyan; Yang, Zhijian
2021
Stability of rarefaction wave for the compressible non-isentropic Navier-Stokes-Maxwell equations. Zbl 1471.35225
Yao, Huancheng; Yin, Haiyan; Zhu, Changjiang
2021
On the uniqueness of solutions of a semilinear equation in an annulus. Zbl 1471.35141
Cortázar, Carmen; García-Huidobro, Marta; Herreros, Pilar; Tanaka, Satoshi
2021
Asymptotic behaviors of solutions to a sixth-order Boussinesq equation with logarithmic nonlinearity. Zbl 1466.35049
Zhang, Huan; Zhou, Jun
2021
Existence of multi-peak solutions to the Schnakenberg model with heterogeneity on metric graphs. Zbl 1466.35014
Ishii, Yuta; Kurata, Kazuhiro
2021
Extremal functions for a class of trace Trudinger-Moser inequalities on a compact Riemann surface with smooth boundary. Zbl 1478.46040
Zhang, Mengjie
2021
Homogenization of a modified bidomain model involving imperfect transmission. Zbl 1466.35015
Amar, Micol; Andreucci, Daniele; Timofte, Claudia
2021
A note on the energy transfer in coupled differential systems. Zbl 1471.34109
Conti, Monica; Liverani, Lorenzo; Pata, Vittorino
2021
Multiplicity of solutions for a class of fractional elliptic problems with critical exponential growth and nonlocal Neumann condition. Zbl 1466.35352
Alves, Claudianor O.; Ledesma, César T.
2021
Asymptotics for the concentrated field between closely located hard inclusions in all dimensions. Zbl 1478.78042
Zhao, Zhiwen; Hao, Xia
2021
Traveling waves for a two-group epidemic model with latent period and bilinear incidence in a patchy environment. Zbl 07436375
San, Xuefeng; He, Yuan
2021
Sharp gradient estimates on weighted manifolds with compact boundary. Zbl 1482.58012
Dung, Ha Tuan; Dung, Nguyen Thac; Wu, Jiayong
2021
Uniqueness and stability of time-periodic pyramidal fronts for a periodic competition-diffusion system. Zbl 1423.35160
Bao, Xiongxiong; Li, Wan-Tong; Wang, Zhi-Cheng
2020
Real-variable characterizations of new anisotropic mixed-norm Hardy spaces. Zbl 1437.42033
Huang, Long; Liu, Jun; Yang, Dachun; Yuan, Wen
2020
Global boundedness of solutions to a chemotaxis-fluid system with singular sensitivity and logistic source. Zbl 1442.35038
Ren, Guoqiang; Liu, Bin
2020
On large potential perturbations of the Schrödinger, wave and Klein-Gordon equations. Zbl 1428.35431
D’Ancona, Piero
2020
On a delayed epidemic model with non-instantaneous impulses. Zbl 1447.34070
Bai, Liang; Nieto, Juan J.; Uzal, José M.
2020
Bifurcation and stability of a two-species diffusive Lotka-Volterra model. Zbl 1436.35230
Ma, Li; Guo, Shangjiang
2020
Controllability of the one-dimensional fractional heat equation under positivity constraints. Zbl 1446.93012
Biccari, Umberto; Warma, Mahamadi; Zuazua, Enrique
2020
The dynamics of nonlocal diffusion systems with different free boundaries. Zbl 1445.35211
Li, Lei; Wang, Jianping; Wang, Mingxin
2020
Averaging principle for stochastic real Ginzburg-Landau equation driven by $$\alpha$$-stable process. Zbl 1431.35259
Sun, Xiaobin; Zhai, Jianliang
2020
A stochastic threshold for an epidemic model with isolation and a non linear incidence. Zbl 1435.92068
Caraballo, Tomás; Fatini, Mohamed El; Sekkak, Idriss; Taki, Regragui; Laaribi, Aziz
2020
Elliptic approximation of forward-backward parabolic equations. Zbl 1439.35304
Paronetto, Fabio
2020
Liouville theorems for stable weak solutions of elliptic problems involving Grushin operator. Zbl 1427.35029
Le, Phuong
2020
Symmetry of singular solutions for a weighted Choquard equation involving the fractional $$p$$-Laplacian. Zbl 1423.35403
Le, Phuong
2020
Traveling waves in a nonlocal dispersal epidemic model with spatio-temporal delay. Zbl 1439.35251
Wei, Jingdong; Zhou, Jiangbo; Chen, Wenxia; Zhen, Zaili; Tian, Lixin
2020
Nonclassical diffusion with memory lacking instantaneous damping. Zbl 1446.35063
Conti, Monica; Dell’Oro, Filippo; Pata, Vittorino
2020
Stability of multi-peak symmetric stationary solutions for the Schnakenberg model with periodic heterogeneity. Zbl 1446.35051
Ishii, Yuta
2020
Sharp Hardy-Leray inequality for three-dimensional solenoidal fields with axisymmetric swirl. Zbl 1437.35006
Hamamoto, Naoki; Takahashi, Futoshi
2020
The effect of nonlocal reaction in an epidemic model with nonlocal diffusion and free boundaries. Zbl 1460.35400
Zhao, Meng; Li, Wantong; Du, Yihong
2020
General condition for exponential stability of thermoelastic Bresse systems with Cattaneo’s law. Zbl 1439.35057
de Lima, Pedro Roberto; Fernández Sare, Hugo D.
2020
Approximation by multivariate max-product Kantorovich-type operators and learning rates of least-squares regularized regression. Zbl 1472.41011
Coroianu, Lucian; Costarelli, Danilo; Gal, Sorin G.; Vinti, Gianluca
2020
Liouville theorems for an integral equation of Choquard type. Zbl 1428.35665
Le, Phuong
2020
Pullback dynamics of a non-autonomous mixture problem in one dimensional solids with nonlinear damping. Zbl 1430.35152
Freitas, Mirelson M.; Costa, Alberto L. C.; Araújo, Geraldo M.
2020
Least energy solutions for coupled Hartree system with Hardy-Littlewood-Sobolev critical exponents. Zbl 1427.35066
Zheng, Yu; Santos, Carlos A.; Shen, Zifei; Yang, Minbo
2020
Analytic integrability around a nilpotent singularity: the non-generic case. Zbl 1483.34006
Algaba, Antonio; Díaz, María; García, Cristóbal; Giné, Jaume
2020
Global existence for a chemotaxis-haptotaxis model with $$p$$-Laplacian. Zbl 1433.92007
Liu, Changchun; Li, Pingping
2020
Travelling corners for spatially discrete reaction-diffusion systems. Zbl 1436.34009
Hupkes, H. J.; Morelli, L.
2020
Global higher integrability of weak solutions of porous medium systems. Zbl 1432.35128
Moring, Kristian; Scheven, Christoph; Schwarzacher, Sebastian; Singer, Thomas
2020
Bending-torsion moments in thin multi-structures in the context of nonlinear elasticity. Zbl 1431.74023
Ferreira, Rita; Zappale, Elvira
2020
Strong convergence of trajectory attractors for reaction-diffusion systems with random rapidly oscillating terms. Zbl 1435.35072
Bekmaganbetov, Kuanysh A.; Chechkin, Gregory A.; Chepyzhov, Vladimir V.
2020
Uniform a priori estimates for elliptic problems with impedance boundary conditions. Zbl 1436.35077
Chaumont-Frelet, Théophile; Nicaise, Serge; Tomezyk, Jérôme
2020
Global dynamics for a class of reaction-diffusion equations with distributed delay and Neumann condition. Zbl 1439.35512
Touaoula, Tarik Mohammed
2020
$$BV$$ functions on open domains: the Wiener case and a Fomin differentiable case. Zbl 1441.49036
Addona, Davide; Menegatti, Giorgio; Miranda, Michele jun.
2020
Nonexistence results on the space or the half space of $$-\Delta u+\lambda u = \vert u\vert ^{p-1}u$$ via the Morse index. Zbl 1437.35341
Selmi, Abdelbaki; Harrabi, Abdellaziz; Zaidi, Cherif
2020
The probabilistic Cauchy problem for the fourth order Schrödinger equation with special derivative nonlinearities. Zbl 07196904
Zhang, Shuai; Xu, Shaopeng
2020
A second order fractional differential equation under effects of a super damping. Zbl 1460.35370
Charão, Ruy Coimbra; Espinoza, Juan Torres; Ikehata, Ryo
2020
Improved Sobolev inequalities and critical problems. Zbl 1439.35011
Chen, Xiaoli; Yang, Jianfu
2020
Bound state positive solutions for a class of elliptic system with Hartree nonlinearity. Zbl 1440.35092
Che, Guofeng; Chen, Haibo; Wu, Tsung-fang
2020
Function approximation by deep networks. Zbl 1442.62215
2020
Geometry of self-similar measures on intervals with overlaps and applications to sub-Gaussian heat kernel estimates. Zbl 1429.28014
Gu, Qingsong; Hu, Jiaxin; Ngai, Sze-Man
2020
Entire subsolutions of Monge-Ampère type equations. Zbl 1427.35091
Dai, Limei; Li, Hongyu
2020
Dynamics of spatially heterogeneous viral model with time delay. Zbl 1430.35141
Yang, Hong; Wei, Junjie
2020
On the Schrödinger-Debye system in compact Riemannian manifolds. Zbl 1428.35533
Nogueira, Marcelo; Panthee, Mahendra
2020
Potential well and multiplicity of solutions for nonlinear Dirac equations. Zbl 1428.35420
Chen, Yu; Ding, Yanheng; Xu, Tian
2020
Stability of the spectral gap for the Boltzmann multi-species operator linearized around non-equilibrium Maxwell distributions. Zbl 1442.35277
Bondesan, Andrea; Boudin, Laurent; Briant, Marc; Grec, Bérénice
2020
Clustering phase transition layers with boundary intersection for an inhomogeneous Allen-Cahn equation. Zbl 1437.35342
Wei, Suting; Yang, Jun
2020
Local Lipschitz regularity for functions satisfying a time-dependent dynamic programming principle. Zbl 07182512
Han, Jeongmin
2020
Limiting dynamical behavior of random fractional Fitzhugh-Nagumo systems driven by a Wong-Zakai approximation process. Zbl 1439.35586
Li, Dingshi; Wang, Xiaohu; Zhao, Junyilang
2020
Elliptic and parabolic problems in thin domains with doubly weak oscillatory boundary. Zbl 1439.35036
Arrieta, José M.; Villanueva-Pesqueira, Manuel
2020
Advances in the truncated Euler-Maruyama method for stochastic differential delay equations. Zbl 1462.60073
Fei, Weiyin; Hu, Liangjian; Mao, Xuerong; Xia, Dengfeng
2020
Theoretical and numerical results for some bi-objective optimal control problems. Zbl 1437.35534
Fernández-Cara, Enrique; Marín-Gayte, Irene
2020
Attractors for semilinear wave equations with localized damping and external forces. Zbl 1441.35062
Ma, To Fu; Seminario-Huertas, Paulo Nicanor
2020
Asymptotic behavior of a Cahn-Hilliard/Allen-Cahn system with temperature. Zbl 1437.35084
Miranville, Alain; Quintanilla, Ramon; Saoud, Wafa
2020
Sigmoidal approximations of a delay neural lattice model with heaviside functions. Zbl 1451.34101
Wang, Xiaoli; Yang, Meihua; Kloeden, Peter E.
2020
Asymptotic profiles of steady states for a diffusive SIS epidemic model with spontaneous infection and a logistic source. Zbl 1446.35058
Zhu, Siyao; Wang, Jinliang
2020
Monotonicity with respect to $$p$$ of the first nontrivial eigenvalue of the $$p$$-Laplacian with homogeneous Neumann boundary conditions. Zbl 1465.35327
Mihăilescu, Mihai; Rossi, Julio D.
2020
Optimal decay to the non-isentropic compressible micropolar fluids. Zbl 1460.35292
Liu, Lvqiao; Zhang, Lan
2020
...and 1287 more Documents
### Cited by 9,075 Authors
117 Papageorgiou, Nikolaos S. 62 Rădulescu, Vicenţiu D. 54 Miranville, Alain M. 44 Llibre, Jaume 40 Li, Wan-Tong 35 Figueiredo, Giovany Malcher 34 Tang, Xianhua 33 Alves, Claudianor Oliveira 33 Colli, Pierluigi 31 Repovš, Dušan D. 31 Rossi, Julio Daniel 29 Liu, Chein-Shan 29 Zelik, Sergey V. 28 Zhu, Changjiang 27 Grasselli, Maurizio 27 Miyagaki, Olimpio Hiroshi 27 Pata, Vittorino 25 Ambrosio, Vincenzo 25 Gasiński, Leszek 25 Mu, Chunlai 25 Tret’yakov, Alexey A. 25 Yang, Dachun 24 Chen, Pengyu 23 Caraballo Garrido, Tomás 23 Li, Yangrong 23 Nicola, Fabio 23 Vetro, Calogero 22 Molica Bisci, Giovanni 22 Pucci, Patrizia 22 Quintanilla de Latorre, Ramón 22 Winkert, Patrick 22 Zhou, Shengfan 21 Chen, Caisheng 21 Gilardi, Gianni 21 Sreenadh, Konijeti 21 Tang, Zhongwei 21 Yang, Minbo 21 Zou, Wenming 20 Conti, Monica C. 20 Li, Yongxiang 20 Messaoudi, Salim A. 20 Tachim Medjo, Theodore 19 Cordero, Elena 19 Gal, Ciprian Gheorghe Sorin 19 Kloeden, Peter Eris 19 Qin, Yuming 19 Sommen, Franciscus 19 Vetro, Francesca 19 Wang, Weike 19 Zhang, Xuping 18 Chen, Wenxiong 18 De Schepper, Hennie 18 Gala, Sadek 18 Han, Maoan 18 Ishige, Kazuhiro 18 Li, Congming 18 Liao, Jiafeng 18 Liu, Bingchen 18 López-Gómez, Julián 18 Peral Alonso, Ireneo 18 Qi, Feng 18 Sofonea, Mircea 18 Valdinoci, Enrico 18 Wu, Hao 18 Yang, Sibei 18 Zhang, Binlin 18 Zhang, Zujin 18 Zhao, Caidi 17 Dinh, Van Duong 17 García-Melián, Jorge 17 Li, Fengjie 17 Shi, Junping 17 Strichartz, Robert S. 17 Tian, Lixin 17 Wei, Juncheng 17 Yang, Zhijian 17 Yao, Zhengan 17 Zhang, Jihui 17 Zhao, Xiaopeng 16 Brackx, Fred F. 16 Lei, Yutian 16 Quaas, Alexander 16 Rocca, Elisabetta 16 Saanouni, Tarek 16 Sprekels, Jürgen 16 Vougalter, Vitali 16 Wang, Zhi-Qiang 16 Zhang, Fubao 16 Zhou, Jun 16 Zhou, Zhan 15 Abdellaoui, Boumediene 15 Fan, Jishan 15 Fiscella, Alessio 15 Kawakami, Tatsuki 15 Ruan, Shigui 15 Sasu, Bogdan 15 Severo, Uberlandio Batista 15 Sun, Chunyou 15 Temam, Roger Meyer 15 Tian, Shoufu ...and 8,975 more Authors
### Cited in 558 Journals
596 Journal of Differential Equations 591 Journal of Mathematical Analysis and Applications 475 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 363 Communications on Pure and Applied Analysis 262 Discrete and Continuous Dynamical Systems 202 Journal of Mathematical Physics 174 Nonlinear Analysis. Real World Applications 170 Discrete and Continuous Dynamical Systems. Series B 167 ZAMP. Zeitschrift für angewandte Mathematik und Physik 161 Calculus of Variations and Partial Differential Equations 150 Applicable Analysis 143 Boundary Value Problems 124 Applied Mathematics Letters 121 Applied Mathematics and Computation 107 Computers & Mathematics with Applications 96 Mathematical Methods in the Applied Sciences 94 NoDEA. Nonlinear Differential Equations and Applications 90 SIAM Journal on Mathematical Analysis 89 Journal of Functional Analysis 86 Journal of Evolution Equations 83 Archive for Rational Mechanics and Analysis 79 Discrete and Continuous Dynamical Systems. Series S 76 Acta Applicandae Mathematicae 75 Complex Variables and Elliptic Equations 71 Abstract and Applied Analysis 71 Mediterranean Journal of Mathematics 67 Journal de Mathématiques Pures et Appliquées. Neuvième Série 66 Proceedings of the American Mathematical Society 65 Applied Mathematics and Optimization 62 Journal of Computational and Applied Mathematics 61 Communications in Contemporary Mathematics 61 Advances in Nonlinear Analysis 59 Journal of Dynamics and Differential Equations 59 Journal of Mathematical Fluid Mechanics 59 Advanced Nonlinear Studies 58 Annali di Matematica Pura ed Applicata. Serie Quarta 56 Nonlinearity 55 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 54 Communications in Mathematical Physics 53 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 53 Advances in Difference Equations 51 Communications in Partial Differential Equations 50 Transactions of the American Mathematical Society 50 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 50 Science China. Mathematics 49 Evolution Equations and Control Theory 48 Topological Methods in Nonlinear Analysis 46 The Journal of Geometric Analysis 46 Electronic Journal of Differential Equations (EJDE) 44 Advances in Mathematics 44 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 42 Asymptotic Analysis 41 Journal of Inequalities and Applications 39 Comptes Rendus. Mathématique. Académie des Sciences, Paris 38 Journal of Computational Physics 38 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 35 Mathematische Nachrichten 34 Physica D 33 Chinese Annals of Mathematics. Series B 33 Discrete Dynamics in Nature and Society 32 Journal of Scientific Computing 32 Communications in Nonlinear Science and Numerical Simulation 32 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 32 Journal of Applied Analysis and Computation 32 AIMS Mathematics 32 Electronic Research Archive 31 Journal of Nonlinear Science 30 Applied Numerical Mathematics 30 Journal of Difference Equations and Applications 30 Qualitative Theory of Dynamical Systems 30 Journal of Fixed Point Theory and Applications 29 Chaos, Solitons and Fractals 29 Open Mathematics 27 Monatshefte für Mathematik 27 Taiwanese Journal of Mathematics 27 Acta Mathematica Sinica. 
English Series 27 Journal of Elliptic and Parabolic Equations 26 SIAM Journal on Control and Optimization 26 Mathematical Biosciences and Engineering 25 Mathematische Annalen 25 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 25 Journal of Function Spaces 24 Rocky Mountain Journal of Mathematics 24 Potential Analysis 24 Journal of Mathematical Sciences (New York) 23 Numerical Functional Analysis and Optimization 23 Numerische Mathematik 23 Quarterly of Applied Mathematics 22 Computer Methods in Applied Mechanics and Engineering 22 Results in Mathematics 22 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 21 Archiv der Mathematik 21 The Journal of Fourier Analysis and Applications 21 Kinetic and Related Models 20 Journal of Statistical Physics 20 Manuscripta Mathematica 20 Acta Mathematicae Applicatae Sinica. English Series 20 Advances in Computational Mathematics 20 International Journal of Biomathematics 20 Advances in Mathematical Physics ...and 458 more Journals
### Cited in 59 Fields
7,304 Partial differential equations (35-XX) 1,193 Fluid mechanics (76-XX) 905 Ordinary differential equations (34-XX) 787 Biology and other natural sciences (92-XX) 752 Dynamical systems and ergodic theory (37-XX) 747 Operator theory (47-XX) 713 Numerical analysis (65-XX) 512 Mechanics of deformable solids (74-XX) 497 Calculus of variations and optimal control; optimization (49-XX) 373 Integral equations (45-XX) 353 Systems theory; control (93-XX) 336 Global analysis, analysis on manifolds (58-XX) 307 Probability theory and stochastic processes (60-XX) 292 Functional analysis (46-XX) 248 Real functions (26-XX) 241 Harmonic analysis on Euclidean spaces (42-XX) 196 Statistical mechanics, structure of matter (82-XX) 166 Classical thermodynamics, heat transfer (80-XX) 157 Differential geometry (53-XX) 153 Quantum theory (81-XX) 104 Difference and functional equations (39-XX) 101 Measure and integration (28-XX) 100 Optics, electromagnetic theory (78-XX) 85 Potential theory (31-XX) 84 Mechanics of particles and systems (70-XX) 83 Functions of a complex variable (30-XX) 68 Operations research, mathematical programming (90-XX) 65 Special functions (33-XX) 55 Geophysics (86-XX) 51 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 39 Computer science (68-XX) 38 Approximations and expansions (41-XX) 31 Relativity and gravitational theory (83-XX) 30 Linear and multilinear algebra; matrix theory (15-XX) 27 Abstract harmonic analysis (43-XX) 27 Integral transforms, operational calculus (44-XX) 26 Several complex variables and analytic spaces (32-XX) 25 Information and communication theory, circuits (94-XX) 20 Statistics (62-XX) 18 Combinatorics (05-XX) 17 Topological groups, Lie groups (22-XX) 16 Astronomy and astrophysics (85-XX) 14 Number theory (11-XX) 13 Manifolds and cell complexes (57-XX) 12 General topology (54-XX) 9 Convex and discrete geometry (52-XX) 8 Algebraic geometry (14-XX) 7 General and overarching topics; collections (00-XX) 6 History and biography (01-XX) 5 Sequences, series, summability (40-XX) 4 Nonassociative rings and algebras (17-XX) 4 Group theory and generalizations (20-XX) 4 Geometry (51-XX) 3 Mathematical logic and foundations (03-XX) 3 Algebraic topology (55-XX) 2 Field theory and polynomials (12-XX) 1 Commutative algebra (13-XX) 1 Associative rings and algebras (16-XX) 1 $$K$$-theory (19-XX)
|
# Ensemble Learning
Posted by Johanna Pingel
Combining deep learning networks to increase prediction accuracy. The following post is from Maria Duarte Rosa, who wrote a great post on neural network feature visualization, and who here talks about ways to increase your model's prediction accuracy.
• Have you tried training different architectures from scratch?
• Have you tried different weight initializations?
• Have you tried transfer learning using different pretrained models?
• Have you run cross-validation to find the best hyperparameters?
If you answered Yes to any of these questions, this post will show you how to take advantage of your trained models to increase the accuracy of your predictions. Even if you answered No to all 4 questions, the simple techniques below may still help to increase your prediction accuracy. First, let's talk about ensemble learning.
### What is ensemble learning?
Ensemble learning, or model ensembling, is a well-established set of machine learning and statistical techniques [LINK: https://doi.org/10.1002/widm.1249] for improving predictive performance through the combination of different learning algorithms. The combination of the predictions from different models is generally more accurate than any of the individual models making up the ensemble. Ensemble methods come in different flavours and levels of complexity (for a review see https://arxiv.org/pdf/1106.0257.pdf), but here we focus on combining the predictions of multiple deep learning networks that have been previously trained. Different networks make different mistakes, and this diversity can be leveraged through model ensembling. Although not as popular in the deep learning literature as in more traditional machine learning research, model ensembling for deep learning has led to impressive results, especially in highly popular competitions, such as ImageNet and other Kaggle challenges. These competitions are commonly won by ensembles of deep learning architectures. In this post, we focus on three very simple ways of combining predictions from different deep neural networks:
1. Averaging: a simple average over all the predictions (the outputs of the softmax layer) from the different networks;
2. Weighted average: each model's predictions are weighted in proportion to its performance. For example, the predictions of the best model could be given a weight of 2 while all the other models keep a weight of 1;
3. Majority voting: for each test observation, the prediction is the most frequent class among all the individual predictions (a minimal sketch of all three combinations follows below).
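To make these rules concrete, here is a minimal MATLAB sketch. It assumes a variable scores of size [nObs x nClasses x nModels] holding the softmax outputs of the trained networks and a vector valErr of individual validation errors; both names are placeholders rather than variables from the examples below, and the inverse-error weighting is just one simple choice.
% 1) Simple average of the softmax outputs
avgScores = mean(scores, 3);
[~, avgPred] = max(avgScores, [], 2);
% 2) Weighted average, here with weights inversely proportional to validation error
w = 1 ./ valErr; w = w / sum(w);
wScores = sum(scores .* reshape(w, 1, 1, []), 3);
[~, wPred] = max(wScores, [], 2);
% 3) Majority vote over the individual class predictions
[~, preds] = max(scores, [], 2);      % class index per observation and model
votePred = mode(squeeze(preds), 2);   % most frequent class per observation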
We will use two examples to illustrate how these techniques can increase the accuracy in the following situations: Example 1: combining different architectures trained from scratch. Example 2: combining different pretrained models for transfer learning. Even though we picked two specific use cases, these techniques apply to most situations where you have trained multiple deep learning networks, including networks trained on different datasets.
### Example 1 – combining different architectures trained from scratch
Here we use the CIFAR-10 dataset to train 6 different ResNet architectures from scratch. We follow this example [LINK: https://www.mathworks.com/help/deeplearning/examples/train-residual-network-for-image-classification.html], but instead of training a single architecture we vary the number of units and the network width using the following 6 combinations: numUnits = [3 6 9 12 18 33]; and netWidth = [12 32 16 24 9 6]. We train each network using the same training options as in the example and estimate their individual validation errors (validation error = 100 - prediction accuracy).
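As a rough sketch of that training loop (buildResNet is a hypothetical helper standing in for the architecture-assembly code of the linked example; XTrain, YTrain and options are assumed to be set up as there):
numUnits = [3 6 9 12 18 33];
netWidth = [12 32 16 24 9 6];
nets = cell(1, numel(numUnits));
for k = 1:numel(numUnits)
    layers = buildResNet(numUnits(k), netWidth(k)); % hypothetical helper
    nets{k} = trainNetwork(XTrain, YTrain, layers, options);
end
Individual validation errors: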
Network 1: 16.36%
Network 2: 7.83%
Network 3: 9.52%
Network 4: 7.68%
Network 5: 10.36%
Network 6: 12.04%
We then calculated the errors for the three different ensembling techniques. Model ensembling errors:
Average: 6.79%
Weighted average: 6.79% (Network 4 counted twice)
Majority vote: 7.16%
A quick chart of these numbers:
figure; bar(example1Results); title('Example 1: prediction errors (%)');
xticklabels({'Network 1','Network 2','Network 3', 'Network 4', 'Network 5', 'Network 6', ...
'Average', 'Weighted average', 'Majority vote'}); xtickangle(40)
The ensemble prediction errors are smaller than those of any individual model. The difference is small, but over 10000 images it means that 89 more images are correctly classified than with the best individual model. We can see some examples of these images:
% Plot some data (misclassified for best model)
figure;
for i = 1:4
subplot(2,2,i);imshow(dataVal(:,:,:,i))
title(sprintf('Best model: %s / Ensemble: %s',bestModelPreds(i),ensemblePreds(i)))
end
### Example 2 – combining different pretrained models for transfer learning
In this example we use the CIFAR-10 dataset again, but this time we use different pretrained models for transfer learning. The models were originally trained on ImageNet and can be downloaded as support packages [LINK: https://www.mathworks.com/matlabcentral/profile/authors/8743315-mathworks-deep-learning-toolbox-team]. We used googlenet, squeezenet, resnet18, xception and mobilenetv2 and followed the transfer learning example [LINK: https://www.mathworks.com/help/deeplearning/examples/train-deep-learning-network-to-classify-new-images.html]. Individual validation errors:
googlenet: 7.23%
squeezenet: 12.89%
resnet18: 7.75%
xception: 3.92%
mobilenetv2: 6.96%
Model ensembling errors:
Average: 3.56%
Weighted average: 3.28% (xception counted twice)
Majority vote: 4.04%
% Plot errors
figure; bar(example2Results); title('Example 2: prediction errors (%)');
xticklabels({'googlenet','squeezenet','resnet18','xception','mobilenetv2', ...
'Average', 'Weighted average', 'Majority vote'}); xtickangle(40)
Again the ensemble prediction errors are smaller than those of any of the individual models, and 64 more images were correctly classified than with the best individual model. These included:
% Plot some data (misclassified for best model)
figure;
for i = 1:4
subplot(2,2,i);imshow(dataVal(:,:,:,i))
title(sprintf('Best model: %s / Ensemble: %s',bestModelPreds(i),ensemblePreds(i)))
end
### What else should I know?
Model ensembling can significantly increase prediction time, which makes it impractical in applications where the cost of inference time is higher than the cost of making wrong predictions. One other thing to note is that performance does not increase monotonically with the number of networks. Typically, as this number increases, training time grows significantly while the return from combining additional models diminishes. There isn't a single magic number for how many networks one should combine; this depends heavily on the networks, the data, and the available computational resources. Having said this, performance tends to improve the more model variety we have in an ensemble. Hope you found this useful. Have you tried ensemble learning, or are you thinking of trying it? Leave a comment below.
Kathleen Edwards replied on : 1 of 7
For the 4 techniques you list at the start, it would be great to add a link to documentation or an example showing how to do it in Matlab. Very useful post, thanks!
Maria Duarte Rosa replied on : 2 of 7
Hi Kathleen, Thank you for your feedback! That is a great suggestion. I have added a few doc links to this comment. I hope you find them useful too.
Mesho replied on : 3 of 7
Thank you very much Johanna for this great post. It's really important and useful. However, for many people new to deep learning, like me, it will be difficult to know how to run this example and to make use of ensemble learning. Also, going through all the many examples and links kindly suggested by Maria means we need to build a high level of understanding of DNNs to run this example, which is not easy for people who come from a different field. I think many followers of this blog seek fast, easy and practical examples like the ones you have been kindly providing. I would appreciate it very much if you could make this post easier to apply.
Johanna Pingel replied on : 4 of 7
Thanks for the feedback Mesho, and I will take this under consideration for future posts. Hopefully you'll see a mix of content: more advanced to more beginner. if you have any specific questions about applying this example, feel free to ask in the comments!
Mesho replied on : 5 of 7
Thank you Johanna, and I will take this opportunity to ask for your suggestion. I have a "training-set" of similar images and I use AlexNet to train them. And I have a "testing-set" and I use AlexNet to find the first best match and also the matching percentage, and everything seems to be working correctly. The problem is: if I repeat the same experiment, each time I get a different result. I learned this is a normal thing because the CNN generates different random variables during initialization. I don't want to use rand(0) because I am afraid this will make my result different from other people's if they use another programming language. Moreover, I learned that we can't use rand(0) when using parallel computing to process on the GPU. Do you think this reproducibility problem can be solved if I use Ensemble Learning? I would appreciate it very much if you can comment on this problem.
Mesho replied on : 6 of 7
Sorry, just one correction; In the above post I meant rng(0) not rand(0) Mesho
Maria Duarte Rosa replied on : 7 of 7
Hi Mesho, thank you for your feedback. As you mentioned, this behavior is normal and we can leverage it with model ensembling. In other words, we can take advantage of this variability by running the experiment multiple times and combining the outputs of all models as suggested in the post (e.g. averaging the predictions or looking at the majority vote). This can reduce uncertainty in your predictions but also provide a more accurate model (the combination of all models tends to perform better than the best individual model). The only downside is the computation time needed to run multiple experiments but if that's not an issue then ensembling is a good solution.
|
# Maximization problem
I've been trying to solve the following problem from Stewart's Calculus Textbook for a while without any success. My answer makes sense, but I'm looking for a way to solve it analytically.
The problem concerns a pulley that is attached to the ceiling of a room at a point C by a rope of length r. At another point B on the ceiling, at a distance d from C (where d > r), a rope of length l is attached and passed through the pulley at F and connected to a weight W. The weight is released and comes to rest at its equilibrium position D. This happens when the distance |ED| is maximized (in the figure that accompanies the problem, E is the point on the ceiling directly above D, and x is the horizontal distance from C to E). Show that when the system reaches equilibrium, the value of x is:
$$\frac{r}{4d}(r+\sqrt{r^2+8d^2})$$
Here is what I've done. First, I expressed |DE| as a function of x
$$|DE|(x)={a}_{2}+{a}_{3}=l-{a}_{1}+\sqrt{{r}^{2}-{x}^{2}}=l-\sqrt{{a}_{3}^2+y^2}+\sqrt{r^2-x^2}$$
from which, since $$y=d-x$$, it follows that $$|DE|(x)=l-\sqrt{r^2+d^2-2xd}+\sqrt{r^2-x^2}$$ defined for $$0\leq x \leq r$$
...and it works since $$|DE|(0)=l+r-\sqrt{r^2+d^2}$$ and $$|DE|(r)=l-|r-d|$$
To find the maximum of this function, I calculated |DE|'(x)
$$|DE|'(x)=\frac{d}{\sqrt{r^2+d^2-2xd}}-\frac{x}{\sqrt{r^2-x^2}}$$
I checked that the two radicals in the denominators are positive for $$0\leq x < r$$
...so basically I'm interested in finding when |DE|'(x) equals zero, more specifically the roots of
$$d\sqrt{r^2-x^2}-x\sqrt{r^2+d^2-2xd}=0$$ that becomes
$$2dx^3-(r^2+2d^2)x^2+d^2r^2=0$$
I graphed |DE|(x) and |DE|'(x) (using l = 15, r = 3, and d = 4); they are consistent with the problem. |DE|'(x) has only one root, at about 2.76, and at the same point |DE|(x) has its maximum. Moreover, if you substitute my test values for r and d into the given formula for x, you get the same numerical result.
So, how was the author of the problem able to find an analytical solution to the problem?
Thanks!
• I appreciate the diagrams, $TeX$, and work. This is a great example of how to format a question so that it will be answered. – davidlowryduda Jan 25 '12 at 1:54
Note that $x=d$ is a solution of your cubic equation (not the one you want, of course).
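Spelling that hint out: once $x=d$ is known to be a root, polynomial division gives
$$2dx^3-(r^2+2d^2)x^2+d^2r^2=(x-d)\left(2dx^2-r^2x-dr^2\right)$$
and the quadratic formula applied to the second factor yields
$$x=\frac{r^2+r\sqrt{r^2+8d^2}}{4d}=\frac{r}{4d}\left(r+\sqrt{r^2+8d^2}\right)$$
which is exactly the expression to be shown. Since $d>r$, this root lies in $(0,r)$, while the other two roots, $x=d$ and the negative root of the quadratic factor, lie outside the domain $0\leq x\leq r$.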
|
# Lecture Notes on Topology and Algebraic Geometry
I developed these notes for the course Advanced Geometry, Spring 2018, at ShanghaiTech University. The course covers Point Set Topology and gives students a taste of Algebraic Geometry. (Notes for Lectures 27-30 are absent.)
## 1 Point Set Topology
Lecture 01
Introduction to this course, some set theory.
Lecture 02
Metric space, open ball, neighborhood, and continuity.
Lecture 03
Limit of a sequence in metric space, continuous functions, and characterization of continuity in terms of open set.
Lecture 04
Closed set and limit point.
Lecture 05
Open set and closed set on the real line.
Lecture 06
Topological equivalence.
Lecture 07
Re-work of what we learned in metric space in the context of topological space, Hausdorff Space, irreducible space, and density.
Lecture 08
Irreducibility, basis and product topology.
Lecture 09-10
Compactness.
Lecture 11 (unfinished)
Connectedness.
## 2 Commutative Algebra and Algebraic Geometry
Lecture 12-13
Introduction to groups, rings, and fields. A first taste of algebraic geometry: polynomials, ideals and linear forms, and some theory related to dimension.
Lecture 14-15
Operations between ideals, prime ideals, radical, Zariski Topology, and some theorems.
Lecture 16-18
Hilbert Basis Theorem, for which modules, Noetherian modules, etc. are introduced.
Lecture 19
Quotients of modules and isomorphism theorems.
Lecture 20-21
Hilbert's Nullstellensatz and its consequences, for which maximal ideals and localization are introduced. Irreducible spaces are discussed.
Lecture 22-23
Irreducible Decomposition Theorem, Projective Spaces and Varieties.
Lecture 24-26
Dimension, prime ideals in a ring, in quotient spaces and in their localization, integral things.
Lecture 27
A proof of the fact that $$\dim \mathbb{F}[x_1,\dots,x_n]=n$$.
Lecture 28
Noether Normalization and its geometric interpretation.
Lecture 29
Noether projection for Noether Normalization: the induced map is surjective and has finite fibers.
Lecture 30
Hyperplane intersection characterization of dimension.
|
# I Sampling and analysis of variance
1. Oct 30, 2016
### imsolost
Dear forumers,
In advance, thank you for reading this post and helping me solving this.
The problem is the following :
I have a population (say, a bag) from which i take 3 samples.
Each sample gets analysed once by a laboratory in order to determine its concentration of a product.
The result from the laboratory for each sample "i" is of the form: the real value of the concentration is somewhere (i.e. normally distributed) around µ_i with variance sigma²_i.
sigma²_i depends on the time of measurement t_i: the longer the measurement, the more accurate the result, which means that µ_i gets closer to the real value of the concentration of sample "i". The relation between sigma_i and t_i is known.
So far, so good.
Now the question is simply :
• What do i know about the concentration of my whole bag ? With which uncertainty (expressed as a variance sigma) ?
• If i take a 4th sample, what can I expect from it ? If I measure it during a time t_4, what will be the associated uncertainty (using all the information I have) ?
• What do I know about the heterogeneity of my bag? (i.e. how can I split sigma into a term involving all the sigma_i and another "heterogeneity" term representing the variance of the 3 means µ1, µ2 and µ3?)
To me, the problem seems a bit similar to an ANOVA analysis, except that, for ANOVA, you have multiple values for each sample, while here you have only one value with a known uncertainty. I don't know, I'm lost...
Would be amazing if you guys could give me some help on this!
2. Oct 30, 2016
### FactChecker
This is really a question about sampling techniques more than ANOVA. The formula for combining results with different variances can be found in https://en.wikipedia.org/wiki/Stratified_sampling It's not completely clear to me that this directly applies to your problem but I think it is getting close. Since you are talking about such a tiny sample, you will not have a good sample estimate of the variance. Unless you have some way of estimating the variance as a function of time, I don't think you can do anything.
3. Oct 30, 2016
### imsolost
Hi FactChecker.
I don't really know if stratified sampling applies here...
I mean, I guess you consider each of my 3 samples as a "stratum", but then a quantity such as the size of a stratum is not really defined, so I can't apply this theory here..
I am a bit confused.
Btw, the number of samples, 3 in this case, is just for the example.
Thank you for trying to help me though !
4. Oct 30, 2016
### FactChecker
Sorry. I had assumed that the reference equation for the sample mean would have weightings inversely proportional to the variance. I see that it doesn't and it is not immediately clear to me how it should be done. I still think that the subject of sampling techniques is the right subject to address your question.
5. Oct 30, 2016
### Stephen Tashi
What is that relation? Is it deterministic?
6. Oct 30, 2016
### imsolost
For the measurement of sample i during a time ti, giving a result µi, the variance is σi² = µi/ti
7. Oct 30, 2016
### Stephen Tashi
It isn't clear what the equation $\sigma_i = \mu_i/ t$ signifies.
For a population parameter, we have to define the associated population. If you are trying to estimate the population mean of a set of samples and we denote the mean of this population as $\mu$, then what population is associated with the mean $\mu_i$, and how is it related to $\mu$?
Likewise we need to know the population associated with $\sigma_i$. For example, what population is associated with $\mu_2$ and $\sigma_2$ ?
8. Oct 30, 2016
### imsolost
Please note that µi/t = σi² (the square disappeared from your expression).
Now, I'll try to answer your remark. The real value of the concentration in sample i, let's call it ρi, is unknown, but considering we measured µi, we assume that ρi is somewhere around µi and that µi is the best estimate. We assume here that f(ρi | µi), i.e. the probability distribution of ρi given that µi was measured, follows a Gaussian with a standard deviation of √(µi/t) and a mean µi.
And the whole problem is about :
1. testing the following hypothesis: H0: for all i,j: ρi = ρj
2. Expressing the total variance s² as a split between one term that represents the heterogeneity (probably something of the form Σi(µi-µ)², where µ is the mean of all the µi) and another term that involves all the σi. In a sense, it's very similar to the ANOVA approach.
I hope this clarifies these things a bit :-/
And thank you for putting your time into helping me on this !
9. Oct 30, 2016
### FactChecker
Is this a time series of measurements of a changing population at times ti or are they measurements of the same population that take different elapsed times ti?
If they are a time series of a changing population, then you should apply time series analysis. I am guessing that you don't mean that.
If they are measurements of the same population that take different elapsed times ti, then there is only one true mean that you want to estimate. The usual way to combine the mean estimators of the same thing from several samples of different sizes is to weight each estimate by its sample size. Since the relationship between the sample size and the estimator variance is sample_mean_sigma2 = true_sigma2 / sample_size, I would suggest weighting each estimator by 1/sample_mean_sigma2. In your case, that would be a time-weighted estimate of the mean. A sample that took twice as long would have twice as much weight in the average estimator.
10. Oct 31, 2016
### Stephen Tashi
There is zero probability that two samples from a gaussian distribution are exactly equal.
You used the notation "$p_i$" to denote both a random variable and a parameter. To do a hypothesis test, you should test a statement about the distribution of a random variable . For example, you might test a hypothesis about a population parameter of its distribution.
If we do Bayesian statistics then we can assume a population parameter is a random variable. But unless $p_i$ and $p_j$ are discrete random variables it doesn't make sense to test the hypothesis that they are equal.
So you are using $\mu_i$ to denote the result of a single measurement?
I think you need to straighten out your notation. If $\sigma_i$ is a standard deviation of some population then it can't be given by a function that gives a different answer depending on the value of a sample that is drawn from that population - (i.e. by $\sigma_i^2 = \mu_i/t$) If you want $\sigma_i$ to be an estimator of a population parameter then its value can depend on the value(s) in a sample.
"Somewhere around" is too wishy-washy to translate into a mathematical statement. "Best estimate" is also subject to interpretation because there are several different criteria for what makes an estimate "good" or "best". (e.g. unbiased, minimum variance, maximum liklihood, least squares).
Perhaps you intend to assume that $\mu_i$ is an unbiased estimator of $p_i$.
How do you define the concentration of the whole bag? Is it the average concentration of the 3 samples ?
-----
A difficulty with your problem is that $\sigma_i^2 = \mu_i/t$ must be interpreted as an estimate of the variance (of the population of all imaginable tests of a given concentration c that last for the time t). So there is "uncertainty" in the estimator $\sigma_i$. Hence if you attempt to estimate the "uncertainty" (i.e. standard deviation) of $(p_1 + p_2 + p_3)/3$ you must examine how the uncertainty in the $\sigma_i$ contributes to the uncertainty in your estimate.
There are familiar statistical scenarios where the population standard deviation is assumed to be correctly estimated by the sample standard deviation. The justification for this assumption is that the uncertainty of the sample standard deviation is small when it is computed from a large sample by the usual formula. But in your case, you are not computing $\sigma_i$ by the usual formula.
11. Oct 31, 2016
### imsolost
Each sample doesn't change as a function of time : ρi is a constant over time.
As stated above, it's just that the longer the measurement, the more accurate it gets. The estimate µi of the real value ρi gets closer to ρi (see my post above with the distribution function f(ρi | µi)).
Now, is it the same population? Well, all 3 samples were taken from the same lot indeed. The unknown concentration of the whole lot is ρ. I'd like to have an estimate of ρ and an evaluation of the uncertainty. But keep in mind that my 3 samples aren't necessarily the same. The lot can be heterogeneous: each sample "i" has its own concentration ρi which is normally distributed around ρ. That's why I thought I should first test the following hypothesis: H0: for all i,j: ρi=ρj. I don't know how to test that.
I don't know what time series are, but indeed the population isn't changing over time, so I guess you guessed well.
The concept of sample "size", in this problem, isn't very clear to me. But I was also expecting a weight that is a function of time, although I can't see any rigorous explanation for it. In your expression, can you define sample_mean_sigma and true_sigma ?
Again, I'd like to thank you guys for helping me on this.
edit : just saw the post of Stephen Tashi. Gonna read it and answer soon.
12. Oct 31, 2016
### imsolost
Okay so I just read Tashi's very interesting post and I think we definitely are getting close to the core of my problem and the mistakes I'm probably making here.
Honestly I think I was doing both at the same time (conventional stats and bayesian ones) and that's probably not appropriate. So let's say ρi is a population parameter and not a random variable.
That said, now that ρi for all i is a parameter, I don't see why I couldn't test a hypothesis like H0: for all i,j: ρi = ρj.
Small side-conversation : {
Let's compare to the ANOVA analysis. Isn't that what they do? Testing whether sub-populations differ according to some explanatory variable, which here translates into testing whether my 3 samples differ by more than their "within-sample" deviation. The only difference with a classical ANOVA being that here I have only one single measurement for each level, which gives me an estimator of the mean and of the dispersion, while ANOVA has multiple data points for each level from which it calculates... its mean and its dispersion!
But I can't use ANOVA theory since there is no such thing as a sample size or degrees of freedom in my problem.
}
Yes, µi is the result of one single measurement on sample i. But you are right, maybe I should have used another letter for it; it confuses people into thinking it's a mean. That said, if you guys are okay with this, I propose keeping this notation in this post for consistency.
You are totally right. Big mistake on my part, so let's rephrase this: √(µi/t) is an estimator of the standard deviation.
Yes ! Thank you for correcting me.
Well that's one of my questions.
I guess the best use of the information I have would be to say that the concentration of the whole bag is best estimated (unbiased estimator?) by the mean of the 3 results (on the 3 samples). So something like (µ1+µ2+µ3)/3? I don't know. Someone suggested using a weighted average that involves the elapsed time of measurement? What would be the justification for this? My intuition makes me think that, indeed, I should give more credit to a more accurate result and give it more weight. I'm confused :/
Yep! But how can I do that?
I didn't really understand that part.
Last edited: Oct 31, 2016
13. Oct 31, 2016
### Stephen Tashi
Assuming $p_i$ is a population parameter there is no conceptual objection to a hypothesis test. But there may be a practical obstacle to doing a hypothesis test because you only have 1 sample from each population.
The statement of your problem is ambiguous because the equation $\sigma_i^2 = \mu_i/t$ is merely the definition of an estimator of a variance. We don't have any probability model that explains why that formula ought to work (in any sense) as a estimator of the variance.
If that estimator is suggested by some data, it would help to know the format of that data. If the estimator is a theoretical result then what is the probability model that the theory uses?
If you can write computer programs, you can investigate the behavior of the estimator by the Monte-Carlo method . Pick an arbitrary "true" concentration value $p_i$ and a time t. Set the population standard deviation $\sigma$ equal to $\sqrt{ p_i/ t}$ . Generate random samples from a normal distribution with mean $p_i$ and standard deviation $\sigma$. For each sample value $s_k$ , compute the value of the estimator $\sigma_k = \sqrt{s_k/t}$ and look at the distribution of the $\sigma_k$.
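For instance, a minimal MATLAB sketch of that procedure (the values of p and t are arbitrary placeholders):
p = 10;      % arbitrary "true" concentration (count rate)
t = 100;     % measurement time
sigmaTrue = sqrt(p/t);                 % population standard deviation
nTrials = 1e5;
s = p + sigmaTrue*randn(nTrials, 1);   % simulated measurements ~ N(p, sigmaTrue)
sigmaHat = sqrt(s/t);                  % estimator computed from each sample value
histogram(sigmaHat)                    % look at the spread of the estimator
fprintf('mean(sigmaHat) = %.4f, std(sigmaHat) = %.4f, true sigma = %.4f\n', ...
        mean(sigmaHat), std(sigmaHat), sigmaTrue)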
14. Oct 31, 2016
### Stephen Tashi
If you knew that $p_1 = p_2 = p_3 = p$ then it would make intuitive sense to give more weight to estimators of $p$ that had less variance. But if the 3 estimators are estimating different things, then it isn't clear that giving an estimator of $p_1$ more weight helps you estimate the average $(p_1 + p_2 + p_3)/3$ better.
To answer questions about a good estimator for $(p_1 + p_2 + p_3)/3$ , you need a probability model for how $p_1,p_2,p_3$ are selected from some population of concentrations.
Comparing the various estimators of $(p_1 + p_2 + p_3)/3$ is another topic that can be investigated by Monte-Carlo simulations.
15. Oct 31, 2016
### imsolost
I'll take more time to read your last 2 posts carefully, but I can already answer your first question. The measurand is not really a concentration. It's a count rate which relates to a radioactive activity per unit mass (see here for a quick summary). Basically, for a given number of counts Ni, the uncertainty (variance) is (√Ni)². So for a counting during a time ti, the measured count rate is Ni/ti = Ri, with a variance Ni/(ti²), which is equal to Ri/ti. This is the expression written in the above posts.
Now, gonna study the rest of your post !
Last edited: Oct 31, 2016
16. Oct 31, 2016
### FactChecker
Here is the point I was trying to make. The normal method of combining estimates from sample groups of different sizes is by weighting each estimate by its sample size. That is because there is a direct inverse relationship between the sample size and the variance of that sample estimate. In your case, you have a direct inverse relationship between measurement elapsed time and variance. So measurement elapsed time takes the place of the sample size. Weigh your estimates by the measurement elapsed time.
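For the counting model described above (each rate $R_i = N_i/t_i$ with variance $R_i/t_i$), time-weighting amounts to pooling all counts over the total counting time; as a sketch, assuming the 3 samples share a common true rate:
$$\hat{R} = \frac{\sum_i t_i R_i}{\sum_i t_i} = \frac{\sum_i N_i}{\sum_i t_i}, \qquad Var(\hat{R}) = \frac{\sum_i N_i}{\left(\sum_i t_i\right)^2} = \frac{\hat{R}}{\sum_i t_i}.$$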
17. Nov 2, 2016
### Stephen Tashi
I agree with that idea if there is some statistical relation among the concentrations (or counts) in the 3 samples.
However, consider a situation at the other extreme. Suppose we are trying to estimate the mean of 3 random variables that are "unrelated". For example, let X1 be the age of a randomly selected resident of California, let X2 be the closing price of the stock of the Monsanto company on a randomly selected day in 2015, and let X3 be the mileage on a randomly selected automobile that is registered in the state of North Carolina. If we have different sized samples of each of these random variables, is it wise to estimate the mean value of the random variable Y = X1+X2+X3 by using an unequally weighted sum of the sample means?
From the original statement of the problem:
This hints that the 3 samples might be samples "of the same thing". So the estimate of the mean concentration of the 3 samples is also an estimate of the mean concentration of the population of things in the bag. In this case, I agree with using a weighted sum of the sample means.
But the question is also posed:
So perhaps the things in the bag are not samples of the same thing.
18. Nov 2, 2016
### FactChecker
That's a good point. But how is that really different from any other source of random variation? I admit that if the amount of time of a measurement correlates with different types being selected from the bag, then there is a problem. But otherwise, I would consider the differences due to the selection of a type from the bag to be just another source of random variation.
This entire problem seems so much like the problem of averaging poll results given only the published average result and margin of error. But I am not sure that anyone does that without also knowing the sample sizes, which they might just use directly for weighting the individual results. I searched for details of their methods, but didn't find any.
I also think that this is closely related to importance sampling and stratified sampling methods, where the "bag" is not homogeneous. But I could not find a reference that directly weighted the individual results using 1/strata_variance_i rather than the sample size N_i. The only thing I can think of is to weight the individual results by the time spent on the measurement, since that is given to be proportional to 1/strata_variance_i. I admit that I am kind of "winging it" here more than I should.
|
# Write an expression for the nth term of the sequence. \frac{5}{2}, \frac{5}{4}, \frac{5}{8},...
## Question:
Write an expression for the nth term of the sequence.
{eq}\frac{5}{2} {/eq}, {eq}\frac{5}{4} {/eq}, {eq}\frac{5}{8} {/eq}, _____, _____, _____, ...
## The nth term of a Geometric Sequence
The general terms of a standard geometric sequence can be represented as:
$$\displaystyle a,\, ar,\, ar^{2},\, ar^{3},\, \ldots,\, ar^{n-1},\, \ldots$$
Here {eq}a {/eq} is the first term and {eq}r {/eq} is the common ratio.
In the above geometric terms, it is clear that the ratio of two successive terms is constant, which is known as the common ratio (r). It means each term of a geometric sequence is the same multiple of the preceding term.
#### nth term of a geometric sequence:
$$\displaystyle a_{n} = a.r^{n-1}$$
Given:
The geometric sequence:
$$\displaystyle \frac{5}{2},\,\frac{5}{4},\,\frac{5}{8},\,\ldots$$
Here
{eq}\text{First term} (a) = \displaystyle \frac{5}{2} {/eq}
and
{eq}\displaystyle \text{Common ratio} (r) = \frac{\displaystyle \frac{5}{4}}{\displaystyle \frac{5}{2}} = \frac{\displaystyle \frac{5}{8}}{\displaystyle \frac{5}{4}} = \frac{1}{2} {/eq}
Now the {eq}n {/eq}th term of the given geometric sequence:
\begin{align} a_{n} & = a.r^{n-1} \\[0.2 cm] & = \frac{5}{2}.\left(\frac{1}{2}\right)^{n-1} \\[0.2 cm] & = \frac{5}{2} \times \frac{1}{2^{n-1} } \\[0.2 cm] a_{n} & = \frac{5}{2^{n} } \\[0.2 cm] \end{align}
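As a quick check of this formula against the given terms: {eq}a_1 = \frac{5}{2^1} = \frac{5}{2} {/eq}, {eq}a_2 = \frac{5}{2^2} = \frac{5}{4} {/eq}, and {eq}a_3 = \frac{5}{2^3} = \frac{5}{8} {/eq}, which match the sequence.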
|
# Homework Help: Rotation about CM - Can I use Moment of Inertia?
1. Mar 26, 2012
### RoshanBBQ
Rotation about CM -- Can I use Moment of Inertia?
1. The problem statement, all variables and given/known data
I have a program I coded to simulate the movement of some point masses connected to each other rigidly. Each mass is the same. I am trying to code in the correct equations for rotation, but I am having some difficulty with a nonzero product of inertia.
3. The attempt at a solution
I have the x, y, and z coordinates to these masses where x y and z are n by 1 matrices (where I have n masses).
I calculated the center of mass as:
xcm = average(x)
ycm = average(y)
zcm = average(z)
Then I defined new coordinates, xx yy and zz:
xx = x - xcm
yy = y - ycm
zz = z- zcm
The following Wikipedia article then says, "With respect to a coordinate frame whose origin coincides with the body's center of mass, they can be expressed in matrix form as:"
and then it shows that the rotation depends only on the moments of inertia.
http://en.wikipedia.org/wiki/Newton–Euler_equations
However, I then compute the product of inertia:
sum(yy.*zz), which does not equal 0!
where .* means element-wise multiplication.
What is going on here? Am I computing something incorrectly, is Wikipedia wrong, or am I just misunderstanding something?
Last edited: Mar 26, 2012
2. Mar 27, 2012
### Andrew Mason
Re: Rotation about CM -- Can I use Moment of Inertia?
This is the first problem you have. This is not the definition of the centre of mass. You have to multiply the mass elements by their displacements from an origin, sum them and then divide by the total mass. That gives you the displacement of the centre of mass from the origin.
The second problem you have is with the moment of inertia. The moment of inertia relates the torque to the angular acceleration produced by that torque: $\vec{\tau} = I\vec{\alpha}$
An unconstrained rigid body will rotate about an axis through the centre of mass of the body. You have to then find the moment of inertia of the body about that axis. This can be a difficult calculation and depends on the geometry of the object. See, for example: http://hyperphysics.phy-astr.gsu.edu/hbase/mi.html#mi
You will have to have a solid understanding of rotational motion before you can write your program.
AM
3. Mar 28, 2012
### RoshanBBQ
Re: Rotation about CM -- Can I use Moment of Inertia?
Since all masses are the same, the center of mass is the average of the coordinates.
Let us assume we have n point masses of mass m located on the x-axis with coordinates x_1, x_2, ..., x_n.
$$M = \sum_{k=1}^n m=m\sum_{k=1}^n 1=mn$$
$$x_{cm} = \frac{1}{M}\sum_{k=1}^n x_k m=\frac{m}{M}\sum_{k=1}^n x_k=\frac{m}{nm}\sum_{k=1}^n x_k=\frac{1}{n}\sum_{k=1}^n x_k$$
The result is known as the arithmetic average of the coordinates.
Also, the moment of inertia calculation is not as difficult as you claimed it can be, since I am dealing with point masses. It is simply the sum of squared coordinates scaled by m.
4. Mar 30, 2012
### Andrew Mason
Re: Rotation about CM -- Can I use Moment of Inertia?
Ok.
So would the moment of inertia about the axis of rotation (z axis) not be:
$$m\sum_{i=1}^{n} \left[ (x_i-x_{cm})^2 + (y_i-y_{cm})^2 \right]$$?
AM
5. Mar 30, 2012
### D H
Staff Emeritus
Re: Rotation about CM -- Can I use Moment of Inertia?
That wikipedia article is poorly written, which results in you misunderstanding something.
That first equation in that wikipedia article essentially says that translational and rotational dynamics are decoupled in the special case in which the origin of the frame describing the rigid body has its origin at the body's center of mass. It does not say that "rotation depends only on the moments of inertia." The Jc in that wiki article's equation is the inertia tensor, not the moments of inertia.
To understand this equation, you need to know a bit about what I'll call body frames and structure frames. A structure frame has its origin at some pre-determined point on the rigid body and has some pre-determined set of orthogonal axes that are based on the structure. A body frame is also based on the rigid body, but has its origin at the center of mass of the body.
For example, an aircraft structure frame often has the origin in front of and well below the nose of the aircraft. The x-hat axis points toward the rear of the aircraft along the aircraft's longitudinal axis, the z-hat axis points up (away from the wheels), and the y-hat axis completes a right-handed coordinate system. This makes the x and z coordinates of any point inside the aircraft positive. A typical aircraft body frame has its origin at the aircraft center of mass, with x-hat and z-hat pointing in the opposite directions as the structural x-hat and z-hat axes. Thus the body frame's x-hat axis points toward the nose, y-hat points out the right wing, and z-hat points down.
These body axes almost certainly are not the principal axes of the aircraft. The principal axes are convenient for simplifying the equations of motion but are quite useless in reality. Think of a plane with sumo wrestlers seated on the left side, a bunch of school kids seated on the right, and a somewhat unbalanced cargo below. The principal axes vary from flight to flight, and even within flight as the plane burns fuel.
You can define a structure frame and a body frame for your set of rigidly connected point masses. Just pick some orthogonal axes that make sense. Working with the center of mass (a body frame) is a lot easier than some other point as the origin (a structure frame) because translational and rotational dynamics decouple with this choice. Translational motion is described by good old $\vec F_{\text{ext}} = d{\vec p}/dt = m \ddot{\vec x}_{\text{cm}}$. Rotational dynamics is a bit hairier, but a lot less hairy than using something other than the center of mass as your origin.
For a number of reasons, the angular velocity is typically expressed as angular velocity of the body frame with respect to inertial but expressed in body frame coordinates. One reason is tradition: This goes way back to Euler. Another is that the inertia tensor is a time-varying beast when expressed in inertial coordinates but it is constant (assuming a true constant mass rigid body) in the body frame. Yet another is that sensor readings are in some frame whose orientation is fixed with respect to structure / body.
This representation of angular velocity in body frame coordinates means that the rotational equations of motion are not the simple rotational equivalent of F=dp/dt=ma, $\vec {\tau}_{\text{ext}} =\mathbf J \dot{\vec{\omega}}$, where J is the inertia tensor. The relation F=dp/dt does have a rotational equivalent; it is
$$\vec {\tau}_{\text{ext}} = \left(\frac{d\vec L}{dt}\right)_I$$
where that subscript "I" on the derivative means "from the perspective of an inertial observer."
The time derivatives of any vector quantity $\vec q$ from the perspective of an inertial (non-rotating) observer and from the perspective of a rotating observer are related via
$$\left(\frac{d\vec q}{dt}\right)_I = \left(\frac{d\vec q}{dt}\right)_R + \vec{\omega}\times \vec q$$
where the subscript "R" on the derivative on the right hand side means "from the perspective of a rotating observer."
The angular momentum as expressed in body frame coordinates is $\vec L = \mathbf J \vec{\omega}$. Using the rotational equivalent of F=dp/dt and the expression that relates inertial and rotating time derivatives, the rotational equations of motion become
$$\vec {\tau}_{\text{ext,body}} = \mathbf J \frac{d\vec{\omega}}{dt} + \vec{\omega} \times \left(\mathbf J \vec{\omega}\right)$$
Here $\vec {\tau}_{\text{ext,body}}$ are the real external torques on the body expressed in body frame coordinates (rather than inertial).
One final note: The above gives the angular velocity in body frame coordinates. It does not give the orientation of the body frame with respect to inertial. You cannot simply integrate angular velocity. Do you need to know the orientation of your body, and if so, do you know how to compute that?
6. Mar 30, 2012
### RoshanBBQ
Re: Rotation about CM -- Can I use Moment of Inertia?
Yes, that is how I computed it. I used xx yy and zz, which are relative to CM.
That makes much more sense. Wikipedia calls J "moment of inertia about the center of mass." They should call it the inertia tensor if it is the inertia tensor. I was previously using an inertia tensor in my program when I was using a structure frame (But at the time, I didn't know that meant a coupling between the two types of motion. When I figured it out, I promptly switched to a body frame and threw away my tensor stuff because of this Wikipedia article). You should really edit this Wikipedia article's definition of J if you have the knowledge to do it confidently.
As for your question, I do not think I need to know the angle with respect to the inertial observer. I am modeling a boat with 10 states in my state space: x y z (relative to starting point), their derivatives, roll and yaw, and their derivatives. I am assuming pitch is always zero to simplify the controls problem of directing the boat's motion.
So basically, my code outputs this:
Code (Text):
a = tensor\torque.'; % inv(tensor)*torque.'
where a is the instantaneous angular acceleration of pitch, roll, and yaw about the CM. "torque" is a 1 by 3 vector (which is why I transpose it during its interaction with the inertia tensor) defined as
$$\vec {\tau}_{\text{ext,body}} - \vec{\omega} \times \left(\mathbf J \vec{\omega}\right)$$
I then have the output go into 2 integrators in a Simulink model. Roll/Yaw and dRoll/Dyaw are fed back into the function for use in computing buoyancy forces etc. as well as torque in the above equation.
I have one question for now: Let us assume I have a 3x3 inertia tensor, T, relative to CM for a certain orientation I call "0 degrees from x, y, and z". Is there a way to transform T into T' where T' is the inertia tensor for that same object rotated a, b, and c degrees from x, y, and z respectively? That would save a lot of computation. As of now, I simply recompute the inertia tensor each time (and I have 100,000s of point masses). The code is called 100s of times, so it would be nice to cut off a few ms. As of now, the MATLAB code runs .5 seconds the first time (when it computes a lot of stuff, most burdensome being the x,y, and z coordinates of the hull). Subsequent calls to the function use persistent variables to keep many results, taking only .1 seconds. To take out the tensor calculation would probably cut .03 seconds or so since it is 100,000s of multiplications, sums, and squares. The simulink finally compiles the MATLAB code into C++ code, so it is hard to tell how fast that executes.
PS
Before I made this thread, I tried to do some serious self-studying on the topic. Take a look at that misleading image I uploaded from a book I was using. It seemed all sources agreed you could use only moments of inertia to analyze motion about the CM (Wikipedia, books they cite, the book I quote, etc.), but failed to stress you would still need the principal axes for that frame of reference. The book I quote does, however, mention the idea about principal axes. I was left to assume one of two things: I could analyze that motion with that equation about the CM GIVEN I HAVE CHOSEN AXES THAT WERE PRINCIPAL, or I could analyze motion about the CM BECAUSE THOSE AXES ARE ALWAYS PRINCIPAL AXES. That is just not a nice ambiguity. You cleared it up well though -- apparently it was the former. I am doing this for an engineering introduction to controls class. I am not well-versed even in 1-d physics, so I have had to do a lot of thinking and self-studying to formulate my 3-d model.
Last edited: Mar 30, 2012
7. Apr 2, 2012
### D H
Staff Emeritus
Re: Rotation about CM -- Can I use Moment of Inertia?
RoshanBBQ,
I am a bit troubled by this last post of yours. What exactly are you trying to do here? What problem are you trying to solve? You talked in the initial post about rigid bodies, but now you are talking about thousands (hundreds of thousands) of point masses! So which is it?
In a rigid body formalism, you have essentially twelve independent variables that describe each body. With hundreds of thousands of point masses, each with six degrees of freedom (position and velocity), you have on the order of a million independent variables! Whoa! A mesh of hundreds of thousands points would stress even the best finite element analysis package. You might want to rethink what you are doing.
Compared to your million or so degree of freedom system, a rigid body has but twelve degrees of freedom:
- Three for the position of some selected point on the rigid body,
- Three for the time derivative (velocity) of that point,
- Three for the orientation of the body or structure frame, and
- Three for the rotation rate (angular velocity) of the body frame.
There are reasons to use a finite elements approach. Real bodies are deformable rather than rigid. With a rigid body formalism you are forgoing looking at what happens inside the body. This is obviously a disadvantage, but the advantage of a rigid body formalism is that the number of independent variables shrinks by orders and orders of magnitude. There's no reason to use a finite elements approach if the object of interest is fairly rigid and you don't care about behaviors within the body.
The issues that need to be addressed in a rigid body formalism are
- How to physically describe the body,
- How to represent the body's dynamical state,
- How to formulate the equations of motion, and
- How to integrate those equations of motion with respect to time.
How to physically describe the body
A point mass has but one physical characteristic, its mass. A rigid body has mass plus an inertia tensor. The inertia tensor of a rigid body is constant in a frame fixed with respect to the body. Assuming the body truly is rigid, you need to calculate the inertia tensor but once.
Here's where you can make use of your hundreds of thousands of point masses. Calculate the inertia tensor for this system of masses. You'll need to transform to the body frame if the frame in which the locations of the point masses are represented is not the body frame. Suppose you calculate the inertia tensor in some frame A, but you want it in frame B. Let T be the transformation matrix that transforms a vector as represented in frame A to the representation of that same vector in frame B. The inertia tensor transforms according to
$$\mathbf I_B = \mathbf T\, \mathbf I_A\,\mathbf T^T$$
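In code this transformation is essentially one line; here is a NumPy sketch with made-up values (the rotation angle and the frame-A tensor are arbitrary, purely for illustration):

Code (Text):
import numpy as np

def transform_inertia(I_A, T):
    # I_B = T I_A T^T, where T maps frame-A vectors to frame-B vectors
    return T @ I_A @ T.T

# assumed example: frame B is frame A rotated 30 degrees about the z axis
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
T = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
I_A = np.diag([1.0, 2.0, 3.0])  # assumed inertia tensor in frame A
I_B = transform_inertia(I_A, T)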
How to represent state
Rigid bodies have a translational and rotational state. The translational and rotational equations of motion decouple if the translational state is represented as the position and velocity of the center of mass of the rigid body. There are reasons to use a structure frame whose origin is not the center of mass, but I recommend that you start with a center of mass body frame. The equations of motion for the translational state in such a frame are fairly simple: It's just $\vec F_{\text{ext}} = m_{\text{body}}\,\vec a$. It is rotational state that is the tough nut to crack.
One way to represent rotational state is via the transformation matrix. This is in a sense the canonical form. After all, the mathematics of rotation in Euclidean three space is the group SO(3), where SO is short for special orthogonal (matrices). There is an issue with this representation scheme: Too many numbers. A 3x3 matrix has nine elements, but rotation has but three degrees of freedom. This will lead to challenges when it comes to integrating the equations of motion. There are ways around this. The problem is not intractable.
F=ma is a second order differential equation. The rotational equations of motion are also second order. Some rotational analog of velocity is needed here. The obvious choice is angular velocity. One advantage of using angular velocity is that it is not over-specified. Angular velocity comprises three parameters, and there are three degrees of freedom. For a number of reasons, angular velocity is typically represented as angular velocity of the rotating body with respect to inertial but represented in body frame coordinates.
How to formulate the equations of motion
The translational equations of motion are simply F=ma for a center of mass representation. The tricky issue is rotational state. The equations of motion for angular velocity are given in post #5. That leaves rotation itself. If you represent the orientation of your rigid body by the transformation matrix T from inertial to body coordinates, the time derivative of this matrix is
$$\dot{\mathbf T} = -\mathbf X(\vec{\omega}) \mathbf T$$
where $\vec{\omega}$ is the angular velocity as represented in body frame coordinates and $\mathbf X(\vec{\omega})$ is the cross product matrix generated from the vector $\vec{\omega}$. If you instead use the transformation from body to inertial, this becomes
$$\dot{\mathbf T} = \mathbf T \mathbf X(\vec{\omega})$$
How to integrate those equations of motion with respect to time
Position, velocity, and angular velocity are conceptually easy. Just use some numerical integration technique. Once again it is rotation that is tricky. Use a typical numerical integration technique on either of the above derivative equations and you will get some matrix that is not orthogonal. One way to get around this is to force the matrix to be orthogonal. Pick some row and normalize (make that row a unit vector). Next, remove the projection of that vector from some second row and normalize. Finally, remove the projections of both of these unit vectors from the third row and normalize. If you use this approach it is best to alternate your choices of this first, second, and third row with each integration step.
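A sketch of that row-by-row re-orthogonalization in Python/NumPy (the order argument reflects the advice above to alternate the row choices between integration steps; this is one illustration, not the only way to do it):

Code (Text):
import numpy as np

def reorthogonalize(T, order=(0, 1, 2)):
    # Gram-Schmidt on the rows of a nearly orthogonal 3x3 matrix
    R = T.copy()
    i, j, k = order
    R[i] /= np.linalg.norm(R[i])
    R[j] -= (R[j] @ R[i]) * R[i]
    R[j] /= np.linalg.norm(R[j])
    R[k] -= (R[k] @ R[i]) * R[i] + (R[k] @ R[j]) * R[j]
    R[k] /= np.linalg.norm(R[k])
    return R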
Another approach is to use a better integration technique. Google the term "geometric integration". Yet another approach is to use a better representation of orientation. There are many different ways to represent SO(3) that are better suited to numerical integration. Some of these alternative approaches are unit quaternions, Rodrigues parameters, and modified Rodrigues parameters.
|
# Markov chain which is also a projection
Let $P$ be the transition matrix of a Markov chain, and assume that $P^2=P$. One immediate conclusion is that $P=P^\infty$.
Furthermore, assume that there is a state $i$ such that each state $j$ (including $j=i$) is accessible from $i$: $i\rightarrow j$.
I am fairly sure that this means that $P$ is irreducible (and thus, has only one stationary distribution, and all the columns of its matrix are equal), but I can't find a quick argument why this is true.
I think I can get a proof using the coefficients, to show that if there is a state $j$ such that $j\not\rightarrow i$, then $P^\infty\neq P$, because the probability of being in state $j$ will "grow over time", but I was wondering whether there was a more direct argument, perhaps making use of a standard result.
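For a quick numerical illustration (a sketch assuming the row-stochastic convention; the prototypical idempotent chain has every row equal to one distribution $\pi$):

import numpy as np

pi = np.array([0.5, 0.3, 0.2])
P = np.tile(pi, (3, 1))    # every row equals pi, so P is idempotent

print(np.allclose(P @ P, P))                          # True: P^2 = P
print(np.allclose(np.linalg.matrix_power(P, 50), P))  # True: P^inf = P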
• You defined irreducibility in your second sentence. To say that something is irreducible does not need proof. – tintinthong Sep 4 '17 at 12:35
• perhaps take a look at this link. stats.stackexchange.com/questions/126412/… – tintinthong Sep 4 '17 at 12:37
• @tintinthong I only assume one direction: there exists a state $i$ with transitions to all other states. I'm looking for an easy way to show that the reverse is also true (= all states $j$ have a transition to $i$) if the Markov chain is a projection. – Ted Sep 4 '17 at 12:40
• Ah, that link is interesting. The result I'm looking for seems to be a consequence of Theorem 1.16 (page 48) of this book. This seems to use some heavy artillery to prove it, and I'm curious to know whether there is a more elementary proof, but this'll be enough to cite this. Thanks! – Ted Sep 4 '17 at 13:01
• It was just a matter of terminology; a Markov chain is a process, a transition matrix gives the probabilities of the possible state-transitions in that process. I think it's fine now. – Glen_b Sep 4 '17 at 23:14
|
Page updated: April 16, 2020
Author: Curtis Mobley
# The Secchi Disk
The previous pages give us the background needed to derive the maximum depth at which a Secchi disk can be seen.
#### 0.1 The Classical Secchi Depth Model
Consider only the case of looking straight down, and drop the direction arguments in luminances and contrasts, e.g. $L_{vB}(z,\hat{\xi}) = L_{vB}(z)$. The underlying idea is that a disk at some depth $z$ is illuminated by the downwelling plane illuminance $E_{dv}(z)$. The luminance reflected by the disk then propagates upward to the observer as a narrow beam of luminance. The development then proceeds as follows.
The downwelling plane illuminance at depth $z$ is given by
$$E_{dv}(z) = E_{dv}(0)\,\exp\left[-\langle K_{dv}\rangle_z\, z\right], \qquad (1)$$
where $\langle\cdots\rangle_z$ denotes the average over 0 to $z$.
The target is assumed to be a Lambertian reflector with an illuminance reflectance of $R_{vT}$. The luminance reflected by the target is then
$$L_{vT}(z) = E_{dv}(z)\, R_{vT}/\pi. \qquad (2)$$
The background water is also assumed to be a Lambertian reflector, so that
$$R_{vB}(z) = \frac{E_{uv}(z)}{E_{dv}(z)}. \qquad (3)$$
The luminance of the background water is then
$$L_{vB}(z) = E_{dv}(z)\, R_{vB}(z)/\pi. \qquad (4)$$
The inherent contrast at depth $z$ is
$$C_{in}(z) = \frac{L_{vT}(z) - L_{vB}(z)}{L_{vB}(z)} = \frac{R_{vT} - R_{vB}(z)}{R_{vB}(z)}, \qquad (5)$$
where the last equality follows from substituting (2) and (4).
The apparent contrast of the Secchi disk as seen from just below the sea surface is
$$C_{ap}(0) = \frac{L_{vT}(0) - L_{vB}(0)}{L_{vB}(0)}. \qquad (6)$$
(Note that in this development the argument 0 refers to depth $z = 0$, not to the distance from the target, which is $z$.)
The luminance difference law
$$L_{vT}(0) - L_{vB}(0) = \left[L_{vT}(z) - L_{vB}(z)\right]\exp\left[-\langle c_v\rangle_z\, z\right] \qquad (7)$$
allows the apparent contrast to be written as
$$C_{ap}(0) = \frac{L_{vT}(z) - L_{vB}(z)}{L_{vB}(0)}\,\exp\left[-\langle c_v\rangle_z\, z\right] \qquad (8)$$
by substituting (7) into (6).
Inserting (2) and (4) into (8) then gives
$$C_{ap}(0) = \frac{R_{vT} - R_{vB}(z)}{R_{vB}(0)}\,\frac{E_{dv}(z)}{E_{dv}(0)}\,\exp\left[-\langle c_v\rangle_z\, z\right]. \qquad (9)$$
Assuming that $R_{vB}(0) = R_{vB}(z)$ and using (1) and (5) gives
$$C_{ap}(0) = C_{in}(z)\,\exp\left[-\left(\langle K_{dv}\rangle_z + \langle c_v\rangle_z\right) z\right]. \qquad (10)$$
This equation gives the apparent contrast of the Secchi disk as seen from just below the water surface. For viewing from above the surface, we must account for the loss of contrast caused by the water surface. This loss is due both to refraction by waves and to surface-reflected sky light. Thus
$$C_{ap}(\text{air}) = \mathcal{T}\, C_{ap}(0) = \mathcal{T}\, C_{in}(z)\,\exp\left[-\left(\langle K_{dv}\rangle_z + \langle c_v\rangle_z\right) z\right],$$
where $\mathcal{T}$ denotes the transmission of contrast, not of luminance or illuminance.
The Secchi depth $z_{SD}$ is the depth at which the apparent contrast in air falls below a threshold contrast $C_T$. Solving for $z_{SD}$ when $C_{ap}(\text{air}) = C_T$ gives
$$z_{SD} = \frac{\ln\left[\mathcal{T}\, C_{in}(z)/C_T\right]}{\langle K_{dv}\rangle_{z_{SD}} + \langle c_v\rangle_{z_{SD}}} \qquad (11)$$
$$\equiv \frac{\Gamma}{\langle K_{dv}\rangle_{z_{SD}} + \langle c_v\rangle_{z_{SD}}}. \qquad (12)$$
Studies with human observers show that $C_T$ depends on the angular subtense of the disk and on the ambient luminance (e.g., Table 1 of Preisendorfer (1986)). The values of $\Gamma$ vary from about 6 to 9 for a disk with $R_{vT} = 0.85$, depending on the water reflectance $R_{vB}$ (which is 0.015 to 0.1; Table 2 of Preisendorfer (1986)). The HydroLight code uses $\Gamma = 8$ as its default.
Note that Eq. (12) must be solved iteratively because $\langle K_{dv}\rangle_{z_{SD}}$ and $\langle c_v\rangle_{z_{SD}}$ are averages over the (unknown) Secchi depth $z_{SD}$. This is easily done after solution of the radiative transfer equation to some depth greater than $z_{SD}$ over the visible wavelengths. The photopic $K_{dv}(z)$ and $c_v(z)$ can then be computed from $E_d(z,\lambda)$ and $c(z,\lambda)$. The values of $K_{dv}$ and $c_v$ just below the water surface (at depth 0) are then used to get an initial estimate of $z_{SD}$, which is then used to compute an improved estimate of the depth-averaged $K_{dv}$ and $c_v$, and so on. Convergence is obtained within a few iterations.
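A minimal sketch of that iteration in Python, assuming the photopic profiles $K_{dv}(z)$ and $c_v(z)$ are available as callables from a prior radiative transfer solution, and taking $\Gamma = 8$ per the HydroLight default:

import numpy as np

def secchi_depth(Kdv, cv, Gamma=8.0, z0=5.0, tol=1e-4):
    # fixed-point iteration for z_SD = Gamma / (<Kdv>_z + <cv>_z)
    z = z0
    for _ in range(50):
        zs = np.linspace(0.0, z, 200)
        denom = (np.trapz(Kdv(zs), zs) + np.trapz(cv(zs), zs)) / z
        z_new = Gamma / denom
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# homogeneous-water example with assumed values Kdv = 0.15/m, cv = 0.45/m
print(secchi_depth(lambda z: 0.15 + 0.0*z, lambda z: 0.45 + 0.0*z))  # ~13.3 m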
#### 0.2 The Secchi Depth Model of Lee et al.
Lee et al. (2015) re-examined the classic theory of the Secchi disk. They assumed that
• The disk need not be angularly small and can perturb the ambient light field seen near the edge of the disk.
• Visibility is not based on target-vs-background luminance differences at the sharp edge of the disk, but on differences in target and background reflectances.
• Visibility is determined by the wavelength where the disk is most visible (which can change with depth and between water bodies), rather than by broadband photopic variables.
They argue that the classic analysis should
• Replace the photopic $K_{dv}(z)$ with $K_d(z,\lambda_o)$, where $\lambda_o$ is the wavelength at which $K_d(z,\lambda)$ is a minimum; and
• Replace the photopic $c_v(z)$ with $1.5\,K_d(z,\lambda_o)$.
One end result of their analysis is a formula of the form (Eq. 28 of their paper)
$$z_{SD} = \frac{\gamma}{2.5\,K_d(z,\lambda_o)}, \qquad (13)$$
where $\gamma$ depends on a difference in reflectances, rather than on contrasts as seen in Eq. (11). This formula has the great virtue that $K_d(z,\lambda_o)$ can be estimated from multi- or hyperspectral satellite imagery.
Comparison of $z_{SD}$ measured and computed by Eq. (13) gives reasonable agreement (see Fig. 6 of their paper). However, comparisons of the Lee et al. $z_{SD}$ predictions with those of the classic theory have not been made.
|
# TensorFlow
TensorFlow is the famous machine learning library by Google.
## Install
• Install CUDA and CuDNN
• Create a conda environment with python 3.7
• You can also just create a tensorflow environment using conda
• conda create -n my_env tensorflow
• Install with pip
### Install TF2
pip install tensorflow tensorflow-addons
### Install TF1
Note: You will only need TF1 if working with a TF1 repo.
If migrating your old code, you can install TF2 and use:
pip install tensorflow-gpu==1.15
## Usage (TF2)
Here we'll cover usage using TensorFlow 2 which has eager execution.
This is using the Keras API in tensorflow.keras.
### Basics
The general pipeline using Keras is:
• Define a model, typically using tf.keras.Sequential
• Call model.compile
• Here you pass in your optimizer, loss function, and metrics.
• Train your model by calling model.fit
• Here you pass in your training data, batch size, number of epochs, and training callbacks
After training, you can use your model by calling model.evaluate
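A minimal end-to-end sketch of that pipeline (layer sizes and the data here are placeholders, just to make the example self-contained):

import numpy as np
import tensorflow as tf

# placeholder data
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 10, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=32, epochs=5)
model.evaluate(x_train, y_train)  # normally a held-out test set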
### Custom Models
An alternative way to define a model is by extending the Model class:
• Write a python class which extends tf.keras.Model
• Implement a forward pass in the call method
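For example, a minimal sketch of such a subclass (the layers are arbitrary):

import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(64, activation="relu")
        self.dense2 = tf.keras.layers.Dense(2)

    def call(self, inputs):
        # the forward pass
        return self.dense2(self.dense1(inputs))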
### Custom Training Loop
Reference
While you can train using model.compile and model.fit, using your own custom training loop is much more flexible and easier to understand. You can write your own training loop by doing the following:
import tensorflow as tf
from tensorflow import keras

my_model = keras.Sequential([
    keras.layers.Dense(400, input_shape=(400,), activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(2)
])
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
training_loss = []
validation_loss = []
for epoch in range(100):
    print('Start of epoch %d' % (epoch,))
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        with tf.GradientTape() as tape:
            guess = my_model(x_batch_train)
            loss_value = my_custom_loss(y_batch_train, guess)
        grads = tape.gradient(loss_value, my_model.trainable_weights)
        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, my_model.trainable_weights))
        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss at step %s: %s' % (step, float(loss_value)))
    training_loss.append(loss_value)
    guess_validation = my_model(x_validation)
    validation_loss.append(my_custom_loss(y_validation, guess_validation))
### Custom Layers
Extend tf.keras.layers.Layer
class ReflectionPadding2D(Layer):
    def __init__(self, padding=(1, 1), **kwargs):
        super().__init__(**kwargs)
        self.padding = tuple(padding)
        self.input_spec = [InputSpec(ndim=4)]
    def compute_output_shape(self, s):
        """ If you are using "channels_last" configuration"""
        return (s[0], s[1] + 2 * self.padding[0], s[2] + 2 * self.padding[1], s[3])
    def call(self, x):
        return tf.pad(x, [[0, 0], [self.padding[0]] * 2, [self.padding[1]] * 2, [0, 0]], 'REFLECT')
BilinearUpsample
class BilinearUpsample(layers.Layer):
    def __init__(self):
        super().__init__()
        self.input_spec = [keras.layers.InputSpec(ndim=4)]
    def compute_output_shape(self, shape):
        return shape[0], 2 * shape[1], 2 * shape[2], shape[3]
    def call(self, inputs):
        new_height = int(2 * inputs.shape[1])
        new_width = int(2 * inputs.shape[2])
        return tf.image.resize(inputs, [new_height, new_width])  # tf.image.resize_images in TF1
## Operators
### Matrix Multiplication
The two matrix multiplication operators are tf.linalg.matmul and tf.linalg.matvec (the latter for matrix-vector products).
New: With both operators, the first $$k-2$$ dimensions can now be the batch size.
E.g. If $$A$$ is $$b_1 \times b_2 \times 3 \times 3$$ and $$B$$ is $$b_1 \times b_2 \times 3$$, you can multiply them with $$C = \operatorname{tf.linalg.matvec}(A,B)$$ and $$C$$ will be $$b_1 \times b_2 \times 3$$.
Also the batch size in A can be 1 and it will properly broadcast to the same size as $$B$$.
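A shape-level sketch of that broadcasting:

import tensorflow as tf

A = tf.random.normal((4, 5, 3, 3))   # b1 x b2 x 3 x 3
B = tf.random.normal((4, 5, 3))      # b1 x b2 x 3
C = tf.linalg.matvec(A, B)           # -> b1 x b2 x 3

A1 = tf.random.normal((1, 1, 3, 3))  # batch dims of 1 broadcast against B
C1 = tf.linalg.matvec(A1, B)         # -> still b1 x b2 x 3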
## Estimators
### Training Statistics
Reference
You can extract the training loss from the events file in TensorFlow.
## TensorFlow Addons
pip install tensorflow-addons
### tfa.image.interpolate_bilinear
This is a bilinear interpolation. It is equivalent to PyTorch's grid_sample.
However, you need to reshape the grid to an n×2 array and make sure indexing='xy' when calling the function.
You can reshape the output back to the dimensions of your original image.
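A sketch of that reshape-call-reshape pattern (the image and grid values here are random placeholders):

import tensorflow as tf
import tensorflow_addons as tfa

image = tf.random.normal((1, 32, 32, 3))         # batch x H x W x C
grid = tf.random.uniform((1, 32, 32, 2)) * 31.0  # assumed sampling locations

query = tf.reshape(grid, (1, -1, 2))             # flatten to n x 2 per batch
out = tfa.image.interpolate_bilinear(image, query, indexing="xy")
out = tf.reshape(out, (1, 32, 32, 3))            # back to the image shape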
## Tensorflow Graphics
pip install tensorflow-graphics --upgrade
You may need to install a static openexr from https://www.lfd.uci.edu/~gohlke/pythonlibs/#openexr.
|
A small clarification on bhargav's answer: in algebraic geometry we only have quasi-unipotency of the *local* monodromy in one-parameter families (which is what bhargav is talking about), or in multi-parameter families but only near a normal crossing point of the discriminant. Global monodromies are reductive and local monodromies near bad points of the discriminant can be more general.
For concreteness look at a projective morphism $f : X \to B$, where $X$, $B$ are smooth complex projective varieties. Let $D \subset B$ be the discriminant divisor of $f$, i.e. the divisor where the differential of $f$ is not surjective. The global monodromy of the smooth fibration $f : X - f^{-1}(D) \to B - D$ is always reductive by a theorem of Borel. That is: the Zariski closure of the monodromy in the linear automorphisms of the cohomology of the marked fiber is a complex reductive group. If we take a small analytic ball $U \subset B$ centered at some point of $D$, and if we know that $D\cap U$ is a normal crossings divisor in $U$, then the monodromy of the local fibration $f : f^{-1}(U-D) \to U-D$ is quasi-unipotent as bhargav explained. Note that the normal crossings condition implies that the fundamental group of $U - D$ is abelian, so the quasi-unipotency condition makes sense here.
If however $U\cap D$ does not have normal crossings, then $\pi_{1}(U-D)$ need not be abelian and the monodromy of $f : f^{-1}(U-D) \to U-D$ need not be quasi-unipotent. An easy example is to look at a generic projective plane $\mathbb{P}^{2}$ in the $9$ dimensional projective space of cubic curves in $\mathbb{P}^{2}$. This plane parametrizes a family of cubics which degenerates along a discriminant curve $D \subset \mathbb{P}^{2}$, and under the genericity assumption $D$ has only nodes and cusps. The cuspidal points of $D$ correspond to cuspidal cubics, and near a cusp of $D$ the local fundamental group of $U - D$ is the amalgamated product of $\mathbb{Z}/4$ and $\mathbb{Z}/6$ over $\mathbb{Z}/2$, and so is isomorphic to $SL_{2}(\mathbb{Z})$. The local monodromy representation near the cusp, i.e. the representation of $\pi_{1}(U-D,u_{0})$ to the linear automorphisms of the first integral cohomology of the cubic corresponding to $u_{0} \in U - D$, is an inclusion, i.e. has image $SL_{2}(\mathbb{Z})$. In particular it is not quasi-unipotent.
|
# Proactive Bundle Patches – Change reflected now in MOS as well
Now the Known Issues and Alerts and the Recommended Patches notes in MOS reflect the naming and classification change for the different types of patches as well.
See in:
MOS Note:1683799.1 – 12.1.0.2 Patch Set – Availability and Known Issues
We don’t speak about Bundle Patches for Exadata and DBIM anymore, making it easier to find the correct patch for your platform.
|
## Zhang, Fubao
Author ID: zhang.fubao
Published as: Zhang, Fubao; Zhang, Fu Bao
Documents Indexed: 116 Publications since 1987
Co-Authors: 31 Co-Authors with 83 Joint Publications; 1,450 Co-Co-Authors
### Co-Authors
13 single-authored; 47 Xu, Junxiang; 33 Wang, Jun; 26 Zhang, Hui; 10 Tian, Lixin; 8 Ding, Jian; 7 Du, Miao; 3 Meng, Fengjuan; 3 Xiao, Lu; 2 Bower, Allan F.; 2 Cai, Li; 2 Cao, Xiaofei; 2 Huang, Yonggang Young; 2 Hwang, Keh-Chi; 2 Kim, Jong Kyu; 2 Ma, Yihai; 2 Nix, William D.; 2 Yin, Cuicui; 1 An, Tianqing; 1 Bockhorn, Henning; 1 Boyle, K. P.; 1 Chang, Shih-Sen; 1 Chen, Rong; 1 Chen, Xumei; 1 Cheng, Rong; 1 Curtin, William A.; 1 Du, Zengji; 1 Fan, Ming; 1 Feng, Guizhen; 1 Ge, Weigao; 1 Grohs, J.; 1 Grönig, Hans; 1 Habisreuther, Peter; 1 He, Qing; 1 Hettel, M.; 1 Huang, Chengshan; 1 Ißler, H.; 1 Jiang, Shunjun; 1 Klingshirn, Claus F.; 1 Lee, Jin; 1 Lu, Dianchen; 1 Meng, Qing; 1 Oka, Fusao; 1 Pharr, George M.; 1 Redekop, David; 1 Saha, Ranjana; 1 Wei, Jicheng; 1 Ye, Bin; 1 Zhang, Jianmei; 1 Zhang, Yuanyuan
### Serials
11 Journal of Mathematical Analysis and Applications; 7 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods; 6 Mathematical Methods in the Applied Sciences; 4 Applicable Analysis; 4 Electronic Journal of Differential Equations (EJDE); 3 Journal of Mathematical Physics; 3 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics; 3 Nonlinear Analysis. Real World Applications; 2 International Journal of Plasticity; 2 ZAMP. Zeitschrift für angewandte Mathematik und Physik; 2 Journal of Differential Equations; 2 Applied Mathematics and Mechanics. (English Edition); 2 Acta Applicandae Mathematicae; 2 Journal of Nanjing University. Mathematical Biquarterly; 2 Applied Mathematics Letters; 2 Calculus of Variations and Partial Differential Equations; 2 NoDEA. Nonlinear Differential Equations and Applications; 2 Differential Equations and Dynamical Systems; 2 Communications on Pure and Applied Analysis; 2 Boundary Value Problems; 2 International Journal of Nonlinear Science; 2 Science China. Mathematics; 2 Scientia Sinica. Mathematica; 1 Computers & Mathematics with Applications; 1 Computer Methods in Applied Mechanics and Engineering; 1 Computers and Structures; 1 International Journal for Numerical and Analytical Methods in Geomechanics; 1 Journal of the Mechanics and Physics of Solids; 1 Nonlinearity; 1 Rocky Mountain Journal of Mathematics; 1 Wave Motion; 1 Chaos, Solitons and Fractals; 1 International Journal for Numerical Methods in Engineering; 1 Mathematische Nachrichten; 1 Journal of Mathematical Research & Exposition; 1 Journal of Shandong University. Natural Science Edition; 1 Advances in Mathematics; 1 Journal of Systems Science and Mathematical Sciences; 1 Science in China. Series A; 1 Journal of Southeast University. English Edition; 1 Progress of Mathematics (Varanasi); 1 Systems Science and Mathematical Sciences; 1 Real-Time Systems; 1 Communications in Statistics. Theory and Methods; 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering; 1 Topological Methods in Nonlinear Analysis; 1 International Journal of Numerical Methods for Heat & Fluid Flow; 1 Discrete and Continuous Dynamical Systems; 1 Abstract and Applied Analysis; 1 Acta Mathematica Scientia. Series B. (English Edition); 1 Acta Mathematica Scientia. Series A. (Chinese Edition); 1 Markov Processes and Related Fields; 1 Electronic Journal of Qualitative Theory of Differential Equations; 1 Flow, Turbulence and Combustion; 1 International Journal of Applied Mathematics; 1 CMES. Computer Modeling in Engineering & Sciences; 1 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis; 1 Advanced Nonlinear Studies; 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series; 1 Journal of Southeast University. Natural Science Edition; 1 Journal of Xuzhou Normal University. Natural Science Edition; 1 Central European Journal of Mathematics; 1 International Journal of Numerical Analysis and Modeling; 1 Mathematical Modelling of Natural Phenomena; 1 Electronic Research Archive; 1 SN Partial Differential Equations and Applications
### Fields
59 Partial differential equations (35-XX); 25 Ordinary differential equations (34-XX); 15 Dynamical systems and ergodic theory (37-XX); 15 Global analysis, analysis on manifolds (58-XX); 9 Mechanics of deformable solids (74-XX); 8 Calculus of variations and optimal control; optimization (49-XX); 6 Operator theory (47-XX); 3 Integral equations (45-XX); 3 Classical thermodynamics, heat transfer (80-XX); 2 Statistics (62-XX); 2 Numerical analysis (65-XX); 2 Mechanics of particles and systems (70-XX); 2 Fluid mechanics (76-XX); 2 Optics, electromagnetic theory (78-XX); 2 Quantum theory (81-XX); 1 Real functions (26-XX); 1 Measure and integration (28-XX); 1 General topology (54-XX); 1 Algebraic topology (55-XX); 1 Probability theory and stochastic processes (60-XX); 1 Computer science (68-XX); 1 Operations research, mathematical programming (90-XX); 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX); 1 Biology and other natural sciences (92-XX)
### Citations contained in zbMATH Open
84 Publications have been cited 670 times in 495 Documents
Multiplicity and concentration of positive solutions for a Kirchhoff type problem with critical growth. Zbl 1402.35119
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2012
Existence and concentration of positive solutions for semilinear Schrödinger-Poisson systems in $${\mathbb{R}^{3}}$$. Zbl 1278.35074
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2013
Infinitely many homoclinic orbits for the second-order Hamiltonian systems with super-quadratic potentials. Zbl 1162.34328
Yang, Jie; Zhang, Fubao
2009
Existence and multiplicity of homoclinic orbits for the second order Hamiltonian systems. Zbl 1204.37067
Wang, Jun; Zhang, Fubao; Xu, Junxiang
2010
Homoclinic orbits for a class of Hamiltonian systems with superquadratic or asymptotically quadratic potentials. Zbl 1231.37033
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2011
Indentation of a hard film on a soft substrate: strain gradient hardening effects. Zbl 1266.74032
Zhang, F.; Saha, R.; Huang, Y.; Nix, W. D.; Hwang, K. C.; Qu, S.; Li, M.
2007
Ground states for the nonlinear Kirchhoff type problems. Zbl 1333.35042
Zhang, Hui; Zhang, Fubao
2015
A model of size effects in nano-indentation. Zbl 1120.74658
Huang, Y.; Zhang, F.; Hwang, K. C.; Nix, W. D.; Pharr, G. M.; Feng, G.
2006
Existence of multiple positive solutions for Schrödinger-Poisson systems with critical growth. Zbl 1329.35144
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2015
Existence of multi-bump solutions for a semilinear Schrödinger-Poisson system. Zbl 1301.35159
Wang, Jun; Xu, Junxiang; Zhang, Fubao; Chen, Xumei
2013
Existence and multiplicity of solutions for superlinear fractional Schrödinger equations in $$\mathbb{R}^{N}$$. Zbl 1328.35225
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2015
Existence and multiplicity of solutions for a generalized Choquard equation. Zbl 1375.35134
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2017
Existence and asymptotic behavior of positive solutions for Kirchhoff type problems with steep potential well. Zbl 1448.35202
Zhang, Fubao; Du, Miao
2020
Existence of solutions for nonperiodic superquadratic Hamiltonian elliptic systems. Zbl 1183.35115
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2010
Positive ground states for asymptotically periodic Schrödinger-Poisson systems. Zbl 1271.35022
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2013
Existence and multiplicity of solutions for asymptotically Hamiltonian elliptic systems in $$\mathbb R^N$$. Zbl 1187.35055
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2010
On a class of semilinear Schrödinger equations with indefinite linear part. Zbl 1312.35105
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2014
On the connection between the order of the fractional derivative and the Hausdorff dimension of a fractal function. Zbl 1198.26013
Yao, K.; Liang, Y. S.; Zhang, F.
2009
Ground state solutions for asymptotically periodic Schrödinger equations with indefinite linear part. Zbl 1317.35024
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2015
Homoclinic orbits for superlinear Hamiltonian systems without Ambrosetti-Rabinowitz growth condition. Zbl 1203.37103
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2010
Periodic solutions for some second order systems. Zbl 1169.34322
Meng, Fengjuan; Zhang, Fubao
2008
Existence of semiclassical ground-state solutions for semilinear elliptic systems. Zbl 1329.35141
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2012
Numerical simulations of necking during tensile deformation of aluminum single crystals. Zbl 1155.74038
Zhang, F.; Bower, A. F.; Mishra, R. K.; Boyle, K. P.
2009
Partitioned EDF scheduling for multiprocessors using a $$C=D$$ task splitting scheme. Zbl 1243.68101
Burns, A.; Davis, R. I.; Wang, P.; Zhang, F.
2012
Semiclassical states for fractional Choquard equations with critical growth. Zbl 1401.35091
Zhang, Hui; Wang, Jun; Zhang, Fubao
2019
Existence and asymptotic behavior of solutions for nonlinear Schrödinger-Poisson systems with steep potential well. Zbl 1345.35099
Du, Miao; Tian, Lixin; Wang, Jun; Zhang, Fubao
2016
Existence of infinitely many homoclinic orbits for nonperiodic superquadratic Hamiltonian systems. Zbl 1248.34055
Wang, Jun; Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2012
Weighted elastic net model for mass spectrometry imaging processing. Zbl 1187.92040
Hong, D.; Zhang, F.
2010
Existence of normalized solutions for nonlinear fractional Schrödinger equations with trapping potentials. Zbl 1422.35059
Du, Miao; Tian, Lixin; Wang, Jun; Zhang, Fubao
2019
Ground states for asymptotically periodic Schrödinger-Poisson systems with critical growth. Zbl 1300.35033
Zhang, Hui; Xu, Junxiang; Zhang, Fubao; Du, Miao
2014
Infinitely many homoclinic orbits for superlinear Hamiltonian systems. Zbl 1261.37026
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2012
Existence and nonexistence of the ground state solutions for nonlinear Schrödinger equations with nonperiodic nonlinearities. Zbl 1256.35143
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2012
Homoclinic orbits for an unbounded superquadratic Hamiltonian systems. (Homoclinic orbits for an unbounded superquadratic.) Zbl 1202.58015
Wang, Jun; Xu, Junxiang; Zhang, Fubao; Wang, Lei
2010
Zhang, F.; Redekop, D.
1992
Bound and ground states for a concave-convex generalized Choquard equation. Zbl 06737056
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2017
Erratum to: Existence and concentration of positive solutions for semilinear Schrödinger-Poisson systems in $${\mathbb{R}^{3}}$$. Zbl 1278.35073
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2013
Multiplicity of semiclassical states for Schrödinger-Poisson systems with critical frequency. Zbl 1433.35065
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2020
Existence of ground state solutions for a super-biquadratic Kirchhoff-type equation with steep potential well. Zbl 1337.35035
Du, Miao; Tian, Lixin; Wang, Jun; Zhang, Fubao
2016
Positive solutions for a class of quasilinear problems with critical growth in $${\mathbb R}^N$$. Zbl 1325.35068
Wang, Jun; An, Tianqing; Zhang, Fubao
2015
Infinitely many solutions for diffusion equations without symmetry. Zbl 1231.35095
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2011
A mesh-grading material point method and its parallelization for problems with localized extreme deformation. Zbl 1423.74901
Lian, Y. P.; Yang, P. F.; Zhang, X.; Zhang, F.; Liu, Y.; Huang, P.
2015
Existence and concentration of positive ground state solutions for Schrödinger-Poisson systems. Zbl 1295.35219
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2013
Existence of positive ground states for some nonlinear Schrödinger systems. Zbl 1282.35363
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2013
Existence and multiplicity of semiclassical solutions for asymptotically Hamiltonian elliptic systems. Zbl 1270.35222
Xiao, Lu; Wang, Jun; Fan, Ming; Zhang, Fubao
2013
A coupled finite difference material point method and its application in explosion simulation. Zbl 1356.80062
Cui, X. X.; Zhang, X.; Zhou, X.; Liu, Y.; Zhang, F.
2014
Quasiperiodic solutions of higher dimensional Duffing’s equations via the KAM theorem. Zbl 1186.37070
Zhang, Fubao
2001
On the multiplicity and concentration of positive solutions for a class of quasilinear problems in $$\mathbb{R}^N$$. Zbl 1306.35048
Xiao, Lu; Wang, Jun; Chen, Rong; Zhang, Fubao
2015
Solutions of super linear Dirac equations with general potentials. Zbl 1242.47055
Ding, Jian; Xu, Junxiang; Zhang, Fubao
2009
The existence of solutions for superquadratic Hamiltonian elliptic systems on $$\mathbb R^N$$. Zbl 1204.35084
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2011
Even homoclinic orbits for super quadratic Hamiltonian systems. Zbl 1201.37096
Ding, Jian; Xu, Junxiang; Zhang, Fubao
2010
Existence and multiplicity of semiclassical solutions for a Schrödinger equation. Zbl 1173.35695
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2009
Positive solutions for higher-order boundary value problems with sign changing nonlinear terms. Zbl 1129.34018
Du, Zengji; Zhang, Fubao; Ge, Weigao
2006
Semiclassical ground states for quasilinear Schrödinger equations with three times growth. Zbl 1454.35144
Zhang, Hui; Zhang, Fubao
2017
Existence and concentration of ground states for a Choquard equation with competing potentials. Zbl 1394.35027
Zhang, Fubao; Zhang, Hui
2018
The Brezis-Nirenberg type double critical problem for a class of Schrödinger-Poisson equations. Zbl 1466.35142
Cai, Li; Zhang, Fubao
2021
Existence and asymptotic behavior of positive solutions for Kirchhoff type problems with steep potential well. Zbl 1448.35202
Zhang, Fubao; Du, Miao
2020
Multiplicity of semiclassical states for Schrödinger-Poisson systems with critical frequency. Zbl 1433.35065
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2020
Multiplicity of semiclassical states for fractional Schrödinger equations with critical frequency. Zbl 1430.35072
Zhang, Hui; Zhang, Fubao
2020
Multiplicity and concentration of solutions for Choquard equations with critical growth. Zbl 1426.35020
Zhang, Hui; Zhang, Fubao
2020
The Brezis-Nirenberg type double critical problem for the Choquard equation. Zbl 1458.35196
Cai, Li; Zhang, Fubao
2020
Multiple positive solutions for biharmonic equation of Kirchhoff type involving concave-convex nonlinearities. Zbl 1448.35155
Meng, Fengjuan; Zhang, Fubao; Zhang, Yuanyuan
2020
Infinitely many radial and nonradial solutions for a Choquard equation with general nonlinearity. Zbl 1440.35147
Zhang, Hui; Zhang, Fubao
2020
Semiclassical states for fractional Choquard equations with critical growth. Zbl 1401.35091
Zhang, Hui; Wang, Jun; Zhang, Fubao
2019
Existence of normalized solutions for nonlinear fractional Schrödinger equations with trapping potentials. Zbl 1422.35059
Du, Miao; Tian, Lixin; Wang, Jun; Zhang, Fubao
2019
Existence and concentration of ground states for a Choquard equation with competing potentials. Zbl 1394.35027
Zhang, Fubao; Zhang, Hui
2018
Semiclassical ground states for nonlinear Schrödinger-Poisson systems. Zbl 1387.35131
Zhang, Hui; Zhang, Fubao
2018
Normalized solutions for a coupled Schrödinger system with saturable nonlinearities. Zbl 1382.35262
Cao, Xiaofei; Xu, Junxiang; Wang, Jun; Zhang, Fubao
2018
Existence and multiplicity of solutions for a generalized Choquard equation. Zbl 1375.35134
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2017
Bound and ground states for a concave-convex generalized Choquard equation. Zbl 06737056
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2017
Semiclassical ground states for quasilinear Schrödinger equations with three times growth. Zbl 1454.35144
Zhang, Hui; Zhang, Fubao
2017
Semiclassical ground states for a class of nonlinear Kirchhoff-type problems. Zbl 1379.35127
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2017
Existence and asymptotic behavior of solutions for nonlinear Schrödinger-Poisson systems with steep potential well. Zbl 1345.35099
Du, Miao; Tian, Lixin; Wang, Jun; Zhang, Fubao
2016
Existence of ground state solutions for a super-biquadratic Kirchhoff-type equation with steep potential well. Zbl 1337.35035
Du, Miao; Tian, Lixin; Wang, Jun; Zhang, Fubao
2016
Positive solutions for Schrödinger system with asymptotically periodic potentials. Zbl 1334.35045
Wang, Jun; He, Qing; Xiao, Lu; Zhang, Fubao
2016
Existence and nonexistence of ground states for a class of quasilinear Schrödinger equations. Zbl 1357.35109
Zhang, Hui; Zhang, Fubao; Xu, Junxiang
2016
Ground states for the nonlinear Kirchhoff type problems. Zbl 1333.35042
Zhang, Hui; Zhang, Fubao
2015
Existence of multiple positive solutions for Schrödinger-Poisson systems with critical growth. Zbl 1329.35144
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2015
Existence and multiplicity of solutions for superlinear fractional Schrödinger equations in $$\mathbb{R}^{N}$$. Zbl 1328.35225
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2015
Ground state solutions for asymptotically periodic Schrödinger equations with indefinite linear part. Zbl 1317.35024
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2015
Positive solutions for a class of quasilinear problems with critical growth in $${\mathbb R}^N$$. Zbl 1325.35068
Wang, Jun; An, Tianqing; Zhang, Fubao
2015
A mesh-grading material point method and its parallelization for problems with localized extreme deformation. Zbl 1423.74901
Lian, Y. P.; Yang, P. F.; Zhang, X.; Zhang, F.; Liu, Y.; Huang, P.
2015
On the multiplicity and concentration of positive solutions for a class of quasilinear problems in $$\mathbb{R}^N$$. Zbl 1306.35048
Xiao, Lu; Wang, Jun; Chen, Rong; Zhang, Fubao
2015
Multiple positive solutions for semilinear Schrödinger equations with critical growth in $$\mathbb{R}^{N}$$. Zbl 1318.35111
Wang, Jun; Lu, Dianchen; Xu, Junxiang; Zhang, Fubao
2015
On a class of semilinear Schrödinger equations with indefinite linear part. Zbl 1312.35105
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2014
Ground states for asymptotically periodic Schrödinger-Poisson systems with critical growth. Zbl 1300.35033
Zhang, Hui; Xu, Junxiang; Zhang, Fubao; Du, Miao
2014
A coupled finite difference material point method and its application in explosion simulation. Zbl 1356.80062
Cui, X. X.; Zhang, X.; Zhou, X.; Liu, Y.; Zhang, F.
2014
Ground states for Schrödinger-Poisson systems with three growth terms. Zbl 1315.35088
Zhang, Hui; Zhang, Fubao; Xu, Junxiang
2014
Existence and concentration of positive solutions for semilinear Schrödinger-Poisson systems in $${\mathbb{R}^{3}}$$. Zbl 1278.35074
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2013
Existence of multi-bump solutions for a semilinear Schrödinger-Poisson system. Zbl 1301.35159
Wang, Jun; Xu, Junxiang; Zhang, Fubao; Chen, Xumei
2013
Positive ground states for asymptotically periodic Schrödinger-Poisson systems. Zbl 1271.35022
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2013
Erratum to: Existence and concentration of positive solutions for semilinear Schrödinger-Poisson systems in $${\mathbb{R}^{3}}$$. Zbl 1278.35073
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2013
Existence and concentration of positive ground state solutions for Schrödinger-Poisson systems. Zbl 1295.35219
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2013
Existence of positive ground states for some nonlinear Schrödinger systems. Zbl 1282.35363
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2013
Existence and multiplicity of semiclassical solutions for asymptotically Hamiltonian elliptic systems. Zbl 1270.35222
Xiao, Lu; Wang, Jun; Fan, Ming; Zhang, Fubao
2013
Existence of positive solutions for a nonhomogeneous Schrödinger-Poisson system in $$\mathbb R^3$$. Zbl 1394.35165
Du, Miao; Zhang, Fubao
2013
Ground state solutions for asymptotically periodic Schrödinger equations with critical growth. Zbl 1291.35040
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2013
Positive ground states of coupled nonlinear Schrödinger equations on $$\mathbb{R}^n$$. Zbl 07449112
Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2013
Multiplicity and concentration of positive solutions for a Kirchhoff type problem with critical growth. Zbl 1402.35119
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2012
Existence of semiclassical ground-state solutions for semilinear elliptic systems. Zbl 1329.35141
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2012
Partitioned EDF scheduling for multiprocessors using a $$C=D$$ task splitting scheme. Zbl 1243.68101
Burns, A.; Davis, R. I.; Wang, P.; Zhang, F.
2012
Existence of infinitely many homoclinic orbits for nonperiodic superquadratic Hamiltonian systems. Zbl 1248.34055
Wang, Jun; Zhang, Hui; Xu, Junxiang; Zhang, Fubao
2012
Infinitely many homoclinic orbits for superlinear Hamiltonian systems. Zbl 1261.37026
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2012
Existence and nonexistence of the ground state solutions for nonlinear Schrödinger equations with nonperiodic nonlinearities. Zbl 1256.35143
Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao
2012
Homoclinic orbits for a class of Hamiltonian systems with superquadratic or asymptotically quadratic potentials. Zbl 1231.37033
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2011
Infinitely many solutions for diffusion equations without symmetry. Zbl 1231.35095
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2011
The existence of solutions for superquadratic Hamiltonian elliptic systems on $$\mathbb R^N$$. Zbl 1204.35084
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2011
Lagrangian stability of a class of second-order periodic systems. Zbl 1222.34047
Jiang, Shunjun; Xu, Junxiang; Zhang, Fubao
2011
Existence and multiplicity of homoclinic orbits for the second order Hamiltonian systems. Zbl 1204.37067
Wang, Jun; Zhang, Fubao; Xu, Junxiang
2010
Existence of solutions for nonperiodic superquadratic Hamiltonian elliptic systems. Zbl 1183.35115
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2010
Existence and multiplicity of solutions for asymptotically Hamiltonian elliptic systems in $$\mathbb R^N$$. Zbl 1187.35055
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2010
Homoclinic orbits for superlinear Hamiltonian systems without Ambrosetti-Rabinowitz growth condition. Zbl 1203.37103
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2010
Weighted elastic net model for mass spectrometry imaging processing. Zbl 1187.92040
Hong, D.; Zhang, F.
2010
Homoclinic orbits for an unbounded superquadratic Hamiltonian systems. Zbl 1202.58015
Wang, Jun; Xu, Junxiang; Zhang, Fubao; Wang, Lei
2010
Even homoclinic orbits for super quadratic Hamiltonian systems. Zbl 1201.37096
Ding, Jian; Xu, Junxiang; Zhang, Fubao
2010
Homoclinic orbits for the second-order Hamiltonian systems with obstacle item. Zbl 1218.37083
Yin, Cuicui; Zhang, Fubao
2010
Solutions of non-periodic super-quadratic Dirac equations. Zbl 1186.35171
Ding, Jian; Xu, Junxiang; Zhang, Fubao
2010
Homoclinic orbits of nonperiodic super quadratic Hamiltonian system. Zbl 1192.37088
Ding, Jian; Xu, Junxiang; Zhang, Fubao
2010
Infinitely many homoclinic orbits for the second-order Hamiltonian systems with super-quadratic potentials. Zbl 1162.34328
Yang, Jie; Zhang, Fubao
2009
On the connection between the order of the fractional derivative and the Hausdorff dimension of a fractal function. Zbl 1198.26013
Yao, K.; Liang, Y. S.; Zhang, F.
2009
Numerical simulations of necking during tensile deformation of aluminum single crystals. Zbl 1155.74038
Zhang, F.; Bower, A. F.; Mishra, R. K.; Boyle, K. P.
2009
Solutions of super linear Dirac equations with general potentials. Zbl 1242.47055
Ding, Jian; Xu, Junxiang; Zhang, Fubao
2009
Existence and multiplicity of semiclassical solutions for a Schrödinger equation. Zbl 1173.35695
Wang, Jun; Xu, Junxiang; Zhang, Fubao
2009
Modelling of a premixed swirl-stabilized flame using a turbulent flame speed closure model in LES. Zbl 1169.76060
Zhang, F.; Habisreuther, P.; Hettel, M.; Bockhorn, H.
2009
Existence of periodic solutions for $$n$$-dimensional systems of Duffing type. Zbl 1196.34051
Meng, Fengjuan; Zhang, Fubao
2009
Existence of homoclinic orbits for Hamiltonian systems with superquadratic potentials. Zbl 1187.37086
Ding, Jian; Xu, Junxiang; Zhang, Fubao
2009
Periodic solutions for some second order systems. Zbl 1169.34322
Meng, Fengjuan; Zhang, Fubao
2008
Indentation of a hard film on a soft substrate: strain gradient hardening effects. Zbl 1266.74032
Zhang, F.; Saha, R.; Huang, Y.; Nix, W. D.; Hwang, K. C.; Qu, S.; Li, M.
2007
A model of size effects in nano-indentation. Zbl 1120.74658
Huang, Y.; Zhang, F.; Hwang, K. C.; Nix, W. D.; Pharr, G. M.; Feng, G.
2006
Positive solutions for higher-order boundary value problems with sign changing nonlinear terms. Zbl 1129.34018
Du, Zengji; Zhang, Fubao; Ge, Weigao
2006
Ergodicity and reversibility of the minimal diffusion process. Zbl 1089.60521
Zhang, F.
2005
On local influence in canonical correlations analysis. Zbl 1075.62571
Tanaka, Y.; Zhang, F.; Yang, W. S.
2002
Quasiperiodic solutions of higher dimensional Duffing’s equations via the KAM theorem. Zbl 1186.37070
Zhang, Fubao
2001
A theorem on three solutions to periodic boundary value problems for second-order differential equations. Zbl 0971.34006
Zhang, Fubao
2000
Lack of direction property of the coincidence degree and applications. Zbl 0945.34008
Zhang, Fubao
1999
Three-dimensional simulation of thermals using a split-operator scheme. Zbl 0969.76556
Li, C. W.; Zhang, F.
1996
Two existence theorems for the solutions to differential inclusions. Zbl 0838.34018
Zhang, Fubao
1995
Self-oscillations and optical chaos of an induced absorber in coupled cavities. Zbl 0875.78012
Zhang, F.; Grohs, J.; Steffen, J.; Ißler, H.; Klingshirn, C.
1994
Zhang, F.; Redekop, D.
1992
### Cited by 461 Authors
37 Tang, Xianhua 30 Zhang, Fubao 25 Wang, Jun 15 Xu, Junxiang 15 Zhang, Hui 14 Sun, Juntao 13 Chen, Haibo 13 Zhang, Wen 13 Zhang, Ziheng 12 Ma, Shiwang 11 Liu, Zhisu 11 Wu, Tsungfang 11 Xie, Qilin 11 Zhang, Jian 10 Chen, Sitong 9 Ambrosio, Vincenzo 9 Chen, Guanwei 9 Li, Gongbao 9 Yuan, Rong 9 Zhang, Jianjun 8 Li, Quanqing 8 Zhao, Fukun 7 Chen, Peng 7 Guo, Shangjiang 7 Teng, Kaimin 7 Wang, Dabin 7 Wang, Wenbo 7 Wu, Xian 7 Xiao, Lu 7 Ye, Hongyu 6 Alves, Claudianor Oliveira 6 Chen, Jianqing 6 Cheng, Bitao 6 Du, Miao 6 Figueiredo, Giovany Malcher 6 Guan, Wen 6 Han, Zhiqing 6 Huang, Yisheng 6 Lei, Chunyu 6 Lü, Dengfeng 6 Qin, Dongdong 6 Shi, Junping 6 Shuai, Wei 6 Suo, Hong-Min 5 Li, Fuyi 5 Liao, Jiafeng 5 Liu, Zeng 5 Nieto Roig, Juan Jose 5 Peng, Shuangjie 5 Santos, João R. jun. 5 Tang, Chun-Lei 5 Tian, Lixin 5 Wan, Lili 5 Xue, Yanfang 5 Zhang, Qingye 4 Chen, Huiwen 4 Chu, Changmu 4 He, Xiaoming 4 He, Zhimin 4 Jia, HuiFang 4 Li, Yongkun 4 Li, Yuhua 4 Liao, Fangfang 4 Liu, Chungen 4 Liu, Jiu 4 Liu, Weiming 4 Wu, Yuanze 4 Yang, Minghai 4 Yang, Zhipeng 4 Ye, Yiwei 4 Yu, Yuanyang 4 Zhang, Qiongfen 4 Zou, Wenming 3 Chen, Shaowei 3 Do Ó, João M. Bezerra 3 Germano, Geilson F. 3 Guo, Lun 3 He, Yi 3 Hu, Tingxi 3 Li, Lin 3 Li, Xinfu 3 Liu, Gaosheng 3 Liu, Wenbin 3 Lu, Lu 3 Lv, Ying 3 Mao, Anmin 3 Minhós, Feliz Manuel 3 Rădulescu, Vicenţiu D. 3 Severo, Uberlandio Batista 3 Shen, Zupei 3 Sreenadh, Konijeti 3 Sun, Dongdong 3 Sun, Jijiang 3 Xiang, Changlin 3 Xu, Liping 3 Yu, Jian-She 3 Zhang, Huabo 3 Zhang, Yuanyuan 3 Zhang, Zhitao 3 Zhong, Xiaojing ...and 361 more Authors
### Cited in 98 Serials
43 Journal of Mathematical Analysis and Applications 35 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 24 Boundary Value Problems 22 Communications on Pure and Applied Analysis 21 Journal of Mathematical Physics 20 ZAMP. Zeitschrift für angewandte Mathematik und Physik 19 Journal of Differential Equations 17 Computers & Mathematics with Applications 15 Mathematical Methods in the Applied Sciences 15 Nonlinear Analysis. Real World Applications 12 Advances in Difference Equations 11 Applicable Analysis 10 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 10 Complex Variables and Elliptic Equations 9 Applied Mathematics Letters 9 Mediterranean Journal of Mathematics 8 Calculus of Variations and Partial Differential Equations 8 Discrete and Continuous Dynamical Systems 8 Advances in Nonlinear Analysis 7 Applied Mathematics and Computation 7 The Journal of Geometric Analysis 7 Topological Methods in Nonlinear Analysis 7 Abstract and Applied Analysis 7 Communications in Contemporary Mathematics 7 Journal of Applied Mathematics and Computing 6 Mathematische Nachrichten 6 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 6 Electronic Journal of Differential Equations (EJDE) 6 Advanced Nonlinear Studies 5 Chaos, Solitons and Fractals 4 Acta Mathematica Scientia. Series B. (English Edition) 3 Annali di Matematica Pura ed Applicata. Serie Quarta 3 NoDEA. Nonlinear Differential Equations and Applications 3 Acta Mathematica Sinica. English Series 3 Annales Henri Poincaré 3 Qualitative Theory of Dynamical Systems 3 Discrete and Continuous Dynamical Systems. Series S 3 Advances in Mathematical Physics 3 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 3 Journal of Function Spaces 3 AIMS Mathematics 2 Indian Journal of Pure & Applied Mathematics 2 Ukrainian Mathematical Journal 2 Manuscripta Mathematica 2 Results in Mathematics 2 Acta Mathematicae Applicatae Sinica. English Series 2 Asymptotic Analysis 2 Potential Analysis 2 Journal of Inequalities and Applications 2 International Journal of Nonlinear Sciences and Numerical Simulation 2 Journal of Applied Mathematics 2 Milan Journal of Mathematics 2 Journal of Nonlinear Science and Applications 2 Analysis and Mathematical Physics 2 Journal of Applied Analysis and Computation 2 Electronic Research Archive 2 SN Partial Differential Equations and Applications 1 Archive for Rational Mechanics and Analysis 1 Journal of Mathematical Biology 1 Lithuanian Mathematical Journal 1 Mathematical Notes 1 Nonlinearity 1 Rocky Mountain Journal of Mathematics 1 Acta Mathematica Vietnamica 1 Annales Polonici Mathematici 1 Archiv der Mathematik 1 Glasgow Mathematical Journal 1 Journal of Functional Analysis 1 Mathematica Slovaca 1 Mathematische Zeitschrift 1 Monatshefte für Mathematik 1 Proceedings of the American Mathematical Society 1 Proceedings of the Edinburgh Mathematical Society. Series II 1 Quaestiones Mathematicae 1 Studies in Applied Mathematics 1 Mathematical and Computer Modelling 1 Neural Networks 1 Journal of Dynamics and Differential Equations 1 Turkish Journal of Mathematics 1 Advances in Differential Equations 1 Annales Academiae Scientiarum Fennicae. Mathematica 1 Differential Equations and Dynamical Systems 1 Nonlinear Dynamics 1 Positivity 1 Taiwanese Journal of Mathematics 1 Discrete Dynamics in Nature and Society 1 Communications in Nonlinear Science and Numerical Simulation 1 Mathematical Modelling and Analysis 1 Discrete and Continuous Dynamical Systems. Series B 1 Central European Journal of Mathematics 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie V 1 Journal of Fixed Point Theory and Applications 1 Complex Analysis and Operator Theory 1 International Journal of Differential Equations 1 Annals of Functional Analysis 1 ISRN Mathematical Analysis 1 Axioms 1 International Journal of Analysis
### Cited in 21 Fields
398 Partial differential equations (35-XX) 72 Ordinary differential equations (34-XX) 72 Dynamical systems and ergodic theory (37-XX) 65 Global analysis, analysis on manifolds (58-XX) 28 Operator theory (47-XX) 23 Mechanics of particles and systems (70-XX) 20 Calculus of variations and optimal control; optimization (49-XX) 13 Quantum theory (81-XX) 6 Mechanics of deformable solids (74-XX) 5 Integral equations (45-XX) 4 Difference and functional equations (39-XX) 3 Algebraic topology (55-XX) 2 Functional analysis (46-XX) 2 Differential geometry (53-XX) 2 Optics, electromagnetic theory (78-XX) 1 Numerical analysis (65-XX) 1 Fluid mechanics (76-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 Biology and other natural sciences (92-XX) 1 Systems theory; control (93-XX)
|
# Magnetic $H$ field unit in cgs system
I'm studying an old E&M textbook which uses the cgs unit system. I'm re-writing the formulas in SI units.
The book says $\vec{H}=\vec{B}-4\pi\vec{M}$.
So I guessed that $H$ should have same unit as $B$ in cgs system.
But I found that $H$ has the unit of oersted, which is the same as $1\,cm^{-0.5}\, g^{0.5}\, s^{-1}$.
How is it possible?
In essence, in the cgs system of units, $\mu_0 = 1$.
B is the magnetic flux density (or magnetic induction) and has the derived unit gauss, which in base cgs units is $\rm cm^{-\frac 12}g^{\frac 12}s^{-1}$.
H is the magnetic field strength and has the derived unit oersted, which in base cgs units is $\rm cm^{-\frac 12}g^{\frac 12}s^{-1}$.
You may also be interested in the fact that E is the electric field with the derived unit $\rm statV\, cm^{-1}$ which in terms of cgs base units is $\rm cm^{-\frac 12}g^{\frac 12}s^{-1}$, the same as that for B and H.
There are very many examples of cgs units and Gaussian units being explained and discussed on the Internet and in textbooks. For example look at this PDF.
• Oh... Thanks. I was confused and thought that g in the unit means gauss, not gram. Units in E&M always seem messy and confuse me. I'd read the linked page. Maybe later I can find some enlightening article about the reasons behind the units. – Septacle Jul 20 '18 at 8:05
• @Septacle Somewhere out there on the Internet there is a really good pdf file about Gaussian units which I found some time ago but was unable to find it for you. If you do find a better article I would be grateful if you would post the link. – Farcher Jul 20 '18 at 8:16
• I thought one is the gradient of the other? – marshal craft Jul 20 '18 at 11:31
Dimensional analysis with the standard cgs or SI dimensions will not reveal the nature of where the $4\pi$s ought to go. Instead, you have to use an extra dimension, which turns the $4\pi$ into a variable, which becomes either $4\pi$ or $1$, according to the system.
I use the 'rule of substance' here. It is in my physics pdf, but I shall describe it here.
Space, time, fields and fluxes of all kinds, potentials of all kinds, are not substances, and are left unaltered.
Mass, and mechanical quantities with Mass in the dimension (forces, energy, pressure, density, power), charge, dipoles of all kinds, and their respective densities, capacitances and conductances, susceptances and susceptibilities, are quantities of substance and represent a dimension $S^1$.
Resistances and inductances represent a dimension of $S^{-1}$.
In order to change a formula, you tick the substances in the equation, and cross the inverse substances (i.e. $S^{-1}$). If there is an imbalance of ticks and crosses, this is corrected by a ticked $4\pi$ (cgs->si), or a crossed $4\pi$ (si->cgs).
So in the equation of the question: $H=B-4\pi M$
$H$ is a field, and thus not a substance. Likewise $B$ is a flux density, and is also not a substance. The $M$ is a magnetic polarisation and is thus a substance. To counter the imbalance of the substance dimension, you need to divide the $M$ by a 'ticked $4\pi$', which cancels out the $4\pi$, giving $H=B-M$.
This can be done at reading speed.
• In markdown, we begin and end mathjax commands with the $ sign, not [math] and [/math]. I've edited it here. A quick Mathjax tutorial can be found here. – user191954 Jul 20 '18 at 9:35
• I also answer a lot of questions on quora, which uses [math] and [/math] as mathjax markers. Yes, I do know Latex. – wendy.krieger Jul 20 '18 at 11:00

As you will have found, in SI units $B$ is measured in teslas ($T$) and magnetic flux $\Phi_B$ is measured in webers ($Wb$), thus a flux density of $1\ Wb/m^2$ is $1\ T$. The SI unit tesla is equivalent to newton·second/coulomb·metre. In Gaussian-cgs units, $B$ is measured in gauss ($G$). The conversion is $1\ T = 10000\ G$. One nanotesla is equivalent to 1 gamma ($\gamma$).

The $H$-field is measured in amperes per metre ($A/m$) in SI units, and in oersteds ($Oe$) in cgs units. It is equivalent to 1 dyne per maxwell. And, as you have found, its Gaussian base is $1\ cm^{-0.5}\cdot g^{0.5}\cdot s^{-1}$.

The oersted is closely related to the gauss, as you might have inferred from the units. In a vacuum, if the magnetizing field strength is $1\ Oe$, then the magnetic flux density is $1\ G$, whereas, in a medium having permeability $\mu_r$ (relative to the permeability of vacuum), their relation is:

$$B({\text{G}})=\mu_{r}H({\text{Oe}})$$

Because oersteds are used to measure magnetizing field strength, they are also related to the magnetomotive force (mmf) of current in a single-winding wire-loop:

$$1{\text{ Oe}}={\frac {1000}{4\pi }}\ {\text{A}}/{\text{m}}$$

Since amperes are a measure of unit-charge per unit-time, you can probably work out from units of $A/m$ how to get units of $cm^{-0.5}\cdot g^{0.5}\cdot s^{-1}$ :-)
• I'm not sure yet. How can someone write the first equation I quoted when they have different dimension? – Septacle Jul 20 '18 at 7:44
• Sorry I thought g in the unit means gauss, not gram. Thanks for the answer. – Septacle Jul 20 '18 at 8:06
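To make the conversion factors above concrete, here is a minimal Python sketch (only the relations quoted in the answers, 1 T = 10^4 G and 1 Oe = 1000/(4π) A/m, are assumed):

import math

# magnetizing field strength: 1 Oe = 1000/(4*pi) A/m
H_oersted = 1.0
H_si = H_oersted * 1000.0 / (4.0 * math.pi)
print(H_si)  # ~79.577 A/m

# flux density: 1 T = 10**4 G
B_gauss = 1.0
B_tesla = B_gauss * 1.0e-4
print(B_tesla)  # 1 G = 1e-4 T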
|
Keras Application for Pre-trained Model
In earlier posts, we learned about classic convolutional neural network (CNN) architectures (LeNet-5, AlexNet, VGG16, and ResNets). We created all the models from scratch using Keras, but we didn't train them, because training such deep neural networks requires high computation cost and time. Thanks to transfer learning, however, a model trained on one task can be applied to other tasks. In other words, a model trained on one task can be adjusted or fine-tuned to work for another task without explicitly training a new model from scratch. We refer to such a model as a pre-trained model.
Pre-trained Model
The pre-trained classical models are already available in Keras as Applications. These models are trained on the ImageNet dataset for classifying images into one of 1000 categories or classes. The pre-trained models are available with Keras in two parts: model architecture and model weights. Model architectures are downloaded during Keras installation, but model weights are large files and are downloaded on instantiating a model.
Following are the models available in Keras:
• Xception
• VGG16
• VGG19
• ResNet50
• InceptionV3
• InceptionResNetV2
• MobileNet
• DenseNet
• NASNet
• MobileNetV2
All of these architectures are compatible with all the backends (TensorFlow, Theano, and CNTK).
For more details and reference, please visit: https://keras.io/applications
First, we will import Keras and the required model from keras.applications, and then we will instantiate the model architecture along with the ImageNet weights. If you want only the model architecture, instantiate the model with weights set to None.
Note: For below exercise, we have shared the code for 4 different models but you can use only the required one.
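A minimal sketch of this step; the model variable names are chosen to match the prediction snippets further below, and decode_predictions can be imported from keras.applications.imagenet_utils (one common import path):

from keras.applications import vgg16, inception_v3, resnet50, mobilenet
from keras.applications.imagenet_utils import decode_predictions

# instantiate each architecture together with its ImageNet weights
vgg_model = vgg16.VGG16(weights='imagenet')
inception_model = inception_v3.InceptionV3(weights='imagenet')
resnet_model = resnet50.ResNet50(weights='imagenet')
mobilenet_model = mobilenet.MobileNet(weights='imagenet')

# architecture only, without the pre-trained weights:
# vgg_untrained = vgg16.VGG16(weights=None)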
We will use the Keras functions for loading and pre-processing the image. The following steps shall be carried out:
1- Load the image with the load_img() function; Keras loads the image in PIL format (width, height).
2- Convert the PIL image into NumPy format (height, width, channels) using the img_to_array() function.
3- Then the input image shall be converted to a 4-dimensional Tensor (batchsize, height, width, channels) using NumPy's expand_dims function.
You can print the size of the image after each processing step.
4- Normalizing the image
Some models use images with values ranging from 0 to 1, or from -1 to +1, or in "caffe" style. The input_image further has to be normalized, e.g. by subtracting the mean of the ImageNet data. We don't need to worry about the internal details: we can use the preprocess_input() function of each model to normalize the image.
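Putting steps 1-3 together, a minimal sketch (the file name 'elephant.jpg' is only a placeholder; the 224x224 target size fits VGG16/ResNet50/MobileNet, while InceptionV3 expects 299x299):

import numpy as np
from keras.preprocessing.image import load_img, img_to_array

original_image = load_img('elephant.jpg', target_size=(224, 224))  # PIL image
numpy_image = img_to_array(original_image)         # (height, width, channels)
input_image = np.expand_dims(numpy_image, axis=0)  # (batchsize, height, width, channels)
print(numpy_image.shape, input_image.shape)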
Note: For below exercise, we have shared the code for 4 different models but you can use only the required one.
#preprocess for vgg16
processed_image_vgg16 = vgg16.preprocess_input(input_image.copy())

#preprocess for inception_v3
processed_image_inception_v3 = inception_v3.preprocess_input(input_image.copy())

#preprocess for resnet50
processed_image_resnet50 = resnet50.preprocess_input(input_image.copy())

#preprocess for mobilenet
processed_image_mobilenet = mobilenet.preprocess_input(input_image.copy())
Predict the Image
Use the model.predict() function to get the classification results and convert them into labels using the decode_predictions() function.
Note: For below exercise, we have shared the code for 4 different models but you can use only the required one.
# vgg16
predictions_vgg16 = vgg_model.predict(processed_image_vgg16)
label_vgg16 = decode_predictions(predictions_vgg16)
print('label_vgg16 = ', label_vgg16)

# inception_v3
predictions_inception_v3 = inception_model.predict(processed_image_inception_v3)
label_inception_v3 = decode_predictions(predictions_inception_v3)
print('label_inception_v3 = ', label_inception_v3)

# resnet50
predictions_resnet50 = resnet_model.predict(processed_image_resnet50)
label_resnet50 = decode_predictions(predictions_resnet50)
print('label_resnet50 = ', label_resnet50)

# mobilenet
predictions_mobilenet = mobilenet_model.predict(processed_image_mobilenet)
label_mobilenet = decode_predictions(predictions_mobilenet)
print('label_mobilenet = ', label_mobilenet)
Summary:
We can use pre-trained models as a starting point for our training process, instead of training our own model from scratch.
|
### Author Topic: 9.3 problem 18 (Read 3692 times)
#### Brian Bi
• Moderator
• Full Member
• Posts: 31
• Karma: 13
##### 9.3 problem 18
« on: March 25, 2013, 12:32:43 AM »
I'm having some trouble getting this problem to work out. There are four critical points: (0,0), (2, 1), (-2, 1), and (-2, -4). At the critical point (-2, -4), the Jacobian is $\begin{pmatrix} 10 & -5 \\ 6 & 0 \end{pmatrix}$ with eigenvalues $5 \pm i\sqrt{5}$. Therefore it looks like it should be an unstable spiral point. However, when I plotted it, it looked like a node. Has anyone else done this problem?
http://www.math.psu.edu/melvin/phase/newphase.html
« Last Edit: March 25, 2013, 12:46:22 AM by Brian Bi »
#### Victor Ivrii
• Elder Member
• Posts: 2572
• Karma: 0
##### Re: 9.3 problem 18
« Reply #1 on: March 25, 2013, 06:57:38 AM »
I'm having some trouble getting this problem to work out. There are four critical points: (0,0), (2, 1), (-2, 1), and (-2, -4). At the critical point (-2, -4), the Jacobian is $\begin{pmatrix} 10 & -5 \\ 6 & 0 \end{pmatrix}$ with eigenvalues $5 \pm i\sqrt{5}$. Therefore it looks like it should be an unstable spiral point. However, when I plotted it, it looked like a node. Has anyone else done this problem?
http://www.math.psu.edu/melvin/phase/newphase.html
Explanation:
http://weyl.math.toronto.edu/MAT244-2011S-forum/index.php?topic=48.msg159#msg159
#### Brian Bi
• Moderator
• Full Member
• Posts: 31
• Karma: 13
##### Re: 9.3 problem 18
« Reply #2 on: March 25, 2013, 01:28:57 PM »
So it is a spiral point but I didn't zoom in closely enough?
#### Victor Ivrii
No, the standard spiral remains the same under any zoom. However, your spiral rotates rather slowly in comparison with how fast it moves away: as it makes one rotation ($\theta$ increases by $2\pi$), the exponent increases by $5 \times 2\pi/\sqrt{5}\approx 14$, so the radius increases $e^{14}\approx 1.2 \cdot 10^6$ times. If the initial distance was 1 mm, then after one rotation it becomes 1.2 km.
Try plotting $x'=a x- y$, $y'=x+ ay$ for $a=.001, .1, .5, 1, 1.5, 2$ to observe that for some $a$ you just cannot observe the rotation.
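For instance, a minimal Python sketch of that experiment (the integrator and plotting choices here are mine, not part of the original post):

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def rhs(v, t, a):
    x, y = v
    return [a * x - y, x + a * y]

t = np.linspace(0, 4 * np.pi, 4000)  # roughly two full rotations
for a in [0.001, 0.1, 0.5, 1, 1.5, 2]:
    x, y = odeint(rhs, [0.001, 0.0], t, args=(a,)).T
    plt.figure()
    plt.plot(x, y)
    plt.title("a = %s" % a)
plt.show()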
|
# How many ways are there of placing 6 marbles in 4 bowls, if any number
Manager
Joined: 15 Jan 2011
Posts: 101
How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
12 Aug 2012, 12:21
Difficulty:
65% (hard)
Question Stats:
46% (01:10) correct 54% (01:13) wrong based on 255 sessions
How many ways are there of placing 6 marbles in 4 bowls, if any number of them can be placed in each bowl?
A. 6C4
B. 6P4
C. 4^6
D. 6^4
E. 6!
source:gogmat
Director
Joined: 22 Mar 2011
Posts: 601
WE: Science (Education)
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
12 Aug 2012, 22:48
Galiya wrote:
How many ways are there of placing 6 marbles in 4 bowls, if any number of them can be placed in each bowl?
A. 6C4
B. 6P4
C. 4^6
D. 6^4
E. 6!
what's wrong with D?
source:gogmat
Forget about permutations and combinations, that is, forget what the process is called.
Think how would you do it:
Take the first marble and think what you can do with it. Where can you place it? - you have 4 options, as there are 4 bowls
Take the second marble - 4 options again, you don't care about the previous one already placed
Third marble - still 4 bowls available, still have the freedom to choose any one
...
You are choosing a bowl for each marble.
This will give you $$4^6$$ possibilities.
D would be the correct answer for example if we have 6 bowls and 4 marbles.
first marble to place - 6 choices
second marble - again 6 choices
...
This will give you $$6^4$$ possibilities.
_________________
PhD in Applied Mathematics
Love GMAT Quant questions and running.
##### General Discussion
Math Expert
Joined: 02 Sep 2009
Posts: 50619
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
12 Aug 2012, 12:26
Galiya wrote:
How many ways are there of placing 6 marbles in 4 bowls, if any number of them can be placed in each bowl?
A. 6C4
B. 6P4
C. 4^6
D. 6^4
E. 6!
what's wrong with D?
source:gogmat
Each marble has 4 options, so there are a total of 4*4*4*4*4*4=4^6 ways.
_________________
Manager
Joined: 15 Jan 2011
Posts: 101
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
12 Aug 2012, 12:28
Why can't I use permutation?
We have 4 slots: - - - - and for each slot there are 6 marbles, hence 6^4.
Manager
Joined: 05 Mar 2012
Posts: 52
Schools: Tepper '15 (WL)
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
12 Aug 2012, 15:56
Galiya wrote:
Why can't I use permutation?
We have 4 slots: - - - - and for each slot there are 6 marbles, hence 6^4.
Because there are more than "4 slots". By doing the slot method and saying that there are 4 bowls, hence 4 slots, you are saying that each bowl can only hold 1 marble. The problem is that a bowl can hold 0, 1, 2, 3, 4, 5, or all 6 marbles. So instead of calculating how many options there are for what the bowls can hold, it's simpler to find how many options each marble has.
Manager
Joined: 05 Jul 2012
Posts: 69
Location: India
Concentration: Finance, Strategy
GMAT Date: 09-30-2012
GPA: 3.08
WE: Engineering (Energy and Utilities)
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
12 Aug 2012, 18:17
Galiya wrote:
Why can't I use permutation?
We have 4 slots: - - - - and for each slot there are 6 marbles, hence 6^4.
You are distributing marbles to slots, not slots to marbles!
The reason you can't use permutations is that you are not permuting! It is a pure distribution problem, and distribution problems have a different strategy than permutations and combinations.
Manager
Joined: 05 Jul 2012
Posts: 69
Location: India
Concentration: Finance, Strategy
GMAT Date: 09-30-2012
GPA: 3.08
WE: Engineering (Energy and Utilities)
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
12 Aug 2012, 23:09
EvaJager wrote:
Galiya wrote:
How many ways are there of placing 6 marbles in 4 bowls, if any number of them can be placed in each bowl?
A. 6C4
B. 6P4
C. 4^6
D. 6^4
E. 6!
what's wrong with D?
source:gogmat
Forget about permutations and combinations, I mean what the process is called.
Think how would you do it:
Take the first marble and think what you can do with it. Where can you place it? - you have 4 options, as there are 4 bowls
Take the second marble - 4 options again, you don't care about the previous one already placed
Third marble - still 4 bowls available, still have the freedom to choose any one
...
You are choosing a bowl for each marble.
This will give you $$4^6$$ possibilities.
D would be the correct answer for example if we have 6 bowls and 4 marbles.
first marble to place - 6 choices
second marble - again 6 choices
...
This will give you $$6^4$$ possibilities.
Let's have a different problem.
There are 4 letters, Set A (A, B, C, D), and 6 numbers, Set N (1, 2, 3, 4, 5, 6).
Q.1: In how many ways can the 4 letters be assigned a number from 1 to 6, without any restrictions?
Q.2: In how many ways can the 6 numbers be assigned a letter from A to D, without any restriction?
These two should clear your doubt.
And to make these easy, start adding restrictions to them: use all elements of Set A or Set N, map uniquely, no element used twice, etc.
These two questions can be formed in any way to show all the concepts of permutations, combinations, distributions, de-distribution, and even the Bose-Einstein distribution.
And can you think of a condition that will make this mapping a function?
Intern
Joined: 18 Feb 2012
Posts: 3
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
15 Aug 2012, 13:23
Bunuel wrote:
Galiya wrote:
How many ways are there of placing 6 marbles in 4 bowls, if any number of them can be placed in each bowl?
A. 6C4
B. 6P4
C. 4^6
D. 6^4
E. 6!
what's wrong with D?
source:gogmat
Each marble has 4 options, so there are a total of 4*4*4*4*4*4=4^6 ways.
Bunuel:
In another exercise (I cannot attach the link because this is my second post and the system is not allowing me) you explained this:
The total number of ways of dividing n identical items among r persons, each one of whom can receive 0, 1, 2 or more items, is (n+r-1)C(r-1).
Following this statement, and taking Persons = Bowls, I would think that the answer to this question is 9C3 = 9!/(3!6!) = 84. But that is incorrect according to your post. Could you please explain a little further?
Thanks a lot, José
Director
Joined: 22 Mar 2011
Posts: 601
WE: Science (Education)
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
15 Aug 2012, 22:59
Josefeg wrote:
Bunuel wrote:
Galiya wrote:
How many ways are there of placing 6 marbles in 4 bowls, if any number of them can be placed in each bowl?
A. 6C4
B. 6P4
C. 4^6
D. 6^4
E. 6!
what's wrong with D?
source:gogmat
Each marble has 4 options, so there are a total of 4*4*4*4*4*4=4^6 ways.
Bunuel:
In another exercise (I cannot attach the link because this is my second post and the system is not allowing me) you explained this:
The total number of ways of dividing n identical items among r persons, each one of whom can receive 0, 1, 2 or more items, is (n+r-1)C(r-1).
Following this statement, and taking Persons = Bowls, I would think that the answer to this question is 9C3 = 9!/(3!6!) = 84. But that is incorrect according to your post. Could you please explain a little further?
Thanks a lot, José
It wasn't stated explicitly, but we all assumed in our solutions that all the marbles are distinct/different (think of different colors or numbered marbles). Then the above solutions are correct. The number of possibilities to place 6 distinct/different marbles in 4 bowls is $$4^6.$$
If the marbles are all identical and the bowls are distinct, then what differs between the distributions is the particular number of marbles in each bowl.
In this case, the formula you mentioned should be used. For example, 6 identical marbles can be placed in 4 bowls in (6 + 4 - 1)C(4 - 1) = 9C3 = 84 ways. In the original question, since none of the listed answers is 84, the hidden assumption was that the marbles are non-identical, which I think should have been stated explicitly.
For $$n$$ identical marbles and $$r$$ bowls, a way to justify the formula is as follows: think of the of the marbles placed in slots instead of bowls. The slots are aligned, created such that there are $$r-1$$ dividing internal walls, something like this: [o|ooo|...| |o| ], where [ and ] represent the two outer walls of the slots. In the first slot there is one ball, in the second three balls,..., there is an empty slot, just one ball, and the last one is also an empty slot.
In each slot, we can place any number of marbles between 0 and r.
Imagine that we have $$n+r-1$$ places, because we have $$n$$ marbles and $$r-1$$ dividing walls, and we just have to decide in this string of length $$n+r-1$$ where to place the walls (or equivalently, where to place the marbles). This can be done in (n + r - 1)C(r - 1) different ways, or equivalently, (n + r - 1)Cn.
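Both counts are small enough to check by brute force; here is a minimal Python sketch of that sanity check (the enumeration approach is mine, not from the posts above):

from itertools import product
from collections import Counter

# Distinct marbles: each of the 6 marbles independently picks one of 4 bowls.
placements = list(product(range(4), repeat=6))
print(len(placements))  # 4**6 = 4096

# Identical marbles: only the number of marbles in each (distinct) bowl matters.
fills = {tuple(Counter(p)[bowl] for bowl in range(4)) for p in placements}
print(len(fills))       # (6+4-1)C(4-1) = 9C3 = 84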
_________________
PhD in Applied Mathematics
Love GMAT Quant questions and running.
Intern
Joined: 18 Feb 2012
Posts: 3
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
16 Aug 2012, 08:56
EvaJager wrote:
It wasn't stated explicitly, but we all assumed in our solutions, that all the marbles are distinct/different (think of different colors or numbered marbles). Then the above solutions are correct. The number of possibilities to place 6 distinct/different marbles in 4 bowls is $$4^6.$$
Eva, thanks a lot for your explanation. I also think that it should have been stated.
Current Student
Joined: 25 Sep 2012
Posts: 247
Location: India
Concentration: Strategy, Marketing
GMAT 1: 660 Q49 V31
GMAT 2: 680 Q48 V34
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
13 Jul 2014, 07:20
Quote:
Let's have a different problem.
There are 4 letters, Set A (A, B, C, D), and 6 numbers, Set N (1, 2, 3, 4, 5, 6).
Q.1: In how many ways can the 4 letters be assigned a number from 1 to 6, without any restrictions?
Q.2: In how many ways can the 6 numbers be assigned a letter from A to D, without any restriction?
These two should clear your doubt.
And to make these easy, start adding restrictions to them: use all elements of Set A or Set N, map uniquely, no element used twice, etc.
These two questions can be formed in any way to show all the concepts of permutations, combinations, distributions, de-distribution, and even the Bose-Einstein distribution.
And can you think of a condition that will make this mapping a function?
Wow! Nice explanation EvaJager, Bunuel and mandyrhtdm!
A lot of doubts cleared from one single thread! Kudos to all
@mandyrhtdm just to confirm... the answers are Q1 $$6^4$$ and Q2 $$4^6$$, right?
Senior Manager
Status: love the club...
Joined: 24 Mar 2015
Posts: 267
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
06 Jan 2018, 04:27
Galiya wrote:
How many ways are there of placing 6 marbles in 4 bowls, if any number of them can be placed in each bowl?
A. 6C4
B. 6P4
C. 4^6
D. 6^4
E. 6!
what's wrong with D?
source:gogmat
At first glance, it is essential to notice that "any number of marbles can be placed in each bowl".
Since "0" is a number, the statement actually dictates that any bowl can get no marbles, and any bowl can get more than 1 marble, since there are 6 marbles in total. So far so good.
now, for each marble, there are 4 ways to be distributed, as there are 4 bowls
1st marble can be distributed in 4 ways
2nd marble can be distributed in 4 ways
3rd marble can be distributed in 4 ways
4th marble can be distributed in 4 ways
5th marble can be distributed in 4 ways
6th marble can be distributed in 4 ways
which implies
4 * 4 * 4 * 4 * 4 * 4
= 4 ^ 6 is the answer
thanks
Manager
Joined: 23 Oct 2017
Posts: 64
Re: How many ways are there of placing 6 marbles in 4 bowls, if any number [#permalink]
06 Jan 2018, 17:11
Galiya wrote:
How many ways are there of placing 6 marbles in 4 bowls, if any number of them can be placed in each bowl?
A. 6C4
B. 6P4
C. 4^6
D. 6^4
E. 6!
source:gogmat
------------
6 marbles are to be placed in 4 bowls, and there is no restriction on the number of marbles that a bowl can have:
so each marble has 4 options/bowls to choose from,
thus 4^6 ways in total.
|
• CommentRowNumber1.
• CommentAuthorUrs
• CommentTimeNov 25th 2015
• (edited Nov 25th 2015)
I have created an entry on the quaternionic Hopf fibration and then I have tried to spell out the argument, suggested to me by Charles Rezk on MO, that in $G$-equivariant stable homotopy theory it represents a non-torsion element in
$[\Sigma^\infty_G S^7 , \Sigma^\infty_G S^4]_G \simeq \mathbb{Z} \oplus \cdots$
for $G$ a finite and non-cyclic subgroup of $SO(3)$, and $SO(3)$ acting on the quaternionic Hopf fibration via automorphisms of the quaternions.
I have tried to make a rigorous and self-contained argument here by appeal to Greenlees-May decomposition and to tom Dieck splitting. But check.
• CommentRowNumber2.
• CommentAuthorUrs
• CommentTimeMar 10th 2019
• (edited Mar 10th 2019)
added the following fact, which I didn’t find so easy to see:
Consider
1. the Spin(5)-action on the 4-sphere $S^4$ which is induced by the defining action on $\mathbb{R}^5$ under the identification $S^4 \simeq S(\mathbb{R}^5)$;
2. the Spin(5)-action on the 7-sphere $S^7$ which is induced under the exceptional isomorphism $Spin(5) \simeq Sp(2) = U(2,\mathbb{H})$ by the canonical left action of $U(2,\mathbb{H})$ on $\mathbb{H}^2$ via $S^7 \simeq S(\mathbb{H}^2)$.
Then the quaternionic Hopf fibration $S^7 \overset{h_{\mathbb{H}}}{\longrightarrow} S^4$ is equivariant with respect to these actions.
This is almost explicit in Porteous 95, p. 263
• CommentRowNumber3.
• CommentAuthorUrs
• CommentTimeMar 22nd 2019
• (edited Mar 22nd 2019)
Of the resulting action of Sp(2)$\times$Sp(1) on the 7-sphere (from this Prop.), only the quotient group Sp(2).Sp(1) acts effectively.
• CommentRowNumber4.
• CommentAuthorUrs
• CommentTimeMar 31st 2019
added pointer to Table 1 in
• Machiko Hatsuda, Shinya Tomizawa, Coset for Hopf fibration and Squashing, Class.Quant.Grav.26:225007, 2009 (arXiv:0906.1025)
for the coset presentation
$\array{ S^3 &\overset{fib(h_{\mathbb{H}})}{\longrightarrow}& S^{7} &\overset{h_{\mathbb{H}}}{\longrightarrow}& S^4 \\ = && = && = \\ \frac{Spin(4)}{Spin(3)} &\longrightarrow& \frac{Spin(5)}{Spin(3)} &\longrightarrow& \frac{Spin(5)}{Spin(4)} }$
• CommentRowNumber5.
• CommentAuthorUrs
• CommentTimeMar 31st 2019
• (edited Mar 31st 2019)
• Herman Gluck, Frank Warner, Wolfgang Ziller, The geometry of the Hopf fibrations, L’Enseignement Mathématique, t.32 (1986), p. 173-198
which in its Prop. 4.1 explicitly states and proves the $Spin(5)$-equivariance of the quaternionic Hopf fibration
(but fails to mention the coset representation that makes this manifest)
• CommentRowNumber6.
• CommentAuthorUrs
• CommentTimeApr 27th 2019
• (edited Apr 27th 2019)
added this in the list of references:
Noteworthy fiber products with the quaternionic Hopf fibration, notably exotic 7-spheres, are discussed in
• Llohann D. Sperança, Explicit Constructions over the Exotic 8-sphere (pdf)
• CommentRowNumber7.
• CommentAuthorUrs
• CommentTimeAug 22nd 2019
• (edited Aug 22nd 2019)
Does the $Spin(5)$-equivariance of the quaternionic Hopf fibration lift to $Pin(5)$-equivariance?
(say for $Pin \coloneqq Pin^+$)
Theorem 4.1 in Gluck-Warner-Ziller says “No.” if the action on the ambient $\mathbb{R}^8 = \mathbb{H}^2$ is quaternionic linear. But may we drop this assumption?
• CommentRowNumber8.
• CommentAuthorDavid_Corfield
• CommentTimeAug 22nd 2019
What happens to the coset space as you replace Spin by Pin in $Spin(5)/Spin(3) \simeq S^7$? The 3- and 4-spheres still work (is that for any version of Pin?).
• CommentRowNumber9.
• CommentAuthorDavidRoberts
• CommentTimeAug 22nd 2019
Added link to free copy of The geometry of the Hopf fibrations on Ziller’s ResearchGate page.
• CommentRowNumber10.
• CommentAuthorDavidRoberts
• CommentTimeAug 22nd 2019
• (edited Aug 22nd 2019)
@David there’s issues with the number of connected components, if $Pin(4) \simeq Pin(3) \times Pin(3)$ (analogously to how $Spin(4) \simeq Spin(3)\times Spin(3)$. I thought this was the case, but I didn’t check the details, so I might be wrong. [Edit In fact this can’t be right, since by how $Pin(n)$ is defined it has two connected components.]
@Urs the definition of Pin(5) naturally involves some complex vector space underlying the Clifford algebra, IIRC, so my idea was to look at this using that representation.
• CommentRowNumber11.
• CommentAuthorDavidRoberts
• CommentTimeAug 22nd 2019
• (edited Aug 22nd 2019)
No, hang on. The Wikipedia page on the $Pin$ groups says that $Pin_+(3) = SO(3)\times C_4$ and $Pin_-(3) = SU(2)\times C_2$. So there’s a convention mismatch, I think, if we want the version of $Pin$ that contains $Spin$.
(Edited earlier incorrect comments, was tired and not quite paying attention)
• CommentRowNumber12.
• CommentAuthorDavid_Corfield
• CommentTimeAug 22nd 2019
• (edited Aug 22nd 2019)
$Spin(3) = SU(2)$, and according to Wikipedia $Pin_{-}(3) = SU(2) \times C_2$, so that works.
Then it has that $Pin_+(3)$ is isomorphic to $SO(3) \times C_4$.
[Didn’t update to see the editing above.]
• CommentRowNumber13.
• CommentAuthorDavidRoberts
• CommentTimeAug 22nd 2019
From Table 5.1 of Matrix Groups: An Introduction to Lie Group Theory by Baker, for instance, $Cl_5 = M_4(\mathbb{C})$, so we should be looking for a (possibly squashed) 7-sphere that is preserved by $Pin(5) \subset M_4(\mathbb{C})$.
• CommentRowNumber14.
• CommentAuthorUrs
• CommentTimeAug 22nd 2019
Thanks for all the reactions.
Meanwhile I was trying a different strategy, namely finding any orientation-reversing $\mathbb{Z}_2$-action under which the quaternionic Hopf fibration would be equivariant (a necessary condition for a full $Pin(5)$-equivariance).
A trap to beware of here is that the complex Hopf fibration is equivariant with respect to complex conjugation, with the latter being orientation reversing on the codomain 2-sphere. This gives the generator $\widehat \eta \in \pi^{st}_{1,0}$ from Araki-Iriye 82, p. 24.
This fact might make one feel that the quaternionic Hopf fibration should also be equivariant under quaternionic conjugation, which acts in an orientation-reversing way on the codomain 4-sphere and which would evade the quaternion-linearity assumption in Gluck-Warner-Ziller, Theorem 4.1. But this is not the case: the relevant formula that works for $\mathbb{C}$ relies on commutativity. Fixing the formula for the quaternions requires performing an extra reflection, which makes everything be oriented again.
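To see where commutativity enters, consider the standard coordinate formula for the Hopf construction (an explicit check for illustration, not a quote from the above references):

$h_{\mathbb{K}}(a,b) \;=\; \big(2 a \bar b,\; {\vert a\vert}^2 - {\vert b\vert}^2\big) \;\in\; \mathbb{K} \oplus \mathbb{R}\,, \qquad (a,b) \in S(\mathbb{K}^2)\,.$

For $\mathbb{K} = \mathbb{C}$ commutativity gives $\overline{a \bar b} = b \bar a = \bar a b$, hence $h_{\mathbb{C}}(\bar a, \bar b) = \big(\overline{2 a \bar b}, {\vert a\vert}^2 - {\vert b\vert}^2\big)$, so that $h_{\mathbb{C}}$ intertwines coordinatewise conjugation with the orientation-reversing reflection $(z,t) \mapsto (\bar z, t)$ on the codomain sphere. For $\mathbb{K} = \mathbb{H}$ conjugation is an anti-automorphism, $\overline{a \bar b} = b \bar a$, which is no longer equal to $\bar a b$ in general, and the same manipulation fails.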
Indeed, the only way the quaternionic Hopf fibration appears with non-trivial $\mathbb{Z}_2$-action in Araki-Iriye 82 is with orientation-preserving action (their Prop. 10.1).
This doesn’t prove that there is no orientation-reversing equivariance, unstably, but it makes me worry.
|
# pyklip package
## pyklip.covars module
pyklip.covars.delta(x, y, sigmas, *args)[source]
Generates a diagonal covariance matrix
C_ij = sigma_i sigma_j delta_{ij}
Parameters:
• x (np.array) – 1-D array of x coordinates
• y (np.array) – 1-D array of y coordinates
• sigmas (np.array) – 1-D array of errors on each pixel
pyklip.covars.matern32(x, y, sigmas, corr_len)[source]
Generates a Matern (nu=3/2) covariance matrix that assumes x/y has the same correlation length
C_ij = sigma_i sigma_j (1 + sqrt(3) r_ij / l) exp(-sqrt(3) r_ij / l)
Parameters:
• x (np.array) – 1-D array of x coordinates
• y (np.array) – 1-D array of y coordinates
• sigmas (np.array) – 1-D array of errors on each pixel
• corr_len (float) – correlation length of the Matern function
Returns:
• cov (np.array) – 2-D covariance matrix parameterized by the Matern function
pyklip.covars.sq_exp(x, y, sigmas, corr_len)[source]
Generates square exponential covariance matrix that assumes x/y has the same correlation length
C_ij = sigma_i sigma_j exp(-r_ij^2/[2 l^2])
Parameters:
• x (np.array) – 1-D array of x coordinates
• y (np.array) – 1-D array of y coordinates
• sigmas (np.array) – 1-D array of errors on each pixel
• corr_len (float) – correlation length (i.e. standard deviation of the Gaussian)
• mode (string) – either "numpy", "cython", or None, specifying the implementation of the kernel
Returns:
• cov (np.array) – 2-D covariance matrix parameterized by the square exponential kernel
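For orientation, the formula above can be written out in a few lines of numpy; this is a sketch of the kernel itself, not the package's actual implementation (which, per the mode flag, also has a cython path):

import numpy as np

def sq_exp_sketch(x, y, sigmas, corr_len):
    # squared pairwise distances r_ij^2 between pixels i and j
    r2 = (x[:, None] - x[None, :]) ** 2 + (y[:, None] - y[None, :]) ** 2
    # C_ij = sigma_i * sigma_j * exp(-r_ij^2 / (2 l^2))
    return np.outer(sigmas, sigmas) * np.exp(-r2 / (2.0 * corr_len ** 2))

cov = sq_exp_sketch(np.arange(5.0), np.zeros(5), np.ones(5), corr_len=2.0)
print(cov.shape)  # (5, 5)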
## pyklip.empca module
pyklip.empca.np_calc_chisq(data, b, w, coef)[source]
Calculate chi squared
Parameters:
• im – nim x npix, single-precision numpy.ndarray. Data to be fit by the basis images
• b – nvec x npts, double precision numpy.ndarray. The nvec basis images.
• w – nim x npts, single-precision numpy.ndarray. Weights (inverse variances) of the data.
• coef – nvec x npts, double precision numpy.ndarray. The coefficients of the basis image fits.
Returns:
• chisq – the total chi squared summed over all points and all images
pyklip.empca.set_pixel_weights(imflat, rflat, ivar=None, mode='standard', inner_sup=17, outer_sup=66)[source]
Parameters:
• imflat – array of flattened images, shape (N, number of section indices)
• rflat – radial component of the polar coordinates flattened to 1D, length = number of section indices
• mode – 'standard': assume Poisson statistics to calculate variance as sqrt(photon count); use inverse sqrt(variance) as pixel weights and multiply by a radial weighting
• inner_sup – radius within which to suppress weights
• outer_sup – radius beyond which to suppress weights
Returns:
• pixel weights for empca
pyklip.empca.weighted_empca(data, weights=None, niter=25, nvec=5, randseed=1, maxcpus=1, silent=True)[source]
Perform iterative low-rank matrix approximation of data using weights.
Generated model vectors are not orthonormal and are not rotated/ranked by ability to model the data, but as a set they are good at describing the data.
Parameters:
• data – images to model
• weights – weights for every pixel
• niter – maximum number of iterations to perform
• nvec – number of vectors to solve (rank of the approximation)
• randseed – rand num generator seed; if None, don't re-initialize
• maxcpus – maximum cpus to use for parallel programming
• silent – bool, whether to show chi_squared for each iteration
Returns:
• the best low-rank approximation to the data in a weighted least-squares sense (dot product of coefficients and basis vectors)
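A minimal usage sketch against the documented signature (the random inputs are stand-ins; per the docstring, the return value is the weighted low-rank approximation of the data):

import numpy as np
import pyklip.empca as empca

data = np.random.randn(20, 500).astype(np.float32)  # nim x npix images to model
weights = np.ones_like(data)                        # uniform pixel weights (inverse variances)
approx = empca.weighted_empca(data, weights=weights, niter=10, nvec=3)
print(np.shape(approx))  # low-rank reconstruction of the data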
## pyklip.fakes module
pyklip.fakes.LSQ_gauss2d(planet_image, x_grid, y_grid, a, x_cen, y_cen, sig)[source]
Calculate the squared norm of the residuals of the model with the data. Helper function for least square fit. The model is a 2d symmetric gaussian.
Parameters:
• planet_image – stamp image (y,x) of the satellite spot.
• x_grid – x samples grid as given by meshgrid.
• y_grid – y samples grid as given by meshgrid.
• a – amplitude of the 2d gaussian
• x_cen – x center of the gaussian
• y_cen – y center of the gaussian
• sig – standard deviation of the gaussian
Returns:
• Squared norm of the residuals
pyklip.fakes.PSFcubefit(frame, xguess, yguess, searchrad=10, psfs_func_list=None, wave_index=None, residuals=False, rmbackground=True, add_background2residual=False)[source]
Estimate satellite spot amplitude (peak value) by fitting a symmetric 2d gaussian. Fit parameters: x,y position, amplitude, standard deviation (same in x and y direction)
Parameters:
• frame – the data - Array of size (y,x)
• xguess – x location to fit the 2d gaussian to.
• yguess – y location to fit the 2d gaussian to.
• searchrad – 1/2 the length of the box used for the fit
• psfs_func_list – List of spline fit functions for the PSF_cube.
• wave_index – Index of the current wavelength. In [0,36] for GPI. Only used when psfs_func_list is not None.
• residuals – If True (default = False), calculate the residuals of the sat spot fit (gaussian or PSF cube).
• rmbackground – If True (default), remove any background slope to the data stamp.
• add_background2residual – If True (default is False) and if rmbackground was True, add the background that was removed back to the returned residuals.
Returns:
• returned_flux – scalar, estimation of the peak flux of the satellite spot, i.e. amplitude of the fitted gaussian
pyklip.fakes.airyfit2d(frame, xguess, yguess, searchrad=5, guessfwhm=3, guesspeak=1)[source]
Fits a 2d Airy function to the data at point (xguess, yguess)
Parameters: frame – the data - Array of size (y,x). xguess,yguess – location to fit the 2d Airy function to (should be pretty accurate). searchrad – 1/2 the length of the box used for the fit. guessfwhm – approximate fwhm to fit to. Returns: peakflux – the peak flux of the Airy function. fwhm – diameter between the first minima along one axis. xfit – x position. yfit – y position.
pyklip.fakes.convert_pa_to_image_polar(pa, astr_hdr)[source]
Given a position angle (angle to North through East), calculate what polar angle theta (angle from +X CCW towards +Y) it corresponds to
Parameters: pa – position angle in degrees. astr_hdr – wcs astrometry header (astropy.wcs). Returns: theta – polar angle in degrees
pyklip.fakes.convert_polar_to_image_pa(theta, astr_hdr)[source]
Reverse engineered from convert_pa_to_image_polar by JB. Actually JB doesn't quite understand how it works…
Parameters: theta – polar angle in degrees. astr_hdr – wcs astrometry header (astropy.wcs). Returns: position angle in degrees
pyklip.fakes.gauss2d(x0, y0, peak, sigma)[source]
2d symmetric gaussian function for gaussfit2d
Parameters: x0,y0 – center of gaussian. peak – peak amplitude of gaussian. sigma – stddev in both x and y directions
pyklip.fakes.gaussfit2d(frame, xguess, yguess, searchrad=5, guessfwhm=3, guesspeak=1, refinefit=True)[source]
Fits a 2d gaussian to the data at point (xguess, yguess)
Parameters: frame – the data - Array of size (y,x). xguess,yguess – location to fit the 2d gaussian to (should be pretty accurate). searchrad – 1/2 the length of the box used for the fit. guessfwhm – approximate fwhm to fit to. guesspeak – approximate flux. refinefit – whether to refine the fit of the position of the guess. Returns: peakflux – the peak flux of the gaussian. fwhm – fwhm of the PSF in pixels. xfit – x position (only changed if refinefit is True). yfit – y position (only changed if refinefit is True).
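A minimal sketch of gaussfit2d on a synthetic frame. The return order (peakflux, fwhm, xfit, yfit) is inferred from the docstring above and worth verifying against your pyklip version.

```python
import numpy as np
import pyklip.fakes as fakes

# Synthetic frame with one Gaussian spot of FWHM 3.5 at (x, y) = (40, 60).
y, x = np.indices((100, 100))
sigma = 3.5 / (2 * np.sqrt(2 * np.log(2)))
frame = 2.0 * np.exp(-((x - 40)**2 + (y - 60)**2) / (2 * sigma**2))

# Fit starting from a slightly-off guess; refinefit recenters the position.
peakflux, fwhm, xfit, yfit = fakes.gaussfit2d(frame, 41, 59, searchrad=5,
                                              guessfwhm=3, guesspeak=1)
print(peakflux, fwhm, xfit, yfit)  # expect ~2.0, ~3.5, ~40, ~60
```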
pyklip.fakes.gaussfit2dLSQ(frame, xguess, yguess, searchrad=5, fit_centroid=False, residuals=False)[source]
Estimate satellite spot amplitude (peak value) by fitting a symmetric 2d gaussian. Fit parameters: x,y position, amplitude, standard deviation (same in x and y direction)
Parameters: frame – the data - Array of size (y,x). xguess – x location to fit the 2d gaussian to. yguess – y location to fit the 2d gaussian to. searchrad – 1/2 the length of the box used for the fit. fit_centroid – if False (default), disable the centroid fit and only fit the amplitude and the standard deviation. residuals – if True (default = False), calculate the residuals of the sat spot fit (gaussian or PSF cube). Returns: returned_flux – scalar, estimation of the peak flux of the satellite spot, i.e. amplitude of the fitted gaussian.
pyklip.fakes.generate_dataset_with_fakes(dataset, fake_position_dict, fake_flux_dict, spectrum=None, PSF_cube=None, PSF_cube_wvs=None, star_type=None, mute=False, SpT_file_csv=None, real_planets_pos=None, sep_skip_real_pl=None, pa_skip_real_pl=None, dn_per_contrast=None, star_spec=None)[source]
Generate spectral datacubes with fake planets. It makes a copy of the cubes read in GPIData after injecting fake planets into them. This new set of cubes can then be reduced in the same manner as the campaign data.
Doesn't work with removed slices: it assumes that the dataset is made of a list of similar datacubes or images.
Parameters:
- dataset – An object of type GPIData. The fakes are injected directly into dataset, so you should make a copy of dataset prior to running this function. In order for the function to query Simbad for the spectral type of the star, the attribute object_name needs to be defined in dataset.
- fake_position_dict – Dictionary defining the way the fake planets are positioned.
  - fake_position_dict["mode"]="sector": Put a planet in each klip sector. Can actually generate several datasets in which the planets will be shifted in separation and position angle with respect to one another. It can be useful for fake-based contrast curve calculation. Several parameters need to be defined: fake_position_dict["annuli"] (number of annuli in the image), fake_position_dict["subsections"] (number of angular sections in the image), fake_position_dict["sep_shift"] (separation shift from the center of the sectors), and fake_position_dict["pa_shift"] (position angle shift from the center of the sectors).
  - fake_position_dict["mode"]="custom": Put planets at given (separation, position angle). The following parameter needs to be defined: fake_position_dict["pa_sep_list"], a list of tuples [(r1,pa1),(r2,pa2),…] with each tuple giving the separation and position angle of each planet to be injected.
  - fake_position_dict["mode"]="ROC": Generate fakes for ROC curve calculation. Uses hard-coded parameters.
- fake_flux_dict – Dictionary defining the way in which the flux of the fakes is defined.
  - fake_flux_dict["mode"]="contrast": Defines the contrast value of the fakes. fake_flux_dict["contrast"]: contrast of the fake planets.
  - fake_flux_dict["mode"]="SNR": Defines the brightness of the fakes relative to the satellite spots. fake_flux_dict["SNR"]: SNR wished for the fake planets. fake_flux_dict["sep_arr"]: separation sampling of the contrast curve in pixels. fake_flux_dict["contrast_arr"]: 5-sigma contrast curve (planet-to-star ratio).
- PSF_cube – the psf of the image. A numpy array with shape (wv, y, x).
- PSF_cube_wvs – the wavelengths that correspond to the input psfs.
- spectrum – spectrum name (string) or array.
  - "host_star_spec": The spectrum from the star or the satellite spots is directly used. It is derived from the inverse of the calibrate_output() output.
  - "constant": Use a constant spectrum np.ones(nl).
  - other strings: name of the spectrum file in #pykliproot#/spectra/*/ with pykliproot the directory in which pyklip is installed. In that case it should be a spectrum from Mark Marley or one following the same convention. The spectrum will be corrected for transmission.
  - ndarray: 1D array with a user-defined spectrum. The spectrum will be corrected for transmission.
- star_type – Spectral type of the current star. If None, Simbad is queried.
- mute – If True, prevent printed log outputs.
- suffix – Suffix to be added at the end of the filenames.
- SpT_file_csv – Filename of the table (.csv) containing the spectral types of the stars.
- real_planets_pos – list of positions of real point sources in the dataset that should be avoided when injecting fakes: [(sep1,pa1),(sep2,pa2),…] with the separation in pixels and the position angle in degrees.
- sep_skip_real_pl – Limit in separation of how close a fake can be injected near a known GOI.
- pa_skip_real_pl – Limit in position angle of how close a fake can be injected near a known GOI.
- dn_per_contrast – array of the same size as spectrum giving the conversion between the peak flux of a planet in data numbers and its contrast.
- star_spec – 1D array, stellar spectrum sampled at dataset.wvs. Otherwise uses a Pickles spectrum in which the temperature is interpolated from the spectral type.
pyklip.fakes.inject_disk(frames, centers, inputfluxes, astr_hdrs, pa, fwhm=3.5)[source]
Injects a fake disk into a dataset
Parameters: frames – array of (N,y,x) where N is the total number of frames. centers – array of size (N,2) of [x,y] coordinates of the image center. inputfluxes – array of size N of the peak flux of the fake disk in each frame OR array of 2-D models (North up, East left) to inject into the data (the disk is assumed to be centered at the center of the image). astr_hdrs – array of size N of the WCS headers. pa – position angle (in degrees) of the disk plane. fwhm – if injecting a Gaussian disk (i.e. inputfluxes is an array of floats), fwhm of the Gaussian. The result is saved in the input "frames" variable.
pyklip.fakes.inject_planet(frames, centers, inputflux, astr_hdrs, radius, pa, fwhm=3.5, thetas=None, stampsize=None, field_dependent_correction=None)[source]
Injects a fake planet into a dataset either using a Gaussian PSF or an input PSF
Parameters: frames – array of (N,y,x) where N is the total number of frames. centers – array of size (N,2) of [x,y] coordinates of the image center. inputflux – EITHER array of size N of the peak flux of the fake planet in each frame (will inject a Gaussian PSF) OR array of size (N,psfy,psfx) of template PSFs. The brightnesses should be scaled and the PSFs should be centered at the center of each of the template images. astr_hdrs – array of size N of the WCS headers. radius – separation of the planet from the star. pa – position angle (in degrees) of the planet. fwhm – fwhm (in pixels) of the gaussian. thetas – ignore PA, supply own thetas (CCW angle from +x axis toward +y), array of size N. stampsize – in pixels, the width of the square stamp to inject the image into. Defaults to 3*fwhm if None. field_dependent_correction – a function of the form output_stamp = correction(input_stamp, input_dx, input_dy) where input_stamp is a 2-D image of shape (y_stamp, x_stamp), and input_dx and input_dy have the same shape and represent the offset of each pixel from the star (in pixel units). The function returns an output_stamp of the same shape, corrected for any field-dependent throughputs or distortions. The result is saved in the input "frames" variable.
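A minimal sketch of injecting Gaussian fakes into blank frames. Supplying thetas bypasses the WCS-based PA conversion, so the astrometry headers are passed as None here; that None is accepted in this case is an assumption worth verifying against your pyklip version.

```python
import numpy as np
import pyklip.fakes as fakes

# Three blank frames sharing a common center at (140, 140).
N = 3
frames = np.zeros((N, 281, 281))
centers = np.array([[140.0, 140.0]] * N)

# Gaussian fakes of peak flux 1e-4, 30 pix from the star at theta = 45 deg.
# thetas overrides pa, so pa and the WCS headers go unused (assumption).
fakes.inject_planet(frames, centers, np.full(N, 1e-4), [None] * N,
                    radius=30, pa=None, fwhm=3.5, thetas=np.full(N, 45.0))

print(frames.max())  # inject_planet modifies `frames` in place
```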
pyklip.fakes.retrieve_planet(frames, centers, astr_hdrs, sep, pa, searchrad=7, guessfwhm=3.0, guesspeak=1, refinefit=True, thetas=None)[source]
Retrieves the planet properties from a series of frames given a separation and PA
Parameters: frames – frames of data to retrieve the planet from. Can be a single 2-D image ([y,x]) or a series/cube ([N,y,x]). centers – coordinates of the image center. Can be a [2]-element list or an array of [N,2] matching the array of frames. astr_hdrs – astrometry headers; can be a single one or an array of N of them. sep – radial distance in pixels. PA – parallactic angle in degrees. searchrad – 1/2 the length of the box used for the fit. guessfwhm – approximate fwhm to fit to. guesspeak – approximate flux. refinefit – whether or not to refine the positioning of the planet. thetas – ignore PA, supply own thetas (CCW angle from +x axis toward +y), single number or array of size N. Returns: measured – (peakflux, x, y, fwhm). A single tuple if one frame is passed in; otherwise an array of tuples.
pyklip.fakes.retrieve_planet_flux(frames, centers, astr_hdrs, sep, pa, searchrad=7, guessfwhm=3.0, guesspeak=1, refinefit=False, thetas=None)[source]
Retrieves the planet flux from a series of frames given a separation and PA
Parameters: frames – frames of data to retrieve the planet from. Can be a single 2-D image ([y,x]) or a series/cube ([N,y,x]). centers – coordinates of the image center. Can be a [2]-element list or an array of [N,2] matching the array of frames. astr_hdrs – astrometry headers; can be a single one or an array of N of them. sep – radial distance in pixels. PA – parallactic angle in degrees. searchrad – 1/2 the length of the box used for the fit. guessfwhm – approximate fwhm to fit to. guesspeak – approximate flux. refinefit – whether or not to refine the positioning of the planet. thetas – ignore PA, supply own thetas (CCW angle from +x axis toward +y), single number or array of size N. Returns: peakflux – either a single peak flux or an array, depending on whether a single frame or multiple frames were passed in.
## pyklip.fitpsf module
class pyklip.fitpsf.FMAstrometry(guess_sep, guess_pa, fitboxsize, method='mcmc')[source]
Specifically for fitting astrometry of a directly imaged companion relative to its star. Extension of pyklip.fitpsf.FitPSF.
Parameters: guess_sep – the guessed separation (pixels) guess_pa – the guessed position angle (degrees) fitboxsize – fitting box side length (pixels) method (str) – either ‘mcmc’ or ‘maxl’ depending on framework you want. Defaults to ‘mcmc’.
guess_sep
(initialization) guess separation for planet [pixels]
Type: float
guess_pa
(initialization) guess PA for planet [degrees]
Type: float
guess_RA_offset
(initialization) guess RA offset [pixels]
Type: float
guess_Dec_offset
(initialization) guess Dec offset [pixels]
Type: float
raw_RA_offset
(result) the raw result from the MCMC fit for the planet’s location [pixels]
raw_Dec_offset
(result) the raw result from the MCMC fit for the planet’s location [pixels]
raw_flux
(result) factor to scale the FM to match the flux of the data
covar_params
(result) hyperparameters for the Gaussian process
Type: list of pyklip.fitpsf.ParamRange
raw_sep
(result) the inferred raw result from the MCMC fit for the planet’s location [pixels]
raw_PA
(result) the inferred raw result from the MCMC fit for the planet’s location [degrees]
RA_offset
(result) the RA offset of the planet that includes all astrometric errors [pixels or mas]
Dec_offset
(result) the Dec offset of the planet that includes all astrometric errors [pixels or mas]
sep
(result) the separation of the planet that includes all astrometric errors [pixels or mas]
PA
(result) the PA of the planet that includes all astrometric errors [degrees]
fm_stamp
(fitting) The 2-D stamp of the forward model (centered at the nearest pixel to the guessed location)
Type: np.array
data_stamp
(fitting) The 2-D stamp of the data (centered at the nearest pixel to the guessed location)
Type: np.array
noise_map
(fitting) The 2-D stamp of the noise for each pixel in the data, computed assuming azimuthally similar noise
Type: np.array
padding
amount of pixels on one side to pad the data/forward model stamp
Type: int
sampler
an instance of the emcee EnsembleSampler. Only for the Bayesian fit. See the emcee docs for more details.
Type: emcee.EnsembleSampler
fit_astrometry(nwalkers=100, nburn=200, nsteps=800, save_chain=True, chain_output='bka-chain.pkl', numthreads=None)[source]
Fits the PSF of the planet in either a frequentist or Bayesian way depending on initialization.
Parameters: nwalkers – number of walkers (mcmc-only). nburn – number of burn-in samples for each walker (mcmc-only). nsteps – number of samples each walker takes (mcmc-only). save_chain – if True, save the output in a pickled file (mcmc-only). chain_output – filename to output the chain to (mcmc-only). numthreads – number of threads to use (mcmc-only)
Returns:
generate_data_stamp(data, data_center, data_wcs=None, noise_map=None, dr=4, exclusion_radius=10)[source]
Generate a stamp of the data_stamp, ~centered on the planet, and also the corresponding noise map
Parameters: data – the final collapsed data_stamp (2-D). data_center – location of the star in the data_stamp. data_wcs – sky angles WCS object, to rotate the image properly [NOT YET IMPLEMENTED]; if None, data_stamp is already rotated North up, East left. noise_map – if not None, noise map for each pixel in the data_stamp (2-D); if None, one will be generated assuming azimuthal noise using an annulus width of dr. dr – width of the annulus in pixels from which the noise map will be generated. exclusion_radius – radius around the guessed planet location which doesn't get factored into the noise estimate
Returns:
generate_fm_stamp(fm_image, fm_center, fm_wcs=None, extract=True, padding=5)[source]
Generates a stamp of the forward model and stores it in self.fm_stamp :param fm_image: full image containing the fm_stamp :param fm_center: [x,y] center of the image (assuming fm_stamp is located at the sep/PA corresponding to guess_sep and guess_pa) :param fm_wcs: if not None, specifies the sky angles in the image. If None, assume the image is North up, East left :param extract: if True, need to extract the forward model from the image. Otherwise, assume the fm_stamp is already centered in the frame (fm_image.shape // 2)
Parameters: padding – number of pixels on each side in addition to the fitboxsize to extract to pad the fm_stamp (should be >= 1)
Returns:
propogate_errs(star_center_err=None, platescale=None, platescale_err=None, pa_offset=None, pa_uncertainty=None)[source]
Propagate astrometric errors. Stores results in its own fields
Parameters: star_center_err (float) – uncertainty of the star location (pixels). platescale (float) – mas/pix conversion to angular coordinates. platescale_err (float) – mas/pix error on the platescale. pa_offset (float) – offset, in the same direction as position angle, to set North up (degrees). pa_uncertainty (float) – error on the position angle/true North calibration (degrees)
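Putting the FMAstrometry pieces above together, here is a hedged end-to-end sketch on purely synthetic stamps. The (x, y) placement of the fake source assumes a North-up, East-left convention, and the platescale numbers are illustrative GPI-like values, not prescriptions.

```python
import numpy as np
import pyklip.fitpsf as fitpsf

# Synthetic "forward model" and "data": a Gaussian source ~30.1 pix from
# the star at PA ~ 120.5 deg, assuming North up (+y) and East left (-x).
guess_sep, guess_pa, cen = 30.1, 120.5, 140.0
y, x = np.indices((281, 281))
xs = cen - guess_sep * np.sin(np.radians(guess_pa))
ys = cen + guess_sep * np.cos(np.radians(guess_pa))
psf = np.exp(-((x - xs)**2 + (y - ys)**2) / (2 * 1.5**2))
fm_frame = 1e-4 * psf
data_frame = fm_frame + 1e-6 * np.random.randn(281, 281)

fma = fitpsf.FMAstrometry(guess_sep, guess_pa, fitboxsize=17)
fma.generate_fm_stamp(fm_frame, [cen, cen], padding=5)
fma.generate_data_stamp(data_frame, [cen, cen], dr=4, exclusion_radius=10)

# Matern-3/2 kernel with a single correlation-length hyperparameter.
fma.set_kernel("matern32", [3.0], ["corr_len"])
# +/-1.5 pix in x/y, one decade in flux, one decade on the hyperparameter.
fma.set_bounds(1.5, 1.5, 1.0, [1.0])

# Short chains keep this sketch fast; real fits use the documented defaults.
fma.fit_astrometry(nwalkers=50, nburn=100, nsteps=200, numthreads=1)
fma.propogate_errs(star_center_err=0.05, platescale=14.166,  # mas/pix, illustrative
                   platescale_err=0.007, pa_offset=-0.1, pa_uncertainty=0.13)
print(fma.sep.bestfit, fma.sep.error, fma.PA.bestfit, fma.PA.error)
```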
class pyklip.fitpsf.FitPSF(fitboxsize, method='mcmc')[source]
Bases: object
Base class to perform astrometry on direct imaging data_stamp using GP regression. Can utilize a Bayesian framework with MCMC or a frequentist framework with least squares.
Parameters: fitboxsize – fitting box side length (pixels). method (str) – either 'mcmc' or 'maxl' depending on the framework you want. Defaults to 'mcmc'. fmt (str) – either 'seppa' or 'xy' depending on how you want to input the guess coordinates
guess_x
(initialization) guess x position [pixels]
Type: float
guess_y
(initialization) guess y position [pixels]
Type: float
guess_flux
guess scale factor between model and data
Type: float
fit_x
(result) the result from the MCMC fit for the planet’s location [pixels]
fit_y
(result) the result from the MCMC fit for the planet’s location [pixels]
fit_flux
(result) factor to scale the FM to match the flux of the data
covar_params
(result) hyperparameters for the Gaussian process
Type: list of pyklip.fitpsf.ParamRange
fm_stamp
(fitting) The 2-D stamp of the forward model (centered at the nearest pixel to the guessed location)
Type: np.array
data_stamp
(fitting) The stamp of the data (centered at the nearest pixel to the guessed location) (2-D unless there were NaNs, in which case it is 1-D)
Type: np.array
noise_map
(fitting) The stamp of the noise for each pixel in the data, computed assuming azimuthally similar noise (same dim as data_stamp)
Type: np.array
padding
amount of pixels on one side to pad the data/forward model stamp
Type: int
sampler
an instance of the emcee EnsembleSampler. Only for the Bayesian fit. See the emcee docs for more details.
Type: emcee.EnsembleSampler
best_fit_and_residuals(fig=None)[source]
Generate a plot of the best fit FM compared with the data_stamp and also the residuals :param fig: if not None, a matplotlib Figure object :type fig: matplotlib.Figure
Returns: fig (matplotlib.Figure) – the Figure object. If the input fig is None, the function will make a new one.
fit_psf(nwalkers=100, nburn=200, nsteps=800, save_chain=True, chain_output='bka-chain.pkl', numthreads=None)[source]
Fits the PSF to the data in either a frequentist or Bayesian way depending on initialization.
Parameters: nwalkers – number of walkers (mcmc-only). nburn – number of burn-in samples for each walker (mcmc-only). nsteps – number of samples each walker takes (mcmc-only). save_chain – if True, save the output in a pickled file (mcmc-only). chain_output – filename to output the chain to (mcmc-only). numthreads – number of threads to use (mcmc-only)
Returns:
generate_data_stamp(data, guess_loc, noise_map, radial_noise_center=None, dr=4, exclusion_radius=10)[source]
Generate a stamp of the data_stamp, ~centered on the planet, and also the corresponding noise map :param data: the final collapsed data_stamp (2-D) :param guess_loc: guess location of where to fit the model in the data :param noise_map: if not None, noise map for each pixel (either same shape as input data, or shape of data stamp); if None, one will be generated assuming azimuthal noise using an annulus width of dr, in which case radial_noise_center MUST be defined.
Parameters: radial_noise_center – if we assume the noise is azimuthally symmetric and changes radially, this is the [x,y] center for it. dr – width of the annulus in pixels from which the noise map will be generated. exclusion_radius – radius around the guessed planet location which doesn't get factored into the radial noise estimate
Returns:
generate_fm_stamp(fm_image, fm_pos=None, fm_wcs=None, extract=True, padding=5)[source]
Generates a stamp of the forward model and stores it in self.fm_stamp :param fm_image: full image containing the fm_stamp :param fm_pos: [x,y] location of the forward model in the fm_image :param fm_wcs: if not None, specifies the sky angles in the image. If None, assume the image is North up, East left :param extract: if True, need to extract the forward model from the image. Otherwise, assume the fm_stamp is already centered in the frame (fm_image.shape // 2)
Parameters: padding – number of pixels on each side in addition to the fitboxsize to extract to pad the fm_stamp (should be >= 1)
Returns:
guess_flux
make_corner_plot(fig=None)[source]
Generate a corner plot of the posteriors from the MCMC :param fig: if not None, a matplotlib Figure object
Returns: fig – the Figure object. If the input fig is None, the function will make a new one.
set_bounds(dx, dy, df, covar_param_bounds, read_noise_bounds=None)[source]
Set bounds on the Bayesian priors. All parameters can be a 2-element tuple/list/array that specifies the lower and upper bounds x_min < x < x_max, or a single value whose interpretation is specified below. If you are passing in both lower and upper bounds, both should be in linear scale! :param dx: Distance from the initial guess position in pixels. For a single value, this specifies the largest distance from the initial guess (i.e. x_guess - dx < x < x_guess + dx)
Parameters: dy – same as dx except with y. df – flux range. If a single value, specifies how many orders of 10 the flux factor can span in one direction (i.e. log_10(guess_flux) - df < log_10(flux) < log_10(guess_flux) + df). covar_param_bounds – params for the covariance matrix. Like df, a single value specifies how many orders of magnitude the parameter can span. Otherwise, should be a list of 2-element tuples. read_noise_bounds – param for the read noise term. If a single value, specifies how close to 0 it can go based on powers of 10 (i.e. log_10(-read_noise_bound) < read_noise < 1)
Returns:
set_kernel(covar, covar_param_guesses, covar_param_labels, include_readnoise=False, read_noise_fraction=0.01)[source]
Set the Gaussian process kernel used in our fit
Parameters: covar – covariance kernel for GP regression. If a string, can be "matern32" or "sqexp" or "diag". Can also be a function: cov = cov_function(x_indices, y_indices, sigmas, cov_params). covar_param_guesses – a list of guesses for the hyperparameters (size of N_hyperparams). This can be an empty list for 'diag'. covar_param_labels – a list of strings labelling each covariance parameter. include_readnoise – if True, part of the noise is a purely diagonal term (i.e. read/photon noise). read_noise_fraction – fraction of the total measured noise that is read noise (between 0 and 1)
Returns:
class pyklip.fitpsf.ParamRange(bestfit, err_range)[source]
Bases: object
Stores the best fit value and uncertainties for a parameter in a neat fashion
Parameters: bestfit (float) – the best fit value. err_range – either a float or a 2-element tuple (+val1, -val2) giving the 1-sigma range
bestfit
the bestfit value
Type: float
error
the average 1-sigma error
Type: float
error_2sided
[+error1, -error2] 2-element array with asymmetric errors
Type: np.array
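A small illustration of how a ParamRange is meant to be read; the input and output sign conventions are inferred from the attribute descriptions above and worth double-checking.

```python
import pyklip.fitpsf as fitpsf

# Best fit of 30.2 with asymmetric 1-sigma errors of +0.3 / -0.2
# (tuple written as (+val1, -val2), per the docstring above).
p = fitpsf.ParamRange(30.2, (0.3, -0.2))
print(p.bestfit)       # 30.2
print(p.error)         # average 1-sigma error
print(p.error_2sided)  # [+error1, -error2] (assumed sign convention)
```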
class pyklip.fitpsf.PlanetEvidence(guess_sep, guess_pa, fitboxsize, sampling_outputdir, l_only=False, fm_basename='Planet', null_basename='Null')[source]
Specifically for nested sampling of the parameter space of a directly imaged companion relative to its star. Extension of pyklip.fitpsf.FitPSF.
Parameters: guess_sep – the guessed separation (pixels). guess_pa – the guessed position angle (degrees). fitboxsize – fitting box side length (pixels). fm_basename – prefix of the forward model sampling files multinest saves in /chains/. null_basename – prefix of the null hypothesis model sampling files multinest saves in /chains/
guess_sep
(initialization) guess separation for planet [pixels]
Type: float
guess_pa
(initialization) guess PA for planet [degrees]
Type: float
guess_RA_offset
(initialization) guess RA offset [pixels]
Type: float
guess_Dec_offset
(initialization) guess Dec offset [pixels]
Type: float
raw_RA_offset
(result) the raw result from the MCMC fit for the planet’s location [pixels]
raw_Dec_offset
(result) the raw result from the MCMC fit for the planet’s location [pixels]
raw_flux
(result) factor to scale the FM to match the flux of the data
covar_params
(result) hyperparameters for the Gaussian process
Type: list of pyklip.fitpsf.ParamRange
raw_sep
(result) the inferred raw result from the MCMC fit for the planet’s location [pixels]
raw_PA
(result) the inferred raw result from the MCMC fit for the planet’s location [degrees]
RA_offset
(result) the RA offset of the planet that includes all astrometric errors [pixels or mas]
Dec_offset
(result) the Dec offset of the planet that includes all astrometric errors [pixels or mas]
sep
(result) the separation of the planet that includes all astrometric errors [pixels or mas]
PA
(result) the PA of the planet that includes all astrometric errors [degrees]
fm_stamp
(fitting) The 2-D stamp of the forward model (centered at the nearest pixel to the guessed location)
Type: np.array
data_stamp
(fitting) The 2-D stamp of the data (centered at the nearest pixel to the guessed location)
Type: np.array
noise_map
(fitting) The 2-D stamp of the noise for each pixel in the data, computed assuming azimuthally similar noise
Type: np.array
padding
amount of pixels on one side to pad the data/forward model stamp
Type: int
sampler
function that runs the pymultinest sampling for both hypotheses
Type: pymultinest.run
fit_plots()[source]
fit_stats()[source]
fm_residuals()[source]
multifit()[source]
Nested sampling parameter estimation and evidence calculation for the forward model and correlated noise.
nested_corner_plots(posts, n_dim)[source]
pyklip.fitpsf.lnlike(fitparams, fma, cov_func, readnoise=False, negate=False)[source]
Likelihood function :param fitparams: array of params (size N). The first three are [dRA, dDec, f]; additional parameters are GP hyperparams. dRA, dDec: RA, Dec offsets from the star. Also coordinates in self.data_{RA,Dec}_offset. f: flux scale factor to normalize the flux of the data_stamp to the model
Parameters: fma (FMAstrometry) – an FMAstrometry object that has been fully set up to run. cov_func (function) – function that, given an input [x,y] coordinate array, returns the covariance matrix, e.g. cov = cov_function(x_indices, y_indices, sigmas, cov_params). readnoise (bool) – if True, the last fitparam fits for diagonal noise. negate (bool) – if True, negate the probability (used for minimization algos). Returns: likeli – log of the likelihood function (minus a constant factor)
pyklip.fitpsf.lnprior(fitparams, bounds, readnoise=False, negate=False)[source]
Bayesian prior
Parameters: fitparams – array of params (size N). bounds – array of (N,2) with the corresponding lower and upper bound of params: bounds[i,0] <= fitparams[i] < bounds[i,1]. readnoise (bool) – if True, the last fitparam fits for diagonal noise. negate (bool) – if True, negate the probability (used for minimization algos). Returns: prior – 0 if inside the bound ranges, -inf if outside
pyklip.fitpsf.lnprob(fitparams, fma, bounds, cov_func, readnoise=False, negate=False)[source]
Function to compute the relative posterior probability. Product of the likelihood and the prior. :param fitparams: array of params (size N). The first three are [dRA, dDec, f]; additional parameters are GP hyperparams. dRA, dDec: RA, Dec offsets from the star. Also coordinates in self.data_{RA,Dec}_offset. f: flux scale factor to normalize the flux of the data_stamp to the model
Parameters: fma – an FMAstrometry object that has been fully set up to run. bounds – array of (N,2) with the corresponding lower and upper bound of params: bounds[i,0] <= fitparams[i] < bounds[i,1]. cov_func – function that, given an input [x,y] coordinate array, returns the covariance matrix, e.g. cov = cov_function(x_indices, y_indices, sigmas, cov_params). readnoise (bool) – if True, the last fitparam fits for diagonal noise. negate (bool) – if True, negate the probability (used for minimization algos)
Returns:
pyklip.fitpsf.quick_psf_fit(data, psf, x_guess, y_guess, fitboxsize)[source]
A wrapper for a quick maximum likelihood fit to a PSF to the data.
Parameters: data (np.array) – 2-D data frame. psf (np.array) – 2-D PSF template. This should be smaller than the size of data and larger than the fitboxsize. x_guess (float) – approximate x position of the location you are fitting the psf to. y_guess (float) – approximate y position of the location you are fitting the psf to. fitboxsize (int) – the fitting region is a square; this is the length of one side of the square. Returns: x_fit (float) – x position. y_fit (float) – y position. flux_fit (float) – multiplicative scale factor for the psf to match the data
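A minimal sketch of quick_psf_fit with a synthetic frame and template; the shapes respect the documented constraint that the PSF is smaller than the data and larger than fitboxsize.

```python
import numpy as np
import pyklip.fitpsf as fitpsf

def gauss2d(shape, x0, y0, sigma=1.5):
    """Simple Gaussian image used as both the 'data' source and PSF template."""
    y, x = np.indices(shape)
    return np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

data = 3.0 * gauss2d((101, 101), 60.3, 40.7)  # source at (60.3, 40.7)
psf = gauss2d((31, 31), 15.0, 15.0)           # centered 31x31 template

x_fit, y_fit, flux_fit = fitpsf.quick_psf_fit(data, psf, 60, 41, fitboxsize=15)
print(x_fit, y_fit, flux_fit)  # expect ~60.3, ~40.7, ~3.0
```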
## pyklip.fm module
pyklip.fm.calculate_fm(delta_KL_nospec, original_KL, numbasis, sci, model_sci, inputflux=None)[source]
Calculate what the PSF looks like post-KLIP using knowledge of the input PSF, the assumed spectrum of the science target, and the partially calculated KL modes (Delta Z_k^lambda in Laurent's paper). If inputflux is None, the spectral dependence has already been folded into delta_KL_nospec (treat it as delta_KL).
Note: if inputflux is None and delta_KL_nospec has three dimensions (i.e. delta_KL_nospec was calculated using pertrurb_nospec() or perturb_nospec_modelsBased()), then only klipped_oversub and klipped_selfsub are returned, and they will have an extra first spectral dimension.
Parameters: delta_KL_nospec – perturbed KL modes but without the spectral info. delta_KL = spectrum x delta_KL_nospec. Shape is (numKL, wv, pix). If inputflux is None, delta_KL_nospec = delta_KL. original_KL – unperturbed KL modes (array of size [numbasis, numpix]). numbasis – array of KL mode cutoffs. If numbasis is [None], the number of KL modes to be used is automatically picked based on the eigenvalues. sci – array of size p representing the science data. model_sci – array of size p corresponding to the PSF of the science frame. input_spectrum – array of size wv with the assumed spectrum of the model.
If delta_KL_nospec does NOT include a spectral dimension or if inputflux is not None, returns: fm_psf – array of shape (b,p) showing the forward modelled PSF (skipped if inputflux = None and delta_KL_nospec has 3 dimensions). klipped_oversub – array of shape (b,p) showing the effect of oversubtraction as a function of KL modes. klipped_selfsub – array of shape (b,p) showing the effect of selfsubtraction as a function of KL modes. Note: psf_FM = model_sci - klipped_oversub - klipped_selfsub to get the FM psf as a function of KL modes.
If inputflux = None and delta_KL_nospec includes a spectral dimension, returns: klipped_oversub – Sum(<S|KL>KL) with klipped_oversub.shape = (size(numbasis), Npix). klipped_selfsub – Sum(<N|DKL>KL) + Sum(<N|KL>DKL) with klipped_selfsub.shape = (size(numbasis), N_lambda or N_ref, N_pix).
pyklip.fm.calculate_fm_singleNumbasis(delta_KL_nospec, original_KL, numbasis, sci, model_sci, inputflux=None)[source]
Same function as calculate_fm() but faster when numbasis has only one element. It doesn't do the multiplication with the triangular matrix.
Calculate what the PSF looks like post-KLIP using knowledge of the input PSF, the assumed spectrum of the science target, and the partially calculated KL modes (Delta Z_k^lambda in Laurent's paper). If inputflux is None, the spectral dependence has already been folded into delta_KL_nospec (treat it as delta_KL).
Note: if inputflux is None and delta_KL_nospec has three dimensions (i.e. delta_KL_nospec was calculated using pertrurb_nospec() or perturb_nospec_modelsBased()), then only klipped_oversub and klipped_selfsub are returned, and they will have an extra first spectral dimension.
Parameters: delta_KL_nospec – perturbed KL modes but without the spectral info. delta_KL = spectrum x delta_KL_nospec. Shape is (numKL, wv, pix). If inputflux is None, delta_KL_nospec = delta_KL. original_KL – unperturbed KL modes (array of size [numbasis, numpix]). numbasis – array of (ONE ELEMENT ONLY) KL mode cutoffs. If numbasis is [None], the number of KL modes to be used is automatically picked based on the eigenvalues. sci – array of size p representing the science data. model_sci – array of size p corresponding to the PSF of the science frame. input_spectrum – array of size wv with the assumed spectrum of the model.
If delta_KL_nospec does NOT include a spectral dimension or if inputflux is not None, returns: fm_psf – array of shape (b,p) showing the forward modelled PSF (skipped if inputflux = None and delta_KL_nospec has 3 dimensions). klipped_oversub – array of shape (b,p) showing the effect of oversubtraction as a function of KL modes. klipped_selfsub – array of shape (b,p) showing the effect of selfsubtraction as a function of KL modes. Note: psf_FM = model_sci - klipped_oversub - klipped_selfsub to get the FM psf as a function of KL modes.
If inputflux = None and delta_KL_nospec includes a spectral dimension, returns: klipped_oversub – Sum(<S|KL>KL) with klipped_oversub.shape = (size(numbasis), Npix). klipped_selfsub – Sum(<N|DKL>KL) + Sum(<N|KL>DKL) with klipped_selfsub.shape = (size(numbasis), N_lambda or N_ref, N_pix).
pyklip.fm.calculate_validity(covar_perturb, models_ref, numbasis, evals_orig, covar_orig, evecs_orig, KL_orig, delta_KL)[source]
Calculate the validity of the perturbation based on the eigenvalues, or on the 2nd-order term compared to the 0th-order term of the covariance matrix expansion
Parameters: covar_perturb – linear expansion of the perturbed covariance matrix (C_AS). Shape of N x N. models_ref – N x p array of the N models corresponding to the reference images. Each model should contain spectral information. numbasis – array of KL mode cutoffs. evecs_orig – size of [N, maxKL]. delta_KL – perturbed KL modes. Shape is (numKL, wv, pix).
pyklip.fm.find_id_nearest(array, value)[source]
Find index of the closest value in input array to input value :param array: 1D array :param value: scalar value
Returns: Index of the nearest value in array
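For example, picking the wavelength slice closest to a requested wavelength (the sampling grid here is illustrative):

```python
import numpy as np
import pyklip.fm as fm

wvs = np.linspace(1.1, 1.8, 37)      # e.g. a 37-channel IFS wavelength grid
idx = fm.find_id_nearest(wvs, 1.25)  # index of the sample closest to 1.25
print(idx, wvs[idx])
```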
pyklip.fm.klip_dataset(dataset, fm_class, mode='ADI+SDI', outputdir='.', fileprefix='pyklipfm', annuli=5, subsections=4, OWA=None, N_pix_sector=None, movement=None, flux_overlap=0.1, PSF_FWHM=3.5, minrot=0, padding=0, numbasis=None, maxnumbasis=None, numthreads=None, corr_smooth=1, calibrate_flux=False, aligned_center=None, psf_library=None, spectrum=None, highpass=False, annuli_spacing='constant', save_klipped=True, mute_progression=False, time_collapse='mean')[source]
Run KLIP-FM on a dataset object
Parameters:
- dataset – an instance of Instrument.Data (see instruments/ subfolder)
- fm_class – class that implements the forward modelling functionality
- mode – some combination of ADI, SDI, and RDI (e.g. "ADI+SDI", "RDI"). Note that not all FM classes support RDI.
- annuli – annuli to use for KLIP. Can be a number, or a list of 2-element tuples (a, b) specifying the pixel boundaries (a <= r < b) for each annulus
- subsections – sections to break each annulus into. Can be a number [integer], or a list of 2-element tuples (a, b) specifying the position angle boundaries (a <= PA < b) for each section [radians]
- OWA – if defined, the outer working angle for pyklip. Otherwise, it will pick it as the closest distance to a nan in the first frame
- N_pix_sector – rough number of pixels in a sector. Overrides subsections and makes it separation dependent. The number of subsections is defined such that the number of pixels is just higher than N_pix_sector, i.e. subsections = floor(pi*(r_max^2-r_min^2)/N_pix_sector). Warning: there is a bug if N_pix_sector is too big for the first annulus. The annulus is defined from 0 to 2pi, which creates a bug later on. It is probably in the way pa_start and pa_end are defined in fm_from_eigen(). (I am talking about the matched filter, by the way.)
- movement – minimum amount of movement (in pixels) of an astrophysical source to consider using that image for a reference PSF
- flux_overlap – maximum fraction of flux overlap between a slice and any reference frames included in the covariance matrix. flux_overlap should be used instead of "movement" when a template spectrum is used. However, if movement is not None then the old code is used and flux_overlap is ignored. The overlap is estimated for 1D gaussians with FWHM defined by PSF_FWHM, so note that the overlap is not exactly the overlap of two real 2D PSFs for a given instrument, but it will behave similarly.
- PSF_FWHM – FWHM of the PSF used to calculate the overlap (cf. flux_overlap). Default is FWHM = 3.5, corresponding to sigma ~ 1.5.
- minrot – minimum PA rotation (in degrees) to be considered for use as a reference PSF (good for disks)
- padding – for each sector, how many extra pixels of padding should we have around the sides
- numbasis – number of KL basis vectors to use (can be a scalar or list-like). Length of b. If numbasis is [None], the number of KL modes to be used is automatically picked based on the eigenvalues.
- maxnumbasis – number of KL modes to be calculated, from which numbasis modes will be taken
- corr_smooth (float) – size of sigma of the Gaussian smoothing kernel (in pixels) when computing the most correlated PSFs. If 0, no smoothing
- numthreads – number of threads to use. If None, defaults to using all the cores of the cpu
- calibrate_flux – if True, flux-calibrates the regular KLIP-subtracted data. DOES NOT CALIBRATE THE FM
- aligned_center – array of 2 elements [x,y] that all the KLIP-subtracted images will be centered on for image registration
- psf_library – a rdi.PSFLibrary object with a PSF library for RDI
- spectrum – (only applicable for SDI) if not None, optimizes the choice of the reference PSFs based on the spectrum shape. An array: of length N with the flux of the template spectrum at each wavelength. A string: currently only supports "methane" between 1 and 10 microns. Uses minmove to determine the separation from the center of the segment to determine contamination and the size of the PSF (TODO: make PSF size another quantity) (e.g. minmove=3 checks how much contamination is within 3 pixels of the hypothetical source); if smaller than 10% (hard-coded quantity), then use it for a reference PSF
- highpass – if True, run a Gaussian high-pass filter (default size is sigma=imgsize/10); can also be a number specifying the FWHM of the box in pixel units
- annuli_spacing – how to distribute the annuli radially. Currently three options: constant (equally spaced), log (logarithmic expansion with r), and linear (linear expansion with r)
- save_klipped – if True, will save the regular klipped image. If False, it will not, and sub_imgs will return None
- mute_progression – mute the printing of the progression percentage. Indeed, sometimes the overwriting feature doesn't work and one ends up with thousands of printed lines, so muting it can be a good idea.
- time_collapse – how to collapse the data in time. Currently supports: "mean", "weighted-mean"
pyklip.fm.klip_math(sci, refs, numbasis, covar_psfs=None, model_sci=None, models_ref=None, spec_included=False, spec_from_model=False)[source]
linear algebra of KLIP with linear perturbation of disks and point sources
Parameters: sci – array of length p containing the science data. refs – N x p array of the N reference images that characterize the extended source with p pixels. numbasis – number of KLIP basis vectors to use (can be an int or an array of ints of length b). If numbasis is [None], the number of KL modes to be used is automatically picked based on the eigenvalues. covar_psfs – covariance matrix of the reference images (useful for large N). Normalized following the numpy normalization in the np.cov documentation. The following arguments must all be passed in together, or none of them, for klip_math to work: models_ref – N x p array of the N models corresponding to the reference images. Each model should be normalized to unity (no flux information). model_sci – array of size p corresponding to the PSF of the science frame. Sel_wv – wv x N array of the corresponding wavelength for each reference PSF. input_spectrum – array of size wv with the assumed spectrum of the model. Returns: sub_img_rows_selected – array of shape (p,b) that is the PSF-subtracted data for each of the b KLIP basis cutoffs. If numbasis was an int, then sub_img_rows_selected is just an array of length p. KL_basis – array of KL basis (shape of [numbasis, p]). If models_ref is passed in (not None): delta_KL_nospec – array of shape (b, wv, p) that is the almost-perturbed KL modes, just missing spectral info. Otherwise: evals – array of eigenvalues (size of the max number of KL basis requested, aka nummaxKL). evecs – array of corresponding eigenvectors (shape of [p, nummaxKL]).
pyklip.fm.klip_parallelized(imgs, centers, parangs, wvs, IWA, fm_class, OWA=None, mode='ADI+SDI', annuli=5, subsections=4, movement=None, flux_overlap=0.1, PSF_FWHM=3.5, numbasis=None, maxnumbasis=None, corr_smooth=1, aligned_center=None, numthreads=None, minrot=0, maxrot=360, spectrum=None, psf_library=None, psf_library_good=None, psf_library_corr=None, padding=0, save_klipped=True, flipx=True, N_pix_sector=None, mute_progression=False, annuli_spacing='constant', compute_noise_cube=False)[source]
multithreaded KLIP PSF Subtraction
pyklip.fm.pertrurb_nospec(evals, evecs, original_KL, refs, models_ref)[source]
Perturb the KL modes using a model of the PSF but with no assumption on the spectrum. Useful for planets.
By no assumption on the spectrum, it means that the spectrum has been factored out of Delta_KL following equation (4) of Laurent Pueyo 2016, noted bold "Delta Z_k^lambda (x)". In order to get the actual perturbed KL modes, one needs to multiply it by a spectrum.
This function fits each cube’s spectrum independently. So the effective spectrum size is N_wavelengths * N_cubes.
Parameters: evals – array of eigenvalues of the reference PSF covariance matrix (array of size numbasis). evecs – corresponding eigenvectors (array of size [p, numbasis]). original_KL – unperturbed KL modes (array of size [numbasis, p]). Sel_wv – wv x N array of the corresponding wavelength for each reference PSF. refs – N x p array of the N reference images that characterize the extended source with p pixels. models_ref – N x p array of the N models corresponding to the reference images. Each model should be normalized to unity (no flux information). model_sci – array of size p corresponding to the PSF of the science frame. Returns: delta_KL_nospec – perturbed KL modes but without the spectral info. delta_KL = spectrum x delta_KL_nospec. Shape is (numKL, wv, pix).
pyklip.fm.perturb_nospec_modelsBased(evals, evecs, original_KL, refs, models_ref_list)[source]
Perturb the KL modes using a model of the PSF but with no assumption on the spectrum. Useful for planets.
By no assumption on the spectrum, it means that the spectrum has been factored out of Delta_KL following equation (4) of Laurent Pueyo 2016, noted bold "Delta Z_k^lambda (x)". In order to get the actual perturbed KL modes, one needs to multiply it by a spectrum.
Effectively does the same thing as pertrurb_nospec() but in a different way. It injects models with a Dirac spectrum (all but one wavelength vanishing) and, because of the linearity of the problem, allows one to reconstruct the perturbed KL mode for any spectrum. The difference with the pertrurb_nospec() case is that the spectrum here is assumed to be the same for all cubes, while pertrurb_nospec() fits each cube independently.
Parameters: evals, evecs, original_KL, refs, models_ref_list – see pertrurb_nospec() for descriptions. Returns: delta_KL_nospec
pyklip.fm.perturb_specIncluded(evals, evecs, original_KL, refs, models_ref, return_perturb_covar=False)[source]
Perturb the KL modes using a model of the PSF but with the spectrum included in the model. Quicker than the others
Parameters: evals – array of eigenvalues of the reference PSF covariance matrix (array of size numbasis). evecs – corresponding eigenvectors (array of size [p, numbasis]). original_KL – unperturbed KL modes (array of size [numbasis, p]). refs – N x p array of the N reference images that characterize the extended source with p pixels. models_ref – N x p array of the N models corresponding to the reference images. Each model should contain spectral information. model_sci – array of size p corresponding to the PSF of the science frame. Returns: delta_KL_nospec – perturbed KL modes. Shape is (numKL, wv, pix).
## pyklip.klip module
pyklip.klip.align_and_scale(img, new_center, old_center=None, scale_factor=1, dtype=<class 'float'>)[source]
Helper function that realigns and/or scales the image
Parameters: img – 2D image to perform the manipulation on. new_center – 2-element tuple (xpos, ypos) of the new image center. old_center – 2-element tuple (xpos, ypos) of the old image center. scale_factor – how much to stretch/contract the image. Will be scaled w.r.t. the new_center (done after realignment). We adopt the convention that >1 stretches the image (shorter to longer wavelengths) and <1 contracts the image (longer to shorter wavelengths). This means the scale factor should be lambda_0/lambda, where lambda_0 is the wavelength you want to scale to. Returns: resampled_img – shifted and/or scaled 2D image.
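A minimal sketch of recentering a frame with align_and_scale; no spectral rescaling is applied, so scale_factor stays at 1.

```python
import numpy as np
import pyklip.klip as klip

img = np.zeros((281, 281))
img[120, 150] = 1.0  # a bright pixel somewhere off-center

# Shift the image so that the old center (145, 138) lands on (140, 140).
shifted = klip.align_and_scale(img, (140, 140), old_center=(145, 138),
                               scale_factor=1)
```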
pyklip.klip.calc_scaling(sats, refwv=18)[source]
Helper function that calculates the wavelength scaling factor from the satellite spot locations. Uses the movement of spots diagonally across from each other to calculate the scaling in a (hopefully? tbd.) centering-independent way. This method is definitely temporary and will be replaced by better scaling strategies as we come up with them. Scaling is calculated as the average of (1/2 * sqrt((x_1-x_2)**2 + (y_1-y_2)**2)) over the two pairs of spots.
Parameters: sats – [4 x Nlambda x 2] array of x and y positions for the 4 satellite spots. refwv – reference wavelength for scaling (optional, default = 18). Returns: scaling_factors – Nlambda array of scaling factors.
pyklip.klip.collapse_data(data, pixel_weights=None, axis=1, collapse_method='mean')[source]
Function to collapse multi-dimensional data along axis using collapse_method
Parameters: data – (multi-dimensional) arrays of 2D images or 3D cubes. pixel_weights – ones if the collapse method is not a weighted collapse. axis – axis index along which to collapse. collapse_method – currently supports 'median', 'mean', 'weighted-mean', 'trimmed-mean', 'weighted-median'. Returns: collapsed data
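For instance, collapsing the wavelength axis of a hypothetical (KL, wavelength, y, x) cube with a plain mean:

```python
import numpy as np
import pyklip.klip as klip

cube = np.random.randn(10, 37, 64, 64)  # (KL cutoffs, wavelengths, y, x)

collapsed = klip.collapse_data(cube, axis=1, collapse_method='mean')
print(collapsed.shape)  # (10, 64, 64)
```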
pyklip.klip.define_annuli_bounds(annuli, IWA, OWA, annuli_spacing='constant')[source]
Defines the annuli boundaries radially
Parameters: annuli – number of annuli. IWA – inner working angle (pixels). OWA – outer working angle (pixels). annuli_spacing – how to distribute the annuli radially. Currently three options: constant (equally spaced), log (logarithmic expansion with r), and linear (linear expansion with r). Returns: rad_bounds – array of 2-element tuples that specify the beginning and end radius of each annulus.
pyklip.klip.estimate_movement(radius, parang0=None, parangs=None, wavelength0=None, wavelengths=None, mode=None)[source]
Estimates the movement of a hypothetical astrophysical source in ADI and/or SDI at the given radius, given the reference parallactic angle (parang0) and reference wavelength (wavelength0)
Parameters: radius – the radius from the star of the hypothetical astrophysical source. parang0 – the parallactic angle of the reference image (in degrees). parangs – array of length N of the parallactic angles of all N images (in degrees). wavelength0 – the wavelength of the reference image. wavelengths – array of length N of the wavelengths of all N images. NOTE – we expect parang0 and parangs to be either both defined or both None; same with wavelength0 and wavelengths. mode – one of ['ADI', 'SDI', 'ADI+SDI'] for ADI, SDI, or ADI+SDI. Returns: moves – array of length N of the distance an astrophysical source would have moved from the reference image.
pyklip.klip.high_pass_filter(img, filtersize=10)[source]
An FFT implementation of a high-pass filter.
Parameters: img – a 2D image. filtersize – size of the filter in Fourier space; in image space this corresponds to size = img_size/filtersize. Returns: filtered – the filtered image.
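A quick sketch of the high-pass filter on a noisy image sitting on a large smooth background; filtersize=10 is just the documented default.

```python
import numpy as np
import pyklip.klip as klip

# White noise on top of a big smooth offset (a crude stand-in for a halo).
img = np.random.randn(281, 281) + 100.0

# Larger filtersize corresponds to a smaller image-space scale,
# i.e. more aggressive removal of low spatial frequencies.
filtered = klip.high_pass_filter(img, filtersize=10)
print(img.mean(), filtered.mean())  # the smooth background should be gone
```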
pyklip.klip.klip_math(sci, ref_psfs, numbasis, covar_psfs=None, return_basis=False, return_basis_and_eig=False)[source]
Helper function for KLIP that does the linear algebra
Parameters: sci – array of length p containing the science data. ref_psfs – N x p array of the N reference PSFs that characterize the PSF of the p pixels. numbasis – number of KLIP basis vectors to use (can be an int or an array of ints of length b). covar_psfs – covariance matrix of the reference psfs, passed in so you don't have to calculate it here. return_basis – if True, return KL basis vectors (used when onesegment==True). return_basis_and_eig – if True, return KL basis vectors as well as the eigenvalues and eigenvectors of the covariance matrix. Used for the KLIP forward modelling of Laurent Pueyo. Returns: sub_img_rows_selected – array of shape (p,b) that is the PSF-subtracted data for each of the b KLIP basis cutoffs. If numbasis was an int, then sub_img_rows_selected is just an array of length p. KL_basis – array of shape (max(numbasis),p). Only if return_basis or return_basis_and_eig is True. evals – eigenvalues of the covariance matrix. The covariance matrix is assumed NOT to be normalized by (p-1). Only if return_basis_and_eig is True. evecs – eigenvectors of the covariance matrix. The covariance matrix is assumed NOT to be normalized by (p-1). Only if return_basis_and_eig is True.
pyklip.klip.make_polar_coordinates(x, y, center=[0, 0])[source]
Parameters: x – meshgrid of x coordinates. y – meshgrid of y coordinates. center – new location of the origin. Returns: polar coordinates centered at the specified origin
pyklip.klip.meas_contrast(dat, iwa, owa, resolution, center=None, low_pass_filter=True)[source]
Measures the contrast in the image. The image must already be in contrast units and should be corrected for algorithm throughput.
Parameters: dat – 2D image, already flux calibrated. iwa – inner working angle. owa – outer working angle. resolution – size of the noise resolution element in pixels (for speckle noise ~ FWHM or lambda/D), but it can be 1 pixel if limited by pixel-to-pixel noise. center – location of the star (x,y). If None, defaults to the image size // 2. low_pass_filter – if True, run a low-pass filter. Can also be a float which specifies the width of the Gaussian filter (sigma). If False, no Gaussian filter is run. Returns: (seps, contrast) – tuple of separations in pixels and corresponding 5-sigma FPF.
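A minimal sketch of measuring a contrast curve on a pure-noise, already flux-calibrated image; all the numbers here are illustrative.

```python
import numpy as np
import pyklip.klip as klip

dat = 1e-5 * np.random.randn(281, 281)  # fake flux-calibrated residual map

seps, contrast = klip.meas_contrast(dat, iwa=10, owa=120, resolution=3.5,
                                    center=[140, 140])
print(seps[:3], contrast[:3])
```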
pyklip.klip.nan_gaussian_filter(img, sigma, ivar=None)[source]
Gaussian low-pass filter that handles nans
Parameters: img – 2-D image. sigma – float specifying the width of the Gaussian. ivar – inverse variance frame for the image, optional. Returns: filtered – 2-D image that has been smoothed with a Gaussian.
pyklip.klip.nan_map_coordinates_2d(img, yp, xp, mc_kwargs=None)[source]
scipy.ndimage.map_coordinates() that handles nans for 2-D transformations. Only works in 2-D!
Does NaN detection by defining any pixel in the new coordinate system (xp, yp) as a NaN if any one of the neighboring pixels in the original image is a NaN (e.g. (xp, yp) = (120.1, 200.1) is NaN if any of (120, 200), (121, 200), (120, 201), (121, 201) is a NaN)
Parameters: img (np.array) – 2-D image to be transformed. yp (np.array) – 2-D array of y-coordinates at which the image is evaluated. xp (np.array) – 2-D array of x-coordinates at which the image is evaluated. mc_kwargs (dict) – other parameters to pass into the map_coordinates function. Returns: transformed_img (np.array) – 2-D transformed image. Each pixel is evaluated at the (yp, xp) specified by xp and yp.
pyklip.klip.rotate(img, angle, center, new_center=None, flipx=False, astr_hdr=None)[source]
Rotate an image by the given angle about the given center. Optionally, the image can be shifted to a new image center after rotation, and the x axis can be reversed for left-handed astronomy coordinate systems.
Parameters: img – a 2D image. angle – angle CCW to rotate by (degrees). center – 2-element list [x,y] that defines the center to rotate the image with respect to. new_center – 2-element list [x,y] that defines the new image center after rotation. flipx – reverses the x axis after rotation. astr_hdr – wcs astrometry header for the image. Returns: resampled_img – new 2D image.
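For example, rotating a frame 30 degrees CCW about its center (a minimal sketch):

```python
import numpy as np
import pyklip.klip as klip

img = np.zeros((281, 281))
img[140, 180] = 1.0  # bright pixel 40 pix to the +x side of center

rotated = klip.rotate(img, 30.0, [140, 140])  # angle in degrees, CCW
```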
## pyklip.parallelized module
pyklip.parallelized.generate_noise_maps(imgs, aligned_center, dr, IWA=None, OWA=None, numthreads=None, pool=None)[source]
Create a noise map for each image. The noise levels are computed using azimuthally averaged noise in the images
Parameters: imgs – array of shape (N,y,x) containing N images. aligned_center – [x,y] location of the center. All images should be aligned to a common center. dr (float) – how many pixels wide the annulus used to compute the noise should be. IWA (float) – inner working angle (how close to the center of the image to start). If None, this is 0. OWA (float) – outer working angle (if None, it is the entire image). numthreads – number of threads to be used. pool – multiprocessing thread pool (optional), to avoid repeatedly creating one when processing a list of images. Returns: noise_maps – array of shape (N,y,x) containing N noise maps.
pyklip.parallelized.high_pass_filter_imgs(imgs, numthreads=None, filtersize=10, pool=None)[source]
Filters a sequence of images using an FFT
Parameters: imgs – array of shape (N,y,x) containing N images. numthreads – number of threads to be used. filtersize – size of the filter in Fourier space; in image space this corresponds to size = img_size/filtersize. pool – multiprocessing thread pool (optional), to avoid repeatedly creating one when processing a list of images. Returns: filtered – array of shape (N,y,x) containing the filtered images.
pyklip.parallelized.klip_dataset(dataset, mode='ADI+SDI', outputdir='.', fileprefix='', annuli=5, subsections=4, movement=3, numbasis=None, numthreads=None, minrot=0, calibrate_flux=False, aligned_center=None, annuli_spacing='constant', maxnumbasis=None, corr_smooth=1, spectrum=None, psf_library=None, highpass=False, lite=False, save_aligned=False, restored_aligned=None, dtype=None, algo='klip', time_collapse='mean', wv_collapse='mean', verbose=True)[source]
run klip on a dataset class outputted by an implementation of Instrument.Data
Parameters: dataset – an instance of Instrument.Data (see instruments/ subfolder) mode – some combination of ADI, SDI, and RDI (e.g. “ADI+SDI”, “RDI”) outputdir – directory to save output files fileprefix – filename prefix for saved files anuuli – number of annuli to use for KLIP subsections – number of sections to break each annuli into movement – minimum amount of movement (in pixels) of an astrophysical source to consider using that image for a refernece PSF numbasis – number of KL basis vectors to use (can be a scalar or list like). Length of b numthreads – number of threads to use. If none, defaults to using all the cores of the cpu minrot – minimum PA rotation (in degrees) to be considered for use as a reference PSF (good for disks) calibrate_flux – if True calibrate flux of the dataset, otherwise leave it be aligned_center – array of 2 elements [x,y] that all the KLIP subtracted images will be centered on for image registration annuli_spacing – how to distribute the annuli radially. Currently three options. Constant (equally spaced), log (logarithmical expansion with r), and linear (linearly expansion with r) maxnumbasis – if not None, maximum number of KL basis/correlated PSFs to use for KLIP. Otherwise, use max(numbasis) corr_smooth (float) – size of sigma of Gaussian smoothing kernel (in pixels) when computing most correlated PSFs. If 0, no smoothing spectrum – (only applicable for SDI) if not None, optimizes the choice of the reference PSFs based on the spectrum shape. - an array: of length N with the flux of the template spectrum at each wavelength. - a string: Currently only supports “methane” between 1 and 10 microns. Uses minmove to determine the separation from the center of the segment to determine contamination and the size of the PSF (TODO: make PSF size another quantity) (e.g. minmove=3, checks how much contamination is within 3 pixels of the hypothetical source) if smaller than 10%, (hard coded quantity), then use it for reference PSF psf_library – if not None, a rdi.PSFLibrary object with a PSF Library for RDI highpass – if True, run a Gaussian high pass filter (default size is sigma=imgsize/10) can also be a number specifying FWHM of box in pixel units lite – if True, run a low memory version of the alogirhtm save_aligned – Save the aligned and scaled images (as well as various wcs information), True/False restore_aligned – The aligned and scaled images from a previous run of klip_dataset (usually restored_aligned = dataset.aligned_and_scaled) dtype – data type of the arrays. Should be either ctypes.c_float(default) or ctypes.c_double algo (str) – algorithm to use (‘klip’, ‘nmf’, ‘empca’, ‘none’). None will run no PSF subtraction. time_collapse – how to collapse the data in time. Currently support: “mean”, “weighted-mean”, ‘median’, “weighted-median” wv_collapse – how to collapse the data in wavelength. Currently support: ‘median’, ‘mean’, ‘trimmed-mean’ verbose (bool) – if True, print warning messages during KLIP process.
Returns:
Nothing is returned, but files are saved in the output directory and a (b, N, wv, y, x) 5D cube is stored in dataset.output, over KL cutoff modes (b), number of images (N), wavelengths (wv), and the spatial dimensions (y, x). Images are derotated. For ADI only, the wv axis is omitted, leaving a 4D cube.
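For orientation, here is a minimal sketch of a typical call, assuming GPI data; the glob pattern, the output prefix, and the chosen numbasis values are illustrative assumptions, not prescribed by this documentation:

import glob
import pyklip.instruments.GPI as GPI
import pyklip.parallelized as parallelized

# Hypothetical data location and file pattern
filelist = glob.glob("path/to/data/*.fits")
dataset = GPI.GPIData(filelist)

# Run ADI+SDI KLIP with the defaults shown above, requesting a few KL cutoffs
parallelized.klip_dataset(dataset, mode="ADI+SDI", outputdir=".",
                          fileprefix="myobject", annuli=5, subsections=4,
                          movement=3, numbasis=[1, 5, 20, 50])
# Output FITS files land in outputdir; the PSF-subtracted cube is also
# stored on dataset.output as described above.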
pyklip.parallelized.klip_parallelized(imgs, centers, parangs, wvs, filenums, IWA, OWA=None, mode='ADI+SDI', annuli=5, subsections=4, movement=3, numbasis=None, aligned_center=None, numthreads=None, minrot=0, maxrot=360, annuli_spacing='constant', maxnumbasis=None, corr_smooth=1, spectrum=None, psf_library=None, psf_library_good=None, psf_library_corr=None, save_aligned=False, restored_aligned=None, dtype=None, algo='klip', compute_noise_cube=False, verbose=True)[source]
Multiprocessed KLIP PSF subtraction.
Parameters:
- imgs – array of 2D images for ADI. Shape of array (N,y,x)
- centers – N-by-2 array of (x,y) coordinates of image centers
- parangs – length-N array detailing the parallactic angle of each image
- wvs – length-N array of the wavelengths
- filenums (np.array) – array of length N of the file number for each image
- IWA – inner working angle (in pixels)
- OWA – outer working angle (in pixels)
- mode – one of ['ADI', 'SDI', 'ADI+SDI'] for ADI, SDI, or ADI+SDI
- annuli – number of annuli to use for KLIP
- subsections – number of sections to break each annulus into
- movement – minimum amount of movement (in pixels) of an astrophysical source to consider using that image as a reference PSF
- numbasis – number of KL basis vectors to use (can be a scalar or list-like). Length of b
- aligned_center – array of 2 elements [x,y] that all the KLIP-subtracted images will be centered on for image registration
- numthreads – number of threads to use. If None, defaults to using all the cores of the CPU
- minrot – minimum PA rotation (in degrees) to be considered for use as a reference PSF (good for disks)
- maxrot – maximum PA rotation (in degrees) to be considered for use as a reference PSF (temporal variability)
- annuli_spacing – how to distribute the annuli radially. Currently three options: constant (equally spaced), log (logarithmic expansion with r), and linear (linear expansion with r)
- maxnumbasis – if not None, maximum number of KL basis/correlated PSFs to use for KLIP. Otherwise, use max(numbasis)
- corr_smooth (float) – sigma of the Gaussian smoothing kernel (in pixels) used when computing the most correlated PSFs. If 0, no smoothing
- spectrum – if not None, an array of length N with the flux of the template spectrum at each wavelength. Uses minmove to determine the separation from the center of the segment to determine contamination and the size of the PSF (TODO: make PSF size another quantity) (e.g. minmove=3 checks how much contamination is within 3 pixels of the hypothetical source); if smaller than 10% (hard-coded quantity), then use it as a reference PSF
- psf_library – array of (N_lib, y, x) with N_lib PSF library PSFs
- psf_library_good – array of size N_lib indicating which N_good frames are selected in the passed-in correlation matrix
- psf_library_corr – matrix of size N_sci x N_good with the correlation between the target frames and the good RDI PSFs
- save_aligned – save the aligned and scaled images (as well as various WCS information), True/False
- restored_aligned – the aligned and scaled images from a previous run of klip_dataset (usually restored_aligned = dataset.aligned_and_scaled)
- dtype – data type of the arrays. Should be either ctypes.c_float (default) or ctypes.c_double
- algo (str) – algorithm to use ('klip', 'nmf', 'empca')
- compute_noise_cube – if True, compute the noise in each pixel assuming azimuthally uniform noise

Returns:
- sub_imgs – array of [array of 2D images (PSF subtracted)] using different numbers of KL basis vectors as specified by numbasis. Shape of (b,N,y,x)
- aligned_center – (x,y) specifying the common center the output images are aligned to
pyklip.parallelized.klip_parallelized_lite(imgs, centers, parangs, wvs, filenums, IWA, OWA=None, mode='ADI+SDI', annuli=5, subsections=4, movement=3, numbasis=None, aligned_center=None, numthreads=None, minrot=0, maxrot=360, annuli_spacing='constant', maxnumbasis=None, corr_smooth=1, spectrum=None, dtype=None, algo='klip', compute_noise_cube=False, **kwargs)[source]
Multithreaded KLIP PSF subtraction with a smaller memory footprint than the original.
Parameters:
- imgs – array of 2D images for ADI. Shape of array (N,y,x)
- centers – N-by-2 array of (x,y) coordinates of image centers
- parangs – length-N array detailing the parallactic angle of each image
- wvs – length-N array of the wavelengths
- filenums (np.array) – array of length N of the file number for each image
- IWA – inner working angle (in pixels)
- OWA – outer working angle (in pixels)
- mode – one of ['ADI', 'SDI', 'ADI+SDI'] for ADI, SDI, or ADI+SDI
- annuli – number of annuli to use for KLIP
- subsections – number of sections to break each annulus into
- movement – minimum amount of movement (in pixels) of an astrophysical source to consider using that image as a reference PSF
- numbasis – number of KL basis vectors to use (can be a scalar or list-like). Length of b
- annuli_spacing – how to distribute the annuli radially. Currently three options: constant (equally spaced), log (logarithmic expansion with r), and linear (linear expansion with r)
- maxnumbasis – if not None, maximum number of KL basis/correlated PSFs to use for KLIP. Otherwise, use max(numbasis)
- corr_smooth (float) – sigma of the Gaussian smoothing kernel (in pixels) used when computing the most correlated PSFs. If 0, no smoothing
- aligned_center – array of 2 elements [x,y] that all the KLIP-subtracted images will be centered on for image registration
- numthreads – number of threads to use. If None, defaults to using all the cores of the CPU
- minrot – minimum PA rotation (in degrees) to be considered for use as a reference PSF (good for disks)
- maxrot – maximum PA rotation (in degrees) to be considered for use as a reference PSF (temporal variability)
- spectrum – if not None, an array of length N with the flux of the template spectrum at each wavelength. Uses minmove to determine the separation from the center of the segment to determine contamination and the size of the PSF (TODO: make PSF size another quantity) (e.g. minmove=3 checks how much contamination is within 3 pixels of the hypothetical source); if smaller than 10% (hard-coded quantity), then use it as a reference PSF
- kwargs – in case you pass it stuff that we don't use in the lite version
- dtype – data type of the arrays. Should be either ctypes.c_float (default) or ctypes.c_double
- algo (str) – algorithm to use ('klip', 'nmf', 'empca')
- compute_noise_cube – if True, compute the noise in each pixel assuming azimuthally uniform noise

Returns:
- sub_imgs – array of [array of 2D images (PSF subtracted)] using different numbers of KL basis vectors as specified by numbasis. Shape of (b,N,y,x)
pyklip.parallelized.rotate_imgs(imgs, angles, centers, new_center=None, numthreads=None, flipx=False, hdrs=None, disable_wcs_rotation=False, pool=None)[source]
Derotate a sequence of images by their respective angles.
Parameters:
- imgs – array of shape (N,y,x) containing N images
- angles – array of length N with the angle to rotate each frame. Each angle should be CCW in degrees
- centers – array of shape (N,2) with the [x,y] center of each frame
- new_center – a 2-element array with the new center to register each frame on. Default is the middle of the image
- numthreads – number of threads to be used
- flipx – flip the x axis after rotation if desired
- hdrs – array of N WCS astrometry headers

Returns:
- derotated – array of shape (N,y,x) containing the derotated images
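A hedged usage sketch of rotate_imgs with synthetic placeholder arrays; the shapes follow the parameter descriptions above, and the image size and angles are arbitrary:

import numpy as np
from pyklip.parallelized import rotate_imgs

# Placeholder stack of 10 frames, 281x281 pixels each
imgs = np.random.rand(10, 281, 281)
pas = np.linspace(0., 30., 10)        # CCW rotation angles in degrees
centers = np.full((10, 2), 140.0)     # [x, y] center of each frame

# Derotate all frames and register them onto a common center
derotated = rotate_imgs(imgs, pas, centers, new_center=[140, 140])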
## pyklip.rdi module
class pyklip.rdi.PSFLibrary(data, aligned_center, filenames, correlation_matrix=None, wvs=None, compute_correlation=False)[source]
Bases: object
This is a PSF library to use for reference differential imaging.
master_library
aligned library of PSFs. 3-D cube of dim = [N, y, x]. Where N is ALL files
Type: np.ndarray
aligned_center
a (x,y) coordinate specifying common center the library is aligned to
Type: array-like
master_filenames
array of N filenames for each frame in the library. Should match with pyklip.instruments.Data.filenames for cross-matching
Type: np.ndarray
master_correlation
N x N array of correlations between each pair of frames
Type: np.ndarray
master_wvs
N wavelengths for each frame
Type: np.ndarray
nfiles
the number of files in the PSF library
Type: int
dataset
correlation
N_data x M array of correlations between each pair of frames, where M are the selected frames and N_data is the number of files in the dataset. Along the N_data dimension, files are ordered in the same way as in the dataset object
Type: np.ndarray
isgoodpsf
array of N indicating which M PSFs are good for this dataset
Type: np.ndarray
add_new_dataset_to_library(dataset, collapse=False, verbose=False)[source]
Add all the files from a new dataset to the PSF library and add them to the correlation matrix. If a mask was used for the correlation matrix, use it here too.
NOTE: This routine already assumes that the data has been centered.
Parameters: dataset (pyklip.instruments.Instrument.Data) –
prepare_library(dataset, badfiles=None)[source]
Prepare the PSF Library for an RDI reduction of a specific dataset by only taking the part of the library we need.
Parameters: dataset (pyklip.instruments.Instrument.Data) – badfiles (np.ndarray) – a list of filenames corresponding to bad files we want to also exclude
Returns:
save_correlation(filename, overwrite=False, clobber=None, format='fits')[source]
Saves self.correlation to a file specified by filename.

Parameters:
- filename (str) – filepath to store the correlation matrix
- overwrite (bool) – if True, overwrite the previous correlation matrix
- clobber (bool) – same as overwrite, but deprecated in astropy
- format (str) – type of file to store the correlation matrix as. Supports numpy?/fits?/pickle? (TBD)
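A hedged sketch of the RDI workflow implied by the constructor and methods above; master_cube, master_filenames, and dataset are assumed to exist already, and the center coordinates are placeholders:

from pyklip.rdi import PSFLibrary

# master_cube: (N, y, x) cube of PSFs already aligned to a common center
psflib = PSFLibrary(master_cube, aligned_center=[140, 140],
                    filenames=master_filenames, compute_correlation=True)

# Cache the (expensive) correlation matrix for later runs
psflib.save_correlation("corr_matrix.fits", overwrite=True)

# Restrict the library to the frames usable for this particular dataset
psflib.prepare_library(dataset)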
## pyklip.spectra_management module
pyklip.spectra_management.LSQ_scale_model_PSF(PSF_template, planet_image, a)[source]
pyklip.spectra_management.calibrate_star_spectrum(template_spectrum, template_wvs, filter_name, magnitude, wvs)[source]
Scale the Pickles stellar spectrum of a star to the observed apparent magnitude and return the stellar spectrum in physical units sampled at the specified wavelengths. Currently only supports 2MASS filters and magnitudes. TODO: implement a way to take a magnitude error as input and propagate the error to the final spectrum.
Parameters:
- template_spectrum – 1D array, model spectrum of the star with arbitrary units
- template_wvs – 1D array, wavelengths at which the template_spectrum is sampled, in units of microns
- filter_name – string, 2MASS filter, 'J', 'H', or 'Ks'
- magnitude – scalar, observed apparent magnitude of the star
- mag_error – scalar or 1D array with 2 elements, error(s) of the magnitude; not yet implemented
- wvs – 1D array, the wavelengths at which to sample the scaled spectrum, in units of angstroms

Returns:
- scaled_spectrum – 1D array, scaled stellar spectrum sampled at wvs
pyklip.spectra_management.extract_planet_spectrum(cube_para, position, PSF_cube_para, method=None, filter=None, mute=True)[source]
pyklip.spectra_management.find_lower_nearest(array, value)[source]
Find the lower nearest element to value in array.
Parameters:
- array – array of values
- value – value for which one wants the closest lower value

Returns:
- (low_value, id) with low_value the closest lower value and id its index
pyklip.spectra_management.find_nearest(array, value)[source]
Find the nearest element to value in array.
Parameters:
- array – array of values
- value – value for which one wants the closest value

Returns:
- (closest_value, id) with closest_value the closest value and id its index
pyklip.spectra_management.find_upper_nearest(array, value)[source]
Find the upper nearest element to value in array.
Parameters:
- array – array of values
- value – value for which one wants the closest upper value

Returns:
- (up_value, id) with up_value the closest upper value and id its index
pyklip.spectra_management.get_planet_spectrum(spectrum, wavelength, ori_wvs=None)[source]
Get the normalized spectrum of a planet for a GPI spectral band or any wavelength array. Spectra are extracted from .flx files from Mark Marley et al.'s models.
Parameters:
- spectrum – path of the .flx file containing the spectrum
- wavelength – array of wavelengths in microns (or string with GPI band 'H', 'J', 'K1', 'K2', 'Y'). (When using a GPI spectral band, wavelength samples are linearly spaced between the first and the last wavelength of the band.)

Returns:
- wavelengths – the GPI sampling of the considered band in micrometers
- spectrum – the spectrum of the planet for the given band or wavelength array, normalized to unit mean
pyklip.spectra_management.get_specType(object_name, SpT_file_csv=None)[source]
Return the spectral type for a target based on Simbad or on the table in SpT_file
Parameters:
- object_name – name of the target, e.g. "c_Eri"
- SpT_file_csv – filename (.csv) of the table containing the target names and their spectral types. Can be generated by querying Simbad. If None (default), the function directly tries to query Simbad.

Returns:
- Spectral type
pyklip.spectra_management.get_star_spectrum(wvs_or_filter_name=None, star_type=None, temperature=None, mute=None)[source]
Get the spectrum of a star with a given spectral type by interpolating the Pickles database. The spectrum is normalized to unit mean. It assumes a type V star.
Inputs:
wvs_or_filter_name: array of wavelengths in microns (or string with GPI band 'H', 'J', 'K1', 'K2', 'Y').
(When using GPI spectral band wavelength samples are linearly spaced between the first and the last wavelength of the band.)
star_type: ‘A5’,’F4’,… Is ignored if temperature is defined.
If star_type is longer than 2 characters it is truncated.
temperature: temperature of the star. Overrides star_type if defined.
Output:
(wavelengths, spectrum) where
wavelengths: sampling in microns. spectrum: the spectrum of the star for the given band.
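For example, a minimal call following the Inputs/Output description above; the 'H' band and the 'A5' spectral type are arbitrary example choices:

from pyklip.spectra_management import get_star_spectrum

# Normalized (unit-mean) spectrum of an A5V star, sampled over GPI's H band
wavelengths, spectrum = get_star_spectrum("H", star_type="A5")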
|
Ajay Satsangi
2022-08-12
A stud manufacturer wants to determine the inner diameter of a certain grade of tire. Ideally, the diameter would be 15 mm. The data are as follows:
15, 16, 15, 14, 13, 15, 16, 14
(i) Find the sample mean and median.
(ii) Find the sample variance, standard deviation and range
(iii) Using the statistics calculated in parts (i) and (ii), comment on the quality of the studs.
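A quick check of parts (i) and (ii) with Python's standard library:

import statistics

d = [15, 16, 15, 14, 13, 15, 16, 14]
print(statistics.mean(d))      # 14.75
print(statistics.median(d))    # 15.0
print(statistics.variance(d))  # sample variance ~ 1.071
print(statistics.stdev(d))     # sample standard deviation ~ 1.035
print(max(d) - min(d))         # range = 3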
|
# Tag Info
14
General Stuff Would be nice to have inline documentation (///) on classes and public members, but everything is pretty simple and understandable. Some might question the name _hashSet, which tells you nothing about its purpose, but it's private so not a massive concern. You've correctly identified that you can use a ReaderWriterLock to control access, as ...
13
public Key(string unitID, int address, int comPort, int id) Clearly this class represents something much more specialized than an all-purpose "key". Name the type for what it stands for! private Tuple<string, int, int, int> _impl; The private field should be readonly. But why don't you have this instead? private readonly string _unitId; private ...
10
Why build your own implementation when you can use the features available in the Java standard libraries? IdentityHashMap gives you all the features you need, and a Java Stream/collector will allow you to extract the map easily from your collection.... Collection<Test> tests = ..... Map<Test,Test> uniques = tests.stream().collect(Collectors....
8
After studying your code for a good long while, I realized that the reason you haven't gotten an answer is that there isn't really much to say about your code. Well done! I would recommend adding some thorough ScalaDoc though. It has been my experience that Scala library developers have far too much faith in the ability of library users to magically ...
8
Poor cohesion It's called logical cohesion when a routine does multiple things of different logic. Your function will either insert records, or select records, or update records. These are very different operations, and their implementations should be in separate functions. If you refactored this way, it would be much better: if (Type == "INSERT") ...
7
static struct sigaction * handle_signal(int sig, int flags, void (*handler)()) { Two things here: The signal handler function should have the signature void handler(int signum); [man page for sigaction]. Your compiler should — at the least — be warning you about assigning a function pointer of a different type further down. ...
7
I heard you like wrappers So I wrapped your wrapper (jQuery) in a wrapper (your thing) so you can do ajax. Why do you do this again?? Instead of wrapping over (slow) jQuery you could just wrap the native ajax functionality. In general I expect that to be faster. Additionally this frees your API to be changed independently of jQuery and it removes that nasty ...
7
I'm the lead engineer for the MongoDB Perl driver with a couple of thoughts for you: you're using 'authenticate', which is for a very old version of the driver which is not recommended for use. In the v1.x series, you should provide username/password in the URI or in MongoClient parameters. Most of the value of this module seems to be setting parameters ...
6
This seems overcomplicated to me. Lazy-loading something doesn't require placement-new trickery and mucking around with alignments. Why not simply store a pointer to the stream which is only initialised when the user performs some kind of operation? This is the general way to do lazy-loading: struct error_stream { private: std::unique_ptr<std::...
6
Standard naming convention in C is usually lower_case_with_underscore. You should be using void * rather than char *. I would consider making the return value deterministic when the new size is 0 by always returning NULL rather than the implementation-defined behaviour (which could technically be a pointer which is not NULL but still not allowed to be ...
6
I would personally prefer to use HttpClient directly as it provides full control and perfect testability - see here. We can define our REST API as: // see https://jsonplaceholder.typicode.com/ // this site exposes some REST API example to automate public interface ITypicode { Task<BlogPost[]> GetPostsAsync(); Task<BlogPost> GetPostAsync(...
6
It is kind of hard to say without seeing the whole code but at a glance the whole thing seems kind of pointless to me. You're not making it object oriented but rather just adding your own "flavor" API over libusb. I can tell this by the fact that I see a DevHandlePtr deviceHandle being passed around just about everywhere. If you want to actually make a ...
6
One thing I don't see you address is that INotifyPropertyChanged.PropertyChanged has the signature of: public delegate void PropertyChangedEventHandler(object sender, PropertyChangedEventArgs e); where object sender should be the owner of the property (typically this). You don't provide the code for ObservableObject, but from your code above it seems that ...
5
With the variety of behaviors exhibited by different implementations, I think the only safe thing is to test for zero size before the realloc call and set a minimum allocation (eg 1 byte). Note that realloc will probably set errno to ENOMEM when it fails, but I don't think this is guaranteed.
5
I haven't been on the site in a while so figured I'd give this one a go: My first point is debatable and not essential but reflects my own preference. Consider making the class sealed by default. Whenever I say this to people, most seem surprised. You're probably asking why, I mean, does it matter? Your class is not designed with inheritance in mind, it ...
5
Naming Either use m_ or _ or no prefix for your class variables, but don't mix the styles. Also if you use one of the first two, there is no need to use this. Based on the naming guidelines, method names should be named using PascalCase casing. Properties should be named using PascalCase casing too. Method parameters should be named using camelCase ...
5
Are you sure you want your database wrapper to be static? What if you want to mock it in your unit tests? I would make it a plain old class, pass the connection string through the constructor and do any kind of checks that I want right there, because that means your database wrapper won't be allowed to even exist without a valid connection string. With your ...
5
There are big problems with your approach. Cloning BigDecimal with .doubleValue() is broken. Cloning a BigDecimal bd using new BigDecimal(bd.doubleValue()) doesn't work in most practical cases. For example, try this code: BigDecimal bd = BigDecimal.valueOf(0.02d); BigDecimal clone = new BigDecimal(bd.doubleValue()); System.out.println(bd); System.out....
5
Great that you provided a docstring! You made one minor formatting mistake though: there should be a blank line between your brief single-line summary and the rest of the docstring. It's recommended in the style guide partially for readability and partially for script parsers. class MongoDB(object): """Provides a RAII wrapper for PyMongo db connections. ...
5
My first question looking at this is, as a developer, what would be the advantages for me to use your window class over just dealing directly with Windows? It seems like I'd still have to do most of the same work as if I didn't use the class. The way one would use this class appears to be: Create an instance of the class. Optionally call registerClass (but ...
5
Construction In your constructor from iterators: template <class It> Simple_Array(It first, It last) : Simple_Array(std::distance(first, last)) { unchecked_copy(first, last); } You do two things: first you default-construct n objects, and then you copy-assign them. This is both inefficient and reduces the usability of your class. What if ...
5
l is a terrible name for anything. Depending on the font used, it may be indistinguishable from the number one. It's generally recommended to avoid using the letter "l" for anything. (In some languages this recommendation is part of the official style guide.) log would be better, still short, and with a more obvious meaning than just "l".
5
As I commented on the question, this has no difference from running console.log without your function. As stated in the documentation of Function.prototype.apply(): The apply() method calls a function with a given this value and arguments provided as an array (or an array-like object). The arguments object is an "array-...
5
Design Issue Creating a new thread for every connection is not a good idea. Creating a thread is expensive. Also a single thread can easily handle thousands of connections, so utilizing a single thread for a single connection is very wasteful. Also the way you are using the threads is all wrong. This should never happen: busy = true; task->...
5
This is not much a wrapper. Rather call it a wannabe Query Builder. I don't know the reason, but many people are constantly trying to write something like this. And I don't understand why. Okay, for the DML queries it makes sense - it always makes a cool feeling when you automate a routine task using a template. So, for the insert query it asks for the ...
5
Some specific questions I have about this code/approach are, if applicable, OK. Error handling design decision: should the constructors throw, or should I leave error checking, if desired, to the caller? Either the object is correctly initialized and ready to use or it should throw. Two-stage initialization (construct then check) is a bad idea as it ...
4
A condition to be met like if (someBoolean == true) can be simplified to if (someBoolean). For the reverse check use the Not operator ! like if (!someBoolean). Dispose() should not only call Close() but also get rid of session, sessionOptions and transferOptions by calling Dispose() if they implement IDisposable, or at least setting them to null. SendFile() ...
4
I'm not too familiar with Android but I've found an issue with the code: both persist and retrieve methods contain a similar if-else chain. You should replace them with polymorphism: Refactoring: Improving the Design of Existing Code by Martin Fowler: Replacing the Conditional Logic on Price Code with Polymorphism, Replace Conditional with Polymorphism ...
4
Two minor notes: The following is not too easy to read: curl_setopt(r, CURLOPT_ENCODING, 1); Readers have to be really familiar with curl parameters or have to check the documentation. It says the following: The contents of the "Accept-Encoding: " header. This enables decoding of the response. Supported encodings are "identity", "deflate", and "gzip"....
|
# Modularity (networks)
Modularity is one measure of the structure of networks or graphs. It was designed to measure the strength of division of a network into modules (also called groups, clusters or communities). Networks with high modularity have dense connections between the nodes within modules but sparse connections between nodes in different modules. Modularity is often used in optimization methods for detecting community structure in networks. However, it has been shown that modularity suffers from a resolution limit and is therefore unable to detect small communities. Biological networks, including animal brains, exhibit a high degree of modularity.
## Motivation
Many scientifically important problems can be represented and empirically studied using networks. For example, biological and social patterns, the World Wide Web, metabolic networks, food webs, neural networks and pathological networks are real world problems that can be mathematically represented and topologically studied to reveal some unexpected structural features.[1] Most of these networks possess a certain community structure that has substantial importance in building an understanding regarding the dynamics of the network. For instance, a closely connected social community will imply a faster rate of transmission of information or rumor among them than a loosely connected community. Thus, if a network is represented by a number of individual nodes connected by links which signify a certain degree of interaction between the nodes, communities are defined as groups of densely interconnected nodes that are only sparsely connected with the rest of the network. Hence, it may be imperative to identify the communities in networks, since the communities may have quite different properties, such as node degree, clustering coefficient, betweenness, and centrality,[2] from those of the average network. Modularity is one such measure, which when maximized, leads to the appearance of communities in a given network.
## Definition
Modularity is the fraction of the edges that fall within the given groups minus the expected such fraction if edges were distributed at random. The value of the modularity lies in the range [−1/2,1). It is positive if the number of edges within groups exceeds the number expected on the basis of chance. For a given division of the network's vertices into some modules, modularity reflects the concentration of edges within modules compared with random distribution of links between all nodes regardless of modules.
There are different methods for calculating modularity.[1] In the most common version of the concept, the randomization of the edges is done so as to preserve the degree of each vertex. Let us consider a graph with $n$ nodes and $m$ links (edges) such that the graph can be partitioned into two communities using a membership variable $s$. If a node $v$ belongs to community 1, $s_v = 1$, or if $v$ belongs to community 2, $s_v = -1$. Let the adjacency matrix for the network be represented by $A$, where $A_{vw} = 0$ means there's no edge (no interaction) between nodes $v$ and $w$ and $A_{vw} = 1$ means there is an edge between the two. Also for simplicity we consider an undirected network. Thus $A_{vw} = A_{wv}$. (It is important to note that multiple edges may exist between two nodes, but here we assess the simplest case).
Modularity Q is then defined as the fraction of edges that fall within group 1 or 2, minus the expected number of edges within groups 1 and 2 for a random graph with the same node degree distribution as the given network.
The expected number of edges shall be computed using the concept of Configuration Models.[3] The configuration model is a randomized realization of a particular network. Given a network with n nodes, where each node v has a node degree kv, the configuration model cuts each edge into two halves, and then each half edge, called a stub, is rewired randomly with any other stub in the network even allowing self loops. Thus, even though the node degree distribution of the graph remains intact, the configuration model results in a completely random network.
Let the total number of stubs be ln
$$l_n = \sum_{v=1}^{n} k_v = 2m$$
(1)
Now, if we randomly select two nodes v and w with node degrees kv and kw respectively and rewire the stubs for these two nodes, then,
$$\text{Expectation of full edges between } v \text{ and } w = \frac{\text{(number of possible stub pairings between } v \text{ and } w\text{)}}{\text{(total number of rewiring possibilities)}}$$
(2)
The total number of rewirings possible is the number of stubs remaining after choosing a particular stub: $l_n - 1 \approx l_n$ for large $n$.
Thus, the expected number of full edges between $v$ and $w$ is $k_v k_w / l_n = k_v k_w / 2m$.
Hence, the actual number of edges between $v$ and $w$ minus the expected number of edges between them is $A_{vw} - k_v k_w / 2m$. Thus, using [1],
$$Q = \frac{1}{2m} \sum_{vw} \left[ A_{vw} - \frac{k_v k_w}{2m} \right] \frac{s_v s_w + 1}{2}$$
(3)
It is important to note that Eq. 3 holds for partitioning into two communities only. Hierarchical partitioning (i.e. partitioning into two communities, then further partitioning each of the two sub-communities into two smaller sub-communities, only to maximize Q) is a possible approach to identify multiple communities in a network. Additionally, (3) can be generalized for partitioning a network into c communities.[4]
$$Q = \sum_{vw} \left[ \frac{A_{vw}}{2m} - \frac{k_v k_w}{(2m)^2} \right] \delta(c_v, c_w) = \sum_{i=1}^{c} (e_{ii} - a_i^2)$$
(4)
where eii is the fraction of edges with both end vertices in the same community i:
$e_{ii}= \sum_{vw} \frac{A_{vw}}{2m} \delta(c_v,c_w)$
and ai is the fraction of ends of edges that are attached to vertices in community i:
$a_i=\frac{k_i}{2m} = \sum_{j} e_{ij}$
## Example of multiple community detection
We consider an undirected network with 10 nodes and 12 edges and the following adjacency matrix.
Fig 1. Sample Network corresponding to the Adjacency matrix with 10 nodes, 12 edges.
Fig 2. Network partitions that maximize Q. Maximum Q=0.4896
Node ID 1 2 3 4 5 6 7 8 9 10
1 0 1 1 0 0 0 0 0 0 1
2 1 0 1 0 0 0 0 0 0 0
3 1 1 0 0 0 0 0 0 0 0
4 0 0 0 0 1 1 0 0 0 1
5 0 0 0 1 0 1 0 0 0 0
6 0 0 0 1 1 0 0 0 0 0
7 0 0 0 0 0 0 0 1 1 1
8 0 0 0 0 0 0 1 0 1 0
9 0 0 0 0 0 0 1 1 0 0
10 1 0 0 1 0 0 1 0 0 0
The communities in the graph are represented by the red, green and blue node clusters in Fig 1. The optimal community partitions are depicted in Fig 2.
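As a numerical check, the following Python sketch evaluates Q directly from the definition for the partition {1,2,3,10}, {4,5,6}, {7,8,9}; since the figures are not reproduced here, the placement of node 10 in the first community is inferred from the quoted maximum. It reproduces Q = 0.4896:

import numpy as np

A = np.array([[0,1,1,0,0,0,0,0,0,1],
              [1,0,1,0,0,0,0,0,0,0],
              [1,1,0,0,0,0,0,0,0,0],
              [0,0,0,0,1,1,0,0,0,1],
              [0,0,0,1,0,1,0,0,0,0],
              [0,0,0,1,1,0,0,0,0,0],
              [0,0,0,0,0,0,0,1,1,1],
              [0,0,0,0,0,0,1,0,1,0],
              [0,0,0,0,0,0,1,1,0,0],
              [1,0,0,1,0,0,1,0,0,0]])
c = np.array([0,0,0,1,1,1,2,2,2,0])  # community labels for nodes 1..10
k = A.sum(axis=1)                    # node degrees
m2 = A.sum()                         # 2m = 24 (each edge counted twice)
same = c[:, None] == c[None, :]      # delta(c_v, c_w)
Q = ((A - np.outer(k, k) / m2) * same).sum() / m2
print(Q)                             # 0.489583... i.e. ~0.4896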
## Matrix formulation
An alternative formulation of the modularity, useful particularly in spectral optimization algorithms, is as follows.[1] Define Svr to be 1 if vertex v belongs to group r and zero otherwise. Then
$\delta(c_v,c_w) = \sum_r S_{vr} S_{wr}$
and hence
$Q = \frac{1}{2m} \sum_{vw} \sum_r \left[ A_{vw} - \frac{k_v k_w}{2m} \right] S_{vr} S_{wr} = \frac{1}{2m} \mathrm{Tr}(\mathbf{S}^\mathrm{T}\mathbf{BS}),$
where S is the (non-square) matrix having elements Svr and B is the so-called modularity matrix, which has elements
$B_{vw} = A_{vw} - \frac{k_v k_w}{2m}.$
All rows and columns of the modularity matrix sum to zero, which means that the modularity of an undivided network is also always zero.
For networks divided into just two communities, one can alternatively define sv = ±1 to indicate the community to which node v belongs, which then leads to
$$Q = {1\over 4m} \sum_{vw} B_{vw} s_v s_w = {1\over 4m} \mathbf{s}^\mathrm{T}\mathbf{Bs},$$
where s is the column vector with elements sv.[1]
This function has the same form as the Hamiltonian of an Ising spin glass, a connection that has been exploited to create simple computer algorithms, for instance using simulated annealing, to maximize the modularity. The general form of the modularity for arbitrary numbers of communities is equivalent to a Potts spin glass and similar algorithms can be developed for this case also.[5]
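To make the two-community case concrete, here is a hedged sketch of the leading-eigenvector heuristic suggested by the $\mathbf{s}^\mathrm{T}\mathbf{Bs}$ form; it is one standard spectral approach, not the only one, and simulated annealing (mentioned above) is an alternative:

import numpy as np

def leading_eigenvector_split(A):
    """Split a graph into two groups along the sign of the leading
    eigenvector of the modularity matrix B, and return (s, Q)."""
    k = A.sum(axis=1)
    m2 = A.sum()                        # 2m
    B = A - np.outer(k, k) / m2         # modularity matrix
    w, V = np.linalg.eigh(B)            # eigenvalues in ascending order
    s = np.where(V[:, -1] >= 0, 1, -1)  # signs of the leading eigenvector
    Q = s @ B @ s / (2 * m2)            # Q = s^T B s / (4m), since m2 = 2m
    return s, Q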
## Resolution limit
Modularity compares the number of edges inside a cluster with the expected number of edges that one would find in the cluster if the network were a random network with the same number of nodes and where each node keeps its degree, but edges are otherwise randomly attached. This random null model implicitly assumes that each node can get attached to any other node of the network. This assumption is however unreasonable if the network is very large, as the horizon of a node includes a small part of the network, ignoring most of it. Moreover, this implies that the expected number of edges between two groups of nodes decreases if the size of the network increases. So, if a network is large enough, the expected number of edges between two groups of nodes in modularity's null model may be smaller than one. If this happens, a single edge between the two clusters would be interpreted by modularity as a sign of a strong correlation between the two clusters, and optimizing modularity would lead to the merging of the two clusters, independently of the clusters' features. So, even weakly interconnected complete graphs, which have the highest possible density of internal edges, and represent the best identifiable communities, would be merged by modularity optimization if the network were sufficiently large.[6] For this reason, optimizing modularity in large networks would fail to resolve small communities, even when they are well defined. This bias is inevitable for methods like modularity optimization, which rely on a global null model.[7]
## Multiresolution methods
There are two main approaches which try to solve the resolution limit within the modularity context: the addition of a resistance r to every node, in the form of a self-loop, which increases (r>0) or decreases (r<0) the aversion of nodes to form communities;[8] or the addition of a parameter γ>0 in front of the null-case term in the definition of modularity, which controls the relative importance between internal links of the communities and the null model.[5] Optimizing modularity for values of these parameters in their respective appropriate ranges, it is possible to recover the whole mesoscale of the network, from the macroscale in which all nodes belong to the same community, to the microscale in which every node forms its own community, hence the name multiresolution methods. However, it has recently been demonstrated that these methods are intrinsically deficient and their use will not produce reliable solutions.[9]
|
# Build a chessboard
Saw this in a PHP challenge. The objective is to make a chessboard with 64 squares (8*8) with the minimum amount of code. Simple enough. I made mine in PHP in 356 bytes (not impressive, I know) and I would like to see some other approaches. This can be made in a language of your choice, as long as you keep it vanilla, so no imports. Smallest byte count wins.
The output should look like this:
And my code:
<table><?php
$c='black';function p($c,$n){echo'<td style="width:50px;height:50px;background:'.$c.'"></td>';if($n==1){echo"<tr>";}}for($i=1;$i<=64;$i++){if($i%8==0&&$c=="black"){$c="white";$n=1;}elseif($i%8==0&&$c=="white"){$c="black";$n=1;}elseif(isset($n)&&$n==1){$n=0;}elseif($c=="black"){$n=0;$c="white";}elseif($c=="white"){$n=0;$c="black";}p($c,$n);}

Or readable:

<table><tr>
<?php
$color = 'black';
function printcolor($color, $nl) {
    echo '<td style="width:50px; height:50px; background:' . $color . '"></td>';
    if ($nl == true) {
        echo "</tr><tr>";
    }
}
for ($i = 1; $i <= 64; $i++) {
    if ($i % 8 == 0 && $color == "black") {
        $color = "white";
        $nl = true;
    } elseif ($i % 8 == 0 && $color == "white") {
        $color = "black";
        $nl = true;
    } elseif (isset($nl) && $nl == true) {
        $nl = false;
    } elseif ($color == "black") {
        $nl = false;
        $color = "white";
    } elseif ($color == "white") {
        $nl = false;
        $color = "black";
    }
    printcolor($color, $nl);
}
Edit:
Sorry I wasn't very specific at first:
• Squares should be 50px × 50px, except for vector images.
• The output format or size is not relevant, nor does it need to be an image.
• For evaluation purposes the output must be visible such as in an image file or a screenshot
• No libraries written after the challenge was posted
• Welcome to PPCG, as it stands, this challenge doesn't really have anything to do with PHP, so I changed your tags. Also, I believe your reference implementation belongs as an answer, not in your question. As Stewie brought up, you should specify the required size of the image output, as well as things like colour specifics and whether a lossy image is allowed. Feb 4, 2016 at 17:04
• So some ASCII-magic is not allowed? :( Feb 4, 2016 at 17:13
• How basic is basic? What is the definition of an "import"? Feb 4, 2016 at 17:59
• It doesn't need to be an image but each square must be at least 50px? That seems self-contradictory to me. Feb 4, 2016 at 19:48
• Programming languages here are very diverse, including some that are made specifically for golfing and have many builtin functions. Therefore, I recommend that the restriction to non-library functions be removed, and that this question instead follow the default (all imports counted in byte count; no libraries written after the challenge was posted). Feb 5, 2016 at 3:23
## vim, 47 46 44 43
crossed out 44 is still regular 44...
iP1 400 <C-p><cr><esc>50i1<esc>YpVr0yk3PyG49PyG:m$<cr>p2GyG3P

i           enter insert mode
P1          signal to NetPPM that we're using black&white (PBM) format
400         width
<C-p>       fancy trick that inserts the other 400 for height
<cr><esc>   exit insert mode on the next line
50i1<esc>   insert 50 '1's (black)
YpVr0       insert 50 '0's (white) by duplicating the line and replacing all chars
yk          copy both lines (yank-up)
3P          paste three times; this leaves us on line two
yG          copy from line 2 to end of file (this is a full row of pixels now)
49P         we need 50 rows of pixels to make a complete "row"; paste 49 times
yG          copy the entire row of the checkerboard
:m$<cr>     move line 2 (the line we're currently on) to the end of the file
this gives us the "alternating rows" effect
p we're now on the last line: paste the entire row we copied earlier
2G hop back to line 2 (beginning of image data)
yG3P copy the entire image data, paste 3 times
Outputs in NetPPM format (PBM):
• I love that you can complete a graphical output challenge with a text editor. Are there any other examples of PBMs from golfed vim? Feb 4, 2016 at 23:12
• @dohaqatar7 I dunno, but I've done TikZ with vim before, so graphics in vim is a thing for sure. Feb 4, 2016 at 23:38
• Wow, I never thought to try <C-p> in vim without having started typing a word... that is really handy! Feb 5, 2016 at 15:41
## CSS, 244 bytes
html{background:#fff}body{width:400px;height:400px;background:linear-gradient(45deg,#000 25%,transparent 25%,transparent 75%,#000 75%)0 0/100px 100px,linear-gradient(45deg,#000 25%,transparent 25%,transparent 75%,#000 75%)50px 50px/100px 100px}
html {
background: white;
}
body {
width: 400px;
height: 400px;
background:
linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%) 0px 0px / 100px 100px,
linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%) 50px 50px / 100px 100px
}
Explanation: A 100x100px diagonal linear gradient is created with four stops so that most of the gradient is transparent except for two 50px triangular corners. (See below snippet). Adding a second gradient with a 50x50px offset fills in the missing halves of the squares. Increasing the size of the body then allows the resulting pattern to repeat to fill the entire chessboard.
html {
background: white;
}
body {
width: 100px;
height: 100px;
background: linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%) 0px 0px / 100px 100px
}
• Neat solution. It should work as well if you drop the last } . Feb 4, 2016 at 19:50
• Can you explain what's going on here? Feb 4, 2016 at 19:52
• @flawr I've added a second snippet showing the partial effect, I hope that helps.
– Neil
Feb 4, 2016 at 20:02
• Do you really need the html{background:#fff}? By default 99% of browsers set the background to white, afaik Feb 5, 2016 at 1:48
• @Doᴡɴɢᴏᴀᴛ If I don't do that then the body's background gets applied to the canvas, ruining the effect.
– Neil
Feb 5, 2016 at 8:37
## Mathematica, 34 bytes
ArrayPlot@Array[Mod[+##,2]&,{8,8}]
The output is a vector image and is surrounded in a frame.
Instead of correctly positioning 32 rectangles, we can just generate a binary matrix and make ArrayPlot work for us:
• Nice! Thanks for posting it. Feb 4, 2016 at 17:45
• Looking good. Can you please explain me where do you define each square as 50px? Also, is there an online emulator where I can test it? Feb 4, 2016 at 17:47
• @Bruno The output is a vector graphic, so there is no such thing as pixel sizes (the image has no intrinsic size - it can be scaled to and displayed at any size). That's why I asked. Feb 4, 2016 at 17:49
• Wait, GenerateChessBoardWithColorsAsParameters[ColorBlack, ColorWhite, 8, 8, 50, 50] doesn't work? ;) Feb 4, 2016 at 23:28
• @CᴏɴᴏʀO'Bʀɪᴇɴ It does, but that's 73 bytes. Feb 5, 2016 at 8:16
# Octave, 20 18 bytes
Thanks to @Bruno for shaving off 2 bytes.
imshow(invhilb(8))
Result:
This answer uses a technique found here. It also relies on the automatic scaling of images in Octave depending on the size of the figure window.
• @AlexA. I'm also not entirely convinced that the squares must be exactly 50x50 pixels, as the very next rule says "Output format or size is not relevant...". People have asked for clarification in the comments, but the question has not been updated. Feb 5, 2016 at 2:08
• Edited the question. Tested your code and it's working, so currently you have the lowest byte count :) Feb 5, 2016 at 10:14
• Also removed the >0 and it still works so you can shave 2 bytes there Feb 5, 2016 at 10:16
• @Bruno What? That is wild. So it's apparently clamping the values of the matrix (which are all <<0 or >>1) to 0 and 1. Thanks for the tip, I'll update! :D Feb 5, 2016 at 15:37
• Congrats on 2k! Aug 16, 2016 at 22:33
# Mathematica, 81 72 55 bytes
Graphics[Rectangle/@Select[Range@8~Tuples~2,2∣Tr@#&]]
Image is of a previous version's evaluation, but still looks the same.
# Pure Bash (no external utilities), 133
I saw @Doorknob's comment as a bit of a challenge. It's a bit long, but here goes:
echo \# ImageMagick pixel enumeration:400,400,1,rgb
for((;x=p%400,y=p/400,c=1-(x/50^y/50)&1,p++<160000;));{
echo "$x,$y:($c,$c,$c)" } Output is in Imagemagick's .txt format. Note this is pure Bash. Neither Imagemagick nor any other external utilities are spawned to generate this output. However, the output may be redirected to a .txt file and viewed with the ImageMagick display utility: This image format is nice because not only is it pure text, it is little more than a list of all pixels (x, y and colour value), one per line. It is a fairly simple matter to derive all pixel values arithmetically in one big loop. # Previous answer, 167 echo "\"400 400 2 1\" \" c white\" \"b c black\"" printf -vf %50s a="$f${f// /b}" o=("\"$a$a$a$a\"" "\"${f// /b}$a$a$a$f\"")
for i in {0..399};{
echo "${o[i/50%2]}" } Output is in the X_PixMap text image file format, which may also be viewed with the ImageMagick display utility. Note I've taken as much out of the XPM format as I could such that display would still accept it. I was able to take out all the boilerplate with the exception of the " double quotes around each line. No idea what other - if any - utilities will accept this. • Oooh, interesting ;) Nov 25, 2020 at 13:35 # Octave, 48 bytes imwrite(kron(mod((t=0:7)+t',2),ones(50)),'.png') This works exactly the same as my Matlab answer, but there is no spiral in Octave. Instead we use a feature that Matlab does not have: We can use the assignment of t already as an expression, and later use t again in the same expression. (This is the rescaled version, I do not want to clutter the answers here=) • The top left corner should be white, not black. Feb 4, 2016 at 20:28 • The output should be a checkerboard, the orientation was not specified. Feb 4, 2016 at 20:57 • Sorry, flawr, the output should be a chessboard. A chessboard is always "Queen on her color, white on the right" (meaning the right hand of each player has a white corner square). Feb 4, 2016 at 21:38 • Then imagine one player sitting to the right, one to the left. Again: this was not specified by the challenge, that is just your interpretation. Feb 4, 2016 at 21:52 ## PowerShell + browser of your choice, 149 143 bytes The inability to use imports is really tough, as all of the GDI calls (i.e., the stuff PowerShell uses to draw) are buried behind imports in .NET ... "<table><tr>"+((1..8|%{$a=$_;-join(1..8|%{'<td style="width:50px;height:50px'+("",";background:#000")[($a+$_)%2]+'"></td>'})})-join'</tr><tr>') Edit - saved six bytes thanks to @NotThatCharles This uses two for-loops from 1..8 to generate a big-ol' HTML string, similar to the PHP example provided, and output it onto the pipeline. Each time through we calculate whether to append ;background:#000 for the black backgrounds by taking our current position on the board modulo 2. To use, redirect the output into the file of your choice (e.g., with something like > chessboard.htm) and then launch that in the browser of your choice. For the screenshot below, I used "c.htm" and Firefox. • This one was unespected but I quite like it somehow :) Feb 4, 2016 at 18:34 • white and black can be #fff and #000... but why bother specifying white? Feb 4, 2016 at 20:53 • try (";background:#000","")[($a+$_)%2] instead. Feb 4, 2016 at 22:27 • @NotthatCharles Durr, had my white and black flip-flopped, so it was only outputting white squares. Corrected for an additional 4 bytes saved. Feb 5, 2016 at 13:27 # Python 3, 365, 288, 270, 256 bytes Thanks @Razetime, @mehbark and @Amazeryogo for the suggestions from turtle import * B=range H=Screen() A=Turtle() def F(): for C in B(4):A.fd(30);A.lt(90) A.fd(30) for D in B(8): A.up();A.setpos(0,30*D);A.pd() for G in B(8): if(D+G)%2==0:E='black' else:E='white' A.fillcolor(E);A.begin_fill();F();A.end_fill() Try it online! Admittedly not good:) first contribution • forward can become fd, left to lt, down to pd. You can probably remove the definition of G since it has no parameters as well, including in main program. Since the background is white, you don't need the else statement. 
Dec 1, 2020 at 6:27 • oh, thanks :) will edit Dec 1, 2020 at 6:32 • 269 by refactoring the if statement Dec 1, 2020 at 10:09 • @Lyxal It gives an error i.imgur.com/KOaiOEl.png Dec 1, 2020 at 12:47 • @agent355 just change the last ' to ) Dec 1, 2020 at 15:04 # PHP + CSS + HTML, 136 bytes Taking the table aproach to a higher level: <table><?for(;++$i<9;)echo'<tr>',str_repeat(["<td b><td>","<td><td b>"][$i&1],4);?><style>td{width:50px;height:50px}[b]{background:#000} It generates the following code: <table><tr><td><td b><td><td b><td><td b><td><td b><tr><td b><td><td b><td><td b><td><td b><td><tr><td><td b><td><td b><td><td b><td><td b><tr><td b><td><td b><td><td b><td><td b><td><tr><td><td b><td><td b><td><td b><td><td b><tr><td b><td><td b><td><td b><td><td b><td><tr><td><td b><td><td b><td><td b><td><td b><tr><td b><td><td b><td><td b><td><td b><td><style>td{width:50px;height:50px}[b]{background:#000} It relies heavily on browsers' kindness and CSS. • Good solution. Tho I had to include php after <? and include$i=0 as the first for parameter to get it working properly, giving a final result of 144 bytes. Feb 5, 2016 at 10:08
• @Bruno If you refer to the warning it gives, warnings are disregarded here. However, there's a trillion ways of disabling them. One of them is to replace ++$i<9 with @++$i<9. Also, for it to work without <?php, one must have the directive short_open_tags=On, which is default on some environments. Read more on stackoverflow.com/a/2185331/2729937 Feb 5, 2016 at 10:26
# MATL, 11 (27) bytes
8:t!+Q2\TYG
This produces the following figure. It doesn't have an intrinsic size; it's automatically scaled depending on the size of the figure window. This seems to be allowed by the challenge.
### Explanation
8: % row vector [1,2,...8]
t! % duplicate and transpose into column vector
+ % 8x8 matrix with all pairwise additions
2\ % modulo 2. Gives 8x8 matrix of zeros and ones
TYG % draw image
If autoscaling is not allowed:
'imshow'8:t!+Q2\50t3$Y"0#X$
produces the following figure with 50x50-pixel squares
### Explanation
'imshow' % name of Matlab function
8:t!+Q2\ % same as above. Produces 8x8 matrix of zeros and ones
50t3$Y" % repeat each element 50 times in each dimension 0#X$ % call imshow function with above matrix as input
# Pyth, 28 26 bytes
J*4+*50]255*50]0.wm_mxkdJJ
Explanation
J - Autoassign J = V
*50]0 - 50*[0]
*50]255 - 50*[255]
+ - ^^+^
*4 - 4*^
.w - write_greyscale(V)
m J - [V for d in J]
_ - reversed(V)
m J - [V for k in J]
xkd - k^d
Python equivalent
J = 4*(50*[255]+50*[0])
write_greyscale([[k^d for k in J][::-1] for d in J])
Try it here (just the colour values)
Output:
• Nice job on the byte count but I need a valid output with visible squares :) Feb 4, 2016 at 18:19
• @Bruno Output added! I installed PIL just for you :O (I hadn't actually tested it before)
– Blue
Feb 4, 2016 at 18:27
• @muddyfish sorry for the trouble and thanks. The board must start with and end with a white square tho :) Feb 4, 2016 at 18:36
# FFmpeg, 78 82 100 bytes
Finally got around to cleaning the board.
ffplay -f lavfi color=s=400x400,geq='255*mod(trunc(X/50)+trunc(Y/50)+1,2):128'
Older:
ffmpeg -f lavfi -i "color=tan@0:256x256,format=ya8" -vf "scale=400:-1:alphablend=checkerboard" .jpg
Will exit with error, but after producing image below.
(board's collected some dust)
# Jelly, 26 bytes
400R%2ẋ€50FU;$ẋ4;;;1j⁶;”PU

Since Jelly has no support for images built in, we print a PPM image.

Try it online! (smaller board for speed, raw PPM)

### Results

### How it works

400R%2ẋ€50FU;$ẋ4;;;1j⁶;”PU  Main link. No arguments.
400 Set the left argument to 400.
R Yield [1, ..., 400].
%2 Compute the parity of each integer.
ẋ€50 Replace each parity by an array of 50 copies of itself.
F Flatten the resulting, nested list.
This creates the first rank of the board.
$ Combine the two atoms to the left:
U Reverse the array of parities.
; Concatenate the reversed array with the original.
This creates the first two ranks of the board.
ẋ4 Repeat the resulting list four times.
This creates all eight ranks of the board.
; Append 400, the link's left argument.
; Append 400, the link's left argument.
;1 Append 1.
j⁶ Join, separating by spaces.
;”P Append the character 'P'.
U Reverse the resulting list.

### Non-competing version (24 bytes)

The newest Jelly interpreter that predates this post didn't vectorize x properly. With the latest version, 2 additional bytes can be saved.

400R%2x50U;$ẋ4;;;1j⁶;”PU
The only difference is that x50 yields a flat list (with every original element repeated 50 times), so F is no longer necessary.
• It looks like you were writing a java answer and fell asleep slightly whilst typing a ;... ;) Feb 4, 2016 at 23:31
• @CᴏɴᴏʀO'Bʀɪᴇɴ Java? You must be on Java 10.0, Golfing Edition, cause that doesn't look like any Java I've seen.... Feb 5, 2016 at 22:55
# Matlab, 47 (24) bytes
imwrite(kron(mod(spiral(8),2),ones(50)),'.png')
This works exactly the same as my Octave answer, but I was able to use spiral which saved one byte. spiral(n) makes an nxn matrix and fills it spiraling with the first n^2 integers.
If vectorgraphics are allowed, we could do it in 24 bytes:
imshow(mod(spiral(8),2))
(This is the rescaled version, I do not want to clutter the answers here=)
# APL (Dyalog Extended), 51 30 bytes
'P1'
2⍴400
50/50⌿(⍳8)⌽8 8⍴'01'
Try it online!
Outputs a netpbm image. Save the result to a .pbm file and view here.
Since the output is too large for tio, here's the generated output.
Here's the output converted to png:
Try it online!
• 30 Nov 24, 2020 at 20:32
# CJam, 27 bytes
"P1"400__,2f%50e*_W%+4*~]S*
Try it online! (smaller board for speed, raw PPM)
### How it works
"P1" e# Push that string.
400__ e# Push three copies of 400.
, e# Turn the last one into [0 ... 399].
2f% e# Compute the parity of each integer.
50e* e# Repeat each parity 50 times.
e# This creates the first rank of the board.
_W% e# Create a reversed copy of the resulting array.
+ e# Concatenate the original with the reversed array.
e# This creates the first two ranks of the board.
4* e# Repeat the resulting array four times.
e# This creates all eight ranks of the board.
~ e# Dump all of its items (the pixels) on the stack.
] e# Wrap the entire stack in an array.
S* e# Join that array, separating them by spaces.
HTML with utf-8 - 66b
<div style="font:100 50px/48px serif">▚▚▚▚<br>▚▚▚▚<br>▚▚▚▚<br>▚▚▚▚
▚ is the UTF-8 character corresponding to the HTML entity &#9626;
Unicode Character 'QUADRANT UPPER LEFT AND LOWER RIGHT' (U+259A)
silly me, was looking for a 1 utf-8 char solution -would have been... 1b!
• Seems like fontsize is wrong. Feb 6, 2016 at 20:20
• You should use ▞ instead so that the top-left square is white like on a standard chessboard. Also, use <pre> instead of <div> so that you can use newlines instead of <br>.
– Neil
Jul 17, 2016 at 10:14
• Your bytecount appears to be wrong, it should be 98 bytes, as ▚ counts for 3 bytes using UTF-8 encoding. In the future, you can use this to check your UTF-8 Byte Count Oct 11, 2017 at 17:30
# Desmos, 136 bytes
ceil(sin(x))\left\{6<x<29\right\}>=ceil(sin(y))\left\{0<y<26\right\}
ceil(sin(x))\left\{3<x<27\right\}<=ceil(sin(y))\left\{0<y<24\right\}
This was the first thing I thought of when I saw this challenge, hopefully it's not stretching it too far.
• Nice answer! You don't need to worry so long as your answer fits the guidelines. (This question isn't very specific, so it's alright) Nov 30, 2020 at 17:18
• The code provided wasn't in the correct format for Desmos, per consensus here. The issue is with the inequalities, which don't work as nicely as they look like they should. I've gone ahead and fixed it for you, and using >= instead of ≥ saves a byte for each line as well. Dec 3, 2020 at 5:30
• 60-byter: sin(\pi x)\sin\pi y>0\left\{max(abs(x-4),abs(y-4))<4\right\} Dec 3, 2020 at 5:47
## PHP, 166 158 155 bytes
Works in PHP 7.0.2 (short-tags enabled) and Chrome 48.0.2564.97 m
<table><tr><? while(++$i<=8){while(++$j<=8){echo"<td style=background-color:".($i%2==0?($j%2==1?0:""):($j%2==0?0:"")).";padding:9></td>";}echo"<tr>";$j=0;}
• You can use the property bgcolor=0 to generate the black background. That should shave off a ton of bytes! And instead of $v%2==0, use $v&1, which should shave a few bytes. Feb 6, 2016 at 12:19
# iKe, 24 bytes
,(;cga;t=\:t:2!-20!!160)
The core of the technique is to generate a list of x coordinates, divmod them and then take an equality cross-product to generate an appropriate bitmap. Using smaller examples for illustrative purposes:
!8
0 1 2 3 4 5 6 7
-2!!8
0 0 1 1 2 2 3 3
2!-2!!8
0 0 1 1 0 0 1 1
t=\:t:2!-2!!8
(1 1 0 0 1 1 0 0
1 1 0 0 1 1 0 0
0 0 1 1 0 0 1 1
0 0 1 1 0 0 1 1
1 1 0 0 1 1 0 0
1 1 0 0 1 1 0 0
0 0 1 1 0 0 1 1
0 0 1 1 0 0 1 1)
try it here. Technically iKe works on a logical 160x160 pixel canvas, but in full-screen mode (the default when following a saved link) this is upscaled by 3x. I think this is still following the spirit of the question, as the program could assemble a much larger bitmap with the same character count; it just comes down to an arbitrary display limitation.
## Update:
iKe isn't primarily designed for golf, but livecoding still benefits from brevity and sane defaults. As a result of tinkering with this problem, I've decided to permit it to use a default palette if none is provided. This particular solution could now be expressed with:
,(;;t=\:t:2!-20!!160)
Saving (an ineligible) 3 bytes.
## PHP >=5.4, 175 159 149 116 bytes
<table><tr><? for(;@++$i<65;)echo'<td width=50 height=50 ',$i+@$m&1?:'bgcolor=0','>',$i%8<1?'<tr '.($m=@!$m).'>':'';
<table><tr><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><tr 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><tr ><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><tr 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><tr ><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><tr 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><tr ><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><tr 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><td width=50 height=50 bgcolor=0><td width=50 height=50 1><tr >
## Notes
• Shaved 16 bytes - Thanks @insertusernamehere
• Shaved 10 bytes - Thanks @msh210
• Shaved 30 bytes - Thanks @Ismael Miguel
• Probably this can be golfed even more, but here you go (152 bytes): <table><tr><?php for(;++$i<65;){echo'<td style="width:50px;height:50px;background:#'.(($i+$m)%2?'000':'').'"></td>';if($i%8<1){echo"</tr><tr>";$m=!$m;}} Feb 4, 2016 at 20:03
• While I didn't remove the initial assignments (it works; a personal quirk won't let me do it), thanks for this. Feb 4, 2016 at 21:20
• According to even the strict version of HTML 4, you can skip the end tag for TR. Feb 5, 2016 at 0:07
• Replace ++$i<65 with @++$i<65, since you are worried about the warnings. This means that you can reduce $m=$i=0 to just $m=0, saving you 2 bytes. Instead of echo 'a'.'b'.'c';, you can do echo 'a','b','c';. This means that your echo can be echo'<td style="width:50px;height:50px;background:#',($i+$m)%2?'':'000','">'; saving you 2 more bytes. Also, HTML attributes don't require quotes; remove them and save 2 bytes. Also, there's a much shorter bgcolor attribute, which saves more bytes! You can use a print() in the for to save even more bytes! Feb 6, 2016 at 11:53
• To save even more, I've replaced ($i+$m)%2 with $i+@$m&1, which allowed me to remove that $m=0. Then I was able to remove your if and replace it with a ternary operation. To save even more, I've removed your style and added the width and height properties. To get even more into the hacky side, I've figured out that Chrome 48.0.2564.103 m uses black if the background color is 0, using the bgcolor property. That allowed me to reduce it even more! More reductions is better! Feb 6, 2016 at 12:14
# GIMP, 539 bytes
gimp -i -b '(let* ((i (car (gimp-image-new 400 400 1))) (d (car (gimp-layer-new i 400 400 2 "b" 100 0)))) (gimp-image-insert-layer i d 0 -1) (define (t x y) (gimp-selection-translate i x y)) (define (x) (t 100 0)) (define (X) (t -100 0)) (define (y) (t 50 50)) (define (Y) (t -50 50)) (define (f) (gimp-edit-fill d 1)) (define (r) (f) (x) (f) (x) (f) (x) (f) (y)) (define (R) (f) (X) (f) (X) (f) (X) (f) (Y)) (gimp-image-select-rectangle i 2 0 0 50 50) (r) (R) (r) (R) (r) (R) (r) (R) (gimp-file-save 1 i d "c.png" "c.png") (gimp-quit 0))'
Ungolfed Scheme script-fu:
(let* ((i (car (gimp-image-new 400 400 GRAY)))
(d (car (gimp-layer-new i 400 400 GRAY-IMAGE "b" 100 NORMAL-MODE))))
(gimp-image-insert-layer i d 0 -1)
(define (t x y) (gimp-selection-translate i x y))
(define (x) (t 100 0))
(define (X) (t -100 0))
(define (y) (t 50 50))
(define (Y) (t -50 50))
(define (f) (gimp-edit-fill d BACKGROUND-FILL))
(define (r) (f) (x) (f) (x) (f) (x) (f) (y))
(define (R) (f) (X) (f) (X) (f) (X) (f) (Y))
(gimp-image-select-rectangle i CHANNEL-OP-REPLACE 0 0 50 50)
(r) (R) (r) (R) (r) (R) (r) (R)
(gimp-file-save RUN-NONINTERACTIVE i d "c.png" "c.png")
(gimp-quit 0))
In batch mode, create a blank image, create a 50x50 rectangular selection, fill it, and then repeatedly move it around the image, filling in squares. Then save to c.png and exit.
Output:
# R, ~~40~~ 38 bytes
Edit: -2 bytes thanks to Robin Ryder
image(matrix(1:0,9,8)[-1,],c=1:0,ax=F)
Try it at rdrr.io
This nicely exploits R's argument recycling: matrix(1:0,9,8) builds a 9×8 matrix by recycling the values 1:0 to fill it. Because 9 is odd, the alternating pattern shifts by one between consecutive columns; the negative indexing [-1,] then chops off the first row, restoring an 8×8 checkerboard.
The image() function converts a matrix directly into a coloured image. Here c=1:0 sets the colours (black & white) and ax=F suppresses the axes, both via abbreviated argument names.
• How does this fit the 50 pixel per square rule? Sep 6, 2020 at 15:19
• 'except for vectorial images'. If the image isn't saved, it's rendered (on most setups) in a resizeable graphics window (I took a screenshot to upload the image, so that would have 'fixed' it at a random size). Sep 6, 2020 at 15:23
• image(matrix(1:0,9,8)[-1,],c=1:0,ax=F) should work for -2 bytes. Nov 25, 2020 at 22:32
• @RobinRyder - good catch! Thanks! Nov 26, 2020 at 10:12
# AppleSoft Basic, 146 bytes
The best (standard) resolution available on the Apple II is HGR2 mode, at 280x192 pixels, so the squares are 24x24. Verified at https://www.calormen.com/jsbasic/
0HGR2:HCOLOR=3:A=0:B=167:GOSUB8:A=24:B=191:GOSUB8:END
8FORI=A TOB STEP48:FORJ=A TOB STEP48:FORK=0TO23:HPLOTI+K,J TOI+K,J+23:NEXT:NEXT:NEXT:RETURN
image credit: Apple II Basic Bot on twitter
If we totally ignore resolution & use GR mode, 107 bytes:
1GR:COLOR=15:A=0:B=7:GOSUB8:A=1:B=8:GOSUB8:END
8FORI=A TOB STEP2:FORJ=A TOB STEP2:PLOTI,J:NEXT:NEXT:RETURN
# JavaScript, 150
This can definitely be golfed. It creates HTML.
for(i=0;i<8;)console.log(`<b style=margin-${['lef','righ'][i++%2]}t:50;width:50;height:50;display:inline-block;background:#000></b>`.repeat(4)+'<br>')
• Huh, I never knew about template strings in JavaScript. Cool. Feb 5, 2016 at 1:04
# Perl 5 - 80
Generates a .PBM file:
print 'P1'.' 400'x2 .$".(((0 x50 .1 x50)x4 .$")x50 .((1 x50 .0 x50)x4 .$")x50)x4
# Ruby with Shoes, 97 characters
Shoes.app(width:400){64.times{|i|stack(width:50,height:50){background [white,black][(i/8+i)%2]}}}
Sample output:
• Should start and end with white. Otherwise good job :) Feb 5, 2016 at 14:35
• Oops. Thanks @Bruno. Fixed. Feb 5, 2016 at 16:48
• Great, upvoted :) Feb 5, 2016 at 16:49
# Lua + LÖVE, ~~138~~ ~~113~~ ~~112~~ 106 characters
function love.draw()for i=0,31 do
love.graphics.rectangle("fill",i%8*50,(i-i%8)/8*100+i%2*50,50,50)end
end
Sample output:
• Grr! Lua 5.3 has // integer division operator, but apparently there is still no LÖVE built with a LuaJIT featuring it. ☹ Feb 5, 2016 at 16:53
## PowerShell + GDI, 346 bytes
Add-Type -AssemblyName System.Windows.Forms
Add-Type -AssemblyName System.Drawing
$f=New-Object Windows.Forms.Form
$f.width=$f.height=450
$g=$f.CreateGraphics()
$f.add_paint({0..7|%{$y=$_;0..7|%{$g.FillRectangle((New-Object Drawing.SolidBrush ("white","black")[($_+$y)%2]),(new-object Drawing.Rectangle ($_*50),($y*50),50,50))}}})
$f.showDialog()
(newlines count same as semicolon, so newlines for readability)
As opposed to my other answer, this one uses the .NET assemblies to call GDI+ function calls. Interestingly, it's about twice the length.
The first two lines load the System.Windows.Forms and System.Drawing assemblies. The first is used for the literal window and the canvas thereon, the second is used for the drawing object (in this code, a brush) that create the graphics on the canvas.
We then create our form $f with the next line, and set its width and height to be 450. Note that this isn't 50*8, since these numbers correspond to the border-to-border edge of the forms window, including titlebar, the close button, etc. The next line creates our canvas $g by calling the empty constructor. This defaults to the upper-left of the non-system area of the form being equal to 0,0 and increasing to the right and downward, perfect for our needs.
The next line is the actual call that draws the graphics, with $f.add_paint({...}). We construct the graphics calls by double-for looping from 0..7 and carrying a helper variable $y through each outer loop. Each inner loop, we tell our canvas to .FillRectangle(...,...) to draw our squares. The first parameter constructs a new SolidBrush with a color based on where we're at on the board. Other options here could be a hatch, a gradient, etc. The second parameter is a new Rectangle object starting at the specified x $_*50 and $y*50 coordinates and extending for 50 in each direction. Remember that 0,0 is the top-left.
The final line just displays the output with .showDialog().
Note that since we're creating a form object, and PowerShell is all about the pipeline, closing the pop-up form will pass along a System.Enum.DialogResult object of Cancel, since that's technically what the user did. Since we're not capturing or otherwise doing anything with that result, the word Cancel will be displayed to STDOUT when the program concludes, as it was left on the pipeline.
|
# Odds of being dealt 4 of a kind in 5 card draw
Odds are usually quoted in the form “1 / N”: for example, “four of a kind pair into a quad, 1 / 360” means there is a one in 360 chance of hitting that four of a kind. Similar figures are quoted for a royal flush dealt outright, a royal flush as the ending hand, drawing one card to make a flush, and drawing two.
A straight is five cards in a sequence (e.g., 4, 5, 6, 7, 8), with aces allowed to be either 1 or 13 (low or high) and with the cards allowed to be of mixed suits. The odds against making a royal flush are the same as for a straight flush in similar conditions.
To calculate the probability of being dealt four of a kind in a five-card deal: there are 13 choices for the rank of the quad and 48 choices for the remaining card, so 13 × 48 = 624 of the C(52,5) = 2,598,960 possible hands qualify, about 1 in 4,165.
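That count is easy to verify; a short Python check (the counting argument itself is standard):

```python
from math import comb

hands = comb(52, 5)           # 2,598,960 possible five-card hands
quads = 13 * comb(4, 4) * 48  # rank of the quad, all four suits, one kicker

print(f"{quads} / {hands} = {quads / hands:.6f} (about 1 in {hands / quads:.0f})")
# 624 / 2598960 = 0.000240 (about 1 in 4165)
```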
|
ColorTools - Maple Programming Help
ColorTools
RGBGrid
plot grids of RGB colors
Calling Sequence
RGBGrid(red=r)
RGBGrid(green=g)
RGBGrid(blue=b)
Parameters
r, g, b - value between 0 and 1.
Description
• The RGBGrid command displays a grid of the colors with the given RGB color channel fixed at the specified value and the other two varying from 0 to 1.
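The same kind of grid is easy to reproduce outside Maple. The following Python/matplotlib sketch is my own illustrative analogue (not part of ColorTools), fixing the red channel at 0.5 and varying green and blue from 0 to 1:

```python
import numpy as np
import matplotlib.pyplot as plt

n = 16  # grid resolution (an arbitrary choice)
g, b = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
rgb = np.dstack([np.full_like(g, 0.5), g, b])  # red channel fixed at 0.5

plt.imshow(rgb, origin="lower", extent=(0, 1, 0, 1))
plt.xlabel("green")
plt.ylabel("blue")
plt.show()
```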
Examples
> with(ColorTools):
> RGBGrid(red=0.5)
> RGBGrid(green=0.75)
> RGBGrid(blue=0.25)
> RGBGrid()
> RGBGrid(green=0.25, blue=0.25)
Compatibility
• The ColorTools[RGBGrid] command was introduced in Maple 16.
|
Yulong Lu
Verified email at math.umass.edu - Homepage
| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| A universal approximation theorem of deep neural networks for expressing probability distributions | Y Lu, J Lu | Advances in neural information processing systems 33, 3094-3105 | 73 | 2020 |
| A Bayesian level set method for geometric inverse problems | MA Iglesias, Y Lu, AM Stuart | Interfaces and free boundaries 18 (2), 181-217 | 70 | 2016 |
| Scaling limit of the Stein variational gradient descent: The mean field regime | J Lu, Y Lu, J Nolen | SIAM Journal on Mathematical Analysis 51 (2), 648-671 | 67* | 2019 |
| A mean field analysis of deep resnet and beyond: Towards provably optimization via overparameterization from depth | Y Lu, C Ma, Y Lu, J Lu, L Ying | International Conference on Machine Learning, 6426-6436 | 63 | 2020 |
| A priori generalization analysis of the deep Ritz method for solving high dimensional elliptic partial differential equations | Y Lu, J Lu, M Wang | Conference on learning theory, 3196-3241 | 48* | 2021 |
| Gaussian Approximations for Probability Measures on $\mathbb{R}^d$ | Y Lu, A Stuart, H Weber | SIAM/ASA Journal on Uncertainty Quantification 5 (1), 1136-1165 | 31 | 2017 |
| The factorization method for inverse elastic scattering from periodic structures | G Hu, Y Lu, B Zhang | Inverse Problems 29 (11), 115005 | 27 | 2013 |
| Accelerating langevin sampling with birth-death | Y Lu, J Lu, J Nolen | arXiv preprint arXiv:1905.09863 | 23 | 2019 |
| Quantitative propagation of chaos in a bimolecular chemical reaction-diffusion model | TS Lim, Y Lu, JH Nolen | SIAM Journal on Mathematical Analysis 52 (2), 2098-2133 | 22 | 2020 |
| Geometric ergodicity of Langevin dynamics with Coulomb interactions | Y Lu, JC Mattingly | Nonlinearity 33 (2), 675 | 19 | 2019 |
| Uniform-in-time weak error analysis for stochastic gradient descent algorithms via diffusion approximation | Y Feng, T Gao, L Li, JG Liu, Y Lu | arXiv preprint arXiv:1902.00635 | 19 | 2019 |
| Gaussian approximations for transition paths in Brownian dynamics | Y Lu, AM Stuart, H Weber | SIAM Journal on Mathematical Analysis 49 (4), 3005-3047 | 19 | 2017 |
| On the bernstein-von mises theorem for high dimensional nonlinear bayesian inverse problems | Y Lu | arXiv preprint arXiv:1706.00289 | 15 | 2017 |
| Exponential decay of Rényi divergence under Fokker–Planck equations | Y Cao, J Lu, Y Lu | Journal of Statistical Physics 176, 1172-1184 | 14 | 2019 |
| On the representation of solutions to elliptic pdes in barron spaces | Z Chen, J Lu, Y Lu | Advances in neural information processing systems 34, 6454-6465 | 13 | 2021 |
| A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems | J Lu, Y Lu | Communications of the American Mathematical Society 2 (01), 1-21 | 9 | 2022 |
| Solving multiscale steady radiative transfer equation using neural networks with uniform stability | Y Lu, L Wang, W Xu | Research in the Mathematical Sciences 9 (3), 45 | 7 | 2022 |
| Continuum limit and preconditioned Langevin sampling of the path integral molecular dynamics | J Lu, Y Lu, Z Zhou | Journal of Computational Physics 423, 109788 | 7 | 2020 |
| On the rate of convergence of empirical measure in Wasserstein distance for unbounded density function | A Liu, JG Liu, Y Lu | arXiv preprint arXiv:1807.08365 | 5 | 2018 |
| An operator splitting scheme for the fractional kinetic Fokker-Planck equation | MH Duong, Y Lu | arXiv preprint arXiv:1806.06127 | 5 | 2018 |
|
# Greedy choice and matroids (greedoids)
As I was going through the material about the greedy approach, I came to know that knowledge of matroids (greedoids) would help me approach such problems properly. After reading about matroids, I have a rough understanding of what they are. But how do you use the concept of a matroid for solving a given optimisation problem?
Take, for example, the activity selection problem. What are the steps to use matroid theory for solving the problem?
• Take a look at the Cormen et al. book. There's a chapter on greedy algorithms and matroids. – Juho Jul 20 '12 at 10:02
• I am not sure how to read your question in this regard, but note that matroids and greedoids are not the same. – Raphael Jul 21 '12 at 5:24
The connection is that if you can represent the structure underlying your optimisation problem as a matroid, you can use the canonical greedy algorithm to optimise the sum of any positive weight function. If your optimisation goal fits this paradigm, you can solve your problem with the greedy approach.
### Example
Consider the minimum spanning tree problem with positive edge weights¹. We will show that there is a matroid corresponding to that problem, implying that it can be solved greedily, that is by the canonical greedy algorithm on said matroid.
Let $G = (V,E,c)$ be an undirected graph with $c : E \to \mathbb{R}_+$ the edge-cost function. Then, $(E,I)$ with
$\qquad \displaystyle I = \{F \subseteq E \mid (V|_F, F) \text{ is a forest}\}$²
is a matroid. Thus, we can find the element of $I$ maximising the sum of edge weights $c'(e) = (\max_{e \in E}c(e)) - c(e)$. This happens to be a minimum spanning tree. Note that the canonical greedy algorithm is called Kruskal's algorithm in this context for historical reasons.
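To make the canonical greedy algorithm concrete, here is a Python sketch (my own illustration; the function name is made up) for this graphic matroid: consider elements in order of decreasing transformed weight $c'$, and keep an edge whenever adding it preserves independence, i.e. creates no cycle. With a union-find structure as the independence test, this is exactly Kruskal's algorithm:

```python
def greedy_matroid_mst(n, edges):
    """edges: list of (cost, u, v) with vertices 0..n-1; returns MST edges."""
    cmax = max(c for c, _, _ in edges)
    parent = list(range(n))

    def find(x):  # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    chosen = []
    # maximising c'(e) = cmax - c(e) means taking cheapest edges first
    for c, u, v in sorted(edges, key=lambda e: cmax - e[0], reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:  # adding (u, v) keeps the edge set a forest
            parent[ru] = rv
            chosen.append((u, v, c))
    return chosen

print(greedy_matroid_mst(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3)]))
# [(0, 1, 1), (2, 3, 2), (1, 2, 3)]
```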
### Proofs
1. To show: $(E,I)$ is a matroid. We have to verify three properties:
• $\emptyset \in I$ -- the empty graph is a forest.
• If $F \in I$, every subset of $F$ is in $I$ -- given an arbitrary forest, removing edges can not introduce cycles. Therefore, every subgraph of a forest is a forest.
• For every $F_1,F_2 \in I$, $|F_1| > |F_2|$ implies that there is $e \in F_1 \setminus F_2$ so that $F_2 \cup \{e\} \in I$ -- consider an arbitrary forest $F_1$ and a smaller one $F_2$. Assume there is no such $e$. That means that all edges in $F_1$ lie in cuts induced by edges in $F_2$³. As there are only $|F_2|$ such cuts, at least one pair of edges in $F_1$ shares a cut; this contradicts that $F_1$ is a forest.
2. To show: any element $F^*$ with maximum weight in $I$ is a minimum spanning tree of $G$. First of all, it is clear that $F^*$ has maximum weight according to $c'$; since all maximal elements of $I$ have the same number of edges, the definition of $c'$ implies that $F^*$ also has minimum weight according to $c$. Now all we have to show is that it is a spanning tree: if it were not, it would not be maximal in the sense that we could still add edges (with positive weight), contradicting maximum weight.
¹ We can deal with negative edge weights by adding the absolute value of the minimum weight plus one to all weights; since all spanning trees have the same number of edges, this shift does not change which trees are minimal.
² A forest is a disjoint union of trees.
³ A graph contains a cycle if and only if there is a cut with more than one edge.
|
# Code reference
## Module level aliases
For user convenience, the following objects are available at the module level.
class nanite.Indentation
alias of nanite.indent.Indentation
class nanite.IndentationGroup
alias of nanite.group.IndentationGroup
class nanite.IndentationRater
alias of nanite.rate.IndentationRater
class nanite.QMap
alias of nanite.qmap.QMap
## Force-indentation data
apply_preprocessing(preprocessing=None, options=None, ret_details=False)[source]
Perform curve preprocessing steps
Parameters
• preprocessing (list) – A list of preprocessing method identifiers that are stored in the nanite.preproc.PREPROCESSORS list. If set to None, self.preprocessing will be used.
• options (dict of dict) – Dictionary of keyword arguments for each preprocessing step (if applicable)
• ret_details – Return preprocessing details dictionary
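A minimal usage sketch (the file name is a placeholder, and the identifier strings are taken from the preprocessors documented further below):

```python
import nanite

grp = nanite.load_group("data/curve.jpk-force")  # hypothetical input file
idnt = grp[0]
details = idnt.apply_preprocessing(
    ["compute_tip_position", "correct_force_offset", "correct_tip_offset"],
    options={"correct_tip_offset": {"method": "deviation_from_baseline"}},
    ret_details=True,
)
```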
compute_emodulus_mindelta(callback=None)[source]
Elastic modulus in dependency of maximum indentation
The fitting interval is varied such that the maximum indentation depth ranges from the lowest tip position to the estimated contact point. For each interval, the current model is fitted and the elastic modulus is extracted.
Parameters
callback (callable) – A method that is called with the emoduli and indentations as the computation proceeds every five steps.
Returns
emoduli, indentations – The fitted elastic moduli at the corresponding maximal indentation depths.
Return type
1d ndarrays
Notes
The information about emodulus and mindelta is also stored in self.fit_properties with the keys “optimal_fit_E_array” and “optimal_fit_delta_array”, if self.fit_model is called with the argument search_optimal_fit set to True.
estimate_contact_point_index(method='deviation_from_baseline')[source]
Estimate the contact point index
estimate_optimal_mindelta()[source]
Estimate the optimal indentation depth
This is a convenience function that wraps around compute_emodulus_mindelta and IndentationFitter.compute_opt_mindelta.
fit_model(**kwargs)[source]
Fit the approach-retract data to a model function
Parameters
• model_key (str) – A key referring to a model in nanite.model.models_available
• params_initial (instance of lmfit.Parameters or dict) – Parameters for fitting. If not given, default parameters are used.
• range_x (tuple of 2) – The range for fitting, see range_type below.
• range_type (str) –
One of:
• absolute:
Set the absolute fitting range in values given by the x_axis.
• relative cp:
In some cases it is desired to be able to fit a model only up until a certain indentation depth (tip position) measured from the contact point. Since the contact point is a fit parameter as well, this requires a two-pass fitting.
• preprocessing (list of str) – Preprocessing step identifiers
• preprocessing_options (dict of dicts) – Preprocessing keyword arguments of steps (if applicable)
• segment (int) – Segment index (e.g. 0 for approach)
• weight_cp (float) – Weight the contact point region which shows artifacts that are difficult to model with e.g. Hertz.
• gcf_k (float) – Geometrical correction factor $$k$$ for non-single-contact data. The measured indentation is multiplied by this factor to correct for experimental geometries during fitting, e.g. gcf_k=0.5 for parallel-plate compression.
• optimal_fit_edelta (bool) – Search for the optimal fit by varying the maximal indentation depth and determining a plateau in the resulting Young’s modulus (fitting parameter “E”).
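For illustration, a hedged sketch of a typical call (all parameter values are made up; hertz_para is the paraboloidal model key documented below):

```python
# idnt: a preprocessed nanite.Indentation instance
idnt.fit_model(
    model_key="hertz_para",   # paraboloidal Hertz model
    range_type="absolute",
    range_x=(0, 2e-6),        # fit only the first 2 µm of the x-axis
    segment=0,                # approach segment
    weight_cp=5e-7,           # weight the contact-point region
)
print(idnt.fit_properties)    # fitted results end up in here
```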
get_ancillary_parameters(model_key=None)[source]
Compute ancillary parameters for the current model
get_initial_fit_parameters(model_key=None, common_ancillaries=True, model_ancillaries=True)[source]
Return the initial fit parameters
If there are no initial fit parameters set in self.fit_properties, then they are computed.
Parameters
• model_key (str) – Optionally set a model key. This will override the “model_key” key in self.fit_properties.
• common_ancillaries (bool) – Guess global ancillaries such as the contact point.
• model_ancillaries (bool) – Guess model-related ancillaries
Notes
global_ancillaries and model_ancillaries only have an effect if self.fit_properties[“params_initial”] is set.
get_rating_parameters()[source]
Return current rating parameters
rate_quality(regressor='Extra Trees', training_set='zef18', names=None, lda=None)[source]
Compute the quality of the obtained curve
Uses heuristic approaches to rate a curve.
Parameters
• regressor (str) – The regressor name used for rating.
• training_set (str) – A label for a training set shipped with nanite or a path to a training set.
• names (list of str) – Only use these features for rating
• lda (bool) – Perform linear discriminant analysis
Returns
rating – A value between 0 and 10 where 0 is the lowest rating. If no fit has been performed, a rating of -1 is returned.
Return type
float
Notes
The rating is cached based on the fitting hash (see IndentationFitter._hash).
property data
property fit_properties
Fitting results, see Indentation.fit_model()
preprocessing
Default preprocessing steps, see Indentation.apply_preprocessing().
preprocessing_options
Preprocessing options
## Groups
class nanite.group.IndentationGroup(path=None, meta_override=None, callback=None)[source]
Group of Indentation
Parameters
• path (str or pathlib.Path or None) – The path to the data file. The data format is determined and the file is loaded using index.
• meta_override (dict) – if specified, contains key-value pairs of metadata that should be used when loading the files (see afmformats.meta.META_FIELDS)
• callback (callable or None) – A method that accepts a float between 0 and 1 to externally track the process of loading the data.
append(afmdata)[source]
Append a new instance of AFMData
This subclassed method makes sure that “spring constant” is set if “tip position” has to be computed in the future.
Parameters
afmdata (afmformats.afm_data.AFMData) – AFM data
nanite.group.load_group(path, callback=None, meta_override=None)[source]
Load a file into an IndentationGroup
Parameters
• path (path-like) – Path to experimental data
• callback (callable) – function for tracking progress; must accept a float in [0, 1] as an argument.
• meta_override (dict) – if specified, contains key-value pairs of metadata that should be used when loading the files (see afmformats.meta.META_FIELDS)
Returns
group – Indentation group with force-distance data
Return type
nanite.IndentationGroup
Return list of data paths with force-distance data
DEPRECATED
Return a list with paths and their internal enumeration
Parameters
• path (str or pathlib.Path or list of str or list of pathlib.Path) – path to data files or directory containing data files; if directories are given, they are searched recursively
• skip_errors (bool) – skip paths that raise errors
Returns
path_enum – each entry in the list is a list of [pathlib.Path, int], enumerating all curves in each file
Return type
list of lists
Return imaging modality kwargs for afmformats.load_data
Returns
kwargs – keyword arguments for afmformats.load_data()
Return type
dict
Load data and return list of afmformats.AFMForceDistance
This is essentially a wrapper around afmformats.formats.find_data() and afmformats.formats.load_data() that returns force-distance datasets.
Parameters
• path (str or pathlib.Path or list of str or list of pathlib.Path) – path to data files or directory containing data files; if directories are given, they are searched recursively
• callback (callable) – function for progress tracking; must accept a float in [0, 1] as an argument.
• meta_override (dict) – if specified, contains key-value pairs of metadata that are used when loading the files (see afmformats.meta.META_FIELDS)
The default imaging modality when loading AFM data. Set this to None to also be able to load e.g. creep-compliance data. See issue https://github.com/AFM-analysis/nanite/issues/11 for more information. Note that especially the export of rating containers may not work with any imaging modality other than force-distance.
## Preprocessing
exception nanite.preproc.CannotSplitWarning[source]
class nanite.preproc.IndentationPreprocessor[source]
apply(**kwargs)
autosort(**kwargs)
available(**kwargs)
check_order(**kwargs)
get_func(**kwargs)
get_name(**kwargs)
get_steps_required(**kwargs)
nanite.preproc.apply(apret, identifiers=None, options=None, ret_details=False, preproc_names=None)[source]
Perform force-distance preprocessing steps
Parameters
• apret (nanite.Indentation) – The afm data to preprocess
• identifiers (list) – A list of preprocessing identifiers that will be applied (in the order given).
• options (dict of dict) – Preprocessing options for each identifier
• ret_details – Return preprocessing details dictionary
• preproc_names (list) – Deprecated - use identifiers instead
nanite.preproc.autosort(identifiers)[source]
Automatically sort preprocessing identifiers
This takes into account steps_required and steps_optional.
nanite.preproc.available()[source]
Return list of available preprocessor identifiers
nanite.preproc.check_order(identifiers)[source]
Check preprocessing steps for correct order
nanite.preproc.get_func(identifier)[source]
Return preprocessor function for identifier
nanite.preproc.get_name(identifier)[source]
Return preprocessor name for identifier
nanite.preproc.get_steps_required(identifier)[source]
Return requirement identifiers for identifier
nanite.preproc.preproc_compute_tip_position(apret)[source]
Perform tip-sample separation
Populate the “tip position” column by adding the force normalized by the spring constant to the cantilever height (“height (measured)”).
This computation correctly reproduces the column “Vertical Tip Position” as it is exported by the JPK analysis software with the checked option “Use Unsmoothed Height”.
nanite.preproc.preproc_correct_force_offset(apret)[source]
Correct the force offset with an average baseline value
nanite.preproc.preproc_correct_split_approach_retract(apret)[source]
Split the approach and retract curves (farthest point method)
Approach and retract curves are defined by the microscope. When the direction of piezo movement is flipped, the force at the sample tip is still increasing. This can be either due to a time lag in the AFM system or due to a residual force acting on the sample due to the bent cantilever.
To repair this time lag, we append parts of the retract curve to the approach curve, such that the curves are split at the minimum height.
nanite.preproc.preproc_correct_tip_offset(apret, method='deviation_from_baseline', ret_details=False)[source]
Estimate the point of contact
An estimate of the contact point is subtracted from the tip position.
nanite.preproc.preproc_smooth_height(apret)[source]
Make height data monotonic
For the columns “height (measured)”, “height (piezo)”, and “tip position”, this method ensures that the approach and retract segments are monotonic.
nanite.preproc.preprocessing_step(identifier, name, steps_required=None, steps_optional=None, options=None)[source]
Decorator for Indentation preprocessors
The name and identifier are stored as a property of the wrapped function.
Parameters
• identifier (str) – identifier of the preprocessor (e.g. “correct_tip_offset”)
• name (str) – human-readable name of the preprocessor (e.g. “Estimate contact point”)
• steps_required (list of str) – list of preprocessing steps that must be added before this step
• steps_optional (list of str) – unlike steps_required, these steps do not have to be set, but if they are set, they should come before this step
• options (list of dict) – if the preprocessor accepts optional keyword arguments, this list yields valid values or dtypes
nanite.preproc.PREPROCESSORS = [<function preproc_compute_tip_position>, <function preproc_correct_force_offset>, <function preproc_correct_tip_offset>, <function preproc_correct_split_approach_retract>, <function preproc_smooth_height>]
Available preprocessors
## Contact point estimation
Methods for estimating the point of contact (POC)
nanite.poc.compute_poc(force, method='deviation_from_baseline', ret_details=False)[source]
Compute the contact point from force data
Parameters
• force (1d ndarray) – Force data
• method (str) – Name of the method for computing the POC (see POC_METHODS)
• ret_details (bool) – Whether or not to return a dictionary with details alongside the POC estimate.
Notes
If the POC method returns np.nan, then the center of the force data is returned (to allow fitting algorithms to proceed).
nanite.poc.compute_preproc_clip_approach(force)[source]
Clip the approach part (discard the retract part)
This POC preprocessing method may be applied before applying the POC estimation method.
nanite.poc.poc(identifier, name, preprocessing)[source]
Decorator for point of contact (POC) methods
The name and identifier are stored as a property of the wrapped function.
Parameters
• identifier (str) – identifier of the POC method (e.g. “baseline_deviation”)
• name (str) – human-readable name of the POC method (e.g. “Deviation from baseline”)
• preprocessing (list of str) – list of preprocessing methods that should be applied; may contain [“clip_approach”].
nanite.poc.poc_deviation_from_baseline(force, ret_details=False)[source]
Deviation from baseline
1. Obtain the baseline (initial 10% of the gradient curve)
2. Compute average and maximum deviation of the baseline
3. The CP is the index of the curve where it exceeds twice the maximum deviation
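A NumPy sketch of those three steps (my own paraphrase of the description, not the library source; the description is ambiguous about whether the baseline is taken from the force or its gradient, so the sketch uses the force directly):

```python
import numpy as np

def poc_deviation_from_baseline_sketch(force):
    n = max(int(0.1 * len(force)), 1)   # initial 10% of the curve as baseline
    base = force[:n]
    avg = base.mean()
    dev = np.abs(base - avg).max()      # maximum deviation of the baseline
    # contact point: first index where the force exceeds twice that deviation
    above = np.where(force - avg > 2 * dev)[0]
    return above[0] if above.size else np.nan
```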
nanite.poc.poc_fit_constant_line(force, ret_details=False)[source]
Piecewise fit with constant and line
Fit a piecewise function (constant+linear) to the baseline and indentation part:
$F = \text{max}(d, m\delta + d)$
The point of contact is the intersection of a horizontal line at $$d$$ (baseline) and a linear function with slope $$m$$ for the indentation part.
The point of contact is defined as $$\delta=0$$ (It’s another fitting parameter).
nanite.poc.poc_fit_constant_polynomial(force, ret_details=False)[source]
Piecewise fit with constant and polynomial
Fit a piecewise function (constant + polynomial) to the baseline and indentation part.
$F = \frac{\delta^3}{a\delta^2 + b\delta + c} + d$
This function is defined for all $$\delta>0$$. For all $$\delta<0$$ the model evaluates to $$d$$ (baseline).
I’m not sure where this has been described initially, but it is used e.g. in [RZSK19].
For small indentations, this function exhibits a cubic behavior:
$y \approx \delta^3/c + d$
And for large indentations, this function is linear:
$y \approx \delta/a$
The point of contact is defined as $$\delta=0$$ (It’s another fitting parameter).
nanite.poc.poc_fit_line_polynomial(force, ret_details=False)[source]
Piecewise fit with line and polynomial
Fit a piecewise function (line + polynomial) to the baseline and indentation part.
The linear baseline ($$\delta<0$$) is modeled with:
$F = m \delta + d$
The indentation part ($$\delta>0$$) is modeled with:
$F = \frac{\delta^3}{a\delta^2 + b\delta + c} + m \delta + d$
For small indentations, this function exhibits a linear and only slightly cubic behavior:
$y \approx \delta^3/c + m \delta + d$
And for large indentations, this function is linear:
$y \approx \left( \frac{1}{a} + m \right) \delta$
The point of contact is defined as $$\delta=0$$ (It’s another fitting parameter).
See also: poc_fit_constant_polynomial (the polynomial-only version).
nanite.poc.poc_frechet_direct_path(force, ret_details=False)[source]
Fréchet distance to direct path
The indentation part is transformed to normalized coordinates (force and corresponding x in range [0, 1]). The point with the largest distance to the line from (0, 0) to (1, 1) is the contact point.
This method is robust with regard to tilted baselines and is a good initial guess for fitting-based POC estimation approaches.
Note that the length of the baseline influences the returned contact point. For shorter baselines, the contact point will be closer to the point of maximum indentation.
nanite.poc.poc_gradient_zero_crossing(force, ret_details=False)[source]
Gradient zero-crossing of the indentation part
1. Apply a moving average filter to the curve
2. Compute the gradient
3. Cut off the gradient at its maximum with a 10 point reserve
4. Apply a moving average filter to the gradient
5. The POC is the index of the averaged gradient curve where the values are below 1% of the gradient maximum, measured from the indentation maximum (not from baseline).
nanite.poc.POC_METHODS = [<function poc_deviation_from_baseline>, <function poc_fit_constant_line>, <function poc_fit_constant_polynomial>, <function poc_fit_line_polynomial>, <function poc_frechet_direct_path>, <function poc_gradient_zero_crossing>]
List of all methods available for contact point estimation
## Modeling
### Methods and constants
nanite.model.compute_anc_parms(idnt, model_key)[source]
Compute ancillary parameters for a force-distance dataset
Ancillary parameters include parameters that:
• are unrelated to fitting: They may just be important parameters to the user.
• require the entire dataset: They cannot be extracted during fitting, because they require more than just the approach xor retract curve to compute (e.g. hysteresis, jump of retract curve at maximum indentation). They may, additionally, depend on initial fit parameters set by the user.
• require a fit: They are dependent on fitting parameters but are not required during fitting.
Notes
If an ancillary parameter name matches that of a fitting parameter, then it is assumed that it can be used for fitting. Please see nanite.indent.Indentation.get_initial_fit_parameters() and nanite.fit.guess_initial_parameters().
Ancillary parameters are set to np.nan if they cannot be computed.
Parameters
• idnt (nanite.indent.Indentation) – The force-distance data for which to compute the ancillary parameters
• model_key (str) – Name of the model
Returns
ancillaries – key-value dictionary of ancillary parameters
Return type
collections.OrderedDict
nanite.model.get_anc_parm_keys(model_key)[source]
Return the key names of a model’s ancillary parameters
nanite.model.get_anc_parms(idnt, model_key)[source]
nanite.model.get_init_parms(model_key)[source]
Get initial fit parameters for a model
nanite.model.get_model_by_name(name)[source]
Convenience function to obtain a model by name instead of by key
nanite.model.get_parm_name(model_key, parm_key)[source]
Return parameter label
Parameters
• model_key (str) – The model key (e.g. “hertz_cone”)
• parm_key (str) – The parameter key (e.g. “E”)
Returns
parm_name – The parameter label (e.g. “Young’s Modulus”)
Return type
str
nanite.model.get_parm_unit(model_key, parm_key)[source]
Return parameter unit
Parameters
• model_key (str) – The model key (e.g. “hertz_cone”)
• parm_key (str) – The parameter key (e.g. “E”)
Returns
parm_unit – The parameter unit (e.g. “Pa”)
Return type
str
### Modeling core class
exception nanite.model.core.ModelError[source]
exception nanite.model.core.ModelImplementationError[source]
exception nanite.model.core.ModelImplementationWarning[source]
exception nanite.model.core.ModelImportError[source]
exception nanite.model.core.ModelIncompleteError[source]
class nanite.model.core.NaniteFitModel(model_module)[source]
Initialize the model with an imported Python module
compute_ancillaries(fd)[source]
Compute ancillary parameters for a force-distance dataset
Ancillary parameters include parameters that:
• are unrelated to fitting: They may just be important parameters to the user.
• require the entire dataset: They cannot be extracted during fitting, because they require more than just the approach xor retract curve to compute (e.g. hysteresis, jump of retract curve at maximum indentation). They may, additionally, depend on initial fit parameters set by the user.
• require a fit: They are dependent on fitting parameters but are not required during fitting.
Notes
If an ancillary parameter name matches that of a fitting parameter, then it is assumed that it can be used for fitting. Please see nanite.indent.Indentation.get_initial_fit_parameters() and nanite.fit.guess_initial_parameters().
Ancillary parameters are set to np.nan if they cannot be computed.
Parameters
fd (nanite.indent.Indentation) – The force-distance data for which to compute the ancillary parameters
Returns
ancillaries – key-value dictionary of ancillary parameters
Return type
collections.OrderedDict
get_anc_parm_keys()[source]
Return the key names of a model’s ancillary parameters
get_parm_name(key)[source]
Return parameter label
Parameters
key (str) – The parameter key (e.g. “E”)
Returns
parm_name – The parameter label (e.g. “Young’s Modulus”)
Return type
str
get_parm_unit(key)[source]
Return parameter unit
Parameters
key (str) – The parameter key (e.g. “E”)
Returns
parm_unit – The parameter unit (e.g. “Pa”)
Return type
str
nanite.model.core.compute_anc_max_indent(fd)[source]
Compute ancillary parameter ‘Maximum indentation’
nanite.model.core.ANCILLARY_COMMON = {'max_indent': ('Maximum indentation', 'm', <function compute_anc_max_indent>)}
Common ancillary parameters
### Residuals and weighting
nanite.model.residuals.compute_contact_point_weights(cp, delta, weight_dist=5e-07)[source]
Compute contact point weights
Parameters
• cp (float) – Fitted contact point value
• delta (1d ndarray of length N) – The indentation array along which weights will be computed.
• weight_dist (float) – The distance from cp until which weights will be applied.
Returns
weights – The weights.
Return type
1d ndarray of length N
Notes
All variables should be given in the same units. The weights increase linearly with the distance of delta from cp, from 0 at delta = cp up to 1 at abs(delta-cp) = weight_dist, and are 1 everywhere outside the weight width (abs(delta-cp) > weight_dist).
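In NumPy terms, the ramp described in the Notes amounts to (a sketch under my reading of the docstring):

```python
import numpy as np

def contact_point_weights_sketch(cp, delta, weight_dist=5e-7):
    # 0 at delta == cp, growing linearly to 1 at |delta - cp| == weight_dist,
    # and clipped to 1 everywhere beyond that distance
    return np.minimum(np.abs(delta - cp) / weight_dist, 1.0)
```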
nanite.model.residuals.get_default_modeling_wrapper(model_function)[source]
Return a wrapper around the default nanite modeling function
nanite.model.residuals.get_default_residuals_wrapper(model_function)[source]
Return a wrapper around the default nanite residual function
nanite.model.residuals.model_direction_agnostic(model_function, params, delta)[source]
Call model_function while making sure that data are in correct order
TODO: Re-evaluate usefulness of this method.
nanite.model.residuals.residual(params, delta, force, model, weight_cp=5e-07)[source]
Compute residuals for fitting
Parameters
• params (lmfit.Parameters) – The fitting parameters for model
• delta (1D ndarray of length M) – The indentation distances
• force (1D ndarray of length M) – The corresponding force data
• model (callable) – A model function accepting the arguments params and delta
• weight_cp (positive float or zero/False) – The distance from the contact point until which linear weights will be applied. Set to zero to disable weighting.
nanite.model.weight.weight_cp(*args, **kwargs)[source]
### Models
Each model is implemented as a submodule in nanite.model. For instance nanite.model.model_hertz_parabolic. Each of these modules implements the following functions (which are not listed for each model in the subsections below), here with the (non-existent) example module model_submodule:
nanite.model.model_submodule.get_parameter_defaults()
Return the default parameters of the model.
nanite.model.model_submodule.model()
Wrap the actual model for fitting.
nanite.model.model_submodule.residual()
Compute the residuals during fitting (optional).
In addition, each submodule contains the following attributes:
nanite.model.model_submodule.model_doc
The doc-string of the model function.
nanite.model.model_submodule.model_key
The model key used in the command line interface and during scripting.
nanite.model.model_submodule.model_name
The name of the model.
nanite.model.model_submodule.parameter_keys
Parameter keys of the model for higher-level applications.
nanite.model.model_submodule.parameter_names
Parameter names of the model for higher-level applications.
nanite.model.model_submodule.parameter_units
Parameter units for higher-level applications.
Ancillary parameters may also be defined like so:
nanite.model.model_submodule.compute_ancillaries()
Function that returns a dictionary with ancillary parameters computed from an Indentation instance.
nanite.model.model_submodule.parameter_anc_keys
Ancillary parameter keys
nanite.model.model_submodule.parameter_anc_names
Ancillary parameter names
nanite.model.model_submodule.parameter_anc_units
Ancillary parameter units
#### conical indenter (Hertz)
model key: hertz_cone
model name: conical indenter (Hertz)
model location: nanite.model.model_conical_indenter
nanite.model.model_conical_indenter.hertz_conical(delta, E, alpha, nu, contact_point=0, baseline=0)[source]
Hertz model for a conical indenter
$F = \frac{2\tan\alpha}{\pi} \frac{E}{1-\nu^2} \delta^2$
Parameters
• delta (1d ndarray) – Indentation [m]
• E (float) – Young’s modulus [N/m²]
• alpha (float) – Half cone angle [degrees]
• nu (float) – Poisson’s ratio
• contact_point (float) – Indentation offset [m]
• baseline (float) – Force offset [N]
Returns
F – Force [N]
Return type
float
Notes
These approximations are made by the Hertz model:
• The sample is isotropic.
• The sample is a linear elastic solid.
• The sample is extended infinitely in one half space.
• The indenter is not deformable.
• There are no additional interactions between sample and indenter.
• infinitely sharp probe
References
Love (1939) [Lov39], Sneddon (1965) [Sne65] (equation 6.4)
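The formula translates directly into code. A self-contained sketch (not the nanite implementation; the sign convention for the contact point is my assumption, mirroring the parameter descriptions above):

```python
import numpy as np

def hertz_conical_sketch(delta, E, alpha, nu, contact_point=0.0, baseline=0.0):
    """Force [N] for a conical indenter; alpha in degrees, lengths in m."""
    x = contact_point - delta  # indentation depth past the contact point
    f = 2 * np.tan(np.deg2rad(alpha)) / np.pi * E / (1 - nu**2) * x**2
    return np.where(x > 0, f, 0.0) + baseline  # no force before contact
```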
#### parabolic indenter (Hertz)
model key: hertz_para
model name: parabolic indenter (Hertz)
model location: nanite.model.model_hertz_paraboloidal
nanite.model.model_hertz_paraboloidal.hertz_paraboloidal(delta, E, R, nu, contact_point=0, baseline=0)[source]
Hertz model for a paraboloidal indenter
$F = \frac{4}{3} \frac{E}{1-\nu^2} \sqrt{R} \delta^{3/2}$
Parameters
• delta (1d ndarray) – Indentation [m]
• E (float) – Young’s modulus [N/m²]
• R (float) – Tip radius [m]
• nu (float) – Poisson’s ratio
• contact_point (float) – Indentation offset [m]
• baseline (float) – Force offset [N]
Returns
F – Force [N]
Return type
float
Notes
$F = \frac{4}{3} \frac{E}{1-\nu^2} \sqrt{2k} \delta^{3/2},$
where $$k$$ is defined by the paraboloid equation
$\rho^2 = 4kz.$
As follows from the derivations in [LL59], this model is valid for either of the two cases:
• Indentation of a plane (infinite radius) with Young’s modulus $$E$$ by a sphere with infinite Young’s modulus and radius $$R$$, or
• Indentation of a sphere with radius $$R$$ and Young’s modulus $$E$$ by a plane (infinite radius) with infinite Young’s modulus.
These approximations are made by the Hertz model:
• The sample is isotropic.
• The sample is a linear elastic solid.
• The sample is extended infinitely in one half space.
• The indenter is not deformable.
• There are no additional interactions between sample and indenter.
• no surface forces
• If the indenter is spherical, then its radius $$R$$ is much larger than the indentation depth $$\delta$$.
References
Sneddon (1965) [Sne65] (equation 6.9), Theory of Elasticity by Landau and Lifshitz (1959) [LL59] (§9 Solid bodies in contact, equation 9.14)
#### pyramidal indenter, three-sided (Hertz)
model key: hertz_pyr3s
model name: pyramidal indenter, three-sided (Hertz)
model location: nanite.model.model_hertz_three_sided_pyramid
nanite.model.model_hertz_three_sided_pyramid.hertz_three_sided_pyramid(delta, E, alpha, nu, contact_point=0, baseline=0)[source]
Hertz model for three sided pyramidal indenter
$F = 0.887 \tan\alpha \cdot \frac{E}{1-\nu^2} \delta^2$
Parameters
• delta (1d ndarray) – Indentation [m]
• E (float) – Young’s modulus [N/m²]
• alpha (float) – Inclination angle of the pyramidal face [degrees]
• nu (float) – Poisson’s ratio
• contact_point (float) – Indentation offset [m]
• baseline (float) – Force offset [N]
Returns
F – Force [N]
Return type
float
Notes
These approximations are made by the Hertz model:
• The sample is isotropic.
• The sample is a linear elastic solid.
• The sample is extended infinitely in one half space.
• The indenter is not deformable.
• There are no additional interactions between sample and indenter.
• The inclination angle of the pyramidal face (in radians) must be close to zero.
References
Bilodeau et al. 1992 [Bil92]
#### spherical indenter (Sneddon)
model key: sneddon_spher
model name: spherical indenter (Sneddon)
model location: nanite.model.model_sneddon_spherical
nanite.model.model_sneddon_spherical.delta_of_a(a, R)
Compute indentation from contact area radius (wrapper)
nanite.model.model_sneddon_spherical.get_a(R, delta, accuracy=1e-12)
Compute the contact area radius (wrapper)
nanite.model.model_sneddon_spherical.hertz_spherical(delta, E, R, nu, contact_point=0.0, baseline=0.0)
Hertz model for Spherical indenter - modified by Sneddon
$\begin{split}F &= \frac{E}{1-\nu^2} \left( \frac{R^2+a^2}{2} \ln \! \left( \frac{R+a}{R-a}\right) -aR \right)\\ \delta &= \frac{a}{2} \ln \! \left(\frac{R+a}{R-a}\right)\end{split}$
($$a$$ is the radius of the circular contact area between bead and sample.)
Parameters
• delta (1d ndarray) – Indentation [m]
• E (float) – Young’s modulus [N/m²]
• R (float) – Tip radius [m]
• nu (float) – Poisson’s ratio
• contact_point (float) – Indentation offset [m]
• baseline (float) – Force offset [N]
Returns
F – Force [N]
Return type
float
Notes
These approximations are made by the Hertz model:
• The sample is isotropic.
• The sample is a linear elastic solid.
• The sample is extended infinitely in one half space.
• The indenter is not deformable.
• There are no additional interactions between sample and indenter.
• no surface forces
References
Sneddon (1965) [Sne65] (equations 6.13 and 6.15)
#### spherical indenter (Sneddon, truncated power series)
model key: sneddon_spher_approx
model name: spherical indenter (Sneddon, truncated power series)
model location: nanite.model.model_sneddon_spherical_approximation
nanite.model.model_sneddon_spherical_approximation.hertz_sneddon_spherical_approx(delta, E, R, nu, contact_point=0, baseline=0)[source]
Hertz model for Spherical indenter - approximation
$F = \frac{4}{3} \frac{E}{1-\nu^2} \sqrt{R} \delta^{3/2} \left(1 - \frac{1}{10} \frac{\delta}{R} - \frac{1}{840} \left(\frac{\delta}{R}\right)^2 + \frac{11}{15120} \left(\frac{\delta}{R}\right)^3 + \frac{1357}{6652800} \left(\frac{\delta}{R}\right)^4 \right)$
Parameters
• delta (1d ndarray) – Indentation [m]
• E (float) – Young’s modulus [N/m²]
• R (float) – Tip radius [m]
• nu (float) – Poisson’s ratio
• contact_point (float) – Indentation offset [m]
• baseline (float) – Force offset [N]
Returns
F – Force [N]
Return type
float
Notes
These approximations are made by the Hertz model:
• The sample is isotropic.
• The sample is a linear elastic solid.
• The sample is extended infinitely in one half space.
• The indenter is not deformable.
• There are no additional interactions between sample and indenter.
• no surface forces
Truncated power series approximation:
This model is a truncated power series approximation of spherical indenter (Sneddon). The expected error is more than four orders of magnitude lower than the signal (see e.g. Approximating the Hertzian model with a spherical indenter). The Bio-AFM analysis software by JPK/Bruker uses the same model.
References
Sneddon (1965) [Sne65] (equations 6.13 and 6.15), Dobler (personal communication, 2018) [Dob18]
## Fitting
exception nanite.fit.FitDataError[source]
exception nanite.fit.FitKeyError[source]
exception nanite.fit.FitWarning[source]
class nanite.fit.FitProperties[source]
Fit property manager class
Provide convenient access to fit properties as a dictionary and dynamically manage resets due to new initial parameters.
Dynamic properties include:
• set “params_initial” to None if the “model_key” changes
• remove all keys except those in FP_DEFAULT if a key that is in FP_DEFAULT changes (All other keys are considered to be obsolete fitting results).
reset()[source]
restore(props)[source]
update the dictionary without removing any keys
class nanite.fit.IndentationFitter(idnt, **kwargs)[source]
Fit force-distance curves
Parameters
• idnt (nanite.indent.Indentation) – The dataset to fit
• model_key (str) – A key referring to a model in nanite.model.models_available
• params_initial (instance of lmfit.Parameters) – Parameters for fitting. If not given, default parameters are used.
• range_x (tuple of 2) – The range for fitting, see range_type below.
• range_type (str) –
One of:
• absolute:
Set the absolute fitting range in values given by the x_axis.
• relative cp:
In some cases it is desired to be able to fit a model only up until a certain indentation depth (tip position) measured from the contact point. Since the contact point is a fit parameter as well, this requires a two-pass fitting.
• preprocessing (list of str) – Preprocessing step identifiers
• preprocessing_options (dict of dicts) – Preprocessing keyword arguments of steps (if applicable)
• segment (int) – Segment index (e.g. 0 for approach)
• weight_cp (float) – Weight the contact point region which shows artifacts that are difficult to model with e.g. Hertz.
• gcf_k (float) – Geometrical correction factor $$k$$ for non-single-contact data. The measured indentation is multiplied by this factor to correct for experimental geometries during fitting, e.g. gcf_k=0.5 for parallel-plate compression.
• optimal_fit_edelta (bool) – Search for the optimal fit by varying the maximal indentation depth and determining a plateau in the resulting Young’s modulus (fitting parameter “E”).
• optimal_fit_num_samples (int) – Number of samples to use for searching the optimal fit
compute_emodulus_vs_mindelta(callback=None)[source]
Compute elastic modulus vs. minimal indentation curve
static compute_opt_mindelta(emoduli, indentations)[source]
Determine the plateau of an emodulus-indentation curve
The following procedure is performed:
1. Smooth the emodulus data with a Butterworth filter
2. Label sequences that have similar values by binning into ten regions between the min and max.
3. Ignore sequences with emodulus that is smaller than the binning size.
4. Determine the longest sequence.
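A rough NumPy/SciPy sketch of that plateau search (my own paraphrase; the filter order, cutoff, and the return convention are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def opt_mindelta_sketch(emoduli, indentations):
    b, a = butter(2, 0.2)                # 1. smooth with a Butterworth filter
    smooth = filtfilt(b, a, emoduli)
    binsize = (smooth.max() - smooth.min()) / 10
    labels = np.floor((smooth - smooth.min()) / binsize)  # 2. ten value bins
    splits = np.nonzero(np.diff(labels))[0] + 1
    seqs = np.split(np.arange(len(labels)), splits)       # similar-value runs
    seqs = [s for s in seqs if smooth[s].mean() > binsize] or seqs  # 3. drop small E
    longest = max(seqs, key=len)                          # 4. longest sequence
    return indentations[longest].mean()  # a representative plateau indentation
```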
fit()[source]
Fit the approach-retract data to a model function
get_initial_parameters(idnt=None, model_key='hertz_para')[source]
Get initial fit parameters for a specific model
nanite.fit.guess_initial_parameters(idnt=None, model_key='hertz_para', common_ancillaries=True, model_ancillaries=True)[source]
Guess initial fitting parameters
Parameters
• idnt (nanite.indent.Indentation) – The dataset to use for guessing initial fitting parameters using ancillary parameters
• model_key (str) – The model key
• common_ancillaries (bool) – Guess global ancillary parameters (such as contact point)
• model_ancillaries (bool) – Use model-related ancillary parameters
nanite.fit.obj2bytes(obj)[source]
Bytes representation of an object for hashing
## Rating
### Features
class nanite.rate.features.IndentationFeatures(dataset=None)[source]
static compute_features(idnt, which_type='all', names=None, ret_names=False)[source]
Compute the features for a data set
Parameters
• idnt (nanite.Indentation) – A dataset to rate
• names (list of str) – The names of the rating methods to use, e.g. [“rate_apr_bumps”, “rate_apr_mon_incr”]. If None (default), all available rating methods are used.
Notes
names may include features that are excluded by which_type. E.g. if a “bool” feature is in names but which_type is “float”, then the “bool” feature will be silently ignored.
feat_bin_apr_spikes_count()[source]
spikes during IDT
Sudden spikes in indentation curve
feat_bin_cp_position()[source]
CP outside of data range
Contact point position outside of range
feat_bin_size()[source]
dataset too small
Number of points in indentation curve
feat_con_apr_flatness()[source]
flatness of APR residuals
fraction of the positive-gradient residuals in the approach part
feat_con_apr_size()[source]
relative APR size
length of the approach part relative to the indentation part
feat_con_apr_sum()[source]
residuals of APR
absolute sum of the residuals in the approach part
feat_con_bln_slope()[source]
slope of BLN
slope obtained from a linear least-squares fit to the baseline
feat_con_bln_variation()[source]
variation in BLN
comparison of the forces at the beginning and at the end of the baseline
feat_con_cp_curvature()[source]
curvature at CP
curvature of the force-distance data at the contact point
feat_con_cp_magnitude()[source]
residuals at CP
mean value of the residuals around the contact point
feat_con_idt_maxima_75perc()[source]
maxima in IDT residuals
sum of the indentation residuals’ maxima in three intervals in-between 25% and 100% relative to the maximum indentation
feat_con_idt_monotony()[source]
monotony of IDT
change of the gradient in the indentation part
feat_con_idt_spike_area()[source]
area of IDT spikes
area of spikes appearing in the indentation part
feat_con_idt_sum()[source]
overall IDT residuals
sum of the residuals in the indentation part
feat_con_idt_sum_75perc()[source]
residuals in 75% IDT
sum of the residuals in the indentation part in-between 25% and 100% relative to the maximum indentation
classmethod get_feature_funcs(which_type='all', names=None)[source]
Return functions that compute features from a dataset
Parameters
• names (list of str) – The names of the rating methods to use, e.g. [“rate_apr_bumps”, “rate_apr_mon_incr”]. If None (default), all available rating methods are returned.
• which_type (str) – Which features to return: [“all”, “bool”, “float”].
Returns
raters – Each item in the list contains the name of the rating method and the corresponding rating method.
Return type
list of tuples (name, callable)
classmethod get_feature_names(which_type='all', names=None, ret_indices=False)[source]
Get features names
Parameters
• which_type (str or list of str) – Return only features that are of a certain type. See VALID_FEATURE_TYPES for valid strings.
• names (list of str) – Return only features that are in this list.
• ret_indices (bool) – If True, also return the internal feature indices.
Returns
name_list – List of feature names (callables of this class)
Return type
list of str
property contact_point
property datafit_apr
property datares_apr
dataset
current dataset from which features are computed
property datax_apr
property datay_apr
property has_contact_point
property is_fitted
property is_valid
property meta
nanite.rate.features.VALID_FEATURE_TYPES = ['all', 'binary', 'continuous']
Valid keyword arguments for feature types
### Rater
class nanite.rate.rater.IndentationRater(regressor=None, scale=None, lda=None, training_set=None, names=None, weight=True, sample_weight=None, *args, **kwargs)[source]
Rate quality
Parameters
• regressor (scikit-learn RegressorMixin) – The regressor used for rating
• scale (bool) – If True, apply a Standard Scaler. If a regressor based on decision trees is used, the Standard Scaler is not used by default, otherwise it is.
• lda (bool) – If True, apply a Linear Discriminant Analysis (LDA). If a regressor based on a decision tree is used, LDA is not used by default, otherwise it is.
• training_set (tuple of (X, y)) – The training set (samples, response)
• names (list of str) – Feature names to use
• weight (bool) – Weight the input samples by the number of occurrences or with sample_weight. For tree-based classifiers, set this to True to avoid bias.
• sample_weight (list-like) – The sample weights. If set to None sample weights are computed from the training set.
• *args (list) – Positional arguments for IndentationFeatures
• **kwargs – Keyword arguments for IndentationFeatures
static compute_sample_weight(X, y)[source]
Weight samples according to occurrence in y
static get_training_set_path(label='zef18')[source]
Return the path to a training set shipped with nanite
Training sets are stored in the nanite.rate module path with ts_ prepended to label.
classmethod load_training_set(path=None, names=None, which_type=['continuous'], remove_nan=True, ret_names=False)[source]
Load a training set from a directory
By default, only the “continuous” features are imported. The “binary” features are not needed for training; they are used to sort out new force-distance data.
rate(samples=None, datasets=None)[source]
Perform rating step
Parameters
• samples (1d or 2d ndarray (cast to 2d ndarray) or None) – Measured samples, if set to None, dataset must be given.
• datasets (list of nanite.Indentation) – Full, fitted measurements
Returns
ratings – Resulting ratings
Return type
list
names
feature names used by the regressor pipeline
pipeline
sklearn pipeline with transforms (and regressor if given)
nanite.rate.rater.get_available_training_sets()[source]
List of internal training sets
nanite.rate.rater.get_rater(regressor, training_set='zef18', names=None, lda=None, **reg_kwargs)[source]
Convenience method to get a rater
Parameters
• regressor (str or RegressorMixin) – If a string, must be in reg_names.
• training_set (str or pathlib.Path or tuple (X, y)) – A string label representing a training set shipped with nanite, the path to a training set, or a tuple representing the training set (samples, response) for use with sklearn.
• names (list of str) – Only use these features for rating
• lda (bool) – Perform linear discriminant analysis
Returns
irater – The rating instance.
Return type
nanite.IndentationRater
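A minimal usage sketch based on the signatures above (an illustration, not code from the nanite documentation; a fitted nanite.Indentation instance named idnt is assumed):

```python
from nanite.rate.rater import get_rater

# "Extra Trees" is one of the default regressor names in reg_names and
# "zef18" is the training set shipped with nanite (see above).
rater = get_rater(regressor="Extra Trees", training_set="zef18")
ratings = rater.rate(datasets=[idnt])  # one rating per dataset
```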
### Regressors
scikit-learn regressors and their keyword arguments
nanite.rate.regressors.reg_names = ['AdaBoost', 'Decision Tree', 'Extra Trees', 'Gradient Tree Boosting', 'Random Forest', 'SVR (RBF kernel)', 'SVR (linear kernel)']
List of available default regressor names
List of tree-based regressor class names (used for keyword defaults in IndentationRater)
### Manager
class nanite.rate.io.RateManager(path, verbose=0)[source]
Manage user-defined rates
export_training_set(path)[source]
get_cross_validation_score(regressor, training_set=None, n_splits=20, random_state=42)[source]
Regressor cross-validation scoring
Cross-validation is used to identify regressors that over-fit the train set by splitting the train set into multiple learn/test sets and quantifying the regressor performance for each split.
Parameters
• regressor (str or RegressorMixin) – If a string, must be in reg_names.
• training_set (X, y) – If given, do not use self.samples
Notes
A sklearn.model_selection.KFold cross validator is used in combination with the mean squared error score.
Cross-validation score is computed from samples that are filtered with the binary features and only from samples that do not contain any nan values.
get_rates(which='user', training_set='zef18')[source]
which: str
Which rating to return: “user” or a regressor name
get_training_set(which_type='all', prefilter_binary=False, remove_nans=False, transform=False)[source]
Return (X, y) training set
property datasets
path
Path to the manual ratings (directory or .h5 file)
property ratings
property samples
The individual sample ratings computed by afmlib
verbose
verbosity level
nanite.rate.io.hash_file(path, blocksize=65536)[source]
Compute sha256 hex-hash of a file
Parameters
• path (str or pathlib.Path) – path to the file
• blocksize (int) – block size read from the file
Returns
hex – The first six characters of the hash
Return type
str
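A sketch of equivalent logic, following the documented behavior (block-wise SHA-256 of the file, returning the first six hex characters); this illustrates the contract rather than reproducing the package source:

```python
import hashlib

def hash_file_sketch(path, blocksize=65536):
    # Feed the file to SHA-256 in blocks of `blocksize` bytes.
    hasher = hashlib.sha256()
    with open(path, "rb") as fd:
        while block := fd.read(blocksize):
            hasher.update(block)
    # Only the first six characters of the hex digest are returned.
    return hasher.hexdigest()[:6]
```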
nanite.rate.io.hdf5_rated(h5path, indent)[source]
Test whether an indentation has already been rated
Returns
Return type
is_rated, rating, comment
Notes
The .fit_properties attribute of each Indentation instance is overridden by a simple dictionary, so its functionalities are not available anymore.
nanite.rate.io.save_hdf5(h5path, indent, user_rate, user_name, user_comment, h5mode='a')[source]
Store all relevant data of a user rating into an hdf5 file
Parameters
• h5path (str or pathlib.Path) – Path to HDF5 rating container where data will be stored
• indent (nanite.Indentation) – The experimental data processed and fitted with nanite
• user_rate (float) – Rating given by the user
• user_name (str) – Name of the rating user
## Quantitative maps
exception nanite.qmap.DataMissingWarning[source]
class nanite.qmap.QMap(path_or_group, meta_override=None, callback=None)[source]
Quantitative force spectroscopy map handling
Parameters
• path_or_group (str or pathlib.Path or afmformats.afm_group.AFMGroup) – The path to the data file or an instance of AFMGroup
• callback (callable or None) – A method that accepts a float between 0 and 1 to externally track the progress of loading the data.
static feat_fit_contact_point(idnt)[source]
Contact point of the fit
static feat_fit_youngs_modulus(idnt)[source]
Young’s modulus
static feat_meta_rating(idnt)[source]
Rating
|
🎙️ Alfredo Canziani
## Introduction to generative adversarial networks (GANs)
Fig. 1: GAN Architecture
GANs are a type of neural network used for unsupervised machine learning. They consist of two adversarial modules: a generator network and a cost network. These modules compete with each other such that the cost network tries to filter out fake examples while the generator tries to trick this filter by creating realistic examples $\vect{\hat{x}}$. Through this competition, the model learns a generator that creates realistic data. They can be used in tasks such as future prediction or for generating images after being trained on a particular dataset.
Fig. 2: GAN Mapping from Random Variable
GANs are examples of energy based models (EBMs). As such, the cost network is trained to produce low costs for inputs closer to the true data distribution, denoted by the pink $\vect{x}$ in Fig. 2. Data from other distributions, like the blue $\vect{\hat{x}}$ in Fig. 2, should have high cost. A mean squared error (MSE) loss is typically used to measure the cost network's performance. It is worth noting that the cost function outputs a positive scalar value within a specified range, i.e. $\text{cost} : \mathbb{R}^n \rightarrow \mathbb{R}^+ \cup \{0\}$. This is unlike a classic discriminator, which uses discrete classification for its outputs.
Meanwhile, the generator network ($\text{generator} : \mathcal{Z} \rightarrow \mathbb{R}^n$) is trained to improve its mapping of random variable $\vect{z}$ to realistic generated data $\vect{\hat{x}}$ to trick the cost network. The generator is trained with respect to the cost network’s output, trying to minimize the energy of $\vect{\hat{x}}$. We denote this energy as $C(G(\vect{z}))$, where $C(\cdot)$ is the cost network and $G(\cdot)$ is the generator network.
The training of the cost network is based on minimizing the MSE loss, while the training of the generator network is through minimizing the cost network, using gradients of $C(\vect{\hat{x}})$ with respect to $\vect{\hat{x}}$.
To ensure that high cost is assigned to points outside the data manifold and low cost is assigned to points within it, the loss function for the cost network $\mathcal{L}_{C}$ is $C(x)+[m-C(G(\vect{z}))]^+$ for some positive margin $m$. Minimizing $\mathcal{L}_{C}$ requires that $C(\vect{x}) \rightarrow 0$ and $C(G(\vect{z})) \rightarrow m$. The loss for the generator $\mathcal{L}_{G}$ is simply $C(G(\vect{z}))$, which encourages the generator to ensure that $C(G(\vect{z})) \rightarrow 0$. However, this does create instability as $0 \leftarrow C(G(\vect{z})) \rightarrow m$.
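A minimal PyTorch sketch of these two objectives (the function names and the default margin m are assumptions that follow the notation above; they are not part of the original notes):

```python
import torch

def cost_network_loss(C, G, x, z, m=1.0):
    # L_C = C(x) + [m - C(G(z))]^+ : low energy for data, at least
    # margin m of energy for generated samples.
    return (C(x) + torch.clamp(m - C(G(z)), min=0.0)).mean()

def generator_loss(C, G, z):
    # L_G = C(G(z)) : push generated samples toward low energy.
    return C(G(z)).mean()
```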
## Difference between GANs and VAEs
Fig. 3: VAE (left) *vs.* GAN (right) - Architectural design
Compared to Variational Auto-Encoders (VAEs) from week 8, GANs create generators slightly differently. Recall, VAEs map inputs $\vect{x}$ to a latent space $\mathcal{Z}$ with an encoder and then map from $\mathcal{Z}$ back to the data space with a decoder to get $\vect{\hat{x}}$. They then use the reconstruction loss to push $\vect{x}$ and $\vect{\hat{x}}$ to be similar. GANs, on the other hand, train through an adversarial setting with the generator and cost networks competing as described above. These networks are trained successively through backpropagation using gradient-based methods. This architectural difference can be seen in Fig. 3.
Fig. 4: VAE (left) *vs.* GAN (right) - Mapping from Random Sample $\vect{z}$
GANs also differ from VAEs through how they produce and use $\vect{z}$. GANs start by sampling $\vect{z}$, similar to the latent space in a VAE. They then use a generative network to map $\vect{z}$ to $\vect{\hat{x}}$. This $\vect{\hat{x}}$ is then sent through a discriminator/cost network to evaluate how “real” it is. One of the main differences between VAEs and GANs is that we do not need to measure a direct relationship (i.e. reconstruction loss) between the output of the generative network $\vect{\hat{x}}$ and real data $\vect{x}$. Instead, we force $\vect{\hat{x}}$ to be similar to $\vect{x}$ by training the generator to produce $\vect{\hat{x}}$ such that the discriminator/cost network produces scores that are similar to those of real data $\vect{x}$, or more “real”.
## Major pitfalls in GANs
While GANs can be powerful for building generators, they have some major pitfalls.
### 1. Unstable convergence
As the generator improves with training, the discriminator performance gets worse because the discriminator can no longer easily tell the difference between real and fake data. If the generator is perfect, then the manifold of the real and fake data will lie on top of each other and the discriminator will create many misclassifications.
This poses a problem for convergence of the GAN: the discriminator feedback gets less meaningful over time. If the GAN continues training past the point when the discriminator is giving completely random feedback, then the generator starts to train on junk feedback and its quality may collapse. [Refer to training convergence in GANs]
As a result of this adversarial nature between the generator and discriminator, there is an unstable equilibrium point rather than a stable equilibrium.
### 2. Vanishing gradient
Let’s consider using the binary cross entropy loss for a GAN:
$\mathcal{L} = \mathbb{E}_{\vect{x}}[\log(D(\vect{x}))] + \mathbb{E}_{\vect{\hat{x}}}[\log(1-D(\vect{\hat{x}}))] \text{.}$
As the discriminator becomes more confident, $D(\vect{x})$ gets closer to $1$ and $D(\vect{\hat{x}})$ gets closer to $0$. This confidence moves the outputs of the cost network into flatter regions where the gradients become more saturated. These flatter regions provide small, vanishing gradients that hinder the generator network’s training. Thus, when training a GAN, you want to make sure that the cost gradually increases as you become more confident.
### 3. Mode collapse
If a generator maps all $\vect{z}$’s from the sampler to a single $\vect{\hat{x}}$ that can fool the discriminator, then the generator will produce only that $\vect{\hat{x}}$. Eventually, the discriminator will learn to detect specifically this fake input. As a result, the generator simply finds the next most plausible $\vect{\hat{x}}$ and the cycle continues. Consequently, the discriminator gets trapped in local minima while cycling through fake $\vect{\hat{x}}$’s. A possible solution to this issue is to enforce some penalty to the generator for always giving the same output given different inputs.
## Deep Convolutional Generative Adversarial Network (DCGAN) source code
The source code of the example can be found here.
### Generator
1. The generator upsamples the input using several nn.ConvTranspose2d modules separated with nn.BatchNorm2d and nn.ReLU.
2. At the end of the sequential, the network uses nn.Tanh() to squash outputs to $(-1,1)$.
3. The input random vector has size $nz$. The output has size $nc \times 64 \times 64$, where $nc$ is the number of channels.
```python
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # state size. (ngf*8) x 4 x 4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 8 x 8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 16 x 16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. (ngf) x 32 x 32
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh()
            # state size. (nc) x 64 x 64
        )

    def forward(self, input):
        output = self.main(input)
        return output
```
### Discriminator
1. It is important to use nn.LeakyReLU as the activation function to avoid killing the gradients in negative regions. Without these gradients, the generator will not receive updates.
2. At the end of the sequential, the discriminator uses nn.Sigmoid() to classify the input.
```python
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.main = nn.Sequential(
            # input is (nc) x 64 x 64
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf) x 32 x 32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 16 x 16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 8 x 8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*8) x 4 x 4
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )

    def forward(self, input):
        output = self.main(input)
        return output.view(-1, 1).squeeze(1)
```
These two classes are initialized as netG and netD.
### Loss function for GAN
We use Binary Cross Entropy (BCE) between target and output.
```python
criterion = nn.BCELoss()
```
### Setup
We set up fixed_noise of size opt.batchSize and length of random vector nz. We also create labels for real data and generated (fake) data called real_label and fake_label, respectively.
```python
fixed_noise = torch.randn(opt.batchSize, nz, 1, 1, device=device)
real_label = 1
fake_label = 0
```
Then we set up optimizers for discriminator and generator networks.
```python
optimizerD = optim.Adam(netD.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
```
### Training
Each epoch of training is divided into two steps.
Step 1 is to update the discriminator network. This is done in two parts. First, we feed the discriminator real data coming from dataloaders, compute the loss between the output and real_label, and then accumulate gradients with backpropagation. Second, we feed the discriminator data generated by the generator network from freshly sampled noise, compute the loss between the output and fake_label, and then accumulate the gradient. Finally, we use the accumulated gradients to update the parameters for the discriminator network.
Note that we detach the fake data to stop gradients from propagating to the generator while we train the discriminator.
Also note that we only need to call zero_grad() once in the beginning to clear the gradients so the gradients from both the real and fake data can be used for the update. The two .backward() calls accumulate these gradients. We finally only need one call of optimizerD.step() to update the parameters.
```python
netD.zero_grad()  # clear gradients once; both backward() calls accumulate

# train with real
real_cpu = data[0].to(device)
batch_size = real_cpu.size(0)
label = torch.full((batch_size,), real_label, device=device)

output = netD(real_cpu)
errD_real = criterion(output, label)
errD_real.backward()
D_x = output.mean().item()

# train with fake
noise = torch.randn(batch_size, nz, 1, 1, device=device)
fake = netG(noise)
label.fill_(fake_label)
output = netD(fake.detach())
errD_fake = criterion(output, label)
errD_fake.backward()
D_G_z1 = output.mean().item()
errD = errD_real + errD_fake
optimizerD.step()
```
Step 2 is to update the Generator network. This time, we feed the discriminator the fake data, but compute the loss with the real_label! The purpose of doing this is to train the generator to make realistic $\vect{\hat{x}}$’s.
```python
netG.zero_grad()
label.fill_(real_label)  # fake labels are real for generator cost
output = netD(fake)
errG = criterion(output, label)
errG.backward()
D_G_z2 = output.mean().item()
optimizerG.step()
```
📝 William Huang, Kunal Gadkar, Gaomin Wu, Lin Ye
31 Mar 2020
|
Success with Mathematics and Science opens opportunities
13 September 2019
Number pattern problems using only addition operations.
Find the next term of the sequence
27, 26, 25, 24, ___
To get the next number of the series we need to subtract 1 from the previous number.
So, we can write the next number as
24 - 1 = 23
Therefore the next term of the series is 23. We can use it, for example, to compute that the 13th square number is 169, without first finding the previous 12 square numbers.
Basic Number Patterns
Thus to get the next number of the sequence we need to multiply the previous number by 2.
So, we can write the next number as
48 × 2 = 96
Therefore the next term of the sequence is 96.
$$\begin{aligned} T_1 &= 10 \\ &= 10 + (0)(-3) \\ &= 10 + (1-1)(-3) \end{aligned}$$
4, 10 (+6), 16 (+6), 22 (+6), 28 (+6), … Rule: “Add 6 to the preceding number to get the next one.” Number patterns using both addition and subtraction operations in the same (more intricate) pattern. The free online games, digital lessons, downloadable worksheets, and game apps for mobile devices give parents, teachers, and students many challenging yet rewarding learning options. The size of these new rows also increases by one each time.
Special techniques can use data about past crimes to predict where and when crimes might happen in the future. $$T_1$$ is the first term of the sequence. We need to write an expression that includes the value of the common difference ($$d = -3$$) and the position of the term ($$n = 1$$). With Math Games, students can acquire these critical skills automatically as they play games and have fun!
$$\begin{aligned} T_n &= 10 + (n-1)(-3) \\ &= 10 - 3n + 3 \\ &= -3n + 13 \end{aligned}$$
First, though, let's look at something different: action sequence photography. Being able to predict whether the value of a stock will go up or down can be hugely rewarding!
We can use it, for example, to calculate that the 13th square number is 169, without first finding the previous 12 square numbers. Can you find their patterns and calculate the next two terms? Here are two exercises that demonstrate the variety and fun of number patterns. Teacher resources and professional development across the curriculum.
Number patterns are also a good way to gradually build confidence recognizing sequences in an unfamiliar context. An explicit formula for a sequence states the value of the nth term as a function of just n, without referring to any other terms in the sequence. Usually the problem will ask the student to supply the next numbers in the pattern, but some problem variants will also ask for the preceding numbers.
$$\begin{aligned} T_2 &= 7 \\ &= 10 + (1)(-3) \\ &= 10 + (2-1)(-3) \end{aligned}$$
Number patterns using both addition and subtraction operations in the same (more complex) pattern. Although this kind of number pattern is less common in 4th grade or 5th grade, when multiplication is only being introduced, you will see multiplication number patterns frequently on school entrance exams, so building familiarity with these patterns is important. In the given sequence
8, 11, 14, 17, ___
8 + 3 = 11
11 + 3 = 14
14 + 3 = 17
we have observed that every number is 3 more than the previous number. Thus, to find the next number of the sequence, we need to add 3 to the previous number:
17 + 3 = 20
Therefore the next term of the sequence is 20.
Multiplication Number Patterns and Beyond
Can you find their patterns and calculate the next two terms? For the triangle numbers we found a recursive formula that tells you the next term of the sequence as a function of its previous terms. In the given sequence, find the next term:
27, 26, 25, 24, ___
The individual elements in a sequence are called terms, and are represented by variables like $$x_n$$. Examples of number patterns in math cover many methods, as below. These multiplication number patterns worksheets can also help with mastering times table facts.
Number Patterns with Constant Increments
To obtain the n-th triangle number, we take the preceding (n−1)-th triangle number and add n. These number patterns can start with a number further along in the sequence (for example, if the pattern is “add 3”, giving a sequence “12, 15, 18, …”). Choose a skill above to start playing! In these sections you will learn about many mathematical sequences, surprising patterns, and unexpected applications. Bankers also look at historical data on share prices, interest rates, and currency exchange rates to estimate how financial markets might change in the future.
Action Sequence Photography
The size of these new rows also increases by 1 each time. They are standard features on standardized tests, and you will also find them included in the Common Core standard (specifically 4.OA.C.5) in the United States. Reinforcing the concept of patterns wherever they occur leaves students able to identify them whenever they appear, both on math tests and in real life. Number pattern problems are one type of problem in which a student is given a sequence of numbers and asked to identify how that list is generated and what the next values will be. They start with very simple in-out patterns and gradually advance their understanding to the abstractions of algebra.
• Counting and skip-counting
• Decreasing Number Patterns
• In Puzzle Operations you determine what an operation does by observing examples.
• the value of a number in the pattern and
• Increasing Number Patterns
• the position of a number in the pattern
You can check this by substituting values for $$n$$. Professional mathematicians use extremely sophisticated methods to find and analyze all of these patterns, but we will start with something simpler. So the missing number is 20. We start with the value of the first term in the pattern, when considering a number pattern such as 2, 5, 6, …
Linear sequences (EMBG4)
Choose a skill above to start playing! $$T_1$$ is the first term of the sequence. Reinforcing the concept of patterns wherever they occur leaves students prepared to identify them whenever they appear, both on math tests and in real life. An explicit formula for a sequence tells you the value of the nth term as a function of just n, without referring to other terms in the sequence. While number patterns are often addressed only briefly in many math curricula, practice with number patterns is a great way to boost not just test scores but number fluency.
• In Puzzle Operations you determine what an operation does by observing examples.
• Determining values for input/output tables
• Increasing Number Patterns
• Do not use the formula for arithmetic series.
And through visual and written forms of words, you will connect language and math to develop skills for thinking clearly, deliberately, critically, and imaginatively. Number pattern worksheets still using only addition operations, but with gaps at the beginning or middle of the sequence. $$T_1$$ is the first term of a sequence. The individual elements in a sequence are called terms, and are represented by variables like $$x_n$$.
Number patterns are also a good way to gradually build confidence recognizing patterns in an unfamiliar context. Number pattern problems using only subtraction operations. Number patterns using both addition and subtraction operations in the same (more difficult) pattern. In this chapter, we will learn about quadratic sequences, where the difference between successive terms is not constant but follows its own pattern.
|
## APIO '22 P3 - Perm
View as PDF
Points: 20 (partial)
Time limit: 1.0s
Memory limit: 512M
Problem types
Allowed languages
C++
The Pharaohs use the relative movement and gravity of planets to accelerate their spaceships. Suppose a spaceship will pass by planets with orbital speeds in order. For each planet, the Pharaohs' scientists can choose whether to accelerate the spaceship using this planet or not. To save energy, after accelerating by a planet with orbital speed , the spaceship cannot be accelerated using any planet with orbital speed . In other words, the chosen planets form an increasing subsequence of . A subsequence of p is a sequence derived from p by deleting zero or more of its elements. For example , , , and are subsequences of , but is not.
The scientists have identified that there are a total of different ways a set of planets can be chosen to accelerate the spaceship, but they have lost their record of all the orbital speeds (even the value of ). However, they remember that is a permutation of . A permutation is a sequence containing each integer from to exactly once. Your task is to find one possible permutation of sufficiently small length.
You need to solve the problem for different spaceships. For each spaceship , you get an integer , representing the number of different ways a set of planets can be chosen to accelerate the spaceship. Your task is to find a sequence of orbital speeds with a small enough length such that there are exactly ways a subsequence of planets with increasing orbital speeds can be chosen.
#### Implementation Details
You should implement the following procedure:
vector<int> construct_permutation(long long k)
• : is the desired number of increasing subsequences.
• This procedure should return an array of elements where each element is between and inclusive.
• The returned array must be a valid permutation having exactly increasing subsequences.
• This procedure is called a total of times. Each of these calls should be treated as a separate scenario.
#### Constraints
for all
1. ( points) (for all ). If all permutations you used have length at most and are correct, you receive points, otherwise .
2. ( points) No additional constraints. For this subtask, let be the maximum permutation length you used in any scenario. Then, your score is calculated according to the following table:
Condition Score
#### Examples
##### Example 1
Consider the following call:
construct_permutation(3)
This procedure should return a permutation with exactly increasing subsequences. A possible answer is , which has (empty subsequence), and as increasing subsequences.
##### Example 2
Consider the following call:
construct_permutation(8)
This procedure should return a permutation with exactly increasing subsequences. A possible answer is .
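For intuition, here is a hedged C++ sketch of the standard binary construction (an illustration, not the model solution). Appending a new maximum to the back of the permutation doubles the number of increasing subsequences, since every existing one may or may not be extended by it; inserting a new maximum at the front adds exactly one, namely its own singleton, since nothing after it can be larger. Processing the binary digits of k below the most significant bit therefore yields a valid answer of length at most roughly 2·log2(k):

```cpp
#include <deque>
#include <vector>

std::vector<int> construct_permutation(long long k) {
    // Elements are labeled by insertion time; every insertion is a new
    // maximum, so relabeling by insertion order yields a permutation.
    // Invariant: after handling bit b, the count equals k >> b.
    std::deque<int> order;
    int next = 0;
    int msb = 63 - __builtin_clzll((unsigned long long) k);
    for (int b = msb - 1; b >= 0; --b) {
        order.push_back(next++);       // count *= 2
        if ((k >> b) & 1)
            order.push_front(next++);  // count += 1
    }
    // For k == 1 this returns the empty permutation, whose only
    // increasing subsequence is the empty one.
    return std::vector<int>(order.begin(), order.end());
}
```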
The sample grader reads the input in the following format:
• line :
• line ():
The sample grader prints a single line for each containing the return value of construct_permutation, or an error message if one has occurred.
|
### A Standard elements of a Starlink document
#### A.1 Code, date, filename
Every Starlink document has an identifier and a date of issue. Identifiers have a format like SGP/28.7 where SGP is the class code, 28 is the number within that class, and 7 is the version number starting at 1. Identifiers are allocated by the Document Librarian ([email protected]). When a note is revised, its version number should be incremented and the date of issue updated.
Files which store Starlink documents should have a name like sgp28.tex, where sgp shows the document class, 28 tells you the number, and tex tells you it is a TeX or LaTeX source file. Files containing Starlink classified documents are stored in /star/docs.
#### A.2 Standard headers, type size, page size, layout
Use the LaTeX template files like sun.tex which are stored in /star/docs. These have properly formatted headers and correct type size, page size, and layout. They also have the definitions required for you to produce hypertext, as described in SUN/199. Avoid using a type size smaller than 11 point.
#### A.3 Title
A user looking for information usually selects a note on the basis of its title, and this should, therefore, be concise and informative. It should contain the acronym used to refer to the software, and indicate its function. Remember, your note may have a long life, so phrases such as “A New…", which will quickly become either obsolete or positively misleading, should be avoided. An example title is:
ASPIC – A set of image processing programs
Don’t assume a reader already knows what your software does. For example, don’t have a title like:
MYPROG – An introduction
Say what MYPROG does:
MYPROG – An HTML editor: Introduction
|
# Properties
Label: 9025.2.a.g
Level: $9025$
Weight: $2$
Character orbit: 9025.a
Self dual: yes
Analytic conductor: $72.065$
Analytic rank: $1$
Dimension: $1$
CM: no
Inner twists: $1$
## Newspace parameters
Level: $$N = 9025 = 5^{2} \cdot 19^{2}$$
Weight: $$k = 2$$
Character orbit: $$[\chi] =$$ 9025.a (trivial)
## Newform invariants
Self dual: yes
Analytic conductor: $$72.0649878242$$
Analytic rank: $$1$$
Dimension: $$1$$
Coefficient field: $$\mathbb{Q}$$
Coefficient ring: $$\mathbb{Z}$$
Coefficient ring index: $$1$$
Twist minimal: no (minimal twist has level 95)
Fricke sign: $$1$$
Sato-Tate group: $\mathrm{SU}(2)$
## $q$-expansion
$$f(q) = q + 2 q^{3} - 2 q^{4} + 4 q^{7} + q^{9} + O(q^{10})$$
$$f(q) = q + 2 q^{3} - 2 q^{4} + 4 q^{7} + q^{9} + 3 q^{11} - 4 q^{12} - 2 q^{13} + 4 q^{16} - 6 q^{17} + 8 q^{21} - 4 q^{27} - 8 q^{28} - 3 q^{29} - 7 q^{31} + 6 q^{33} - 2 q^{36} - 8 q^{37} - 4 q^{39} - 6 q^{41} + 4 q^{43} - 6 q^{44} - 6 q^{47} + 8 q^{48} + 9 q^{49} - 12 q^{51} + 4 q^{52} + 6 q^{53} - 15 q^{59} + 5 q^{61} + 4 q^{63} - 8 q^{64} - 2 q^{67} + 12 q^{68} - 3 q^{71} - 8 q^{73} + 12 q^{77} + 5 q^{79} - 11 q^{81} - 12 q^{83} - 16 q^{84} - 6 q^{87} - 15 q^{89} - 8 q^{91} - 14 q^{93} - 8 q^{97} + 3 q^{99} + O(q^{100})$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
| Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0 | 0 | 2.00000 | −2.00000 | 0 | 0 | 4.00000 | 0 | 1.00000 | 0 |
## Atkin-Lehner signs
$$p$$ Sign
$$5$$ $$1$$
$$19$$ $$1$$
## Inner twists
This newform does not admit any (nontrivial) inner twists.
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 9025.2.a.g 1
5.b even 2 1 1805.2.a.a 1
19.b odd 2 1 9025.2.a.e 1
19.c even 3 2 475.2.e.b 2
95.d odd 2 1 1805.2.a.b 1
95.i even 6 2 95.2.e.a 2
95.m odd 12 4 475.2.j.a 4
285.n odd 6 2 855.2.k.b 2
380.p odd 6 2 1520.2.q.c 2
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
95.2.e.a 2 95.i even 6 2
475.2.e.b 2 19.c even 3 2
475.2.j.a 4 95.m odd 12 4
855.2.k.b 2 285.n odd 6 2
1520.2.q.c 2 380.p odd 6 2
1805.2.a.a 1 5.b even 2 1
1805.2.a.b 1 95.d odd 2 1
9025.2.a.e 1 19.b odd 2 1
9025.2.a.g 1 1.a even 1 1 trivial
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(9025))$$:
$$T_{2}$$, $$T_{3} - 2$$, $$T_{7} - 4$$, $$T_{11} - 3$$, $$T_{29} + 3$$
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T$$
$3$ $$T - 2$$
$5$ $$T$$
$7$ $$T - 4$$
$11$ $$T - 3$$
$13$ $$T + 2$$
$17$ $$T + 6$$
$19$ $$T$$
$23$ $$T$$
$29$ $$T + 3$$
$31$ $$T + 7$$
$37$ $$T + 8$$
$41$ $$T + 6$$
$43$ $$T - 4$$
$47$ $$T + 6$$
$53$ $$T - 6$$
$59$ $$T + 15$$
$61$ $$T - 5$$
$67$ $$T + 2$$
$71$ $$T + 3$$
$73$ $$T + 8$$
$79$ $$T - 5$$
$83$ $$T + 12$$
$89$ $$T + 15$$
$97$ $$T + 8$$
|
Dimension of pluripolar sets (asked by Stefano Trapani on MathOverflow, 2011-01-02)

Let $\Omega$ be an open set in $\mathbb{C}^n$, and let $A$ be a closed pluripolar set in $\Omega$. Is there a notion of dimension of $A$ such that the following theorem is true?

**Theorem.** Let $\phi$ be a plurisubharmonic function on $\Omega \setminus A$ (not necessarily assumed to be locally bounded above near $A$), and assume that the (real) codimension of $A$ is at least $3$. Then the function $\phi$ extends to a plurisubharmonic function on $\Omega$.

I think I can prove the theorem in case $A$ is complex analytic.
|
# Triggering a relay with a pre-set voltage? (Car related).
#### KeithMac
Joined Jul 23, 2008
4
Hello, I'm trying to figure out the best way to make two circuits for my car.
The first circuit needs to pull a relay trigger coil to ground (50 mA?) when the input is at 4 V or higher (max 5 V); the input is from the Throttle Position Sensor (TPS) on the car.
I need to make the circuit transparent to the car, i.e. not load up the TPS signal wire at all. I have a circuit working using a small transistor, but it loads the input line and affects the signal to the car's ECU (which reads 10% lower than it should), so this needs sorting out.
The second circuit needs to pull a relay trigger coil to ground when the input voltage is 1.5 V or lower; again, it needs to be transparent to the wideband AFR controller it's connected to.
I have a circuit working for this using two transistors and a pull-up resistor, but again the input is wired to the transistor base via a potentiometer and I don't really want to load up the signal wire for fear of wrecking my wideband controller.
I have a +12 V supply and ground from the car; I could hijack a stable +5 V supply from the car's ECU if it would make things simpler.
Any thoughts or suggestions on the right components to use are welcome.
Cheers, Keith.
#### SgtWookie
Joined Jul 17, 2007
22,210
A dual FET-input comparator would do the trick.
Even an LM393 or LM2903 might do the trick for you. Is 25 nA of input current too much?
LMC6772, National Semiconductor. Dual CMOS comparator. Input current 2 pA. That's low. Rail-to-rail inputs; the open-drain output can sink up to 30 mA.
Last edited:
#### SgtWookie
Joined Jul 17, 2007
22,210
See the attached. I used an LM339 in the simulation because I don't have a better one loaded in the library at the moment.
It's just the basic circuit. The Zener ensures a pretty stable 4.3 V feed to the pots. You can set the input reference level anywhere from 0 V to 4.3 V. C1 and C2 are for noise suppression.
There is no hysteresis in this circuit. It may not be a bad idea to add a bit.
#### Attachments
• Attachment: 61.9 KB
#### KeithMac
Joined Jul 23, 2008
4
Thanks for that, I'll have a look at your circuit and get my head round it.
The LMC6772 looks a good bet at 2 pA input current. I think an inline resistor from the relay coil to limit the current to 30 mA will still throw the relay properly.
Thanks for your time, much appreciated!
#### SgtWookie
Joined Jul 17, 2007
22,210
Instead of using a current limiting resistor, I suggest using a transistor such as a PN2222A or perhaps a small Darlington such as an MPSA13, MPSA14, or 2N6426. You'll need a current limiting resistor to the base input; a 1k would work fine.
|
# Venturimeter | Measurement of Rate of Flow of Fluid
### Venturimeter
It is a device used to measure the rate of streamline flow of a fluid through a tube. A venturimeter works on Bernoulli's principle.
It consists of a U-tube filled with mercury fitted to a straight tube, with varying cross-section, through which the rate of flow of the fluid is to be measured.
The area of cross-section in the broader section of the straight tube is A1 and that of the narrower section is A2. The speed of flow of the fluid in the narrower section is v2 and that in the broader section is v1.
Using the principle of continuity, we have
v1A1 = v2A2 ………(i)
$\large \frac{A_1}{A_2} = \frac{v_2}{v_1}$
The fluid is incompressible. Therefore,
v1 A1 ρ = v2 A2 ρ ……..(ii)
Suppose that the pressure in the broader section is P1 and in the narrower section is P2
Therefore, P1 – P2 = hρ0g ……..(iii)
where ρ0 is the density of mercury in the U-tube, and h represents the difference in the levels of mercury in the limbs of the U-tube.
Now, applying Bernoulli’s principle at the two positions, we have
$\displaystyle P_1 + \frac{1}{2}\rho v_1^2 = P_2 + \frac{1}{2}\rho v_2^2$
$\displaystyle P_1 - P_2 = \frac{1}{2}\rho (v_2^2 - v_1^2 )$
$\displaystyle \rho_0 g h = \frac{1}{2}\rho (v_2^2 - v_1^2 )$
$\displaystyle \rho_0 g h = \frac{1}{2}\rho v_1^2\left(\frac{v_2^2}{v_1^2} - 1\right)$
$\displaystyle \rho_0 g h = \frac{1}{2}\rho v_1^2\left(\frac{A_1^2}{A_2^2} - 1\right)$ …(iv)
Solving equations (1) and (4) for v1, we have
$\displaystyle v_1 = A_2 \sqrt{\frac{2\rho_0 g h}{\rho (A_1^2 -A_2^2)}}$ …….(v)
Therefore the rate of flow of fluid is given by,
Q = A1 v1
$\displaystyle Q = A_1 A_2 \sqrt{\frac{2\rho_0 g h}{\rho (A_1^2 -A_2^2)}}$ ……(vi)
Solved Example : The ratio of the radii of the two tube sections of a venturimeter is η (η > 1). The ratio of the densities of the liquid in the manometer and the moving fluid is η1. If the difference in heights of the liquid column in the manometer is h, find the minimum speed of flow of the fluid.
Solution: The speed of flow is minimum when the cross-sectional area of the tube is maximum. The equation for minimum speed of flow of the fluid in a venturimeter is given as
$\displaystyle v_1 = A_2 \sqrt{\frac{2\rho_0 g h}{\rho (A_1^2 -A_2^2)}}$
$\displaystyle v_1 = \sqrt{\frac{2\rho_0 g h}{\rho ((A_1/A_2)^2 - 1)}}$
Putting $\rho_0 / \rho = \eta_1$ and
$\displaystyle \frac{A_1}{A_2} = \frac{\pi r_1^2}{\pi r_2^2} = (\frac{r_1}{r_2})^2 = \eta^2$
we obtain ,
$\displaystyle v_1 = \sqrt{\frac{2\eta_1 g h}{\eta^4 -1}}$
Exercise : In a venturimeter, the diameter of the pipe is 4 cm and that of constriction is 3 cm. The heights of the liquid of density 1.5 gm/cc. in the pressure tube is 20 cm at the pipe and 5 cm at the constrictions. What is the discharge rate of water flowing inside the pipe?
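A quick numerical sketch for this exercise (assumptions: g = 9.8 m/s², water density ρ = 1000 kg/m³, pressure-tube liquid ρ₀ = 1500 kg/m³, and an effective height difference h = 20 cm − 5 cm = 15 cm):

```python
import math

g = 9.8                          # m/s^2 (assumed)
rho, rho0 = 1000.0, 1500.0       # water and pressure-tube liquid, kg/m^3
h = 0.20 - 0.05                  # height difference, m
A1 = math.pi * (0.04 / 2) ** 2   # pipe cross-section, diameter 4 cm
A2 = math.pi * (0.03 / 2) ** 2   # constriction cross-section, diameter 3 cm

# Equation (vi): Q = A1 A2 sqrt(2 rho0 g h / (rho (A1^2 - A2^2)))
Q = A1 * A2 * math.sqrt(2 * rho0 * g * h / (rho * (A1 ** 2 - A2 ** 2)))
print(f"Q ≈ {Q * 1000:.2f} litres per second")  # roughly 1.8 L/s
```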
|
# I Intensity of a Gaussian laser beam
1. Aug 15, 2018
### roam
I have two lasers with different intensity distributions (shown below) — one is Gaussian and the other one is rectangular (having the shape of a Fresnel diffraction pattern at the target).
I am trying to compare the efficacy of the two lasers for burning a certain material (I am really comparing their wavelengths). But the two lasers have different total powers. I was told that the only way I can get a more direct comparison is to adjust the spot sizes so that the two have similar "intensities", i.e.,
$$\frac{P_{\text{Gaussian}}}{A_{\text{Gaussian}}}\approx\frac{P_{\text{Rectangular}}}{A_{\text{Rectangular}}}.$$
In many places, I have encountered people stating an intensity value in this way (dividing the total power of the beam by the beam cross-sectional area: $I=P/A$). However, to me, it seems that this only works if you have a constant intensity distribution.
In fact, in optics textbooks, the intensity of a Gaussian beam is evaluated as the intensity of a single point within the x-y plane (power transferred per unit area):
$$I(x,y,z)=I_{0}\left[\frac{w_{0}}{w(z)}\right]^{2}\exp\left[-2\frac{\left(x^{2}+y^{2}\right)}{w(z)^{2}}\right].$$
where $w(z)$ is the radius of the beam at a distance $z$. The concept seems to be only applicable to a given point — we can't really speak about the intensity of a beam spot as a whole. So does the usage of $I=P/A$ have any validity?
Is there any way I can get a more like-for-like comparison when the distributions of the lasers are different?
Any suggestions would be greatly appreciated.
2. Aug 15, 2018
The intensity (irradiance) is better described by $\frac{dP}{dA}$. In many cases, the almost parallel rays are brought to focus using an off-axis paraboloidal mirror. You can then get a diffraction limited spot, but if the source (e.g a HeNe) is close to the focusing mirror, instead of getting focusing in the focal plane, the lensmakers' equation $\frac{1}{f}=\frac{1}{b}+ \frac{1}{m}$ will apply to determine where the beam gets focused=(at distance $m$). You might be interested in an Insights article I authored to get an understanding of the idea of diffraction-limited spot size: https://www.physicsforums.com/insights/diffraction-limited-spot-size-perfect-focusing/ $\\$ Less than perfect optics and less than perfect beam collimation will result in a spot size that is somewhat larger than the diffraction limit. You can also use a lens or spherical mirror to focus the beam, but the focused spot size will necessarily be larger than the diffraction limit. $\\$ And the intensity (irradiance watts/m^2) will not be uniform across the focused spot, but will normally peak in the center and fall-off in a manner similar to the far-field pattern. There are also some finer details about whether the focusing is in the focal plane, or at a distance $m$ or at a location that is called the beam "waist". A google of the topic "beam waist" might show some of these results.
Last edited: Aug 15, 2018
3. Sep 4, 2018
### roam
I read your article last year, but my question is not really about tight focusing or the effects of focusing itself. We can suppose that the beam comes straight out of the laser and is not being focused down by lenses.
A lot of people divide the total power by the area of the beam to get what they variously call intensity or power density. So for a Gaussian beam with a radius $w(z)$ at the point $z$ where the measurement is taken, we have:
$$\frac{P_{\text{Total}}}{\pi w(z)^{2}}$$
What do you think about this quantity? Does this make sense given that the distribution of power is not uniform?
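For reference, integrating the Gaussian profile quoted earlier over the transverse plane gives $P_{\text{Total}} = I_0 \pi w(z)^2 / 2$, so $P_{\text{Total}} / (\pi w(z)^2)$ is exactly half the on-axis peak irradiance. A quick numerical check of this factor of two (the power and beam radius below are made-up values):

```python
import numpy as np

P = 1.0   # total power in W (assumed)
w = 1e-3  # beam radius in m (assumed)

I_avg = P / (np.pi * w ** 2)       # naive power / area
I_peak = 2 * P / (np.pi * w ** 2)  # on-axis peak of the Gaussian profile

# Numerical check: integrating the profile recovers the total power.
r = np.linspace(0.0, 10.0 * w, 200_000)
I = I_peak * np.exp(-2.0 * r ** 2 / w ** 2)
P_check = np.trapz(I * 2.0 * np.pi * r, r)
print(I_avg, I_peak, P_check)  # P_check is approximately 1.0
```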
4. Sep 5, 2018
It does not give the exact irradiance at an arbitrary location in the beam, but without any beam profile information, it is a reasonably good number.
5. Sep 5, 2018
### Cutter Ketch
The usefulness and validity of using the local intensity compared to using the average over the area depends on the question you are asking. In regards to laser damage, it depends on how the energy is coupled into the material and how quickly the material dissipates the energy.
Not always, but usually, the localized intensity determines the lowest energy at which damage occurs (threshold). However the averaged energy determines how much damage is done. (Integrated energy).
6. Sep 5, 2018
### Andy Resnick
One possible metric is the so-called 'M2 value', which parametrizes how closely a laser beam approximates a Gaussian beam.
https://en.wikipedia.org/wiki/M_squared
However, the metric you really care about, 'I am trying to compare the efficacy of the two lasers for burning a certain material', may not be simply related to M2. A metric you may be more interested in is simply the peak irradiance- the maximum amount of power/area incident on the sample.
|
# Dividing fractions and mixed numbers
### Dividing fractions and mixed numbers
In this section, we will use diagrams to divide fractions and mixed numbers (a.k.a. compound fractions) to help you get a better understanding of the concepts of fraction division. We will also teach you how to divide fractions by using multiplication.
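As a quick preview of the multiply-by-the-reciprocal method taught below, a division by a fraction becomes a multiplication by its flipped form (this works out question 2.a):

$\frac{5}{8} \div \frac{1}{4} = \frac{5}{8} \times \frac{4}{1} = \frac{20}{8} = \frac{5}{2} = 2\frac{1}{2}$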
#### Lessons
In this lesson, we will learn:
• Dividing Fractions and Mixed Numbers Using Diagrams
• Dividing Fractions and Mixed Numbers Algebraically
• Word Problems: Dividing Fractions and Mixed Numbers
• Dividing Fractions and Mixed Numbers Involving Multiple-digit and Negative Numbers
• Introduction
a)
Simplify fractions: Method A - By using greatest common factors
b)
Simplify fractions: Method B - By using common factors
c)
How to divide fractions with fractions?
d)
How to do cross-cancelling?
e)
How to convert between mixed numbers and improper fractions?
f)
Review: How to divide fractions and mixed numbers?
• 1.
Dividing Fractions and Mixed Numbers Using Diagrams
Find each of the following quotients by using diagrams.
a)
$\frac{5}{6} \div \frac{3}{4}$
b)
$2\frac{1}{3} \div \frac{1}{8}$
c)
$\frac{7}{5} \div \frac{1}{2}$
• 2.
Dividing Fractions and Mixed Numbers Algebraically
Find each quotient by using multiplication.
a)
$\frac{5}{8} \div \frac{1}{4}$
b)
$2 \div \frac{3}{5}$
c)
$3\frac{2}{7} \div 1\frac{9}{{10}}$
• 3.
Word Problems: Dividing Fractions and Mixed Numbers
Three cups of water can fill up $\frac{2}{3}$ of a kettle. How many cups of water are required to fill up 5 kettles?
|
# partial fractions for polynomials
This entry precisely states and proves the existence and uniqueness of partial fraction decompositions of ratios of polynomials of a single variable, with coefficients over a field.
The theory is used for, for example, the method of partial fraction decomposition for integrating rational functions over the reals (http://planetmath.org/ALectureOnThePartialFractionDecompositionMethod).
The proofs involve fairly elementary algebra only. Although we refer to Euclidean domains in our proofs, the reader who is not familiar with abstract algebra may simply read that as “set of polynomials” (which is one particular Euclidean domain).
Also note that the proofs themselves furnish a method for actually computing the partial fraction decomposition, as a finite-time algorithm, provided the irreducible factorization of the denominator is known. It is not an efficient way to find the partial fraction decomposition; usually one uses instead the method of making substitutions into the polynomials, to derive linear constraints on the coefficients. But what is important is that the existence proofs here justify the substitution method. The uniqueness property proved here might also simplify some calculations: it shows that we never have to consider multiple solutions for the coefficients in the decomposition.
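For instance, over the reals one has the decomposition

$\frac{x}{(x-1)(x+1)}=\frac{1/2}{x-1}+\frac{1/2}{x+1}\,,$

and the uniqueness statements proved below guarantee that the numerators $1/2$ are the only possible choice.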
###### Theorem 1.
Let $p$ and $q\neq 0$ be polynomials over a field, and $n$ be any positive integer. Then there exist unique polynomials $\alpha_{1},\ldots,\alpha_{n},\beta$ such that
$\frac{p}{q^{n}}=\beta+\frac{\alpha_{1}}{q}+\frac{\alpha_{2}}{q^{2}}+\cdots+% \frac{\alpha_{n}}{q^{n}}\,,\quad\deg\alpha_{j}<\deg q\,.$ (1)
###### Proof.
Existence has already been proven as a special case of partial fractions in Euclidean domains; we now prove uniqueness. Suppose equation (1) has been given. Multiplying by $q^{n}$ and rearranging,
$p=\beta q^{n}+r_{1}\,,\quad r_{1}=\alpha_{1}q^{n-1}+\cdots+\alpha_{n}\,,\quad% \deg r_{1}<\deg q^{n}\,.$
But according to the division algorithm for polynomials (also known as long division), the quotient and remainder polynomial after a division (by $q^{n}$ in this case) are unique. So $\beta$ must be uniquely determined. Then we can rearrange:
$p-\beta\,q^{n}=\alpha_{1}q^{n-1}+r_{2}\,,\quad r_{2}=\alpha_{2}q^{n-2}+\cdots+% \alpha_{n}\,,\quad\deg r_{2}<\deg q^{n-1}\,.$
By uniqueness of division again (by $q^{n-1}$), $\alpha_{1}$ is determined. Repeating this process, we see that all the polynomials $\alpha_{j}$ and $\beta$ are uniquely determined. ∎
###### Theorem 2.
Let $p$ and $q\neq 0$ be polynomials over a field. Let $q=\phi_{1}^{n_{1}}\,\phi_{2}^{n_{2}}\,\cdots\,\phi_{k}^{n_{k}}$ be the factorization of $q$ to irreducible factors $\phi_{i}$ (which is unique except for the ordering and constant factors). Then there exist unique polynomials $\alpha_{ij},\beta$ such that
$\frac{p}{q}=\beta+\sum_{i=1}^{k}\sum_{j=1}^{n_{i}}\frac{\alpha_{ij}}{\phi_{i}^% {j}}\,,\quad\deg\alpha_{ij}<\deg\phi_{i}\,.$ (2)
###### Proof.
Existence has already been proven as a special case of partial fractions in Euclidean domains; we now prove uniqueness. Suppose equation (2) has been given. First, multiply the equation by $q$:
$p=\beta\,q+\sum_{i,j}\alpha_{ij}\,\frac{q}{\phi_{i}^{j}}\,.$
The polynomial sum on the far right of this equation has degree $<\deg q$, because each summand has degree $\deg(\alpha_{ij}\,q/\phi_{i}^{j})<\deg\phi_{i}+\deg q-j\cdot\deg\phi_{i}\leq\deg q$. So the polynomial sum is the remainder of a division of $p$ by $q$. Then the quotient polynomial $\beta$ is uniquely determined.
Now suppose $s_{i}$ and $s^{\prime}_{i}$ are polynomials of degree $<\phi_{i}^{n_{i}}$, such that
$\sum_{i=1}^{k}\frac{s_{i}}{\phi_{i}^{n_{i}}}=\sum_{i=1}^{k}\frac{s^{\prime}_{i% }}{\phi_{i}^{n_{i}}}\,.$ (3)
We claim that $s_{i}=s^{\prime}_{i}$. Let $q_{1}=\phi_{1}^{n_{1}}$ and $q_{2}=q/q_{1}$, and write
$\frac{s_{1}}{q_{1}}+\frac{u}{q_{2}}=\sum_{i=1}^{k}\frac{s_{i}}{\phi_{i}^{n_{i}% }}=\sum_{i=1}^{k}\frac{s^{\prime}_{i}}{\phi_{i}^{n_{i}}}=\frac{s^{\prime}_{1}}% {q_{1}}+\frac{u^{\prime}}{q_{2}}\,,$
for some polynomials $u$ and $u^{\prime}$. Rearranging, we get:
$(s_{1}-s^{\prime}_{1})\,q_{2}=(u^{\prime}-u)\,q_{1}\,.$
In particular, $q_{1}$ divides the left side. Since $q_{1}=\phi_{1}^{n_{1}}$ is relatively prime from $q_{2}$, it must divide the factor $(s_{1}-s^{\prime}_{1})$. But $\deg(s_{1}-s^{\prime}_{1})<\deg q_{1}$, hence $s_{1}-s^{\prime}_{1}$ must be the zero polynomial. That is, $s_{1}=s^{\prime}_{1}$.
So we can cancel the term $s_{1}/\phi_{1}^{n_{1}}=s^{\prime}_{1}/\phi_{1}^{n_{1}}$ on both sides of equation (3). And we could repeat the argument, and show that $s_{2}$ and $s^{\prime}_{2}$ are the same, $s_{3}$ and $s^{\prime}_{3}$ are the same, and so on. Therefore, we have shown that the polynomials $s_{i}$ in the following expression
$\frac{p}{q}-\beta=\sum_{i=1}^{k}\frac{s_{i}}{\phi_{i}^{n_{i}}}\,,\quad\deg s_{% i}<\deg\phi_{i}^{n_{i}}$
are unique. In particular, $s_{i}$ is the following numerator that results when the fractions $\alpha_{ij}/\phi_{i}^{j}$ are put under a common denominator $\phi_{i}^{n_{i}}$:
$s_{i}=\sum_{j=1}^{n_{i}}\alpha_{ij}\,\phi_{i}^{n_{i}-j}\,.$
But by the uniqueness part of Theorem 1, the decomposition
$\frac{s_{i}}{\phi_{i}^{n_{i}}}=\beta_{i}+\sum_{j=1}^{n_{i}}\frac{\alpha_{ij}}{% \phi_{i}^{j}}\,,\quad\deg\alpha_{ij}<\deg\phi_{i}$
uniquely determines $\alpha_{ij}$. (Note that the proof of Theorem 1 shows that $\beta_{i}=0$, as $\deg s_{i}<\deg\phi_{i}^{n_{i}}$.) ∎
|
category theory
# Contents
## Idea
The category theoretic notions of
on the one hand and of
on the other have been suggested (Lawvere) to usefully formalize, respectively, the heuristic notions
• “general” and “particular”
as well as
• “abstract” and “concrete”, respectively.
We have:
• a (syntactic category of a) Lawvere theory $T$ (or the equivalent in any doctrine) is an abstract general;
• the category $T Mod(E)$ of $T$-models/algebras in any context $E$ is a concrete general;
• an object of any $T Mod(E)$ is a particular.
That seems to be roughly what is suggested in Lawvere. Of course one could play with this further and consider further refinement such as
• a (generating) object in $T$ is an abstract particular;
• an object of any $T Mod(E)$ is a concrete particular.
## Examples
### Groups
The syntactic category $T_{Grp}$ of the theory of groups is the “general abstract” of groups. Its essentially unique generating object is the abstract particular group.
The category $T_{Grp} Mod(Set) =$ Grp of all groups is the concrete general of groups.
An object in there is some group: a concrete particular.
## References
The category-theoretic formalization of these notions as proposed by Bill Lawvere is discussed in print for instance in
• Bill Lawvere, Categorical refinement of a Hegelian principle, section 1 of Bill Lawvere, Tools for the advancement of objective Logic: Closed categories and toposes, in John Macnamara, Gonzalo Reyes, the logical foundations of cognition, Oxford University Press (1994)
|
Moments in Graphics
A blog by Christoph Peters
# My toy renderer, part 3: Rendering basics
Published 2021-07-16
Part 3 of this series about my toy renderer covers some basic techniques for rendering. My renderer does nothing fundamentally new on this front but some choices are a bit exotic. It is ultimately about real-time ray tracing, so everything should play nicely with that. Besides I want it to be slick and fast. Disregarding dear ImGui, the whole thing only makes two draw calls per frame. It uses a visibility buffer [Burns2013] and the same reflectance model for all surfaces [Lagarde2015]. A 3D table of linearly transformed cosines [Heitz2016] approximates this reflectance model when needed. It almost has a Shadertoy vibe since the bulk of all work happens in a single fragment shader for the shading pass. To get stratified random numbers, it uses a recent work [Ahmed2020] for 2D polygonal lights and my blue noise textures for 1D linear lights.
## Rendering visibility buffers
Visibility buffers [Burns2013] are a flavor of deferred shading. In a way, a visibility buffer is the smallest conceivable G-buffer. In my renderer, its size is 32 bits per pixel. It stores nothing but the index of the visible triangle. Visibility buffers go together well with ray tracing for two reasons. Firstly, ray tracing is expensive, so it is wasteful to do it for occluded surfaces. Deferred shading guarantees that it happens at most once per pixel. Secondly, a ray may hit any triangle in the scene. Then you get an index and maybe some barycentric coordinates and have to figure out how to do shading with that. Visibility buffers force you to solve the same problem. Solving it once and benefiting twice feels like a good deal.
The selling point of visibility buffers is that they save bandwidth in more than just one way. The obvious saving is that the buffers themselves are small. Writing 32 bits per fragment is extremely affordable. You do not even have to keep the depth buffer. Common G-buffer layouts are closer to 160 bits. The approach is also known as deferred texturing because the geometry phase never samples the material textures. Overall, rasterization of the scene becomes extremely fast. The vertex shader transforms dequantized positions, the fragment shader writes gl_VertexIndex/3 and that's it. Creating the visibility buffer for the frame in Figure 1 takes 0.36 ms in spite of the fact that I did not implement any culling whatsoever.
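A minimal GLSL sketch of such a geometry pass (an illustration of the idea with made-up names and bindings, not the renderer's actual shaders):

```glsl
// Vertex shader: transform the dequantized position and forward the
// triangle index. Without an index buffer, it is gl_VertexIndex / 3.
layout (location = 0) in vec3 g_position;
layout (location = 0) flat out uint g_triangle_index;
layout (binding = 0) uniform per_frame { mat4 g_world_to_clip; };

void main() {
    gl_Position = g_world_to_clip * vec4(g_position, 1.0);
    g_triangle_index = uint(gl_VertexIndex) / 3;
}

// Fragment shader: the visibility buffer (an R32_UINT render target)
// stores nothing but this index.
layout (location = 0) flat in uint g_triangle_index;
layout (location = 0) out uint out_visibility;

void main() {
    out_visibility = g_triangle_index;
}
```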
The second saving, which is overlooked more easily, is that the shading pass accesses block-compressed textures directly. Creating the G-buffer decompresses all the textures, which increases bandwidth usage during the shading pass. BC1 compression makes textures eight times smaller, so that is a big saving.
Figure 1: The Lumberyard bistro lit by a long linear light with ray traced shadows. This frame renders in 1.6 ms at 1920×1080 on an NVIDIA RTX 2080 Ti. Most of this frame time goes to ray tracing. The scene has 2.9 million triangles.
## No alpha testing
Above I said that the geometry phase never samples material textures. This design is problematic for triangles that are mostly opaque but use an alpha test to make some parts fully transparent. Foliage is typically modeled like that. This problem is absolutely solvable with visibility buffers. I'd have to load the material index and texture coordinates in the vertex shader and sample the alpha channel in the fragment shader.
However, this problem is difficult to solve efficiently with ray tracing. Any-hit shaders are one way to go about it, but they incur a substantial cost. There is already research to overcome this problem [Gruen2020] but for now, I do not want to deal with it. If I cannot do it well, I have the luxury of not doing it at all in my toy renderer. All triangles are fully opaque. That's not ideal, but it keeps things simple.
As explained in the previous post, every triangle stores a material index, every material is fully characterized by three textures (base color, normal and specular) and every material has all of those (sometimes their size is just 4×4 to store a constant value). To implement shading with a visibility buffer, the shading pass must have random access to all of these textures. Vulkan makes that relatively easy. On the C side, a function writes an array of descriptors for all these textures:
VkDescriptorImageInfo* get_materials_descriptor_infos(
    uint32_t* texture_count,
    const materials_t* materials)
{
    (*texture_count) = (uint32_t) materials->material_count * material_texture_count;
    VkDescriptorImageInfo* texture_infos = malloc(sizeof(VkDescriptorImageInfo) * *texture_count);
    for (uint32_t i = 0; i != *texture_count; ++i) {
        texture_infos[i].imageView = materials->textures.images[i].view;
        // The shading pass samples these as combined image samplers
        texture_infos[i].imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
        texture_infos[i].sampler = materials->sampler;
    }
    return texture_infos;
}
The shader uses this global declaration:
//! Textures (base color, specular, normal
//! consecutively) for each material
layout (binding = 5) uniform sampler2D
g_material_textures[3 * MATERIAL_COUNT];
Implementing a shading pass with a classic G-buffer is fairly straightforward. You unpack data from the G-buffer and then all data needed for shading is available. This part is quite a bit more complicated with visibility buffers. My renderer implements it with a single function in 101 lines of code. It takes the 2D index of a pixel, the triangle index from the visibility buffer and a ray direction for this pixel. Then it fills a struct with data that you would normally find in a G-buffer and some other useful attributes.
shading_data_t get_shading_data(
ivec2 pixel, int primitive_index,
vec3 ray_direction)
There is some fancy math in there, so you might want to take a closer look at the code. Here is an overview of what it does:
• Read and dequantize position, normal vector and texture coordinates for all three vertices of the relevant triangle. In total, 3·128=384 bits per pixel are read from two buffers but these reads are very cache coherent because there is no index buffer and triangles are sorted.
• Compute derivatives of the barycentric coordinates with respect to screen space coordinates. We need those for texture filtering.
• Use the barycentric coordinates to compute the interpolated position, normal and texture coordinates for the fragment.
• Compute the screen space derivatives for the texture coordinates (using the derivatives for the barycentric coordinates).
• Read the 8-bit material index.
• Sample all three textures of the material using textureGrad(). Thanks to the derivatives, mipmapping and anisotropic filtering work correctly.
• Apply some conversions to the data from the textures to get the parameters for the Frostbite BRDF (e.g. the diffuse albedo).
• Construct a tangent frame (more on that below) and transform the shading normal to world space.
• Apply a little hack to avoid shading normals where the viewing direction is below the horizon.
There is quite a lot of arithmetic going on here. On the other hand, all memory reads are either heavily cache coherent (partly due to the triangle ordering) or access block-compressed textures. Compared to a classic G-buffer, I am trading reduced bandwidth requirements for increased computation. That may or may not be a good trade, depending on which resource is limiting performance. In any case, there is the added benefit that it becomes much easier to perform shading for surfaces encountered in a path tracer. The renderer only has ray traced shadows at this time but it is nice to be future proof, isn't it?
## Computing tangent frames
Now on to an aspect that is simultaneously cool and mildly embarrassing. To compute a tangent frame for a triangle, all I need are the texture coordinates and positions for all three vertices. That is readily available in the method that prepares the shading data. Thus, I implement it on the spot in the fragment shader. This moderate computational overhead lets me avoid loads of six 3D vectors per pixel.
But I promised embarrassment: Tangent frames for triangles are not the same as tangent frames for vertices. On smooth surfaces, the established practice is to merge the tangent frames of all adjacent triangles at the vertices. Then these tangents are interpolated across the triangles. Since I only look at one triangle at a time and lack connectivity information, I cannot do that, especially not in the fragment shader. I do construct the tangents so that they are orthogonal to the interpolated normal, so at least this way smooth surfaces are accounted for. However, there can be discontinuous changes of tangent vectors at triangle boundaries.
The other embarrassing aspect that makes me feel less bad about the first one is that I already lost this game at an earlier stage. I obtain models with normal maps from various sources and load all of those into Blender. Maybe some of these models come with tangent frames but as far as I know Blender drops those during import. My exporter would have to come up with tangent frames on its own. These could be a bit nicer than the ones computed in the fragment shader but most likely they won't be defined in exactly the right way to match whatever was used to bake the normal maps originally.
Getting tangent frames right is hard when you do not control the authoring of normal maps. If I'm going to do it wrong, I can at least do that efficiently.
## The Frostbite BRDF
There are lots of great BRDF models that help us approach the diversity of real surfaces. The flip side is that this sometimes leads into a babel of similar alternatives where nobody understands what anybody else is doing. Getting shading to look the same in different renderers is a daunting task. There are different choices for diffuse and specular microfacet models, normal distribution functions, masking and shadowing and Fresnel approximations. Since microfacet BRDFs are sort of modular, these options give rise to a combinatorial explosion. And on top of that, people do not always agree on how to define the input parameters for these models.
However, in recent years we got a glimpse of unification under the slogan “physically based shading.” An appreciable portion of people in real-time rendering now defines most materials in a fairly standardized way. As far as I can tell, the introduction of the Frostbite BRDF [Lagarde2015] has been a major catalyst for that. It is not really a new BRDF, all building blocks have been described in detail before (Disney diffuse, GGX, Smith masking-shadowing, Fresnel-Schlick approximation) [Heitz2014]. But it sets a well-defined standard for how to combine them and how to feed them with parameters from textures. It also strikes a good balance between efficiency and versatility, which probably helped it to get a rather wide adoption. Probably, there are many slightly different variants out there but at least they will give similar shading for identical parameters.
My renderer uses the Frostbite BRDF for absolutely all surfaces. In my opinion, it gives appealing shading for a wide range of solid surfaces (Figure 2). The three textures define base color, normal, an occlusion parameter that I do not use, roughness and metallicity. These attributes are stored in the order listed here (e.g. roughness in the green channel). Metallicity is 1 for metals (where the base color defines the specular color) and 0 for dielectrics (where the base color defines the diffuse albedo and specular highlights are white). The roughness in the texture gets squared before it is used for the GGX normal distribution function. Models from ORCA, Sketchfab and 3D Model Haven all provide such textures.
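As a rough sketch of these conversions (Python pseudocode rather than the actual GLSL; the dielectric Fresnel $$F_0$$ of 0.04 is a common default that I am assuming here, not a value quoted from the course notes):

import numpy as np

def decode_frostbite_params(base_color, roughness, metallicity):
    """Map texture values to BRDF inputs (metalness workflow)."""
    alpha = roughness * roughness                   # GGX uses squared roughness
    base_color = np.asarray(base_color, dtype=np.float64)
    diffuse_albedo = base_color * (1.0 - metallicity)
    # Dielectrics get a small constant F0; metals take F0 from the base color.
    specular_f0 = 0.04 * (1.0 - metallicity) + base_color * metallicity
    return alpha, diffuse_albedo, specular_f0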
If you want to check out my reasonably optimized implementation, you should look at evaluate_brdf() in brdfs.glsl and at get_shading_data() in shading_pass.frag.glsl.
Figure 2: The Arcade lit by a polygonal light. Across materials of different roughness, the Frostbite BRDF gives plausible shading.
## Linearly transformed cosines (LTCs)
As I wrote the toy renderer, my goal was to implement new techniques for shading with area lights. That's where LTCs [Heitz2016] come into play. In short, they provide an approximation to specular BRDFs that makes it much easier to integrate over the area light for shading. Eric Heitz wrote a more detailed explanation and of course there's the paper [Heitz2016]. Originally, you could not have shadows with LTCs. Now you can, but more on that in the next post.
To fit coefficients of LTCs, I used Eric Heitz's code (Stephen Hill pointed out that there is an updated version). I have not published this code since it is just slightly modified. All I really did was to put the Frostbite BRDF in there. Then I dump this data to another lazy binary file format and load it into my renderer. There are two noteworthy aspects of my implementation though: It uses a 3D table instead of a 2D table and quantizes LTC coefficients in an unusual way.
To use LTCs, I have to store a few precomputed coefficients for every relevant BRDF and for each viewing direction. For isotropic materials, the viewing direction boils down to a single inclination angle $$\theta_o$$. The appearance of the Frostbite specular BRDF depends on the roughness and on Fresnel $$F_0$$, i.e. the color of the specular reflection when we look at the material at a right angle. For my renderer, I have created a 3D table that accounts for all three parameters. However, Stephen Hill pointed out that he has proposed an alternative where a 2D table suffices [Hill2017]. Inspired by prior work on prefiltered environment maps, the idea is to only store LTCs for Fresnel $$F_0=1$$. Then an additional channel stores the overall brightness of the specular reflection (i.e. the albedo) for Fresnel $$F_0=0$$. The correct albedo is obtained by using $$F_0$$ to interpolate between the albedos for the two extreme values.
My approach with the 3D table gives more accurate fits because the shape of the specular lobe provided by the LTC can depend on Fresnel $$F_0$$ as it should. However, it is a bit more complicated to implement, the table of LTCs is bigger (namely 2.5 MB) and if Fresnel $$F_0$$ varies quickly (in my test scenes it doesn't) texture reads have worse cache locality. To give you an idea whether it is worthwhile to invest these resources, I decided to do a direct comparison. All information needed to implement Stephen Hill's method is available in my table, so I only had to modify the shader a bit. You can download the modified code below.
Figure 3 shows results with my 3D table and Figure 4 uses Stephen Hill's method. As expected, there is more noise with the 2D table. The difference is quite visible, especially on the rough plane to the left, but results with the 2D table are still good. The surfaces here are not metallic, so this example should be considered a worst case for the 2D table.
Figure 3: Three planes of different roughness with Fresnel $$F_0=0.02$$ shaded using BRDF importance sampling for polygonal lights. This rendering uses the weighted balance heuristic, i.e. if the LTC fit would be perfect, there would be no noise. The LTCs used here come from a 3D table.
Figure 4: Like Figure 3 but uses Stephen Hill's method with the 2D LTC table.
In practice, samples for diffuse and specular components of the BRDF should be combined in a different manner. It improves results in penumbras but causes more noise in fully lit regions. More on that in the next post. Figure 5 and Figure 6 show a comparison in this setting. The parts where diffuse shading dominates get worse but for specular highlights, the conclusions are essentially the same as before. Though, the increased noise with the 2D table appears less problematic when there is more noise to begin with.
Figure 5: Like Figure 3 but uses clamped optimal MIS with $$v=0.5$$.
Figure 6: Like Figure 4 but uses clamped optimal MIS with $$v=0.5$$.
Overall, I would say both options are absolutely legitimate. If you do not mind creating and using the slightly bigger table, you get a visible quality improvement out of it. Otherwise, the quality of the results will still be good and you can be more confident that you are not thrashing caches with LTCs.
The other interesting aspect of my implementation is the representation of LTCs in GPU memory. It differs a bit from the solution in the published implementation [Heitz2016]. A linearly transformed cosine is described by a matrix of the form
$M:=\begin{pmatrix}a & 0 & b\\ 0 & c & 0\\ d & 0 & 1 \end{pmatrix}.$
Additionally, the table stores the specular albedo (or two specular albedos with Stephen Hill's method). My implementation stores one additional entry explicitly (i.e. six values per LTC):
$M:=\begin{pmatrix}a & 0 & b\\ 0 & c & 0\\ d & 0 & e \end{pmatrix}$
Scaling all entries by the same factor does not change the LTC. Thus, I can ensure that the maximal absolute value among all entries is exactly one. I also know the sign for each entry. Therefore, I can use 16-bit fixed point numbers instead of 16-bit floats. Filtering works fine thanks to a UNORM format. Since I do not use any bits for float exponents, accuracy is better but I also use 16 bits more. Maybe it also makes interpolation behave more reasonably in some boundary cases. I am not sure whether this change is really necessary but it doesn't hurt either. Besides, I have a fondness for fixed-point formats. I guess it goes back to moment shadow mapping.
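As a sketch of what this looks like at table-build time (assumed variable names; this is not the published fitting code):

import numpy as np

def encode_ltc_entries(entries):
    """Encode the LTC matrix entries (a, b, c, d, e) as 16-bit UNORM.

    The LTC is invariant under uniform scaling of the matrix, so we
    normalize such that the largest magnitude is exactly 1. The signs
    are known a priori per entry, so only magnitudes are stored."""
    entries = np.asarray(entries, dtype=np.float64)
    entries = entries / np.max(np.abs(entries))
    return np.round(np.abs(entries) * 65535.0).astype(np.uint16)

print(encode_ltc_entries([0.9, -0.1, 0.5, 0.2, 1.3]))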
## Stratified random numbers (blue noise)
My renderer uses Monte Carlo integration for shading with either linear lights or polygonal lights. A random point on the light is picked and a shadow ray determines whether it is visible. For that I need either 1D or 2D random numbers on each pixel. Crucially, these random numbers do not have to be independent across pixels. Introducing anticorrelations into random numbers of neighboring pixels in just the right way pushes noise into higher frequencies. That makes it harder to perceive and easier to remove through denoising. Therefore, I do not use any of the great pseudorandom number generators [Jarzynski2020] out there. Instead, I use precomputed textures of stratified random numbers. I have an array of 64 textures and select a random one in each frame that also gets offset randomly.
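In Python pseudocode, the per-frame selection looks roughly like this (names are made up; the real renderer does this with a GPU texture array):

import numpy as np

rng = np.random.default_rng()

def noise_for_pixel(noise_textures, pixel, index, offset):
    """noise_textures has shape (64, res, res); index and offset are
    drawn once per frame, then reused for every pixel."""
    res = noise_textures.shape[1]
    p = (np.asarray(pixel) + offset) % res   # toroidal shift
    return noise_textures[index, p[0], p[1]]

# Once per frame:
# index = rng.integers(noise_textures.shape[0])             # one of the 64 textures
# offset = rng.integers(0, noise_textures.shape[1], size=2) # random toroidal offset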
For the 1D linear lights, it should not come as a surprise that I use my blue noise textures [Ulichney93]. Apparently, everybody does that nowadays and I made them exactly for this sort of situation. Results are great, especially in combination with uniform jittered sampling [Ramamoorthi2012] (Figure 7). Particularly in spots where a single contiguous part of the linear light is visible/occluded, the ray traced shadows essentially just apply a thresholding to the random numbers. Thus, we get cleanly dithered gradients.
Figure 7: Shadows from a long linear light rendered using blue noise textures and six samples per pixel (three diffuse, three specular). Results are improved further through uniform jittered sampling as described in the paper.
But what about 2D polygonal lights? Should I just use two independent 1D blue noise textures? That's what I did for the longest time and it is not so bad but there is a better solution now [Ahmed2020]. The idea is to start from a 2D Sobol sequence. This sequence offers a progressive point set with a special property. Each prefix of $$2^{2n}$$ points (e.g. 256·256 points) places exactly one point in certain axis-aligned rectangles of various aspect ratios (so-called elementary intervals). If one such elementary interval corresponds to a part of the area light that is completely occluded or completely visible, the outcome is no longer random. We know that we place exactly one sample there and get the corresponding contribution. Thus, variance is decreased.
This reasoning only works at high sample counts but it points us in the right direction. For nearby pixels, similar parts of the area light will be visible or occluded. So if we use nearby entries in the Sobol sequence for nearby pixels, we probably get many of the same advantages. Morton codes could be used to unroll the 1D sequence onto the 2D screen in such a way but that gives rise to regular patterns. Ahmed et al. take a slightly different approach by traversing a quadtree with randomly shuffled children. The order in the depth-first traversal maps Sobol points to pixels. Locality is preserved but regular patterns are broken. I would definitely recommend that you read the full paper [Ahmed2020] instead of this mile high summary. In my opinion, it addresses an important problem for modern real-time rendering well.
One of the benefits of this approach is that it is reasonably efficient to compute the random numbers from the pixel index. However, I already had the infrastructure for precomputed noise tables in place and preferred to stick to it. Thus, I use a slightly modified version of Abdalla Ahmed's code to compute noise tables (published as well). These tables are created for rendering at one sample per pixel. At higher sample counts, they still work but are suboptimal because consecutive samples for a single pixel should also be consecutive entries of the Sobol sequence.
Figure 8, Figure 9 and Figure 10 compare white noise, blue noise and Ahmed's approach. White noise is clearly inferior. The improvement over blue noise with Ahmed's approach is subtle but I found it to be consistently present. Bright and dark pixels cluster together slightly less.
Figure 8: Shadow from a rectangular light for the leg of a chair using white noise.
Figure 9: Shadow from a rectangular light for the leg of a chair using blue noise.
Figure 10: Shadow from a rectangular light for the leg of a chair using Ahmed's approach.
I also tried many other options:
• Using the first two dimensions of a 4D Sobol sequence to determine where samples should go in a 2D noise table gave fairly good results but with structured noise.
• Using Owen-scrambled Sobol in the same manner gave even more structured artifacts. I am not sure why.
• Halton gives less structured noise but also a bit more low-frequency noise. To get it to look good, I had to use a 2048×1152 resolution, so the precomputed textures took 1.1 GB in the end.
Overall, Ahmed's approach is the winner in this comparison for direct illumination with polygonal area lights. Its lead is not huge but it performs consistently well in every aspect. It never looked clearly worse than one of the other options.
## Conclusions
In my renderer, the life of a triangle is short. It does so little with the geometry that the complete lack of culling is irrelevant. Most work happens in the shading pass but this pass also has low bandwidth requirements. The image quality does not stem from geometric complexity but from sophisticated shading. In particular, linear lights or area lights can look really nice when used appropriately. In the next post, I'll look beyond the random numbers and explain how my renderer supports large linear and polygonal lights with ray-traced shadows without resorting to approximations or introducing a lot of noise.
## References
Ahmed, Abdalla G. M. and Wonka, Peter (2020). Screen-Space Blue-Noise Diffusion of Monte Carlo Sampling Error via Hierarchical Ordering of Pixels. ACM Transactions on Graphics (proc. SIGGRAPH Asia), 39(6). Official version | Author's version
Burley, Brent (2020). Practical Hash-based Owen Scrambling. Journal of Computer Graphics Techniques, 9(4):1-20. Official version
Burns, Christopher A. and Hunt, Warren A. (2013). The Visibility Buffer: A Cache-Friendly Approach to Deferred Shading. Journal of Computer Graphics Techniques, 2(2):55-69. Official version
Gruen, Holger and Benthin, Carsten and Woop, Sven (2020). Sub-triangle opacity masks for faster ray tracing of transparent objects. Proceedings of the ACM on Computer Graphics and Interactive Techniques (proc. HPG), 3(2). Official version
Heitz, Eric (2014). Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs. Journal of Computer Graphics Techniques (JCGT), 3(2):48-107. Official version
Heitz, Eric and Dupuy, Jonathan and Hill, Stephen and Neubelt, David (2016). Real-time Polygonal-light Shading with Linearly Transformed Cosines. ACM Transactions on Graphics (proc. SIGGRAPH), 35(4). Official version | Author's version
Hill, Stephen and Heitz, Eric (2017). Physically Based Shading in Theory and Practice: Real-Time Area Lighting: a Journey from Research to Production. ACM SIGGRAPH 2017 Courses, article 7. Official version | Author's version
Jarzynski, Mark and Olano, Marc (2020). Hash Functions for GPU Rendering. Journal of Computer Graphics Techniques, 9(3):21-38. Official version
Lagarde, Sébastien and de Rousiers, Charles (2015). Physically Based Shading in Theory and Practice: Moving Frostbite to PBR. ACM SIGGRAPH 2014 Courses, article 23. Author's version
Ramamoorthi, Ravi and Anderson, John and Meyer, Mark and Nowrouzezahrai, Derek (2012). A Theory of Monte Carlo Visibility Sampling. ACM Transactions on Graphics, 31(5). Official version | Author's version
R. A. Ulichney (1993). Void-and-cluster method for dither array generation. Proc. SPIE 1913, Human Vision, Visual Processing, and Digital Display IV. Official version | Author's version
|
# Eigenvalues of a $4\times4$ matrix
I want to find the eigenvalues of the matrix
$$\left[ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & a & a & 0 \\ 0 & a & a & 0 \\ 0 & 0 & 0 & b \end{array} \right]$$
Can somebody explain to me the theory behind getting the eigenvalues of this $4\times4$ matrix? The way I see it is to take the four $a$'s as a matrix itself and see the big matrix as a diagonal one. The problem is, I can't really justify this strategy.
Thanks
Do you know how to find the eigenvalues of a $2\times2$ or $3\times3$ matrix? Even more directly, do you know what an eigenvalue is? – JavaMan Jan 27 '13 at 20:35
en.wikipedia.org/wiki/… – Belgi Jan 27 '13 at 20:36
For an ordinary 2x2 and 3x3 matrix, I use the characteristic equation. But I can't see how to apply that to bigger matrices – Jeroen Jan 27 '13 at 20:42
You should think of your matrix as a block matrix consisting of the $1\times1$ zero matrix, a $2\times2$ matrix of all $a$'s, and the $1\times1$ matrix $(b)$. Your first column is $0$, which means that the vector $(1,0,0,0)$ is an eigenvector with eigenvalue $0$. Now, the $2\times2$ matrix of all $a$'s by itself has eigenvectors $(1,1)$ and $(1,-1)$, with eigenvalues $2a$ and $0$ respectively. Since this $2\times2$ matrix sits within the $4\times4$, convince yourself that the corresponding eigenvectors of the full matrix are $(0,1,1,0)$ and $(0,1,-1,0)$. Finally, the last column has only one $b$ at the bottom, so $(0,0,0,1)$ is an eigenvector with eigenvalue $b$.
Note that the above works primarily because your matrix has diagonal block structure, with zeros everywhere else.
The eigenvalues of $A$ are the roots of the characteristic polynomial $p(\lambda)=\det (\lambda I -A)$.
In this case, the matrix $\lambda I-A$ is made of three blocks along the diagonal.
Namely $(\lambda)$, $\left(\begin{matrix} \lambda-a & -a \\ -a & \lambda -a \end{matrix}\right)$, and $(\lambda -b)$.
The determinant is therefore equal to the product of the determinants of these three matrices.
So you find: $$p(\lambda)=\lambda\cdot \lambda(\lambda-2a)\cdot (\lambda -b).$$
Now you see that your eigenvalues are $0$ (with multiplicity $2$), $2a$, and $b$.
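A quick numerical check of this (not part of the original answer; arbitrary test values for $a$ and $b$):

import numpy as np

a, b = 3.0, 5.0
A = np.array([[0, 0, 0, 0],
              [0, a, a, 0],
              [0, a, a, 0],
              [0, 0, 0, b]])
print(np.round(np.linalg.eigvals(A), 10))   # expect 0, 0, 2a = 6, b = 5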
|
# low pass filter in spwm h bridge
I have a problem with the filtering stage of an SPWM H-bridge inverter. I have an SPWM signal at 25 kHz with a fundamental of 50 Hz. I made a low-pass filter with a 1.5 mH inductor in series and a 2 µF capacitor in parallel. Everything looks perfect: the signal is OK and the voltage is 220 V. But when I connect a load at the output, the voltage drops dramatically, and the efficiency for a 1000 W inverter is very bad (I think current can't pass through the inductor). What is wrong?
NOTE: the core of the choke is about the size of a ring on my finger.
• Schematic or it didn’t happen. Aug 3, 2018 at 22:18
• The inductor needs to be small enough to pass the di/dt that the load needs
– user16222
Aug 4, 2018 at 8:32
Your inductor value and capacitor value seem OK (resonant at 2.9 kHz) and loading with 48.4 ohms (1000 watts on 220 VAC) should not be a problem.
But the devil is in the detail: -
• Your inductor has too much series resistance
• Your H bridge is incapable of delivering 1000 watts efficiently
Here's a Bode plot so you can see it should be OK: -
The red curve tells you that the frequency response is flat in the 50 Hz area and that you get "as-expected" roll off of the PWM artefacts above 3 kHz.
With perfect components there should be very little attenuation at 50 Hz
What does this output filter need to do?
1. it needs to attenuate the switching frequency
2. it needs to pass the load current.
From the information you have provided:
1. Inverter is SPWM
2. Switching frequency is 25kHz
3. fundamental is 50Hz
4. Output voltage is 220AC
5. Output "power" is 1kW
You have realised this with 1.5 mH and 2 µF.
As AndyAKA has stated, conceptually it should be fine:
$f_o = \frac{1}{2\pi\sqrt{LC}} = 2.905\,\text{kHz}$
and at 50 Hz there should be no attenuation.
However... this filter can be realised out of an infinite combination of L and C to produce such a cutoff frequency and herein lies the problem. This L could be too big; inductors are referred to as chokes for a reason as they choke the change in current.
You mention SPWM, and this has a DC-link utilisation of around 50%, but without a statement about your DC-link voltage or how you are deriving it, not a lot more can be determined (basically your output voltage could be lower, thus your load current could be higher, adding to the problem).
You state an output power of 1kW... for now I am assuming you are driving into a purely resistive load and thus VA = W (in practice you will need to consider DPF and PF in the efficiency).
At 1 kW and 220 V, the load current is 4.5 A rms.
The maximum current at the maximum frequency (50 Hz) is thus:
$i(t) = \sqrt{2}\times 4.5 \times \sin(2\pi 50 \times t)$
The maximum rate of change of current is thus:
$2\pi 50 \times \sqrt{2}\times 4.5 = 0.001999\,\text{A}/\mu\text{s}$
If we assume the filter capacitor will tend to short the load at 25 kHz, and thus zero voltage occurs at the maximum di/dt:
$\hat{L} = \frac{\check{V}}{2\sqrt{2}\,\pi\,\hat{f}\,\check{I}}$
$= \frac{220}{0.001999\,\text{A}/\mu\text{s}} = 110\,\text{mH}$
This is the MAXIMUM usable inductance up to 25 kHz for such a load change. You wouldn't want this size for practicality, but note there is also about -1.7 dB of attenuation IF the filter were set at 3 kHz as you have.
So from a current-throughput point of view you should also be fine.
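For reference, a quick script reproducing the numbers above (resistive load assumed, as stated):

import numpy as np

L, C = 1.5e-3, 2e-6              # filter from the question
V, P, f = 220.0, 1000.0, 50.0    # rms volts, watts, fundamental Hz

f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))        # cutoff, ~2.9 kHz
i_rms = P / V                                   # ~4.5 A rms
didt = 2 * np.sqrt(2) * np.pi * f * i_rms       # peak di/dt, ~2000 A/s
L_max = V / didt                                # ~110 mH

print(f"f0 = {f0:.0f} Hz, I = {i_rms:.2f} A rms")
print(f"max di/dt = {didt:.0f} A/s, L_max = {L_max*1e3:.0f} mH")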
So questions...
1. What have you exactly built?
|
# WMS:Customized Lag Time Equation
Almost all of the lag time equations are of the form:
${\displaystyle T_{LAG}=C_{t}\ast \left({\frac {(L\ast L_{ca})}{\sqrt {S}}}\right)^{m}}$
where:
TLAG = watershed lag time in hours.
Ct = coefficient accounting for differences in watershed slope and storage.
L = the maximum flow length of the watershed along the main channel from the point of reference to the upstream boundary of the watershed, in miles.
Lca = the distance along the main channel from the point of reference to a point opposite the centroid, in miles.
S = slope of the maximum flow distance path in ft/mile.
m = lag exponent
Therefore, if the specific equation that the state, county, etc. uses to compute lag time is not available, it can often be set up from this general form by entering a custom coefficient, Ct, and exponent, m.
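For example, the general form can be evaluated directly (the coefficient and exponent below are placeholders, not recommended values):

import math

def lag_time_hours(L_miles, Lca_miles, S_ft_per_mile, Ct, m):
    """T_LAG = Ct * ((L * Lca) / sqrt(S))**m, in hours."""
    return Ct * ((L_miles * Lca_miles) / math.sqrt(S_ft_per_mile)) ** m

print(lag_time_hours(L_miles=5.0, Lca_miles=2.3, S_ft_per_mile=40.0,
                     Ct=1.2, m=0.3))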
|
# Thread: Hypergeometric conditional variance formula
1. ## Hypergeometric conditional variance formula
I know the unconditional variance formula of the form:
$$\operatorname{Var}[X] = \frac{n\,r\,(N-r)(N-n)}{N^{2}(N-1)}$$
So, what is the variance formula conditioned on 1 or 2 items, particularly when there are 3 different types of items to choose from?
2. ## Re: Hypergeometric conditional variance formula
Consider the random variable $Z$ whose distribution is the conditional distribution, and then get $\operatorname{Var}[Z] = E[Z^2] - E[Z]^2$ using the standard formulae.
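For instance, you can build the conditional distribution numerically and apply that formula (a sketch; the population/draw numbers and the conditioning event are made up):

from scipy.stats import hypergeom

N, r, n = 20, 7, 5                 # population, marked items, draws
rv = hypergeom(N, r, n)            # scipy's order: (M, n, N)

# Condition on the event {Z >= 1}; replace with whatever event you need.
pmf = {k: rv.pmf(k) for k in range(1, n + 1)}
total = sum(pmf.values())
pmf = {k: p / total for k, p in pmf.items()}   # renormalized conditional pmf

ez = sum(k * p for k, p in pmf.items())
ez2 = sum(k * k * p for k, p in pmf.items())
print(ez2 - ez ** 2)               # Var[Z | Z >= 1]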
|
# DIS 2015 - XXIII. International Workshop on Deep-Inelastic Scattering and Related Subjects
27 April 2015 to 1 May 2015
US/Central timezone
## Classical Gluon Production Amplitude for Nucleus–Nucleus Collisions: First Saturation Correction in the Projectile
30 Apr 2015, 09:20
25m
### BALLROOM WEST
WG2 Small-x, Diffraction and Vector Mesons
### Speaker
Douglas Wertepny (The Ohio State University)
### Description
The classical single-gluon production amplitude in nucleus-nucleus collisions including the first saturation correction in one of the nuclei (the projectile) while keeping multiple-rescattering (saturation) corrections to all orders in the other nucleus (the target) is presented. We expand one of the two nuclei up to two Wilson lines (i.e. only two nucleons interact in the projectile nucleus): the single-gluon production amplitude we calculate is order-$g^3$ and is leading-order in the atomic number of the projectile, while resumming all order-one saturation corrections in the target nucleus. Our result is the first step towards obtaining an analytic expression for the first projectile saturation correction to the gluon production cross section in nucleus-nucleus collisions.
|
1. ## Factoring expressions
I need to completely factor this expression:
$5(6-5s)^2 + 6(7s-2)(6-5s)$
I see the similarities of the $(6-5s)$ but I don't know what to do about it. If anyone could help me out I'll be grateful! Thanks
2. = (6 - 5s)(5(6 - 5s) + 6(7s -2))
= (6 - 5s)(30 - 25s +42s - 12)
= (6 - 5s)(18 + 17s)
3. I got the gist and you're right! Thanks muchly
|
# Affinity
## Characterization
The affinity model of our intrinsics aims to characterize errors or aberrations in the image plane that appear as affine transformations of our image coordinates. These can be caused by various effects depending on the type of camera and the underlying electronics that power the camera. Primarily, we categorize these into scale and shear effects.
Taking our generalized projection model, we can characterize these effects in the form of:
$\begin{bmatrix} x \\ y \end{bmatrix} = f \begin{bmatrix} X_c/ Z_c \\ Y_c / Z_c \end{bmatrix} + \begin{bmatrix} c_x \\ c_y \end{bmatrix} + g_{\mathsf{affinity}}(\bar{\textbf a})$
It is worth mentioning that while $$g_{\mathsf{affinity}}$$ can be modeled according to effects in $$x$$, effects in $$y$$, or both, the Tangram Vision Platform uses the convention of only applying our affinity model to the $$x$$ dimension for the sake of simplicity.
## Scale
The scale model aims to describe differences in pixel scaling in the $$x$$ (column) and $$y$$ (row) dimensions of our image plane.
In many perception systems, this is modeled as $$f_x$$ and $$f_y$$. See our explanation of the projection model for why Tangram doesn't follow this paradigm.
Scale effects may be due to:
1. Measured physical differences in pixel size (i.e. rectangular pixels). This is actually quite rare.
2. Differences between the clock frequencies of the analog CCD / CMOS clock and the sampling frequency of the analog-to-digital converter (ADC) in the frame grabber.
3. Projective compensation effects resulting from an insufficient range of depths in the set of object space points used to calibrate. This is often the case if flat checkerboards are used for calibration.
The way that we model this effect is with a single linear coefficient $$a_1$$, as follows:
$g_{\mathsf{affinity}}(a_1) = \begin{bmatrix} a_1 (x - c_x) \\ 0 \end{bmatrix}$
Read the Tangram Vision Blog post on focal length for a more complete explanation of these effects.
## Shear
Shear effects model a non-orthogonality between pixels. In effect, we would be modeling our image plane being shaped more like a rhombus than a square or rectangle.
For most full-frame cameras, this effect is not typically worth modeling, nor is it something that can be typically observed in practice. However, shear effects can be apparent in many line or push-broom cameras. This effect can be modeled as follows:
$g_{\mathsf{affinity}}(a_2) = \begin{bmatrix} a_2 (y - c_y) \\ 0 \end{bmatrix}$
## Scale and Shear
Combining the two possible models above, we get a unified affinity model.
$g_{\mathsf{affinity}}(a_1, a_2) = \begin{bmatrix} a_1 (x - c_x) + a_2 (y - c_y) \\ 0 \end{bmatrix}$
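A minimal sketch of applying the combined model (assuming $$g_{\mathsf{affinity}}$$ is evaluated at the ideal projected coordinates; the function and variable names here are made up for illustration):

import numpy as np

def project(point_c, f, c, a1, a2):
    """Pinhole projection plus the affinity correction above."""
    X_c, Y_c, Z_c = point_c
    x = f * X_c / Z_c + c[0]
    y = f * Y_c / Z_c + c[1]
    # g_affinity: scale (a1) and shear (a2), applied to x only.
    x_corrected = x + a1 * (x - c[0]) + a2 * (y - c[1])
    return np.array([x_corrected, y])

print(project((0.1, 0.2, 1.0), f=600.0, c=(320.0, 240.0), a1=1e-3, a2=5e-4))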
|
# C# In-Source Documentation Generation for NON-Programmers
The software group I currently work in has recently decided to start documenting our codebase. The initial approach they have been taking was to use the built in triple slash /// method of documenting.
An issue we soon found is that running this through doxygen produces a very nice representation of the codebase, but one aimed at programmers, whereas we had intended this documentation to be readable by our Systems Engineers, who will often come to us asking what exactly a task is doing.
Is there an easy way to document our code using the /// method and doxygen such that, if we run it a certain way, we can generate a document that contains JUST the systems-engineering version of the documentation, without all the extra fluff of standard programmer documentation that would scare away a systems person, such as methods and member variables? Any alternative solution suggestions are also welcome.
I'm sorry if this is a little confusing as to what we are trying to accomplish, I can adjust as responses come in. Thank you.
One thing you can do is to use doxygen's \page command, which gives you "Related Pages". Create a text file with an extension that is processed by doxygen, and just put a comment in there. (I use .doc, but you might want to change that to something else to avoid confusion with Word documents. I am also putting these files in a common directory called docsrc to have them in one place.) These pages then show up in a separate section in the docs.
/*!
\page foobar Foobar calculation
I am using the procedure outlined in the bla bla note to estimate
the foobar in our baz. Lorem ipsum dolor...
\section step1 1. Estimation of the foobar-efficiency of the baz system.
\author jdm
*/
You can then create links to the page or the sections with \ref foobar or \ref step1.
In our project, basically everyone who uses the program also codes around with it, so it is nice to have the usage documentation cross-linked with the code. But as the others pointed out, it might not be the best solution for a typical enduser-documentation.
I don't think this is going to get you what you want. It sounds like what you really want is to have good specification documentation that the Systems Engineers can use, and good Unit Tests that validate that the code runs according to those specifications. Inline code documentation is really more for the software engineers.
What's a little surprising and slightly frightening about your question is the implication that the Software Engineers are creating a system that the Systems Engineers will have to use, and that the Software Engineers are creating functionality from nothing. You should use extreme caution with having functionality be defined by your Software Engineers; they should be implementing specified functionality (and that specification should be what is used by the Systems Engineers).
The original design was done by the systems engineers and implemented by software engineers a few years ago in C++. Recently we have been converting the codebase to C#; during this, some minor tweaks have been made, and there are many new(er) systems engineers around for the different product lines that the software is used with. This creates the need for them to ask what exactly the code does today as it stands, from the original design through the various change requests. – TaRDy Jul 15 '09 at 20:50
If you're documenting code, then you can assume it's going to be read by programmers. Private members can be stripped from the output, which allows you to document public members as your public documentation. If you're not documenting code, i.e. an end-user interface that is used by non-developers, then I don't think the code is the best place for it.
Well and clearly stated. – Paul Sonier Jul 15 '09 at 20:26
We are documenting code in a Systems engineering understandable manner so they know exactly what our implementation is doing. – TaRDy Jul 15 '09 at 20:43
|
## Organic Chemistry 9th Edition
The reason for this is that the two carbon atoms, each having 4 valence electrons, can only accommodate 4 bonds each (which already completes the octet), so $C_{2}H_{6}$ (the structure on the right) is the maximum possible formula. Adding an additional H atom would not be possible, as there are no more valence electrons left to be shared.
|
# All Questions
663 views
### What is a smart card?
In many cryptographic protocols, some information is transmitted within smart cards. So, what is a smart card? Is it a physical card? What are they used for in cryptographic protocols?
2k views
### For Diffie-Hellman, must g be a generator?
Due to a number of recently asked questions about Diffie-Hellman, I was thinking this morning: must $g$ in Diffie-Hellman be a generator? Recall the mathematics of Diffie-Hellman: Given public ...
4k views
### Why can't one implement bcrypt in Cuda?
I had heard that although it's easy to implement message digest functions like MD5, SHA-1, SHA-256 etc. in CUDA (or any other GPU platform), it is impossible to implement bcrypt there. bcrypt is ...
897 views
### Can you make a hash out of a stream cipher?
A comment on another question made me wonder about something: Assume you're on a rather constrained platform — say, a low-end embedded device — with no built-in crypto capabilities, ...
846 views
### Can I select a large random prime using this procedure?
Say I want a random 1024-bit prime $p$. The obviously-correct way to do this is select a random 1024-bit number and test its primality with the usual well-known tests. But suppose instead that I do ...
580 views
### Can AES decryption be used as encryption?
Definition E: AES encryption D: AES decryption x: plain text y: encrypted text k: key In original AES cipher, encryption: y = E(x, k) decryption: x = D(y, k) Then I define the "reverse AES ...
2k views
### Why hash or salt when signing?
I've seen an example of how to sign using RSA. Besides the signing itself (s = m^d mod n) it also hashes and adds an IV. Why is that needed?
4k views
### Why do we need asymmetric algorithms for key exchange?
In SSL protocols, both symmetric and asymmetric algorithms are used. Why is it so? The symmetric algorithms are more secure and easier to implement. Why are asymmetric algorithms usually preferred in ...
1k views
### Does AES-CTR require an IV for any purpose other than distinguishing identical inputs?
I'd like to encrypt files deterministically, such that any users encrypting the same plaintext will use the same key and end up with the same ciphertext. The ciphertext should be private as long as ...
843 views
### Why doesn't preimage resistance imply the second preimage resistance?
Let the preimage resistance be defined as »given a hash value $h$, it is hard to find any message $m$ such that $\operatorname{hash}(m)=h$«, and let the second preimage resistance be defined as »given ...
2k views
### Low Public Exponent Attack for RSA
I'm having trouble understanding the algorithm for finding the original message $m$, when there is a small public exponent. Here is the example I'm trying to follow (you can also read it in the 'Low ...
7k views
### Why is padding used for RSA encryption given that it is not a block cipher?
In AES we use some padded bytes at end of message to fit 128/256 byte blocks. But as RSA is not a block cipher why is padding used? Can the message size be any byte length (is the encrypting agent ...
1k views
### Sending KCV (key check value) with cipher text
I was wondering why it is not more common to send the KCV of a secret key together with the cipher text. I see many systems that send cipher text and properly prepend the IV to e.g. a CBC mode ...
2k views
### Why, or when, to use an Initialization Vector?
i'm trying to figure out when an Intialization Vector (IV) should be used. There are anecdotal reports that WEP was broken because of weak IV's. It's also claimed that if two pieces of plaintext are ...
2k views
### Hill Cipher known plaintext attack
I know a plaintext - ciphertext couple of length 6 for a hill cipher where its key is a [3x3] matrix. Based on what I've read and learned, to attack and crack keys of [n x n], if we know a plaintext ...
1k views
### What is “Implicit Authentication”?
What is “Implicit Authentication” in the context of authentication methods? I searched the Web but could not find any article that describes this. If anyone can describe it, that would be a great ...
530 views
### Can we use elliptic curve cryptography in wireless sensors?
Can we use elliptic curve cryptography in wireless sensors? If so, how do you map points to message characters?
784 views
### How to construct encrypted functions (with either public or private data)?
Homomorphic encryption is often touted for its ability to Compute on encrypted data with public functions Compute an encrypted function on public (or private) data I feel I have a good grasp of #1 ...
1k views
### Webapp password storage: Salting a hash vs multiple hashes?
For security's sake, of course it's blasphemous to store passwords in plain-text; using a hash function and then doing a re-hash and comparison is considered much better. But, if bad guys steal your ...
202 views
### Is RSA encryption of a cryptographic hash with a private key the same as signature generation?
It is often said that RSA encryption with a private key is the same as signing (signature generation). Will RSA encryption with a private key over a cryptographic hash give the same result as ...
2k views
### Why is AES not a Feistel cipher?
I am studying for an exam right now. And I wanted to make sure I got this point correct. AES is not a Feistel cipher because the operations in AES are not invertible. Is the above statement ...
258 views
### Relation between attack and attack model for signatures
I would like to know: What is the relationship between an attack and an attack model. For example, let $\Pi$ be the Lamport signature scheme. This signature has it's security based on the one-way ...
381 views
### How much data can I encrypt with AES before I need to change the key in CBC mode?
In my cryptography class, the instructor suggested that in order to give the attacker a minimal advantage of $1/2^{32}$, we have to change the key after $2^{48}$ blocks are encrypted. It seems that ...
1k views
### AES Key Length vs Block Length
This answer points out that certain key and block lengths were a requirement for the AES submissions: The candidate algorithm shall be capable of supporting key-block combinations with sizes of ...
842 views
### Why can't the IV be predictable when its said it doesn't need to be a secret?
I heard multiple times not to reuse the same IV and IV should be random but doesn't need to be secret. I also heard if the IV is something like sequential numbers or something predictable I should ...
2k views
### RSA cracking: The same message is sent to two different people problem
Suppose we have two people: Smith and Jones. Smith public key is e=9, n=179 and Jones public key is e=13, n=179. Bob sends to ...
357 views
### Attack of an RSA signature scheme using PKCS#1 v1.5 encryption padding
My best interpretation of this question is that Java's crypto API has been subverted to perform RSA signature using PKCS#1 v1.5 encryption padding. Assume the signature $S$ of a message $M$ is ...
198 views
### How do I encrypt with the private key? [duplicate]
Possible Duplicate: RSA encryption with private key and decryption with a public key This wording is creeping everywhere (e.g. there): "I encrypt with the private key" and even sometimes, ...
949 views
### Approach towards anonymous e-voting
I want to implement an internet-based e-voting system. Voters shall be able to cast their vote for one out of n possible candidates. Each candidate has his own ballot-box kept by and at a trustworthy ...
185 views
### Does CBC encryption of a hash provide authenticity?
Given a message $M$ and a cryptographic hash function $H$, let $f(M) = E_K(M || H(M))$ where $E_K$ is AES-128-CBC encryption with PKCS#5 padding. Take $H = \textrm{SHA-256}$ if it matters. In other ...
304 views
### Shamir Secret sharing - Can share generator keep x values secret?
I'm wondering, in Shamir secret sharing, can generator of the shares, keep the x values which are used in evaluating the polynomial to obtain y values (i.e., the shares) secret, and whenever the ...
141 views
### Hash Based Encryption (fast & simple), how well would this compare to AES? [duplicate]
First of all, I know it's a very bad idea to invent your own encryption algorithm. It's better to use existing known, trusted, extensively tested and studied algorithms with a proven track record. The ...
136 views
### Currently i am doing decryption for RSA encoding and i face some problem with it [closed]
I am very new to cryptography so I don’t know much about it. I have been given a very large $N$ value and $E$ value to decrypt a ciphertext which was created using a AES 128 key and a IV by using RSA ...
12k views
### Basic explanation of Elliptic Curve Cryptography?
I have been studying Elliptic Curve Cryptography as part of a course based on the book Cryptography and Network Security. The text for provides an excellent theoretical definition of the algorithm but ...
3k views
### How are primes generated for RSA?
As I understand it, the RSA algorithm is based on finding two large primes (p and q) and multiplying them. The security aspect is based on the fact that it's difficult to factor it back into p and q. ...
1k views
### What is the ideal cipher model?
What is the ideal cipher model? What assumptions does it make about a block cipher? How does it relate to assuming that my block cipher is a pseudo-random permutation (PRP)? When is the ideal ...
1k views
### Is this password migration strategy secure?
I want to upgrade the security of some existing databases of users' authentication tokens strictly for the purpose of making sure that if the database is stolen, attackers will not be able to guess ...
4k views
### How can we reason about the cryptographic capabilities of code-breaking agencies like the NSA or GCHQ?
I have read in Applied Cryptography that the NSA is the largest hardware buyer and the largest mathematician employer in the world. How can we reason about the symmetric ciphers cryptanalysis ...
|
# Define the Term Self-inductance of a Solenoid - Physics
Define the term self-inductance of a solenoid.
#### Solution
The ratio of the magnetic flux through the solenoid to the current passing through it is called the self-inductance of the solenoid. It is given by
$$L = \frac{\phi}{i}$$
Concept: Solenoid and the Toroid - the Solenoid
|
# How to calculate the sum $x + x^2 +…+ x^n$ [closed]
How can I get the result of this sum:
$$x + x^2 +...+ x^n$$
Hint: for any $x \in \Bbb R$ with $x \neq 1$, $$1 + x + x^2 + \cdots + x^{n} = \frac{1-x^{n+1}}{1-x}$$
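To get the sum starting at $x$ instead of $1$, multiply by $(1-x)$ and telescope:
$$(1-x)\sum_{k=1}^{n} x^k = x - x^{n+1}, \qquad\text{so}\qquad \sum_{k=1}^{n} x^k = \frac{x\left(1-x^{n}\right)}{1-x} \quad (x \neq 1).$$
For $x = 1$ the sum is just $n$.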
• I am not sure the result makes sense for $x=1$. The right side is not defined while the left side is $n$. The right side should be extended as a limit for $x=1$. – Alexander Vigodner Feb 25 '15 at 17:32
• @AlexanderVigodner: The left side is $n-1$ ;-) – Stefan Hamcke Feb 25 '15 at 17:55
• @Stefan Hamcke Well, $n+1$ to be precise :). – Alexander Vigodner Feb 25 '15 at 19:26
|
# Synopsis: Plasmon Thermometers for Silicon
Electron oscillations in silicon may be used to map, with nanometer resolution, the temperatures across a silicon device.
Today’s silicon chips operate at ever increasing speed, but they also produce more waste heat than ever. To develop even faster processors, researchers need to reduce the heat that the chips produce. However, they don’t yet understand how heat propagates through the nanoscale components of a modern transistor. To do so, they would need to measure the temperatures across a device with nanometer resolution. Chris Regan of the University of California, Los Angeles, and co-workers have now developed a thermometry technique that, using a scanning transmission electron microscope (STEM), could eventually map temperature in a silicon device with a resolution down to 10 nm.
The technique measures temperature by using the STEM’s electron beam to create and characterize electron oscillations known as plasmons. Plasmons form when the beam causes a material’s valence electrons to oscillate with respect to the ions of its lattice. Since plasmon energies shift as silicon heats up and thermally expands, the temperature of silicon can be inferred by measuring the plasmon energy.
Regan’s group used this effect in a prototype thermometer based on silicon nanoparticles. The particles, deposited onto the surface of a chip, served as temperature probes. The researchers heated the nanoparticles to temperatures ranging from room temperature to $1250\,^{\circ}\mathrm{C}$ and fired the electron beam at them. They then determined the plasmons’ energies by measuring the energy lost by the beam as it passed through the nanoparticles. Such measurements allowed them to establish the relationship between plasmon energy and nanoparticle temperature, thus calibrating their thermometer. The results suggest that the STEM beam could be used to scan an entire silicon device, obtaining a high-resolution map of the temperatures across the device.
This research is published in Physical Review Applied.
–Sophia Chen
Sophia Chen is a freelance writer based in Tucson, Arizona.
|
# [java] JAR Portability
## Recommended Posts
First, let me say this: I am new to this forum. Also, my first forum experience (ever) gave me an incorrect impression which I am still trying to correct (really quick responses).

Now: I have been making Java games for almost a year. Since this summer (August 2006, approx.) I have been working with the JAR file format. I can run my JARs fine, but various people I have tried to show my games to have had difficulties running them. There are two situations/problems. One is a person who was using Linux; I don't know, does Linux generally handle JAR files correctly? The other is Windows (what I use): some people are getting "Could not find the main class. Program will exit." errors for no apparent reason. I can run it, and I can extract the manifest file, and I have verified that it is present and contains a main class. What could I be doing wrong?

Also (a question about where on the forum to put something, if anywhere): what about games that I have already made or done a lot on? Is there some sort of general board for posting accomplishments and projects? Or is this forum solely for people with questions and answers, not people with working stuff? In short, where (on this forum or elsewhere) can I showcase my stuff?

Since this is pertinent, I will include the most recent JAR file I've created and the stuff that it goes with: http://www.esnips.com/doc/e2157da2-e83c-48f7-ab55-0031e053f509

I'm not posting the controls mainly because I don't know if this is the right place for it. If it is, I will go ahead.
##### Share on other sites
Linux does jar files fine. On most Linux systems you would need to run it from the command line, or using a shell script.
I could not run your game on XP.
Exception in thread "main" java.lang.IllegalArgumentException: Width (-1) and height (-1) cannot be <= 0
    at java.awt.image.DirectColorModel.createCompatibleWritableRaster(Unknown Source)
    at java.awt.image.BufferedImage.<init>(Unknown Source)
    at hardpoint2.SuperImage.<init>(SuperImage.java:31)
    at hardpoint2.SuperImage.forName(SuperImage.java:61)
    at hardpoint2.HP2.<init>(HP2.java:62)
    at hardpoint2.HP2.main(HP2.java:190)
I executed with the command: java -jar HardPoint2.jar
##### Share on other sites
I believe you may be missing some image files, or (more likely) if you have them they were not unzipped to the Images\ folder like they should have been.
Controls (I guess I will go ahead and post them):
UP = Accelerate
DOWN = Brakes
LEFT / RIGHT = Rotate
CTRL = Shoot
S = Afterburners (Extra speed while held)
J = Jump (Hold while over a gateway)
ENTER = Pause
|
# Writing Numbers with a GAN¶
This notebook is an example of using a conditional Generative Adversarial Network (GAN) to generate ‘hand written’ digits.
One of the first things to try after training a neural network to classify the handwritten digits from the MNIST dataset is to ‘drive it backwards’ and use it to generate images of numbers. The idea is to start with an image of random noise then use the optimizer to tweak pixels so the classifier strongly predicts it as being the number you want to generate.
The implementation of this in TensorFlow is straightforward: just fix the weights and biases, and make the image a training variable rather than a placeholder.
Unfortunately it doesn’t work. The classifier divides 784-dimensional space (one dimension for each of the 28x28 pixels) into 10 categories, one for each digit. This space is large, but the classifier has only seen training examples for the parts of it that look like a number. For the parts where it hasn’t seen training data, the output is basically a random choice of the 10 digits. When we ask the optimizer to find a ‘4’ it will find an image that the classifier strongly believes is a number ‘4’, but it will turn out to be a point somewhere in the backwaters of this 784-dimensional space that doesn’t look like a number ‘4’ to you or me.
One alternative is a thing called a ‘Generative Adversarial Network’ or GAN. Instead of driving a classifier backwards, this sets up a system of two separate networks called a ‘Generator’ (G) and a ‘Discriminator’ (D).
The Generator is given a random seed and told to ‘generate a number 4’. It does this without ever seeing what a human-written ‘4’ looks like.
The Detector takes the output of the Generator and must determine if this input image was generated by a human or computer.
The two are trained alternately, with the Generator learning to fool the Detector and the Detector learning to distinguish real from fake numbers. If there is any systematic difference between a human and the Generator, then the Detector will learn it. This might be anything from ‘the image is white noise’ to ‘the writing is too neat’. The Generator will then have to learn to generate things that more closely match (in a probability-distribution sense) what people draw.
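Schematically, the two objectives look like this (a sketch with assumed tensor names like d_real, d_fake, d_vars and g_vars, not the actual training code used in this notebook):

# d_real = D(real_image, label); d_fake = D(G(label, noise), label).
# The Detector pushes d_real toward 1 and d_fake toward 0;
# the Generator pushes d_fake toward 1.
d_loss = -tf.reduce_mean(tf.log(d_real) + tf.log(1.0 - d_fake))
g_loss = -tf.reduce_mean(tf.log(d_fake))

# Alternate the two steps, each updating only its own network's variables.
d_step = tf.train.AdamOptimizer(1e-4).minimize(d_loss, var_list=d_vars)
g_step = tf.train.AdamOptimizer(1e-4).minimize(g_loss, var_list=g_vars)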
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import math
import re
import os
import os.path
%matplotlib inline
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
sess = tf.InteractiveSession()
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
## The Detector Network
This network takes an image and a one-hot encoding of the digit it is meant to be, and returns a probability that the image is fake. It is directly derived from the Google TensorFlow 'Deep MNIST' example, and consists of two convolution layers followed by a fully connected (FC) layer, then a final 'decision' layer that outputs a single fake/real probability.
The information about which digit was drawn is injected in the first FC layer by concatenating it with the 7x7x64 values from the 2nd convolution layer.
def weight_variable(shape, name):
initial = tf.truncated_normal(shape, stddev=2.0/np.sqrt(np.product(shape))) # He et al.
return tf.Variable(initial, name=name)
def bias_variable(shape, name):
initial = tf.constant(0.0, shape=shape)
return tf.Variable(initial, name=name)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME')
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1,2,2,1],
                          strides=[1,2,2,1], padding='SAME')
class Detect(object):
def __init__(self):
# First conv. layer
self.W_conv1 = weight_variable([5,5,1,32], name='W_conv1')
self.b_conv1 = bias_variable([32], name='b_conv1')
# 2nd conv. layer
self.W_conv2 = weight_variable([5,5,32,64], name='W_conv2')
self.b_conv2 = bias_variable([64], name='b_conv2')
# Wide FC layer (y is injected here)
self.W_fc1 = weight_variable([7 * 7 * 64 + 10, 1024], name='W_fc1')
self.b_fc1 = bias_variable([1024], name='b_fc1')
# Final decision layer
self.W_fc2 = weight_variable([1024, 1], name='W_fc2')
self.b_fc2 = bias_variable([1], name='b_fc2')
self.all_vars = [
self.W_conv1, self.b_conv1,
self.W_conv2, self.b_conv2,
self.W_fc1, self.b_fc1,
self.W_fc2, self.b_fc2
]
def calc(self, x, y, keep_prob):
# First conv. layer
x_image = tf.reshape(x, [-1, 28, 28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, self.W_conv1) + self.b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
# 2nd conv. layer
h_conv2 = tf.nn.relu(conv2d(h_pool1, self.W_conv2) + self.b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
# Wide FC layer (y is injected here)
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_pool2_flat_plus_y = tf.concat(1, (h_pool2_flat, y))
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat_plus_y, self.W_fc1) + self.b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
p_fake = tf.sigmoid(tf.matmul(h_fc1_drop, self.W_fc2) + self.b_fc2)
return tf.clip_by_value(p_fake, 1e-9, 1-1e-9)  # keep the probability away from exactly 0 or 1 so the log losses stay finite
## The Generator network
My initial implementation of this was a flipped copy of the TensorFlow Deep MNIST/Detector network. However, I wasn't able to train it successfully, and instead used a shallower network: a single FC layer that takes the random seed and the one-hot encoding of the number to produce, followed by 2 inverted (transposed) convolution layers.
class GenerateSmaller(object):
"""A simpler network that I was able to train successfully"""
def __init__(self, y, noise, keep_prob, batch_size):
W_fc1 = weight_variable([10 + 2, 7*7*64], name='G_W_fc1')
B_fc1 = bias_variable([7*7*64], name='G_B_fc1')
h_fc1 = tf.nn.relu(tf.matmul(tf.concat(1, (y,noise)), W_fc1) + B_fc1)
h_img = tf.reshape(h_fc1, [-1, 7, 7, 64])
W_conv = weight_variable([5, 5, 32, 64], name="G_W_conv")
B_conv = bias_variable([32], name="G_B_conv")
h2_img = tf.nn.relu(
tf.nn.conv2d_transpose(
value=h_img,
filter=W_conv,
output_shape=[batch_size, 28, 28, 32],
strides=[1, 4, 4, 1])
+ B_conv)
W_conv2 = weight_variable([3, 3, 1, 32], name="G_W_conv2")
B_conv2 = bias_variable([1], name="G_B_conv2")
x_img = tf.nn.conv2d_transpose(
value=h2_img,
filter=W_conv2,
output_shape=[batch_size, 28, 28, 1],
strides=[1, 1, 1, 1]) + B_conv2
self.x = tf.clip_by_value(tf.sigmoid(tf.reshape(x_img, [-1, 28*28,1])) * 1.2 - 0.1, 0.0, 1.0)
self.all_vars = [
W_fc1, B_fc1,
W_conv, B_conv,
W_conv2, B_conv2
]
## Training the networks
For training, there are two copies of the detector network. One of these is always fed real images, and the other always gets its input from the Generator. They can’t ‘cheat’ and use this information because they share the same variables :)
The training process takes a few days on my PC, and so checkpointing is used to recover from crashes.
checkpoint_dir = 'digits-with-gan-checkpoints'
if not os.path.exists(checkpoint_dir):
os.mkdir(checkpoint_dir)
x = tf.placeholder(tf.float32, [None, 784], name='x_real')
y = tf.placeholder(tf.float32, [None, 10], name='y')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
detect = Detect()
detect_on_real = detect.calc(x, y, keep_prob)
noise = tf.placeholder(tf.float32, [None, 2], name='noise')
batch_size = 10
generate = GenerateSmaller(y, noise, keep_prob, batch_size=batch_size)
detect_on_generate = detect.calc(generate.x, y, keep_prob)
detect_surprise = tf.reduce_mean(
- tf.log(1-detect_on_real) # On real data it should predict 'not fake' i.e. close to zero
- tf.log(detect_on_generate), # On fake data it should predict 'fake', i.e close to one
reduction_indices=[1])
detect_fake_surprise = tf.reduce_mean(-tf.log(detect_on_generate), reduction_indices=[1])
detect_optimiser = tf.train.AdamOptimizer(1e-4).minimize(detect_surprise, var_list=detect.all_vars)
generate_optimiser = tf.train.AdamOptimizer(1e-4).minimize(-detect_fake_surprise, var_list=generate.all_vars)
generate_error_history = []
detect_error_history = []
saver = tf.train.Saver()
last_chpkt = tf.train.latest_checkpoint(checkpoint_dir)
if last_chpkt:
m = re.compile('.+-([0-9]+)').match(last_chpkt)
global_step = int(m.group(1))
saver.restore(sess, last_chpkt)
print "Restoring from %s global_step:%d" % (last_chpkt, global_step)
else:
sess.run(tf.initialize_all_variables())
global_step = 0
Restoring from digits-with-gan-checkpoints/ckpt-303000 global_step:303000
One of the problems I had training this model was that the training would get stuck in a local minimum, with either the detector or the generator 'winning' every time. In this case the losing side never manages to improve itself.
In order to avoid that, when the error rate is a long way from 50%, I stopped training the 'winner' to let the other side catch up (the optimise_detector/optimise_generator flags in the loop below). This was a 'made up' solution: I haven't seen it in the literature before.
optimise_detector = True
optimise_generator = True
while global_step <= 1000*1000:
batch = mnist.train.next_batch(batch_size)
if optimise_generator:
generate_optimiser.run(feed_dict={
noise:np.random.randn(batch_size, 2),
y: batch[1],
keep_prob:1.0,
})
if optimise_detector:
detect_optimiser.run(feed_dict={
keep_prob: .5,
x: batch[0],
y: batch[1],
noise:np.random.randn(batch_size, 2)
})
global_step += 1
if global_step % 100 == 0:
error1 = np.mean(sess.run(detect_surprise, feed_dict={
keep_prob: 1.0,
noise:np.random.randn(batch_size, 2),
x: batch[0],
y: batch[1],
}))
error2 = np.mean(sess.run(detect_fake_surprise, feed_dict={
keep_prob: 1.0,
noise:np.random.randn(batch_size, 2),
x: batch[0],
y: batch[1]
}))
for v in detect.all_vars + generate.all_vars:
a = sess.run(tf.reduce_max(tf.abs(v)))
if 1e9 < a:
print "%s => %f" % (v.name, a)
if math.isnan(error1) or math.isnan(error2):
    print "Aborting on NaN detected"
    break  # without this, training would continue on meaningless losses
detect_error_history.append(error1)
generate_error_history.append(error2)
overall_error = (error1 + error2)/2
optimise_generator = overall_error < 0.8
optimise_detector = 0.3 < overall_error
if global_step % 1000 == 0:
print "%d Error is %f %f" % (global_step, error1, error2)
save_path = saver.save(sess, os.path.join(checkpoint_dir, 'ckpt'), global_step=global_step)
print 'Model saved in file: ', save_path
304000 Error is 0.429973 0.103284
Model saved in file: digits-with-gan-checkpoints/ckpt-304000
...
999000 Error is 0.359853 0.121428
Model saved in file: digits-with-gan-checkpoints/ckpt-999000
1000000 Error is 0.732500 0.212447
Model saved in file: digits-with-gan-checkpoints/ckpt-1000000
Finally, let’s use the generator to create some numbers:
plt.figure().set_size_inches(8, 8)
for n in xrange(10):
target = [0] * 10
target[n] = 1.0
aa = sess.run(generate.x, feed_dict= {
y: [target]*batch_size,
noise: np.random.randn(batch_size, 2),
keep_prob: 1.0
})
for i in range(10):
plt.subplot(10,10,i*10 + n + 1)
img = np.reshape(aa[i], (28,28))
a = plt.imshow(img)
a.set_cmap('gray')
plt.axis('off')
# plt.savefig("numbers.png", dpi = 72)
Also, we can interpolate between two different numbers. For example, what number is halfway between 7 and 9?
noise_data = np.random.randn(batch_size, 2)/2
plt.figure().set_size_inches(8, 8)
for n in xrange(10):
target = [0] * 10
target[7] = 1.0 - n/9.
target[9] = n/9.
aa = sess.run(generate.x, feed_dict= {
y: [target]*batch_size,
noise: noise_data,
keep_prob: 1.0
})
for i in range(10):
plt.subplot(10,10,i*10 + n + 1)
img = np.reshape(aa[i], (28,28))
a = plt.imshow(img)
a.set_cmap('gray')
plt.axis('off')
Since we can interpolate between numbers, let’s use that to give a new take on a digital clock where the numbers cross fade in the latent ‘numberness’ space that we’ve just trained:
noise_data = np.zeros((batch_size,2))
imgn = 0
ys = []
for n in xrange(60*10):
target = [0] * 10
ease = 0.5 - np.cos(np.pi * (n%60)/60.)/2
target[n/60] = 1.0 - ease
target[(n/60+1)%10] = ease
ys.append(target)
if (len(ys) == batch_size):
aa = sess.run(generate.x, feed_dict= {
y: ys,
noise: noise_data,
keep_prob: 1.0
})
for i in range(batch_size):
img = np.reshape(aa[i], (28,28))
matplotlib.image.imsave('nn-%03d.png' % imgn, img, vmin=0.0, vmax=1.0, cmap=matplotlib.image.cm.gray)
imgn += 1
ys = []
# mplayer mf://nn*.png -fs -zoom -loop 0 -vo gl -fps 60
import matplotlib  # needed so matplotlib.image.imsave in the cell above resolves
# matplotlib.image.imsave('name.png', array)  # leftover scratch line: 'array' is not defined in this notebook
saver.restore(sess, "digits-with-gan-checkpoints/ckpt-529000")
for v in detect.all_vars + generate.all_vars:
a = sess.run(tf.reduce_max(tf.abs(v)))
print "%s => %f" % (v.name,a)
W_conv1:0 => 0.149732
b_conv1:0 => 0.027031
W_conv2:0 => 0.063758
b_conv2:0 => 0.021519
W_fc1:0 => 0.130650
b_fc1:0 => 0.024133
W_fc2:0 => 0.136678
b_fc2:0 => 0.015503
G_W_fc1:0 => 0.279702
G_B_fc1:0 => 0.397946
G_W_conv:0 => 0.456417
G_B_conv:0 => 0.249894
G_W_conv2:0 => 0.295345
G_B_conv2:0 => 0.054249
plt.figure().set_size_inches(8, 8)
noise_data = np.random.randn(2)
for n in xrange(10):
target = [0] * 10
target[4] = n/4.
aa = sess.run(generate.x, feed_dict= {
y: [target]*batch_size,
noise: noise_data,
keep_prob: 1.0
})
for i in range(10):
plt.subplot(10,10,i*10 + n + 1)
img = np.reshape(aa[i], (28,28))
a = plt.imshow(img)
a.set_cmap('gray')
plt.axis('off')
|
## The Annals of Statistics
### A Combinatoric Approach to the Kaplan-Meier Estimator
David Mauro
#### Abstract
The paper considers the Kaplan-Meier estimator $F^{\mathrm{KM}}_n$ from a combinatoric viewpoint. Under the assumption that the estimated distribution $F$ and the censoring distribution $G$ are continuous, the combinatoric results are used to show that $\int |\theta(z)| dF^{\mathrm{KM}}_n(z)$ has expectation not larger than $\int |\theta(z)| dF(z)$ for any sample size $n$. This result is then coupled with Chebychev's inequality to demonstrate the weak convergence of the former integral to the latter if the latter is finite, if $F$ and $G$ are strictly less than 1 on $\mathscr{R}$ and if $\theta$ is continuous.
#### Article information
Source: Ann. Statist., Volume 13, Number 1 (1985), 142-149.
First available in Project Euclid: 12 April 2007
Permanent link: https://projecteuclid.org/euclid.aos/1176346582
Digital Object Identifier: doi:10.1214/aos/1176346582
Mathematical Reviews number (MathSciNet): MR773158
Zentralblatt MATH identifier: 0575.62043
|
# What type of compound is sulfur hexafluoride an example of?
Well, this is a hypervalent sulfur compound, i.e. $S(+VI)$...
$SF_6$ is an octahedral sulfur compound that is an extremely potent greenhouse gas.
|
# [NTG-context] fitting a picture to the available space
Fri Sep 12 12:02:37 CEST 2008
On Thu, Sep 11, 2008 at 6:31 PM, Mojca Miklavec
<mojca.miklavec.lists at gmail.com> wrote:
> On Thu, Sep 11, 2008 at 12:06 AM, Thomas A. Schmitz wrote:
>> Hi guys,
>>
>> I'm pulling my hair out. I'm trying to set up an automatism to fit
>> pictures to the available space on a slide.
>
> Hello,
>
> I'm probably talking about something else, though highly related.
> There's one thing that I often miss on slides:
>
> *Here's a title*
>
> - here are
> - some items
>
> [and I want the picture to fill up all the remaining space on slide]
>
> Or even if there's only title + image. I tried option=max (or something
> similar), but always ended up setting image size manually. Most often
> I got title on one page and image on another (It would be less painful
> to have image on the same page, even if it hangs much over the lower
> border - is there something similar to \placefigure[thispage] option
> (puts the figure on the same page, even if there's no space left)?)
>
> Mojca
>
> [I need to stop asking questions now.]
Quick and dirty:
\showframe
\starttext
\input knuth
\start
\scratchdimen=\pagegoal
|
# ASTRAL-II: coalescent-based species tree estimation with many hundreds of taxa and thousands of genes
Bioinformatics
Oxford University Press
### Abstract
Motivation: The estimation of species phylogenies requires multiple loci, since different loci can have different trees due to incomplete lineage sorting, modeled by the multi-species coalescent model. We recently developed a coalescent-based method, ASTRAL, which is statistically consistent under the multi-species coalescent model and which is more accurate than other coalescent-based methods on the datasets we examined. ASTRAL runs in polynomial time, by constraining the search space using a set of allowed ‘bipartitions’. Despite the limitation to allowed bipartitions, ASTRAL is statistically consistent.
Results: We present a new version of ASTRAL, which we call ASTRAL-II. We show that ASTRAL-II has substantial advantages over ASTRAL: it is faster, can analyze much larger datasets (up to 1000 species and 1000 genes) and has substantially better accuracy under some conditions. ASTRAL's running time is $O(n^2 k |X|^2)$, and ASTRAL-II's running time is $O(n k |X|^2)$, where n is the number of species, k is the number of loci and X is the set of allowed bipartitions for the search space.
Availability and implementation: ASTRAL-II is available in open source at https://github.com/smirarab/ASTRAL and datasets used are available at http://www.cs.utexas.edu/~phylo/datasets/astral2/.
Supplementary information: Supplementary data are available at Bioinformatics online.
### Most cited references (16)
### ASTRAL: genome-scale coalescent-based species tree estimation
(2014)
Motivation: Species trees provide insight into basic biology, including the mechanisms of evolution and how it modifies biomolecular function and structure, biodiversity and co-evolution between genes and species. Yet, gene trees often differ from species trees, creating challenges to species tree estimation. One of the most frequent causes for conflicting topologies between gene trees and species trees is incomplete lineage sorting (ILS), which is modelled by the multi-species coalescent. While many methods have been developed to estimate species trees from multiple genes, some which have statistical guarantees under the multi-species coalescent model, existing methods are too computationally intensive for use with genome-scale analyses or have been shown to have poor accuracy under some realistic conditions. Results: We present ASTRAL, a fast method for estimating species trees from multiple genes. ASTRAL is statistically consistent, can run on datasets with thousands of genes and has outstanding accuracy—improving on MP-EST and the population tree from BUCKy, two statistically consistent leading coalescent-based methods. ASTRAL is often more accurate than concatenation using maximum likelihood, except when ILS levels are low or there are too few gene trees. Availability and implementation: ASTRAL is available in open source form at https://github.com/smirarab/ASTRAL/. Datasets studied in this article are available at http://www.cs.utexas.edu/users/phylo/datasets/astral. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
### Is a new and general theory of molecular systematics emerging?
(2008)
### Inconsistency of phylogenetic estimates from concatenated data under coalescence.
(2007)
Although multiple gene sequences are becoming increasingly available for molecular phylogenetic inference, the analysis of such data has largely relied on inference methods designed for single genes. One of the common approaches to analyzing data from multiple genes is concatenation of the individual gene data to form a single supergene to which traditional phylogenetic inference procedures - e.g., maximum parsimony (MP) or maximum likelihood (ML) - are applied. Recent empirical studies have demonstrated that concatenation of sequences from multiple genes prior to phylogenetic analysis often results in inference of a single, well-supported phylogeny. Theoretical work, however, has shown that the coalescent can produce substantial variation in single-gene histories. Using simulation, we combine these ideas to examine the performance of the concatenation approach under conditions in which the coalescent produces a high level of discord among individual gene trees and show that it leads to statistically inconsistent estimation in this setting. Furthermore, use of the bootstrap to measure support for the inferred phylogeny can result in moderate to strong support for an incorrect tree under these conditions. These results highlight the importance of incorporating variation in gene histories into multilocus phylogenetics.
### Author and article information
###### Journal
Bioinformatics (Oxford University Press)
ISSN: 1367-4803, 1367-4811
Publication dates: 15 June 2015; 10 June 2015
Volume 31, Issue 12, Pages i44-i52
###### Affiliations
1Department of Computer Science, The University of Texas at Austin, Austin, TX 78712, USA and 2Departments of Computer Science and Bioengineering, The University of Illinois at Urbana-Champaign, Champaign, IL 61801, USA
###### Author notes
*To whom correspondence should be addressed.
###### Article
Article ID: btv234
DOI: 10.1093/bioinformatics/btv234
PMC: 4765870
PMID: 26072508
|
# New Orbiter SVN commit (r.71, Oct 14 2017)
#### Eduard
##### New member
Bug: Having the "Remote Vessel Control" enabled causes CTD when launching Orbiter rev. 55.
#### AssemblyLanguage
##### Donator
Request: HSI Variable Approach Angle
Would it be possible to modify the HSI MFD to support a variable approach angle?
For example, the Shuttle Atlantis approach angle is 20 degrees. The PAPI at KSC is set at 20 degrees but the HSI shows a much lower angle.
Thanks.
#### jarmonik
Beta Tester
I also want to keep the interface between Orbiter file requests and the zip file server module reasonably abstract, so that the zip server could eventually be replaced by a network server for remote tileset access, although that's a bit further down the line.
That sounds like a good idea. Also, in the current implementation it would be helpful if the path to the planetary textures could be specified. Users tend to have multiple installations of Orbiter; I've got three right now. So it would help a lot if additional installations could fetch planetary textures from the master installation.
#### kuddel
##### Donator
That sounds like a good idea. Also, in the current implementation it would be helpful if the path to the planetary textures could be specified. Users tend to have multiple installations of Orbiter; I've got three right now. So it would help a lot if additional installations could fetch planetary textures from the master installation.
That's what symbolic links are perfect for!
I have 15 (!) different installations, 4 of which are Orbiter BETA based. Those 4 all have symbolic links in the Textures folder for the hi-res textures and elevation maps...
This is the batch file that creates the links:
Code:
@echo off
setlocal EnableDelayedExpansion
set SRC=c:\Program Files (x86)\Orbiter\_Resources\Textures
set DST=.\Textures
set JUNCTION=c:\Program Files (x86)\Orbiter\_Resources\junction.exe
for /F "tokens=*" %%d in ('dir /b /a:d "%SRC%"') do (
set NAME=%%d
if "!NAME:~-4!" neq ".old" (
echo. %%d
if exist "%DST%/%%d" rmdir /S /Q "%DST%/%%d"
"%JUNCTION%" "%DST%\%%d" "%SRC%\%%d" > nul
)
)
endlocal
In my example all the "shared" resources are located at "_Resources" Folder.
junction.exe is my favorite 'link creator', by the way, and comes from Sysinternals.
The extra check for paths not ending in '.old' just skips folders I like to keep but that should not be linked.
#### Anroalh12
##### ETSII
I've installed d3d9beta17f for Orbiter 2016, but when I select D3D9 in the modules tab nothing happens; I can't see the video tab.
#### indy91
I have a question about Orbiter 2016 and I hope this is the appropriate place to ask it.
Does Orbiter 2016 accurately model the Earth as an oblate spheroid and if yes, is the atmosphere modeled to fit the spheroid? I am asking about the shape here, not the gravitational model.
While working on the Virtual AGC for NASSP, we have noticed that multiple programs have trouble with the spherical Earth in Orbiter 2010. For example, program 22 of the AGC is used to track a landmark on Earth. As input the program wants geocentric latitude, longitude and altitude above the Fischer ellipsoid. The AGC then calculates the position vector of the landmark from these inputs. For a landmark on the Earth, we kind of have to trick the AGC to make it work. The longitude is fine, but latitude and altitude have to be adjusted so the AGC finds the correct position of the landmark on the spherical Earth. In NASSP we have set the radius of the spherical Earth to the radius at Cape Canaveral.
Similarly the entry interface is defined as 400k ft altitude above the Earth. Again, the spherical vs. ellipsoid Earth is relevant in some calculations the AGC does. The AGC actually has a program to calculate midcourse corrections itself. For low latitudes at entry interface, where the radius difference is more pronounced, the maneuvers the AGC is calculating become increasingly inaccurate.
I have set a Delta Glider at different points on the Earth's oceans and the radius always seemed to be identical. So if, despite the awesome terrain models in Orbiter 2016, the average shape of the Earth is still not an oblate spheroid, consider this a feature request. :thumbup:
#### SolarLiner
##### It's necessary, TARS.
I notice there are camera animations in the new welcome scenario, how was that done? I tried opening the playback editor but have been unable to find any new event type.
Also, if you stop the playback halfway through, those camera animations keep playing anyway.
#### T1234
##### New member
Is it possible to have just the highest Earth res texture with the low-res elevation? If so, what download is it, and is the EarthLo level 11?
#### TCR_500
##### Developer of Solar Lander
Remote Vessel Control module (Orbiter default) crashes with this build on my system.
#### bekodreko
##### New member
Remote Vessel Control module (Orbiter default) crashes with this build on my system.
Set "RCONTROL" to unchecked and there's no problem :thumbup:
#### TCR_500
##### Developer of Solar Lander
A few notes about the ShuttleA spacecraft before you take Orbiter 2016 out of Beta:
1. The window mesh in the VC doesn't line up properly with the rest of the mesh. It's on the top left and top right corners of the windows where you have this problem.
2. The top panel in the 2D cockpit cannot be accessed and is missing the pod release buttons.
3. Could the VC be made to match the 2D panel?
#### vinny5000
##### New member
Hi, guys!
I have the orbiter beta which is released on February 16th I believe, I read the textures pack installation instructions but I don't see textures>planets in the orbiter beta directory. Any suggestions? Thanks!
Cheers,
Vincent
#### jroly
##### Donator
Should be like...
D:\orbiter_beta\Textures\Earth
D:\orbiter_beta\Textures\Mars
...etc
#### vinny5000
##### New member
Hi, Jroly!
The textures are working now! Thanks!
Cheers,
Vincent
#### JMW
"SOLVED"
Well not diagnosed but....
Scenario from r.54 works in r.55, but scenario in r.55 doesn't work in r.54....
Be interested to see what result you have with scenario attached and where.
BUT DON'T WASTE ANY TIME ON IT DOC. PLEASE
Beta rev55.
Why does the "pad occupied" flag not come up?
The craft is definitely Inactive (Landed) according to Scenario Editor, but nothing shows against pad in Info tab - Object = Base......
Code that worked pre Beta to detect occupation does not function either (understandably) which is annoying.
Scenario attached is at Cape Canaveral base.
Someone please try it on their set-up to see if you get the same result.
Please Doc, if this could be solved pre release it would be great!
|
# The following table shows the approximate average household income in the United States in 1990, 1995, and 2003. (t = 0 represents 1990.)

Question (Exponential models), asked 2020-12-24

| t (Year) | 0 | 5 | 13 |
| H (Household income in $1,000) | 30 | 35 | 43 |

Which of the following kinds of models would best fit the given data? Explain your choice of model. (a, b, c, and m are constants.)

a) Linear: $H(t) = mt + b$
b) Quadratic: $H(t) = at^{2} + bt + c$
c) Exponential: $H(t) = Ab^{t}$

## Answers (1)

2020-12-25

Step 1: Plot the points on a coordinate system. Sketching the various models and their characteristics (see below), we find that the linear model would be best for this data set.

Step 2: Not exponential, because the rate of increase does not seem to change; not quadratic, because there are no "dips" or "bulges" to account for minimum/maximum values. (Indeed, the successive slopes are equal: $(35-30)/(5-0) = 1$ and $(43-35)/(13-5) = 1$, so the data are exactly linear, with $H(t) = t + 30$.)

### Relevant Questions

asked 2020-11-08

The following table lists the reported number of cases of infants born in the United States with HIV in recent years because their mother was infected. Source: Centers for Disease Control and Prevention.

| Year | Cases |
| 1995 | 295 |
| 1997 | 166 |
| 1999 | 109 |
| 2001 | 115 |
| 2003 | 94 |
| 2005 | 107 |
| 2007 | 79 |

a) Plot the data on a graphing calculator, letting t = 0 correspond to the year 1995.
b) Using the regression feature on your calculator, find a quadratic, a cubic, and an exponential function that models this data.
c) Plot the three functions with the data on the same coordinate axes. Which function or functions best capture the behavior of the data over the years plotted?
d) Find the number of cases predicted by all three functions for 2015. Which of these are realistic? Explain.

asked 2021-01-31

The table gives the midyear population of Japan, in thousands, from 1960 to 2010.
| Year | Population |
| 1960 | 94.092 |
| 1965 | 98.883 |
| 1970 | 104.345 |
| 1975 | 111.573 |
| 1980 | 116.807 |
| 1985 | 120.754 |
| 1990 | 123.537 |
| 1995 | 125.327 |
| 2000 | 126.776 |
| 2005 | 127.715 |
| 2010 | 127.579 |

Use a calculator to fit both an exponential function and a logistic function to these data. Graph the data points and both functions, and comment on the accuracy of the models. [Hint: Subtract 94,000 from each of the population figures. Then, after obtaining a model from your calculator, add 94,000 to get your final model. It might be helpful to choose t = 0 to correspond to 1960 or 1980.]

asked 2020-11-30

The table gives the midyear population of Norway, in thousands, from 1960 to 2010.

| Year | Population |
| 1960 | 3581 |
| 1965 | 3723 |
| 1970 | 3877 |
| 1975 | 4007 |
| 1980 | 4086 |
| 1985 | 4152 |
| 1990 | 4242 |
| 1995 | 4359 |
| 2000 | 4492 |
| 2005 | 4625 |
| 2010 | 4891 |

Use a calculator to fit both an exponential function and a logistic function to these data. Graph the data points and both functions, and comment on the accuracy of the models. [Hint: Subtract 3500 from each of the population figures. Then, after obtaining a model from your calculator, add 3500 to get your final model. It might be helpful to choose t = 0 to correspond to 1960.]

asked 2021-01-06

In addition to quadratic and exponential models, another common type of model is called a power model. Power models are models in the form $\hat{y} = a \cdot x^{p}$. Here are data on the eight planets of our solar system. Distance from the sun is measured in astronomical units (AU), the average distance Earth is from the sun.

| Planet | Distance from sun (AU) | Period of revolution (Earth years) |
| Mercury | 0.387 | 0.241 |
| Venus | 0.723 | 0.615 |
| Earth | 1.000 | 1.000 |
| Mars | 1.524 | 1.881 |
| Jupiter | 5.203 | 11.862 |
| Saturn | 9.539 | 29.456 |
| Uranus | 19.191 | 84.070 |
| Neptune | 30.061 | 164.810 |

Calculate and interpret the residual for Neptune.

asked 2021-01-07

The U.S.
Census Bureau publishes information on the population of the United States in Current Population Reports. The following table gives the resident U.S. population, in millions of persons, for the years 1990-2009. Forecast the U.S. population in the years 2010 and 2011.

| Year | Population (millions) |
| 1990 | 250 |
| 1991 | 253 |
| 1992 | 257 |
| 1993 | 260 |
| 1994 | 263 |
| 1995 | 266 |
| 1996 | 269 |
| 1997 | 273 |
| 1998 | 276 |
| 1999 | 279 |
| 2000 | 282 |
| 2001 | 285 |
| 2002 | 288 |
| 2003 | 290 |
| 2004 | 293 |
| 2005 | 296 |
| 2006 | 299 |
| 2007 | 302 |
| 2008 | 304 |
| 2009 | 307 |

a) Obtain a scatterplot for the data.
b) Find and interpret the regression equation.
c) Make the specified forecasts.

asked 2021-01-19

The annual sales S (in millions of dollars) for the Perrigo Company from 2004 through 2010 are shown in the table.

| Year | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 |
| Sales, S | 898.2 | 1024.1 | 1366.8 | 1447.4 | 1822.1 | 2006.9 | 2268.9 |

a) Use a graphing utility to create a scatter plot of the data. Let t represent the year, with t = 4 corresponding to 2004.
b) Use the regression feature of the graphing utility to find an exponential model for the data. Use the Inverse Property $b = e^{\ln b}$ to rewrite the model as an exponential model in base e.
c) Use the regression feature of the graphing utility to find a logarithmic model for the data.
d) Use the exponential model in base e and the logarithmic model to predict sales in 2011. It is projected that sales in 2011 will be $2740 million. Do the predictions from the two models agree with this projection? Explain.
asked 2021-02-18
For the following exercises, use a graphing utility to create a scatter diagram of the data given in the table. Observe the shape of the scatter diagram to determine whether the data is best described by an exponential, logarithmic, or logistic model. Then use the appropriate regression feature to find an equation that models the data. When necessary, round values to five decimal places.
| x | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| f(x) | 409.4 | 260.7 | 170.4 | 110.6 | 74 | 44.7 | 32.4 | 19.5 | 12.7 | 8.1 |
asked 2021-03-11
An automobile tire manufacturer collected the data in the table relating tire pressure x (in pounds per square inch) and mileage (in thousands of miles). A mathematical model for the data is given by
$$f(x) = -0.554x^{2} + 35.5x - 514.$$
| x | Mileage |
| 28 | 45 |
| 30 | 51 |
| 32 | 56 |
| 34 | 50 |
| 36 | 46 |
(A) Complete the table below.
| x | Mileage | f(x) |
| 28 | 45 | |
| 30 | 51 | |
| 32 | 56 | |
| 34 | 50 | |
| 36 | 46 | |
(Round to one decimal place as needed.)
A. A coordinate system has a horizontal x-axis labeled from 20 to 60 in increments of 2 and a vertical y-axis labeled from 20 to 60 in increments of 2. Data points are plotted at (28,45), (30,51), (32,56), (34,50), and (36,46). A parabola opens downward and passes through the points (28,45.7), (30,52.4), (32,54.7), (34,52.6), and (36,46.0). All points are approximate.
B. A coordinate system has a horizontal x-axis labeled from 20 to 60 in increments of 2 and a vertical y-axis labeled from 20 to 60 in increments of 2. Data points are plotted at (43,30), (45,36), (47,41), (49,35), and (51,31). A parabola opens downward and passes through the points (43,30.7), (45,37.4), (47,39.7), (49,37.6), and (51,31). All points are approximate.
C. A coordinate system has a horizontal x-axis labeled from 20 to 60 in increments of 2 and a vertical y-axis labeled from 20 to 60 in increments of 2. Data points are plotted at (43,45), (45,51), (47,56), (49,50), and (51,46). A parabola opens downward and passes through the points (43,45.7), (45,52.4), (47,54.7), (49,52.6), and (51,46.0). All points are approximate.
D. A coordinate system has a horizontal x-axis labeled from 20 to 60 in increments of 2 and a vertical y-axis labeled from 20 to 60 in increments of 2. Data points are plotted at (28,30), (30,36), (32,41), (34,35), and (36,31). A parabola opens downward and passes through the points (28,30.7), (30,37.4), (32,39.7), (34,37.6), and (36,31). All points are approximate.
(C) Use the modeling function f(x) to estimate the mileage for a tire pressure of 29 lbs/sq in. and for 35 lbs/sq in.
The mileage for the tire pressure 29 lbs/sq in. is ____.
The mileage for the tire pressure 35 lbs/sq in. is ____.
(Round to two decimal places as needed.)
(D) Write a brief description of the relationship between tire pressure and mileage.
A. As tire pressure increases, mileage decreases to a minimum at a certain tire pressure, then begins to increase.
B. As tire pressure increases, mileage decreases.
C. As tire pressure increases, mileage increases to a maximum at a certain tire pressure, then begins to decrease.
D. As tire pressure increases, mileage increases.
asked 2021-02-09
The table gives the number of active Twitter users worldwide, semiannually from 2010 to 2016.

| Years since January 1, 2010 | Twitter users (millions) |
| 0 | 30 |
| 0.5 | 49 |
| 1.0 | 68 |
| 1.5 | 101 |
| 2.0 | 138 |
| 2.5 | 167 |
| 3.0 | 204 |
| 3.5 | 232 |
| 4.0 | 255 |
| 4.5 | 284 |
| 5.0 | 302 |
| 5.5 | 307 |
| 6.0 | 310 |
| 6.5 | 317 |

Use a calculator or computer to fit both an exponential function and a logistic function to these data. Graph the data points and both functions, and comment on the accuracy of the models.
asked 2020-11-08
The burial cloth of an Egyptian mummy is estimated to contain 560 g of the radioactive material carbon-14, which has a half-life of 5730 years.
a. Complete the table below. Make sure you justify your answer by showing all the steps.
| t (in years) | m (amount of radioactive material) |
| 0 | |
| 5730 | |
| 11460 | |
| 17190 | |
b. Find an exponential function that models the amount of carbon-14 in the cloth, y, after t years. Make sure you justify your answer by showing all the steps.
c. If the burial cloth is estimated to contain 49.5% of the original amount of carbon-14, how long ago was the mummy buried? Give an exact answer. Make sure you justify your answer by showing all the steps.
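As a quick check (worked here for reference; not part of the original exercise page), the standard half-life model with initial amount 560 g and half-life 5730 years is

$$m(t) = 560 \left(\tfrac{1}{2}\right)^{t/5730},$$

which gives $m(0) = 560$, $m(5730) = 280$, $m(11460) = 140$, and $m(17190) = 70$ grams for the table in part (a). For part (c), solving $0.495 \cdot 560 = 560 (1/2)^{t/5730}$ gives $t = 5730 \log_{2}(1/0.495) \approx 5813$ years.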
...
|
# Is Eigensystem::eivin message a bug?
I noticed that in the question Request for clarification of Eigensystem::eivn message the error message
Eigensystem::eivn: Incorrect number 2 of eigenvectors for eigenvalue Root[<<14>>+(<<1>>) #1^6+(104.294 r^4+134.491 a r^4+22564.8 a c r^4-0.497321 n r^4) #1^7+(21.6958 r^2-0.0208333 n r^2) #1^8+1. #1^9&,1] with multiplicity 1.
was ruled as bug of version 9. I have version 10.0.2 and I'm getting the same message for another matrix, this one with no symbolic objects in the entries. It's a pretty big beast, and the reason I do not want to use a numerical solution like Eigenvectors[N[matrix]] is related to another question I asked some time ago: What if do NOT want Mathematica to normalize eigenvectors with Eigenvectors[N[matrix]]?. If asked, I can post the matrix in question in an EDIT, but my question is actually more general: how does one avoid the error
Eigensystem::eivn: Incorrect number X of eigenvectors for eigenvalue Y with multiplicity Z.
and why it shows up. It is unclear to me whether there is a way of fixing this, whether it's a bug, or whether there are ways to work around it. The problem is that Mathematica is not able to match an eigenvalue of multiplicity Z to a number Z of eigenvectors, but instead finds a different number X, which was higher than Z in all cases I've seen. Might this have to do with how Mathematica deals with the case of having more eigenvalues than independent eigenvectors, namely by using zero vectors? Could one solve this by computing the eigenvalues first and then, for each one, finding the corresponding set of eigenvectors via another route that is not Eigenvectors[matrix], or is that just a waste of time?
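One concrete version of that last idea, sketched with only documented functions (whether it sidesteps the message for this particular matrix is untested):

vals = Eigenvalues[m];
eigvecsFor[val_] := NullSpace[m - val IdentityMatrix[Length[m]]];
eigvecsFor /@ DeleteDuplicates[vals]

Here NullSpace returns a basis of the eigenspace for each distinct eigenvalue, so the number of vectors per eigenvalue is determined by the geometric multiplicity rather than by Eigensystem's bookkeeping.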
• Have you tried working with arbitrary precision numbers instead, by using e.g. Eigensystem[ Rationalize[yourmatrix,0] ]. – MarcoB Jul 3 '15 at 15:36
• I did, the error message still appears – user50473 Jul 3 '15 at 15:48
• Do you have version 10.0.2 or 10.1? If you still see this problem with 10.1, I would be interested to see the matrix. – ilian Jul 3 '15 at 16:22
• @ilian I have 10.0.2 – user50473 Jul 3 '15 at 16:23
• Please do not use the bugs tag when posting questions. This is a special tag that, by convention, is always added by someone else than the original poster, after the community has verified the bug. It will be added later if deemed appropriate. – Szabolcs Jul 3 '15 at 16:46
|
# Chapter 1 R Foundations
Data science is emerging as a vital skill for researchers, analysts, librarians, and others who deal with data in their personal and professional work. In essence, data science is the application of the scientific method to data for the purpose of understanding the world we live in. More specifically, data science tasks emerge from an interdisciplinary amalgam of statistical analysis, computer science, and social science research conventions. Although other programming languages such as Python exceed R in general popularity, R remains one of the most popular programming languages for data scientists and researchers due to its focus on statistical programming. As such, R and RStudio are invaluable tools for the data scientist, statistician, researcher, and many others.
This guidebook aims to provide readers an opportunity to make a start toward learning R for a variety of data science tasks, including (a) data cleaning and preparation, (b) statistical analysis, (c) data visualization, (d) natural language processing, (e) network analysis, and (f) Structural Equation Modeling, to name a few. In Chapters 1 and 2 we invite readers to install R and RStudio and to start manipulating data for analysis. Chapters 3 and 4 include introductory exercises to teach data visualization and statistical analysis in R. In Chapter 5 and beyond, you will explore basic analytic concepts (e.g., correlation and regression) and more advanced approaches to data modeling through the lenses of Structural Equation Modeling, Network Analysis, and Text Analysis.
By the end of this guidebook, among several other skills, you will be able to use R to visualize distributions of data across categorical groups, such as in the example below:
library(ggstatsplot)
library(tidyverse)
ggbetweenstats(
data = data,
x = Level,
y = intentions
)
Calculate correlation coefficients, and visualize relationships between your data:
library(psych)
library(tidyverse)
variablelist <- data %>% select(intentions, values, needs, parentsupport, SESComp)
psych::pairs.panels(variablelist,
method = "pearson", # correlation method
hist.col = "#00AFBB",
density = TRUE, # show density plots
ellipses = TRUE, # show correlation ellipses
lm = TRUE # show regression lines
)
Fit General Linear Models (GLM) and produce publishable tables:
library(tidyverse)
library(knitr)
library(broom)
tidy(lm(intentions ~ SESComp + parentsupport + needs + values, data=data)) %>%
kable(caption = "Estimates for Variation in Music Elective Intentions.",
col.names = c("Predictor", "B", "SE", "t", "p"),
digits = c(0, 2, 3, 2, 3))
Table 1.1: Estimates for Variation in Music Elective Intentions.
Predictor B SE t p
(Intercept) -5.42 6.261 -0.87 0.394
SESComp 3.87 1.639 2.36 0.025
parentsupport 0.52 0.306 1.69 0.101
needs -0.28 0.115 -2.40 0.023
values 0.28 0.072 3.90 0.001
Fit, interpret, and visualize Structural Equation Models (SEM):
library(lavaan)
library(semPlot)
model <- '
# regressions
needs ~ parentsupport
values ~ needs + parentsupport + peersupport
intentions ~ values + needs + SESComp'
fit <- sem(model, data=data)
semPaths(fit, what = "path", whatLabels = "std", style = "lisrel", edge.label.cex = .9, rotation = 2, curve = 3, layoutSplit = FALSE, normalize = FALSE, height = 9, width = 6.5, residScale = 10)
As well as clean text data and analyze large corpora through the lenses of machine learning concepts such as sentiment analysis and topic modeling:
library(sentimentr)
sentiment('Sentiment analysis is super fun. I hate sentiment analysis. Sentiment analysis is okay')
## element_id sentence_id word_count sentiment
## 1: 1 1 5 0.6708204
## 2: 1 2 4 -0.3750000
## 3: 1 3 4 0.0000000
Each concept is presented with unique datasets, including qualitative data from Harry Potter, Hamilton, and Donald Trump tweets, and quantitative data from base R datasets such as mtcars, elective intentions data for middle school students, and other real-world data.
Additionally, there will be an informal assessment tied to each chapter so you can test and apply your skills as you move through the book.
## 1.1 Getting Started with R and RStudio
This chapter contains an introduction to the installation of R, how to install packages, and an introduction to object-based coding concepts. If you are having trouble downloading R and installing your first packages, please view the optional check-in assessment at https://jayholster.shinyapps.io/RLevel0Assessment/. Take a few minutes to download and install both R and RStudio. They are both free and easy to download.
According to R for Beginners, an introduction hosted on CRAN (https://cran.r-project.org/doc/contrib/Paradis-rdebuts_en.pdf): "when R is running, variables, data, functions, results, etc, are stored in the active memory of the computer in the form of objects which have a name. The user can do actions on these objects with operators (arithmetic, logical, comparison, . . .) and functions (which are themselves objects)." We will explore the concept and capabilities of the object throughout this text.
### 1.1.1 The Integrated Development Environment
Where R is a programming language, RStudio is an integrated development environment (IDE) which enables users to efficiently access and view most facets of the program in a four-pane environment. These panes include the source, the console, the environment and history, and the files, plots, packages, and help tabs. The console is in the lower-left corner, and this is where commands are entered and output is printed. The source pane is in the upper-left corner and is a built-in text editor. While the console is where commands are entered, the source pane includes executable code which communicates with the console. The environment tab, in the upper-right corner, displays a list of loaded R objects. The history tab tracks the user's keystrokes entered into the console. Tabs to view plots, packages, help, and the output viewer are in the lower-right corner.
Where SPSS and other menu-based analytic software are limited by user input and installed software features, R operates as a mediator between user inputs and open-source developments from our colleagues all over the world. This affords R users considerable flexibility. However, it takes a few extra steps to appropriately launch projects. Regardless of your needs with R, you will likely interact with the following elements of document setup.
### 1.1.2 Learning to Read Code
“Numquam ponenda est pluralitas sine necessitate. Plurality is never to be posited without necessity.”
William of Occam, circa 1495
The most formidable challenge many new R users face is learning to code. While coding can seem daunting at first, it is important to remember that all coding tasks simply involve solutions to problems the user identifies. No matter how difficult the problem, there are always many solutions to each problem, and someone else has almost certainly encountered the same problem and posted a solution to a forum. Occam's razor (i.e., the solution with the fewest assumptions is the best) helps you identify the problems to solve as you interact with code. For new users, your breakthrough moment, where you start to feel like a programmer, might come from a well-worded Google search and a focused effort to solve an issue with the R programming language. It is, indeed, a language, so half of the battle is learning to read code in a manner that is meaningful to you. Throughout this guidebook, you will be provided with tutorials and suggestions for reading the code you interact with. In this first chapter, you will start to encounter objects, functions, arguments, and operators. It is important to develop code-reading fluency. As such, I encourage readers to speak code out loud to themselves as they interact with this guidebook.
### 1.1.3 Practices in Reproducibility
It is important—whether you are working alone or with others—to adopt a collaboration mindset. This value is clearly important when working with other statistical collaborators or with domain experts who do not have experience in R. Even experienced users might become confused when examining a peer's code. The same effect may occur if you return to a project after many months and find yourself lost in your own code. As such, I recommend utilizing R Markdown files (File -> New File -> R Markdown) and comments (#) to provide notes to yourself and others who might interact with your code. For instance, this is an R Markdown document (.Rmd file extension). R Markdown is a simple formatting syntax for authoring HTML, PDF, and MS Word documents that can include blocks of code, as well as space for narrative to describe the code. For more details on using R Markdown see http://rmarkdown.rstudio.com.
In R Markdown, the # symbol is used to change the size of headers: ### = small, ## = medium, # = large. There are several options for formatting R Markdown documents. For an introductory and comprehensive cheatsheet, see https://rstudio.com/wp-content/uploads/2015/02/rmarkdown-cheatsheet.pdf
There are no rules for how you should format your document. If you only need to write R code and have no need for whitespace or printing out your work, you can use an R script (File -> New File -> R Script). However, clean coding practices often require annotation, and R markdown makes that easy.
When you click the Knit button, a document will be generated that includes both content and the output of any embedded R code chunks within the document. You can quickly insert chunks into your R Markdown file with the keyboard shortcut Cmd + Option + I (Ctrl + Alt + I for Windows users). Comments can be used within code chunks; R does not treat them as a functioning part of the code.
x <- 10 # This is an example of a comment.
No matter what coding format you choose, insert narrative into your document in a way that makes sense to you. It may be helpful to split your code up into small, easy-to-digest chunks so as not to become overwhelmed when examining your work.
## 1.2 Coding in R
### 1.2.1 Set Your Working Directory
It is helpful to use a project-specific directory and to frequently save your work. When you first use R, follow this procedure: create a sub-directory (folder), "R" for example, in your "Documents" folder, or somewhere on your machine that you can easily access. This sub-folder, also known as the working directory, will be used by R to read and save files. Think of it as a download and upload folder for R only.
You can specify your working directory to R in a few ways. One option is to click the "Session" drop-down menu at the top of your screen and then click "Set Working Directory." It might also be useful to change the directory using code. To do this, use the function setwd(), and enter the path of your directory inside the parentheses. Your working directory path will look something like what you see below. If you are unsure about the path you should input here, find the folder using your machine's finder function, right click the folder, and examine the details of the folder's path. Make sure that you are using forward slashes in quotes as the example indicates.
You will need to run the code chunk in order to process the change in working directory. To run a code chunk, you can select the code you want to run and hit command and enter simultaneously on Mac, or control and enter on a Windows machine. Alternatively, you can click the green play button in the top right corner of the code chunk to run the entire cell.
setwd('/Users/username/Desktop/R/')
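To confirm the change took effect, you can ask R to print the current working directory (a quick check, not shown in the original text):

getwd()
## [1] "/Users/username/Desktop/R"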
### 1.2.2 Operators
If this is your first foray into coding, you might think of it as a conversation you are having with R about a problem you are trying to solve. To start, you might consider simple arithmetic as the problem, and the code you write as the conversation. You can talk to R using numerical digits and text. Operators are the symbols that connect your numbers or words with mathematical (e.g., addition), relational (e.g., >=), and other logical or conditional manipulations.
To start coding with mathematical operators, enter a number in the code box below, then click the run button.
7
## [1] 7
Now, pick a set of two numerals to sum, placing an addition sign between them. Then click the run button. Considering operators alone, R can be utilized as a simple calculator.
7+2
## [1] 9
Base R comes with working mathematical operators for addition +, subtraction -, multiplication *, division /, and exponents ^. Although I’ve left an example for you below, you might try making your own.
7+2-10*40
## [1] -391
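The relational operators mentioned at the start of this section work the same way, except that they return logical values rather than numbers. A quick sketch (not in the original text):

7 >= 2
## [1] TRUE
7 == 2 # note the double equals sign for testing equality
## [1] FALSE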
### 1.2.3 Functions
Functions run specific tasks based on the arguments within parenthesis. For example, the sum() function adds a specified number set. You can read the code below to say “sum the numbers 1 and 10.”
sum(1, 10)
## [1] 11
There are some grammatical conventions that R requires in order to run code successfully. When you try to run the function below without the comma, R returns an error. Fix this example by including a comma directly after the first number. It does not matter whether you have a space after the comma before the next numeral. If any part of your code is not correct, R will produce an error message like the one you see below. The "unexpected numeric constant" in the error message refers to the 10 that follows the 1 without a separating comma.
sum(1 10)
## Error: <text>:1:7: unexpected numeric constant
## 1: sum(1 10
## ^
Instead of a comma, you can use a colon—another operator—to sum a sequence of numbers. You can read this code aloud saying “sum all numbers from 1 to 10.”
sum(1:10)
## [1] 55
There are a plethora of pre-installed functions in R. R also has the capability to find the right function for you. For instance, you might encounter a new function with which you are not familiar, like seq() below. To investigate what this function does, place your cursor after 'seq' but before the first parenthesis, and press tab. Hover over the function seq in the dropdown list to see a full description. If you need more information, you might use the help() function, with the name of the function you are curious about inside the parentheses. The description indicates that the function is used to generate regular sequences. The information that emerges from using the help() function describes a list of arguments, including from, to, and by. This pane will populate on the bottom right side of RStudio when you run the help() function. Based on this information, what do you think the output of the first line in the following chunk will be?
seq(0, 20, 4)
## [1] 0 4 8 12 16 20
help(seq)
Was the output what you expected? The seq() function generates a sequence of numbers. In this example, 0 and 20 are the lower and upper limits of this sequence. The 4 indicates that the sequence should list numbers from 0 to 20 in increments of 4. Now, look to the bottom right side of RStudio. Since you ran the entire cell, the command 'help(seq)' launched a search in the R documentation for the seq function in addition to running the seq function. Here, we can ascertain that this function takes a set of arguments (from = 0, to = 20, by = 4). When you paste that exact code into the seq function, it generates the same result. Try it!
### 1.2.4 Objects
You may have heard the phrase “object-oriented programming.” The phrase fits R well, as most coding in R relies on the assignment and use of objects. When you assign an object in R, you are asking R to remember a value so it can be used in later code. pi is an example of an object built into base R. The input below is not a numeric digit, but it still represents a number. Run the code and you will see that the name pi carries the numeric value of pi; it is one of the few predefined objects in R.
pi
## [1] 3.141593
You can also assign the numeric value of pi to an object you name yourself.
x <- 3.141593
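Typing an object’s name runs it, returning the stored value:

x
## [1] 3.141593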
To assign a value to an object, as the numeric value of pi was assigned to x above, use the <- operator. For example, the code segment below assigns the value 50 to the object ‘a’ and 14 to the object ‘b’. It may help to read the code aloud: “a gets 50, and b gets 14.”
a <- 50
b <- 14
The objects a, b, and x are each examples of vectors: homogeneous data made up of character, logical, numeric, or integer values. A list, in contrast, can hold heterogeneous data (i.e., multiple data types). Matrices are two-dimensional data without named columns, while data frames are two-dimensional structures whose named columns each represent one kind of observation, with corresponding values in the rows. These are the basic types of objects you will meet in R, and most coding involves several objects interacting with each other.
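As a quick illustration of these structures, here is a minimal sketch (the object names are arbitrary):

v <- c(10, 20, 30)                                 # a numeric vector
l <- list(1, "two", TRUE)                          # a list mixing data types
m <- matrix(1:6, nrow = 2)                         # a 2-by-3 matrix
df <- data.frame(id = 1:3, score = c(90, 85, 88))  # a data frame with named columns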
See the code chunk below for a simple example of an interaction between two objects assigned earlier, a and b.
a + b
## [1] 64
Notice that R held the object assignments from the previous cell. You can also assign the result of an expression to an object, then call that object to return the result. For instance:
addvalues <- a + b
addvalues
## [1] 64
R returns 64 not because the input addvalues is special, but because the value of the expression a + b was stored in the object addvalues when the assignment ran. Try switching the values of a and b three chunks ago and re-running the subsequent chunks. Remember that object names are case sensitive and cannot contain spaces. If you ran the code ‘A + B’, what would happen?
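Relatedly, addvalues stores the result computed at assignment time rather than a live formula, so changing a afterwards does not change addvalues until the assignment is re-run. A quick check using the objects above:

a <- 100
addvalues # still the old result
## [1] 64
addvalues <- a + b
addvalues # updated after re-assignment
## [1] 114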
## 1.3 Case Study
Now it is time for our first case study.
You are approached by a colleague who wants to create code that sums the numerical values associated with the letters in people’s names (e.g., a = 1, b = 2,… z = 26). To start on this project, create 26 objects, one for each letter of the alphabet. Then sum your own name using mathematical expressions and the objects you created.
a <- 1
b <- 2
c <- 3 # and so on
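One caution worth a comment: assigning c <- 3 masks base R’s c() combine function for the rest of your session (base::c() still works, and rm(c) restores the default). With the letter objects in place, summing a name is plain addition. As a sketch, take the hypothetical name “ada” (the name is just an example, not part of the case study):

a + d + a # assumes d <- 4 was created alongside the other letters
## [1] 6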
## 1.4 Review
In this chapter, we explored how to download R and RStudio, set up a working directory, assign and use objects, and work with a few functions. To make sure you understand the material, there is a practice assessment to go along with this chapter at https://jayholster.shinyapps.io/RLevel0Assessment/.
## 1.5 References
### 1.5.1 R Short Course Series
Video lectures for each guidebook chapter can be found at https://osf.io/6jb9t/. For this chapter, follow the folder path R Level Zero -> AY 2021-2022 Spring to access the video files, R Markdown documents, and other materials for each short course.
### 1.5.2 Acknowledgements
This guidebook was created with support from the Center for Research Data and Digital Scholarship and the Laboratory for Interdisciplinary Statistical Analysis at the University of Colorado Boulder, as well as the U.S. Agency for International Development under cooperative agreement #7200AA18CA00022. Individuals who contributed to materials related to this project include Jacob Holster, Eric Vance, Michael Ramsey, Nicholas Varberg, and Nickoal Eichmann-Kalwara.
|
# The Brenner-Hochster-Kollár and Whitney Problems for Vector-valued Functions and Jets
@article{Fefferman2012TheBA,
title={The Brenner-Hochster-Koll\'ar and Whitney Problems for Vector-valued Functions and Jets},
author={Charles Fefferman and Garving K Luli},
journal={arXiv: Algebraic Geometry},
year={2012}
}
Published 11 September 2012 · Mathematics · arXiv: Algebraic Geometry
In this paper, we give analytic methods for finding $m$ (and $m+\omega$) times continuously differentiable solutions of a finite system of linear equations. Along the way, we also solve a generalized Whitney problem for vector-valued functions and jets.
Citing papers recovered from this listing include:

- $C^m$ semialgebraic sections over the plane. Journal of the Mathematical Society of Japan, 2022.
- Solutions to a system of equations for $C^m$ functions. Revista Matemática Iberoamericana, 2020.
- $C^m$ solutions of semialgebraic or definable equations. Advances in Mathematics, 2021.
- $C^2$ interpolation with range restriction. Revista Matemática Iberoamericana, 2022.
- On the Whitney distortion extension problem for $C^m(\mathbb{R}^n)$ and $C^\infty(\mathbb{R}^n)$. 2021.
- Smooth Selection for Infinite Sets. 2021.
- On the Shape Fields Finiteness Principle. 2020.
- Interpolation of data by smooth non-negative functions. 2016.
- Finiteness Principles for Smooth Selection. 2015.
## References
The listing shows the first entries of the paper's 24 references; the recoverable items are:

- Continuous solutions to algebraic forcing equations. (Asks, for given polynomials $f_1,\dots,f_n$ and $f$ over the complex numbers, when there exist continuous functions $q_1,\dots,q_n$ with $q_1 f_1+\dots+q_n f_n = f$.)
- Continuous closure of sheaves. (A purely algebraic construction of the continuous closure of any finitely generated torsion-free module, a concept first studied by H. Brenner and M. Hochster.)
- Continuous linear combinations of polynomials. In From Fourier analysis and number theory to Radon transforms and geometry, Dev. Math. 28, 233–282, 2013.
- A linear extension operator for a space of smooth functions defined on a closed subset in R
|
# Convergence of a spiral in $\mathbb{C}$
Does the series $$\sum_{k=0}^{\infty}\frac{i^k}{k!}$$ converge, and if so, what is its value?
Do you know the Taylor expansion of $\exp (x)$? – Fabian Jan 2 '13 at 19:14
Yes, for some reason I forgot it. Thank you all. – Alyosha Jan 2 '13 at 19:15
This question is so trivial I think I may delete it. – Alyosha Jan 2 '13 at 19:16
Go ahead....... – Fabian Jan 2 '13 at 19:17
Hint: $$e^z=\sum_{k=0}^{\infty}\frac{z^k}{k!}$$
That seems more like an answer than a hint. – robjohn Jan 2 '13 at 19:56
@robjohn Sometimes the line between answers and hints can be very subtle. This is one of them. – Nameless Jan 2 '13 at 19:59
The sum of the series is $\exp(i)$, which has the value $\cos 1 + i \sin 1$ (the arguments being in radians).
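For readers who want to verify this numerically, here is a quick check in R (a sketch; any language with complex arithmetic works): the partial sums of the series match exp(1i).

sum((1i)^(0:20) / factorial(0:20)) # partial sum of the series
## [1] 0.5403023+0.841471i
exp(1i) # e^i for comparison
## [1] 0.5403023+0.841471i
cos(1) + 1i*sin(1)
## [1] 0.5403023+0.841471i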
Well, first of all: what is the power series' radius of convergence? With $a_k:=\frac{i^k}{k!}$, $$\frac{a_{k+1}}{a_k}=\frac{i^{k+1}}{(k+1)!}\cdot\frac{k!}{i^k}=\frac{i}{k+1}\xrightarrow[k\to\infty]{}0$$
Thus the underlying power series $\sum_k z^k/k!$ converges for every $z\in\Bbb C$, and in particular at $z=i$.
Advice: don't use the letter $i$ for a real or generic variable, as it usually stands for $i=\sqrt{-1}$ in mathematics... unless you really mean this $i$, of course.
Recall: for $x \in \mathbb{C}$, $\;\displaystyle \sum_{k=0}^{\infty}\frac{x^k}{k!} = e^x$.
• In your case, $x = i$, giving $e^x = e^i = e^{i\theta}$, where $\theta = 1$.
• Now recall Euler's formula: $$e^{i\theta} = \cos \theta + i \sin\theta,$$
$\quad$ and simply evaluate at $\theta = 1$.
and $e^i=\cos(1)+i\sin(1)$. – pbs Jan 2 '13 at 20:09
Alyosha: is this clear now? Just checking. $\quad$:-) – amWhy Jan 3 '13 at 18:04
yes, much better. – pbs Jan 7 '13 at 10:25
$i^0=1, i^1=i, i^2=-1, i^3=-i, i^4=i^0=1$. Therefore, the numerators of the power series are periodic of period 4; what's more, they split off naturally into a real part (the even members) and an imaginary part (the odd members): \begin{align*} \sum_{k=0}^{\infty}\frac{i^k}{k!} &= 1+\frac{i}{1!}+\frac{i^2}{2!}+\frac{i^3}{3!}+\frac{i^4}{4!}+\frac{i^5}{5!}+\cdots \\ &= 1+\frac{i}{1!}+\frac{-1}{2!}+\frac{-i}{3!}+\frac{1}{4!}+\frac{i}{5!}+\cdots \\ &=\left(1+\frac{-1}{2!}+\frac{1}{4!}+\cdots\right)+\left(\frac{i}{1!}+\frac{-i}{3!}+\frac{i}{5!}+\cdots\right) \\ &=\left(1-\frac{1}{2!}+\frac{1}{4!}+\cdots\right)+i\left(\frac{1}{1!}-\frac{1}{3!}+\frac{1}{5!}+\cdots\right) \\ &=\cos(1)+i\cdot\sin(1)\\ \end{align*}
OK, but isn't that just one step backwards in the proof of $e^x$'s Maclaurin expansion? – Alyosha Jan 2 '13 at 21:12