# Dealing With Multicollinearity: A Brief Overview and Introduction to Tolerant Methods

This semester I’m taking a Multivariate Statistics course taught by Professor Scott Steinschneider in the BEE department at Cornell. I’ve been really enjoying the course thus far and thought I would share some of what we’ve covered in the class with a blog post. The material below on multicollinearity is from Dr. Steinschneider’s class, presented in my own words.

### What is Multicollinearity?

Multicollinearity is the condition where two or more predictor variables in a statistical model are linearly related (Dormann et al. 2013). Multicollinearity in your data set can inflate the variance of regression coefficients, leading to unstable estimates of parameter values. This in turn can lead to erroneous identification of relevant predictors within a regression, and it detracts from a model’s ability to extrapolate beyond the range of the sample it was constructed with. In this post, I’ll explain how multicollinearity causes problems for linear regression by Ordinary Least Squares (OLS), introduce three metrics for detecting multicollinearity, and detail two “Tolerant Methods” for dealing with multicollinearity within a data set.

### How does multicollinearity cause problems in OLS regression?
To illustrate the problems caused by multicollinearity, let’s start with a linear regression:

$y=x\beta +\epsilon$

Where:

$x$ = a matrix of predictor variables

$\beta$ = a vector of coefficients

$\epsilon$ = a vector of residuals

The Gauss-Markov theorem states that the Best Linear Unbiased Estimator (BLUE) for each coefficient can be found using OLS:

$\hat{\beta}_{OLS} = (x^Tx)^{-1}x^Ty$

This estimate will have a variance defined as:

$var(\hat{\beta}_{OLS}) =\sigma^2 (x^Tx)^{-1}$

Where:

$\sigma^2$ = the variance of the residuals

If you dive into the matrix algebra, you will find that for standardized predictors the term $(x^Tx)$ is equal to a matrix with ones on the diagonal and the pairwise Pearson’s correlation coefficients ($\rho$) on the off-diagonals:

$(x^Tx) =\begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}$

As the correlation values increase, the entries of $(x^Tx)^{-1}$ also increase, so even with a low residual variance, multicollinearity can cause large increases in estimator variance. Here are a few examples of the effect of multicollinearity using a hypothetical regression with two predictors:

$\rho = .3 \rightarrow (x^Tx)^{-1} =\begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1.10 & -0.33 \\ -0.33 & 1.10 \end{bmatrix}$

$\rho = .9 \rightarrow (x^Tx)^{-1} =\begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 5.26 & -4.74 \\ -4.74 & 5.26 \end{bmatrix}$

$\rho = .999 \rightarrow (x^Tx)^{-1} =\begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 500.25 & -499.75 \\ -499.75 & 500.25\end{bmatrix}$

So why should you care about the variance of your coefficient estimators? The answer depends on what the purpose of your model is.
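As a quick numerical check on the matrices above, the growth of $(x^Tx)^{-1}$ with $\rho$ can be reproduced in a few lines of NumPy (a minimal sketch, not from the course material):

```python
import numpy as np

# (x^T x) for two standardized predictors is the 2x2 correlation matrix
for rho in (0.3, 0.9, 0.999):
    xtx = np.array([[1.0, rho], [rho, 1.0]])
    inv = np.linalg.inv(xtx)
    # The diagonal of (x^T x)^-1 scales the variance of each OLS coefficient
    print(f"rho = {rho}: variance scaling = {inv[0, 0]:.2f}")
```

The diagonal entries grow as $1/(1-\rho^2)$, which explodes as $\rho$ approaches one.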
If your only goal is to obtain an accurate prediction of the predictand, the presence of multicollinearity in your predictors might not be such a problem. If, however, you are trying to identify the key predictors that affect the predictand, multicollinearity is a big problem. OLS estimators with large variances are highly unstable, meaning that if you construct estimators from different data samples you will potentially get wildly different estimates of your coefficient values (Dormann et al. 2013). Large estimator variance also undermines the trustworthiness of hypothesis tests of the significance of coefficients. Recall that the t value used in hypothesis testing for an OLS regression coefficient is a function of the sample standard deviation (the square root of the variance) of the OLS estimator:

$t_{n-2} =\frac{\hat{\beta_j}-0}{s_{\beta_j}}$

An estimator with an inflated standard deviation, $s_{\beta_j}$, will thus yield a lower t value, which could lead to the false rejection of a significant predictor (i.e., a type II error). See Ohlemüller et al. (2008) for some examples where hypothesis testing results are undermined by multicollinearity.

### Detecting Multicollinearity within a data set

Now we know how multicollinearity causes problems in our regression, but how can we tell if there is multicollinearity within a data set? There are several commonly used metrics for which basic guidelines have been developed to determine whether multicollinearity is present. The most basic metric is the pairwise Pearson correlation coefficient between predictors, r. Recall from your intro statistics course that the Pearson correlation coefficient is a measure of the linear relationship between two variables, defined as:

$r_{x_1,x_2}=\frac{cov(x_1,x_2)}{\sigma_{x_1}\sigma_{x_2}}$

A common rule of thumb is that multicollinearity may be a problem in a data set if any pairwise |r| > 0.7 (Dormann et al. 2013). Another common metric is known as the Variance Inflation Factor (VIF).
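The instability described above is easy to demonstrate by refitting OLS on repeated samples. The following sketch is entirely hypothetical (the data-generating setup and the `ols_slope_spread` helper are mine, for illustration only); it compares the spread of a coefficient estimate at low and high predictor correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope_spread(rho, n=30, trials=500):
    """Std. dev. of the first OLS coefficient across repeated random samples."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    betas = []
    for _ in range(trials):
        x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        y = x @ np.array([1.0, 1.0]) + rng.normal(0.0, 1.0, size=n)
        b, *_ = np.linalg.lstsq(x, y, rcond=None)  # OLS fit
        betas.append(b[0])
    return np.std(betas)

print(ols_slope_spread(0.1))   # low collinearity: estimates are stable
print(ols_slope_spread(0.99))  # high collinearity: estimates swing wildly
```

The second spread is several times the first, even though both samples come from the same underlying model.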
This measure is calculated by regressing each predictor on all of the others being used in the regression:

$VIF(\beta_j) = \frac{1}{1-R^2_j}$

Where $R^2_j$ is the $R^2$ value generated by regressing predictor $x_j$ on all other predictors. Multicollinearity is thought to be a problem if VIF > 10 for any given predictor (Dormann et al. 2013). A third metric for detecting multicollinearity in a data set is the Condition Number (CN) of the predictor matrix, defined as the square root of the ratio of the largest and smallest eigenvalues of the predictor matrix:

$CN=\sqrt{\frac{\lambda_{max}}{\lambda_{min}}}$

A CN > 15 indicates the possible presence of multicollinearity, while a CN > 30 indicates serious multicollinearity problems (Dormann et al. 2013).

### Dealing with Multicollinearity using Tolerant Methods

In a statistical sense, there is no way to “fix” multicollinearity. However, methods have been developed to mitigate its effects. Perhaps the most effective way to remedy multicollinearity is to make a priori judgements about the relationships between predictors and remove or consolidate predictors that have known correlations. This is not always possible, however, especially when the true functional forms of the relationships are not known (which is often why regression is done in the first place). In this section I will explain two “Tolerant Methods” for dealing with multicollinearity. The purpose of Tolerant Methods is to reduce the sensitivity of regression parameters to multicollinearity. This is accomplished through penalized regression. Since multicollinearity can result in large and oppositely signed estimator values for correlated predictors, a penalty function is imposed to keep the magnitude of the coefficients below a pre-specified value:

$\sum_{j=1}^{p}|\beta_j|^l \leq c$

Where c is a predetermined value representing model complexity, p is the number of predictors and l is either 1 or 2 depending on the type of tolerant method employed (more on this below).
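All three detection metrics are straightforward to compute. The sketch below (hypothetical data, helper name my own) uses the fact that $VIF_j$ equals the jth diagonal entry of the inverse of the predictor correlation matrix, and computes the CN from that matrix’s eigenvalues:

```python
import numpy as np

def collinearity_metrics(X):
    """Pairwise correlations, VIFs, and condition number for predictor matrix X."""
    R = np.corrcoef(X, rowvar=False)     # pairwise Pearson correlations
    vifs = np.diag(np.linalg.inv(R))     # VIF_j = 1/(1 - R_j^2) = [R^-1]_jj
    eig = np.linalg.eigvalsh(R)
    cn = np.sqrt(eig.max() / eig.min())  # sqrt(lambda_max / lambda_min)
    return R, vifs, cn

# Hypothetical data: x2 is nearly collinear with x1, x3 is independent
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)
x3 = rng.normal(size=200)
R, vifs, cn = collinearity_metrics(np.column_stack([x1, x2, x3]))
print(vifs)  # VIFs for x1 and x2 should far exceed the threshold of 10
print(cn)    # CN should exceed the rule-of-thumb threshold of 15
```

All three rules of thumb flag this data set: |r| between x1 and x2 is well above 0.7, their VIFs blow past 10, and the CN exceeds 15.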
#### Ridge Regression

Ridge regression uses the L2 norm, or Euclidean distance, to constrain model coefficients (i.e., l = 2 in the equation above). The coefficients created using ridge regression are defined as:

$\hat{\beta}_{r} = (x^Tx+\lambda I)^{-1}x^Ty$

Ridge regression adds a constant, λ, to the diagonal of the term $x^Tx$ to construct the estimator. It should be noted that both x and y should be standardized before this estimator is constructed. The ridge regression coefficient is the result of a constrained version of the ordinary least squares optimization problem. The objective is to minimize the sum of squared errors for the regression while meeting the complexity constraint:

$\hat{\beta_r} \begin{cases} argmin(\beta) \hspace{.1cm}\sum_{i=1}^{N} \epsilon_i^2 \\ \sum_{j=1}^{p}|\beta_j|^2 \leq c \end{cases}$

To solve the constrained optimization, Lagrange multipliers can be employed. Let z equal the Residual Sum of Squares (RSS) to be minimized:

$argmin(\beta) \hspace{.3cm} z= (y-x\beta)^T(y-x\beta)+\lambda(\sum_{j=1}^{p}|\beta_j|^2-c)$

This can be rewritten in terms of the L2 norm of β:

$z = (y-x\beta)^T(y-x\beta)+\lambda||\beta||^2_2$

Taking the derivative with respect to β and solving:

$0 = \frac{\partial z}{\partial \beta} = -2x^T(y-x\beta)+2\lambda\beta$

$x^Ty = x^Tx\beta+\lambda\beta=(x^Tx+\lambda I)\beta$

$\hat{\beta}_{r} = (x^Tx+\lambda I)^{-1}x^Ty$

Remember that the Gauss-Markov theorem states that the OLS estimate for regression coefficients is the BLUE, so by using ridge regression we are sacrificing some benefits of OLS estimators in order to constrain estimator variance. Estimators constructed using ridge regression are in fact biased; this can be proven by calculating the expected value of the ridge regression coefficients:
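The closed-form estimator above is a one-liner with NumPy. Below is a minimal sketch on hypothetical standardized data, comparing the OLS fit (λ = 0) against a ridge fit; the shrinkage of the coefficient vector is the stabilizing effect described above:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimator: (X^T X + lambda I)^-1 X^T y.
    Assumes X and y are already standardized, as noted above."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Hypothetical data with two nearly collinear predictors
rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=100)])
X = (X - X.mean(0)) / X.std(0)          # standardize predictors
y = X @ np.array([1.0, 1.0]) + rng.normal(0.0, 1.0, size=100)
y = y - y.mean()                        # center the predictand

b_ols = ridge(X, y, 0.0)    # lambda = 0 recovers the OLS estimator
b_r = ridge(X, y, 10.0)     # the penalty shrinks and stabilizes the coefficients
print(b_ols, b_r)
```

The L2 norm of the ridge coefficients is always smaller than that of the OLS coefficients, which is exactly the bias-for-variance trade described in the text.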
$E[\hat{\beta_r}]=(I+\lambda(x^Tx)^{-1})^{-1}\beta \neq \beta$

For a scenario with two predictors, the tradeoff between reduced model complexity and increased bias in the estimators can be visualized graphically by plotting the estimators of the two beta values against each other. The vector of beta values estimated by regression is represented as a point on this plot $(\hat{\beta}=[\beta_1, \beta_2])$. In Figure 1, $\hat{\beta}_{OLS}$ is plotted in the upper right quadrant and represents the estimator that produces the smallest RSS possible for the model. The ellipses centered around $\hat{\beta}_{OLS}$ are contours of increasing RSS; each ellipse represents the combinations of β1 and β2 values that produce the same RSS. The circle centered around the origin represents the chosen level of model complexity that is constraining the ridge regression. The ridge estimator is the point where this circle intersects an RSS ellipse. Notice that as the value of c increases, the bias introduced into the estimators decreases, and vice versa.

Figure 1: Geometric interpretation of a ridge regression estimator. The blue dot indicates the OLS estimate of beta; ellipses centered around the OLS estimate represent RSS contours for each beta 1, beta 2 combination (denoted here as z from the optimization equation above). The model complexity is constrained by the distance c from the origin. The ridge regression estimator of beta is shown as the red dot, where an RSS contour meets the circle defined by c.

The c value displayed in Figure 1 is only presented to explain the theoretical underpinnings of ridge regression. In practice, c is never specified; rather, a value for λ is chosen prior to model construction. Lambda is usually chosen through a process known as k-fold cross validation, which is accomplished through the following steps:

1. Partition the data set into K separate sets of equal size
2. For each k = 1, …, K, fit the model excluding the kth set
3. Predict for the kth set
4. Calculate the cross-validation error for the kth set: $CV^{\lambda_0}_k = \sum_{i \in k}(y_i-\hat{y}_i)^2$
5. Repeat for different values of λ, and choose the λ that minimizes the average error across folds: $CV^{\lambda_0} = \frac{1}{K}\sum_{k=1}^{K}CV^{\lambda_0}_k$

#### Lasso Regression

Another Tolerant Method for dealing with multicollinearity, known as Least Absolute Shrinkage and Selection Operator (LASSO) regression, solves the same constrained optimization problem as ridge regression, but uses the L1 norm rather than the L2 norm as the measure of complexity:

$\hat{\beta}_{Lasso} \begin{cases} argmin(\beta) \hspace{.1cm}\sum_{i=1}^{N} \epsilon_i^2 \\ \sum_{j=1}^{p}|\beta_j|^1 \leq c \end{cases}$

LASSO regression can be visualized similarly to ridge regression, but since c is defined by the sum of absolute values of beta, rather than the sum of squares, the area it constrains is diamond shaped rather than circular. Figure 2 shows the selection of the beta estimator from LASSO regression. As you can see, the use of the L1 norm means LASSO regression can select one of the predictors and drop the other (weight it as zero). This has been argued to provide more interpretable estimators (Tibshirani 1996).

Figure 2: Geometric interpretation of a LASSO regression estimator. The blue dot indicates the OLS estimate of beta; ellipses centered around the OLS estimate represent RSS contours for each beta 1, beta 2 combination (denoted as z from the optimization equation). The model complexity is constrained by the L1 norm of beta. The LASSO estimator of beta is shown as the red dot, the location where an RSS contour intersects the diamond defined by c.

### Final thoughts

If you’re creating a model with multiple predictors, it’s important to be cognizant of the potential for multicollinearity within your data set.
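As a concrete illustration, the k-fold cross-validation recipe from the ridge section can be sketched directly with the closed-form ridge estimator. This is a hypothetical example with synthetic data; the `cv_error` helper is my own illustrative code, and in practice you would typically use a library routine such as scikit-learn’s `RidgeCV`:

```python
import numpy as np

def cv_error(X, y, lam, K=5):
    """K-fold cross-validation error for the closed-form ridge estimator."""
    n = len(y)
    folds = np.array_split(np.arange(n), K)
    err = 0.0
    for idx in folds:
        mask = np.ones(n, dtype=bool)
        mask[idx] = False                       # hold out the kth set
        b = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(X.shape[1]),
                            X[mask].T @ y[mask])
        err += np.sum((y[idx] - X[idx] @ b) ** 2)  # predict for the kth set
    return err / K

# Hypothetical standardized data with a collinear predictor pair
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
X[:, 1] = 0.95 * X[:, 0] + 0.05 * rng.normal(size=100)
X = (X - X.mean(0)) / X.std(0)
y = X @ np.array([1.0, -1.0]) + rng.normal(0.0, 1.0, size=100)

# Sweep candidate lambdas and keep the one minimizing average CV error
lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lambdas, key=lambda lam: cv_error(X, y, lam))
print(best)
```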
Tolerant methods are only one of many possible remedies for multicollinearity (other notable techniques include data clustering and Principal Component Analysis), but it’s important to remember that no known technique can truly “solve” the problem of multicollinearity. The method used to deal with multicollinearity should be chosen on a case-by-case basis, and multiple methods should be employed if possible to help identify the underlying structure within the predictor data set (Dormann et al. 2013).

### Citations

Dormann, C. F., Elith, J., Bacher, S., Buchmann, C., Carl, G., Carré, G., Marquéz, J. R. G., Gruber, B., Lafourcade, B., Leitão, P. J., Münkemüller, T., McClean, C., Osborne, P. E., Reineking, B., Schröder, B., Skidmore, A. K., Zurell, D. and Lautenbach, S. 2013, “Collinearity: a review of methods to deal with it and a simulation study evaluating their performance.” Ecography, 36: 27–46. doi:10.1111/j.1600-0587.2012.07348.x

Ohlemüller, R. et al. 2008. “The coincidence of climatic and species rarity: high risk to small-range species from climate change.” Biology Letters. 4: 568–572.

Tibshirani, Robert 1996. “Regression shrinkage and selection via the lasso.” Journal of the Royal Statistical Society. Series B (Methodological): 267-288.

# Calculating Risk-of-Failures as in the Research Triangle papers (2014-2016) – Part 1

There has been a series of papers (e.g., Palmer and Characklis, 2009; Zeff et al., 2014; Herman et al., 2014) suggesting the use of an approximate risk-of-failure (ROF) metric, as opposed to the more conventional days of supply remaining, for utilities’ managers to decide when to enact not only water use restrictions, but also water transfers between utilities. This approach was expanded to decisions about when and in which new infrastructure projects a utility should invest (Zeff et al., 2016), as opposed to setting fixed times in the future for either construction or options evaluation.
What all these papers have in common is that drought mitigation and infrastructure expansion decisions are triggered when the values of the short- and long-term ROFs, respectively, for a given utility exceed those of pre-set triggers. For example, the figure below shows that as streamflows (black line, subplot “a”) get lower while demands are maintained (subplot “b”), the combined storage levels of the fictitious utility start to drop around the month of April (subplot “c”), increasing the utility’s short-term ROF (subplot “d”) until it finally triggers transfers and restrictions (subplot “e”). Despite the triggered restrictions and transfers, the utility’s combined storage levels crossed the dashed line in subplot “c”, which denotes the failure criterion (i.e., combined storage levels dropping below 20% of the total capacity). It is beyond the scope of this post to go into the details presented in all of these papers, but even after reading them readers may be wondering how exactly ROFs are calculated. In this post, I’ll try to show in a graphical and concise manner how short-term ROFs are calculated. In order to calculate a utility’s ROF for week m, we would run 50 independent simulations (henceforth called ROF simulations), all departing from the system conditions (reservoir storage levels, demand probability density function, etc.) observed in week m, and each using one of the 50 years of streamflow time series recorded immediately prior to week m. The utility’s ROF is then calculated as the number of ROF simulations in which the combined storage level of that utility dropped below 20% of the total capacity in at least one week, divided by the number of ROF simulations run (50). An animation of the process can be seen below.
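In code terms, the short-term ROF computation reduces to counting failing simulations. The sketch below uses a deliberately simplified weekly mass balance with constant demand; all numbers, the function name, and the mass-balance form are hypothetical illustrations, not taken from the papers:

```python
import numpy as np

def short_term_rof(initial_storage, capacity, inflow_sets, demand, n_sims=50):
    """Fraction of ROF simulations in which combined storage drops below
    20% of capacity in at least one week (simplified mass-balance sketch)."""
    failures = 0
    for inflows in inflow_sets[:n_sims]:   # one historical year per simulation
        storage = initial_storage
        failed = False
        for q in inflows:                  # weekly mass balance, bounded by capacity
            storage = min(capacity, max(0.0, storage + q - demand))
            if storage < 0.2 * capacity:   # failure criterion
                failed = True
                break
        failures += failed
    return failures / n_sims

# Hypothetical example: 50 years of weekly inflows drawn at random
rng = np.random.default_rng(4)
inflow_sets = rng.gamma(2.0, 5.0, size=(50, 52))
print(short_term_rof(initial_storage=60.0, capacity=100.0,
                     inflow_sets=inflow_sets, demand=10.0))
```

In the actual papers each ROF simulation would, of course, use the full system model rather than this toy mass balance.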
For example, for a water utility that started using ROF triggers on 01/01/2017, this week’s short-term ROF (02/13/2017, or week m=7) would be calculated using the recorded streamflows from weeks 6 through -47 (assuming here a year of 52 weeks, for simplicity) for ROF simulation 1, the streamflows from weeks -48 to -99 for ROF simulation 2, and so on until we reach 50 simulations. However, if the utility is running an optimization or scenario evaluation and wants to calculate the ROF in week 16 (04/10/2017) of a system simulation, ROF simulation 1 would use 10 weeks of synthetically generated streamflows (weeks 16 to 7) and 42 weeks of historical records (weeks 6 to -45), simulation 2 would use records for weeks -46 to -97, and so on, as in a 50-year moving window. In another blog post, I will show how to calculate the long-term ROF and the reasoning behind it.

### Works cited

Herman, J. D., H. B. Zeff, P. M. Reed, and G. W. Characklis (2014), Beyond optimality: Multistakeholder robustness tradeoffs for regional water portfolio planning under deep uncertainty, Water Resour. Res., 50, 7692–7713, doi:10.1002/2014WR015338.

Palmer, R., and G. W. Characklis (2009), Reducing the costs of meeting regional water demand through risk-based transfer agreements, J. Environ. Manage., 90(5), 1703–1714.

Zeff, H. B., J. R. Kasprzyk, J. D. Herman, P. M. Reed, and G. W. Characklis (2014), Navigating financial and supply reliability tradeoffs in regional drought management portfolios, Water Resour. Res., 50, 4906–4923, doi:10.1002/2013WR015126.

Zeff, H. B., J. D. Herman, P. M. Reed, and G. W. Characklis (2016), Cooperative drought adaptation: Integrating infrastructure development, conservation, and water transfers into adaptive policy pathways, Water Resour. Res., 52, 7327–7346, doi:10.1002/2016WR018771.

# Synthetic streamflow generation

A recent research focus of our group has been the development and use of synthetic streamflow generators.
There are many tools one might use to generate synthetic streamflows, and it may not be obvious which is right for a specific application or what the inherent limitations of each method are. More fundamentally, it may not be obvious why it is desirable to generate synthetic streamflows in the first place. This will be the first in a series of blog posts on synthetic streamflow generators in which I hope to sketch out the various categories of generation methods and their appropriate use as I see it. In this first post we’ll focus on the motivation and history behind the development of synthetic streamflow generators and broadly categorize them.

### Why should we use synthetic hydrology?

The most obvious reason to use synthetic hydrology is if there is little or no data for your system (see Lamontagne, 2015). Another obvious reason is if you are trying to evaluate the effect of hydrologic non-stationarity on your system (Herman et al. 2015; Borgomeo et al. 2015). In that case you could use synthetic methods to generate flows reflecting a shift in hydrologic regime. But are there other reasons to use synthetic hydrology?

In water resources systems analysis it is common practice to evaluate the efficacy of management or planning strategies by simulating system performance over the historical record, or over some critical period. In this approach, new strategies are evaluated by asking the question: how well would we have done with this new strategy? This may be an appealing approach, especially if some event was particularly traumatic to your system. But is this a robust way of evaluating alternative strategies? It’s important to remember that any hydrologic record, no matter how long, is only a single realization of a stochastic process. Importantly, drought and flood events emerge as the result of specific sequences of events, unlikely to be repeated.
Furthermore, there is a 50% chance that the worst flood or drought in an N-year record will be exceeded in the next N years (by symmetry: among 2N independent years, the largest event is equally likely to fall in either half of the record). Is it well advised to tailor our strategies to past circumstances that will likely never be repeated and will as likely as not be exceeded? As Lettenmaier et al. [1987] remind us, “Little is certain about the future except that it will be unlike the past.”

Even under stationarity, and even with long hydrologic records, the use of synthetic streamflow can improve the efficacy of planning and management strategies by exposing them to larger and more diverse floods and droughts than those in the record (Loucks et al. 1981; Vogel and Stedinger, 1988; Loucks et al. 2005). Figure 7.12 from Loucks et al. 2005 shows a typical experimental set-up using synthetic hydrology with a simulation model. Often our group will wrap an optimization model like Borg around this set-up, where the system design/operating policy (bottom of the figure) are the decision variables, and the system performance (right of the figure) are the objective(s). (Loucks et al. 2005)

### What are the types of generators?

Many synthetic streamflow generation techniques have been proposed since the early 1960s. It can be difficult for a researcher or practitioner to know which method is best suited to the problem at hand. Thus, we’ll start with a very broad characterization of what is out there, then proceed to some history.

Broadly speaking, there are two approaches to generating synthetic hydrology: indirect and direct. The indirect approach generates streamflow by synthetically generating the forcings to a hydrologic model. For instance, one might generate precipitation and temperature series and input them to a hydrologic model of a basin (e.g. Steinschneider et al. 2014). In contrast, direct methods use statistical techniques to generate streamflow time series directly.
The direct approach is generally easier to apply and more parsimonious because it does not require the selection, calibration, and validation of a separate hydrologic model (Najafi et al. 2011). On the other hand, the indirect approach may be desirable in some settings. Climate projections from GCMs often include temperature or precipitation changes, but may not describe hydrologic shifts at a resolution or precision that is useful. In other cases, profound regime shifts may be difficult to represent with statistical models and may require process-driven models, thus necessitating the indirect approach.

Julie’s earlier series focused on indirect approaches, so we’ll focus on the direct approach. Regardless of the approach, many of the methods are the same. In general, generator methods can be divided into two categories: parametric and non-parametric. Parametric methods rely on a hypothesized statistical model of streamflow whose parameters are selected to achieve a desired result (Stedinger and Taylor, 1982a). In contrast, non-parametric methods do not make strong structural assumptions about the processes generating the streamflow, but rather rely on re-sampling from the hydrologic record in some way (Lall, 1995). Some methods combine parametric and non-parametric techniques, which we’ll refer to as semi-parametric (Herman et al. 2015).

Both parametric and non-parametric methods have advantages and disadvantages. Parametric methods are often parsimonious, and often have analytical forms that allow easy parameter manipulation to reflect non-stationarity. However, there can be concern that the underlying statistical models may not reflect the hydrologic reality well (Sharma et al. 1997). Furthermore, in multi-dimensional, multi-scale problems the proliferation of parameters can make parametric models intractable (Grygier and Stedinger, 1988). Extensive work has been done to confront both challenges, but they may lead a researcher to adopt a non-parametric method instead.
Because many non-parametric methods ‘re-sample’ flows from a record, realism is not generally a concern, and most re-sampling schemes are computationally straightforward (relatively speaking). On the other hand, manipulating synthetic flows to reflect non-stationarity may not be as straightforward as a simple parameter change, though methods have been suggested (Herman et al. 2015; Borgomeo et al. 2015). More fundamentally, because non-parametric methods rely so heavily on the data, they require sufficiently long records to ensure there is enough hydrologic variability to sample. Short records can be a concern for parametric methods as well, though parametric uncertainty can be explicitly considered in those methods (Stedinger and Taylor, 1982b). Of course, parametric methods also have structural uncertainty that non-parametric models largely avoid by not assuming an explicit statistical model. In the coming posts we’ll dig into the nuances of the different methods in greater detail.

### A historical perspective

The first use of synthetic flow generation seems to have been by Hazen [1914]. That work attempted to quantify the reliability of a water supply by aggregating the streamflow records of local streams into a 300-year ‘synthetic record.’ Of course, the problem with this is that the cross-correlation between concurrent flows rendered the effective record length much less than the nominal 300 years.

Next, Barnes [1954] generated 1,000 years of streamflow for a basin in Australia by drawing random flows from a normal distribution with mean and variance equal to the sample estimates from the observed record. That work was extended by researchers from the Harvard Water Program to account for autocorrelation of monthly flows (Maass et al., 1962; Thomas and Fiering, 1962). Later work also considered the use of non-normal distributions (Fiering, 1967), and the generation of correlated concurrent flows at multiple sites (Beard, 1965; Matalas, 1967).
Those early methods relied on first-order autoregressive models that regressed flows in the current period on the flows of the previous period (see Loucks et al.’s Figure 7.13 below). Box and Jenkins [1970] extended those methods to autoregressive models of arbitrary order, moving average models of arbitrary order, and autoregressive-moving average models of arbitrary order. Those models were the focus of extensive research over the course of the 1970s and 1980s and underpin many of the parametric generators that are widely used in hydrology today (see Salas et al. 1980; Grygier and Stedinger, 1990; Salas, 1993; Loucks et al. 2005). (Loucks et al. 2005)

By the mid-1990s, non-parametric methods began to gain popularity (Lall, 1995). While much of this work has its roots in earlier work from the 1970s and 1980s (Yakowitz, 1973, 1979, 1985; Schuster and Yakowitz, 1979; Yakowitz and Karlsson, 1987; Karlsson and Yakowitz, 1987), improvements in computing and the availability of large data sets meant that by the mid-1990s non-parametric methods were feasible (Lall and Sharma, 1996). Early examples of non-parametric methods include block bootstrapping (Vogel and Shallcross, 1996), k-nearest neighbor (Lall and Sharma, 1996), and kernel density methods (Sharma et al. 1997). Since that time, extensive research has made improvements to these methods, often by incorporating parametric elements. For instance, Srinivas and Srinivasan (2001, 2005, and 2006) developed a hybrid autoregressive-block bootstrapping method designed to improve the bias in lagged correlation and to generate flows other than the historical, for multiple sites and multiple seasons. K-nearest neighbor methods have also been the focus of extensive research (Rajagopalan and Lall, 1999; Harrold et al. 2003; Yates et al. 2003; Sharif and Burn, 2007; Mehrotra and Sharma, 2006; Prairie et al. 2006; Lee et al. 2010; Salas and Lee, 2010; Nowak et al., 2010), including recent work by our group (Giuliani et al.
2014). Emerging work focuses on stochastic streamflow generation using copulas [Lee and Salas, 2011; Fan et al. 2016], entropy theory bootstrapping [Srivastav and Simonovic, 2014], and wavelets [Kwon et al. 2007; Erkyihun et al., 2016], among other methods. In the following posts I’ll address different challenges in stochastic generation [e.g. long-term persistence, parametric uncertainty, multi-site generation, seasonality, etc.] and the relative strengths and shortcomings of the various methods for addressing them.

### Works Cited

Barnes, F. B., Storage required for a city water supply, J. Inst. Eng. Australia 26(9), 198-203, 1954.

Beard, L. R., Use of interrelated records to simulate streamflow, J. Hydrol. Div., ASCE 91(HY5), 13-22, 1965.

Borgomeo, E., Farmer, C. L., and Hall, J. W. (2015). “Numerical rivers: A synthetic streamflow generator for water resources vulnerability assessments.” Water Resour. Res., 51(7), 5382–5405.

Fan, Y.R., W.W. Huang, G.H. Huang, Y.P. Li, K. Huang, Z. Li, Hydrologic risk analysis in the Yangtze River basin through coupling Gaussian mixtures into copulas, Advances in Water Resources, Volume 88, February 2016, Pages 170-185.

Fiering, M.B., Streamflow Synthesis, Harvard University Press, Cambridge, Mass., 1967.

Giuliani, M., J. D. Herman, A. Castelletti, and P. Reed (2014), Many-objective reservoir policy identification and refinement to reduce policy inertia and myopia in water management, Water Resour. Res., 50, 3355–3377, doi:10.1002/2013WR014700.

Grygier, J.C., and J.R. Stedinger, Condensed Disaggregation Procedures and Conservation Corrections for Stochastic Hydrology, Water Resour. Res. 24(10), 1574-1584, 1988.

Grygier, J.C., and J.R. Stedinger, SPIGOT Technical Description, Version 2.6, 1990.

Harrold, T. I., Sharma, A., and Sheather, S. J. (2003). “A nonparametric model for stochastic generation of daily rainfall amounts.” Water Resour. Res., 39(12), 1343.
Hazen, A., Storage to be provided in impounding reservoirs for municipal water systems, Trans. Am. Soc. Civ. Eng. 77, 1539, 1914.

Herman, J.D., H.B. Zeff, J.R. Lamontagne, P.M. Reed, and G. Characklis (2016), Synthetic Drought Scenario Generation to Support Bottom-Up Water Supply Vulnerability Assessments, Journal of Water Resources Planning & Management, doi: 10.1061/(ASCE)WR.1943-5452.0000701.

Karlsson, M., and S. Yakowitz, Nearest-Neighbor methods for nonparametric rainfall-runoff forecasting, Water Resour. Res., 23, 1300-1308, 1987.

Kwon, H.-H., U. Lall, and A. F. Khalil (2007), Stochastic simulation model for nonstationary time series using an autoregressive wavelet decomposition: Applications to rainfall and temperature, Water Resour. Res., 43, W05407, doi:10.1029/2006WR005258.

Lall, U., Recent advances in nonparametric function estimation: Hydraulic applications, U.S. Natl. Rep. Int. Union Geod. Geophys. 1991-1994, Rev. Geophys., 33, 1093, 1995.

Lall, U., and A. Sharma (1996), A nearest neighbor bootstrap for resampling hydrologic time series, Water Resour. Res. 32(3), pp. 679-693.

Lamontagne, J.R. 2015, Representation of Uncertainty and Corridor DP for Hydropower Optimization, PhD thesis, Cornell University, Ithaca, NY.

Lee, T., J. D. Salas, and J. Prairie (2010), An enhanced nonparametric streamflow disaggregation model with genetic algorithm, Water Resour. Res., 46, W08545, doi:10.1029/2009WR007761.

Lee, T., and J. Salas (2011), Copula-based stochastic simulation of hydrological data applied to Nile River flows, Hydrol. Res., 42(4), 318–330.

Lettenmaier, D. P., K. M. Latham, R. N. Palmer, J. R. Lund and S. J. Burges, Strategies for coping with drought Part II: Planning techniques for planning and reliability assessment, EPRI P-5201, Final Report Project 2194-1, June 1987.

Loucks, D.P., Stedinger, J.R. & Haith, D.A. 1981, Water Resources Systems Planning and Analysis, 1st edn, Prentice-Hall, Englewood Cliffs, N.J.

Loucks, D.P. et al.
2005, Water Resources Systems Planning and Management: An Introduction to Methods, Models and Applications, UNESCO, Delft, The Netherlands. Maass, A., M. M. Hufschmidt, R. Dorfman, H. A. Thomas, Jr., S. A. Marglin and G. M. Fair, Design of Water Resource Systems, Harvard University Press, Cambridge, Mass., 1962. Matalas, N. C., Mathematical assessment of synthetic hydrology, Water Resour. Res. 3(4), 937-945, 1967. Najafi, M. R., Moradkhani, H., and Jung, I. W. (2011). “Assessing the uncertainties of hydrologic model selection in climate change impact studies.” Hydrol. Process., 25(18), 2814–2826. Nowak, K., J. Prairie, B. Rajagopalan, and U. Lall (2010), A nonparametric stochastic approach for multisite disaggregation of annual to daily streamflow, Water Resour. Res., 46, W08529, doi:10.1029/2009WR008530. Nowak, K., J. Prairie, B. Rajagopalan, and U. Lall (2010), A nonparametric stochastic approach for multisite disaggregation of annual to daily streamflow, Water Resour. Res., 46, W08529, doi:10.1029/2009WR008530. Nowak, K., J. Prairie, B. Rajagopalan, and U. Lall (2010), A nonparametric stochastic approach for multisite disaggregation of annual to daily streamflow, Water Resour. Res., 46, W08529, doi:10.1029/2009WR008530. Nowak, K., J. Prairie, B. Rajagopalan, and U. Lall (2010), A nonparametric stochastic approach for multisite disaggregation of annual to daily streamflow, Water Resour. Res., 46, W08529, doi:10.1029/2009WR008530. Prairie, J. R., Rajagopalan, B., Fulp, T. J., and Zagona, E. A. (2006). “Modified K-NN model for stochastic streamflow simulation.” J. Hydrol. Eng., 11(4), 371–378. Rajagopalan, B., and Lall, U. (1999). “A k-nearest-neighbor simulator for daily precipitation and other weather variables.” Water Resour. Res., 35(10), 3089–3101. Salas, J. D., J. W. Deller, V. Yevjevich and W. L. Lane, Applied Modeling of Hydrologic Time Series, Water Resources Publications, Littleton, Colo., 1980. 
Salas, J.D., 1993, Analysis and Modeling of Hydrologic Time Series, Chapter 19 (72 p.) in The McGraw Hill Handbook of Hydrology, D.R. Maidment, Editor. Salas, J.D., T. Lee. (2010). Nonparametric Simulation of Single-Site Seasonal Streamflow, J. Hydrol. Eng., 15(4), 284-296. Schuster, E., and S. Yakowitz, Contributions to the theory of nonparametric regression, with application to system identification, Ann. Stat., 7, 139-149, 1979. Sharif, M., and Burn, D. H. (2007). “Improved K-nearest neighbor weather generating model.” J. Hydrol. Eng., 12(1), 42–51. Sharma, A., Tarboton, D. G., and Lall, U., 1997. “Streamflow simulation: A nonparametric approach.” Water Resour. Res., 33(2), 291–308. Srinivas, V. V., and Srinivasan, K. (2001). “A hybrid stochastic model for multiseason streamflow simulation.” Water Resour. Res., 37(10), 2537–2549. Srinivas, V. V., and Srinivasan, K. (2005). “Hybrid moving block bootstrap for stochastic simulation of multi-site multi-season streamflows.” J. Hydrol., 302(1–4), 307–330. Srinivas, V. V., and Srinivasan, K. (2006). “Hybrid matched-block bootstrap for stochastic simulation of multiseason streamflows.” J. Hydrol., 329(1–2), 1–15. Roshan K. Srivastav, Slobodan P. Simonovic, An analytical procedure for multi-site, multi-season streamflow generation using maximum entropy bootstrapping, Environmental Modelling & Software, Volume 59, September 2014a, Pages 59-75. Stedinger, J. R. and M. R. Taylor, Sythetic streamflow generation, Part 1. Model verification and validation, Water Resour. Res. 18(4), 909-918, 1982a. Stedinger, J. R. and M. R. Taylor, Sythetic streamflow generation, Part 2. Parameter uncertainty,Water Resour. Res. 18(4), 919-924, 1982b. Steinschneider, S., Wi, S., and Brown, C. (2014). “The integrated effects of climate and hydrologic uncertainty on future flood risk assessments.” Hydrol. Process., 29(12), 2823–2839. Thomas, H. A. and M. B. 
Fiering, Mathematical synthesis of streamflow sequences for the analysis of river basins by simulation, in Design of Water Resource Systems, by A. Maass, M. Hufschmidt, R. Dorfman, H. A. Thomas, Jr., S. A. Marglin and G. M. Fair, Harvard University Press, Cambridge, Mass., 1962. Vogel, R.M., and J.R. Stedinger, The value of stochastic streamflow models in over-year reservoir design applications, Water Resour. Res. 24(9), 1483-90, 1988. Vogel, R. M., and A. L. Shallcross (1996), The moving block bootstrap versus parametric time series models, Water Resour. Res., 32(6), 1875–1882. Yakowitz, S., A stochastic model for daily river flows in an arid region, Water Resour. Res., 9, 1271-1285, 1973. Yakowitz, S., Nonparametric estimation of markov transition functions, Ann. Stat., 7, 671-679, 1979. Yakowitz, S. J., Nonparametric density estimation, prediction, and regression for markov sequences J. Am. Stat. Assoc., 80, 215-221, 1985. Yakowitz, S., and M. Karlsson, Nearest-neighbor methods with application to rainfall/runoff prediction, in Stochastic  Hydrology, edited by J. B. Macneil and G. J. Humphries, pp. 149-160, D. Reidel, Norwell, Mass., 1987. Yates, D., Gangopadhyay, S., Rajagopalan, B., and Strzepek, K. (2003). “A technique for generating regional climate scenarios using a nearest-neighbor algorithm.” Water Resour. Res., 39(7), 1199.
# The spherical coordinates of (-3, 4, -12) are (rho, theta, phi). Find tan*theta + theta*phi.

I've already tried converting the cartesian coordinates provided into spherical coordinates: I got rho = 13, theta = arccos(-12/13), and phi = arctan(-4/3). I'm not sure how to add theta and phi together, and I'm not sure if I converted them correctly at all. Jun 16, 2020

Yes, you have converted them correctly. But do you mean $$\tan\left(\theta + \varphi\right)$$ instead? Because you asked how to add theta and phi together. Here is how you add theta and phi. You first convert the arccos in theta to arctan with trigonometry. Then you can use the formula $$\arctan A + \arctan B = \arctan \dfrac{A + B}{1 - AB}$$ Jun 17, 2020
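A quick numerical check of the conversion (a sketch, using the physics convention in which theta is measured from the +z axis; note that `atan2` places the azimuth in the correct quadrant, whereas arctan(-4/3) only matches its tangent):

```python
import math

x, y, z = -3, 4, -12

rho = math.sqrt(x**2 + y**2 + z**2)   # radial distance
theta = math.acos(z / rho)            # polar angle, measured from the +z axis
phi = math.atan2(y, x)                # azimuth; atan2 handles the quadrant

print(rho)                                      # 13.0
print(math.isclose(theta, math.acos(-12/13)))   # True
# arctan(-4/3) has the same tangent, but for x < 0 the point lies in
# the second quadrant, which atan2 accounts for:
print(math.isclose(math.tan(phi), -4/3))        # True
```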
What constrains a paraglider vertically?

I recently flew on a tandem paraglider for the first time. Since then I keep asking myself a question I have no clear answer to. While flying I clearly perceived the lift generated by the airfoil, but the strongest feeling I had was as if the paraglider hung from a rail, like a suspension railway, or from a cable, like a cable-car. In other words, I perceived a very strong vertical constraint. I know the airfoil generates lift, but I understand the amount of lift is not enough to give you a stability feeling like when driving a truck! To be similar to the constraint produced by a rail, this vertical constraint should be on the order of thousands of kg, IMHO. Any suggestions on how to estimate this amount, and its origins?

• Let me paraphrase this question a bit: by "constrained", I believe the OP means "very stiff". The acceleration is sudden - e.g. it goes from level flight to ascending flight in a very short amount of time, as opposed to a general upward acceleration (and associated increase in G loading) lasting for 10 seconds. – kevin Aug 10 at 2:31

What you perceive as "motion" is acceleration. No acceleration, and you feel no "motion" -- your kinesthetic sense and inner ear will tell your brain you're sitting still on a solid surface. What you experienced as "vertical constraint" was nothing more or less than the result of flying in a very stable manner relative to pitch -- no pitching, up or down, means no change in vertical acceleration (which ought to be exactly 1 G at all times, if you're to feel "still"), and no sensation of motion.

> To be similar to the constraint produced by a rail, this vertical constraint should be on the order of thousands of kg, IMHO. Any suggestions on how to estimate this amount, and its origins?

The thing is that the constraint is something you "perceived" -- not something that was actually there.
Only gravity, lift, and drag (and thrust, if powered) are acting on the paraglider, nothing else. And the lift is easily estimated, as it is roughly equal to the weight of the passengers and the aircraft summed up and being this a paraglider, definitely not in the range of the thousands of kg. Having this been your first experience, the sensation can easily be explained by your lack of familiarity with the transportation method. Fundamentally, the "constrained" feeling comes from the steepness of the lift coefficient vs angle of attack curve. The angle of attack is the angle at which the air approaches the wing. In level flight, this has a certain value, let's say around 5°. Now let's say you encounter a rising pocket of air. Instantaneously, the angle of attack increases. This directly increases the lift coefficient. As a result, the aircraft starts accelerating upwards, until it is ascending as fast as the rising pocket of air, at which point as far as the aircraft is concerned it is in level flight again (while it is in fact rising along with the air pocket). The "sharpness" at which this happens is your "rail-like" feeling of "vertical constraint" as you call it. For thin airfoils, the lift coefficient is approximately $2\pi \alpha$ with $\alpha$ the angle of attack. That means that if we were in level flight at $\alpha=5°$, we would be accelerating upwards at $1g$ if the angle of attack increased to just 10°. I can promise you that you can't tell such a minute change in the angle of the incoming wind. Therefore, as far as you are concerned, the wind is always coming head on and you feel like you are flying "on rails". By the way, "thousands of kilos" of lift for a tandem paraglider of let's say 250kg would mean over $4g$ of acceleration which would be enough to knock you unconscious if sustained for any amount of time.
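The thin-airfoil argument above can be put in a few lines of code (a sketch under the stated assumptions: lift proportional to angle of attack, and level flight, i.e. lift equal to weight, at the initial angle):

```python
import math

G = 9.81  # m/s^2, standard gravity

def vertical_acceleration(alpha_level_deg, alpha_new_deg):
    """Net vertical acceleration (m/s^2) after a sudden change in angle of attack.

    Thin-airfoil theory gives CL ~ 2*pi*alpha, so lift scales linearly
    with alpha. In level flight lift equals weight (1 g of lift), so the
    net upward acceleration is g * (alpha_new / alpha_level - 1).
    """
    return G * (alpha_new_deg / alpha_level_deg - 1.0)

# Level flight at 5 deg; a rising air pocket raises the angle of attack to 10 deg:
print(vertical_acceleration(5, 10) / G)  # 1.0 -> a full 1 g upward acceleration
```

This is why the response feels so stiff: a change in wind angle too small to notice produces an immediate, large vertical acceleration that quickly cancels the disturbance.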
Zentralblatt MATH — Publications of (and about) Paul Erdös

Zbl.No: 574.10012
Autor: Erdös, Paul; Sárközy, A.; Pomerance, C.
Title: On locally repeated values of certain arithmetic functions. I. (In English)
Source: J. Number Theory 21, 319-332 (1985).
Review: It is shown that, for certain integer-valued arithmetic functions f, the equation n+f(n) = m+f(m) has infinitely many solutions with n\ne m. Let \nu(n) denote the number of distinct prime factors of n. Then, for f = \nu, a lower bound for the number of solutions n,m \leq x is given.
Reviewer: L.Lucht
Classif.: * 11A25 Arithmetic functions, etc. 11N30 Turan theory
Keywords: arithmetic functions; number of distinct prime factors; lower bound; number of solutions
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
# A new measure of tension between experiments

Saroj Adhikari and Dragan Huterer
Department of Physics, University of Michigan, 450 Church St, Ann Arbor, MI 48109-1040; Leinweber Center for Theoretical Physics, University of Michigan, 450 Church St, Ann Arbor, MI 48109-1040
July 25, 2019

###### Abstract

Tensions between cosmological measurements by different surveys or probes have always been important — and are presently much discussed — as they may lead to evidence of new physics. Several tests have been devised to probe the consistency of datasets given a cosmological model, but they often have undesired features such as dependence on the prior volume, or burdensome requirements such as that of near-Gaussian posterior distributions. We propose a new quantity, defined in a similar way as the Bayesian evidence ratio, in which these undesired properties are absent. We test the quantity on simple models with Gaussian and non-Gaussian likelihoods. We then apply it to data from the Planck satellite: we investigate the consistency of ΛCDM model parameters obtained from TT and EE angular power spectrum measurements, as well as the mutual consistency of cosmological parameters obtained from the large-scale and small-scale portions of each measurement, and find no significant discrepancy in the six-dimensional ΛCDM parameter space.

Introduction. The use of Bayesian statistics in cosmology is now commonplace: most of the results on cosmological parameters from cosmic microwave background (CMB) experiments Ade et al. (2016) and large-scale structure (LSS) surveys Abbott et al. (2017) are reported as posterior distributions. In addition, various Bayesian methods are used for model comparison Trotta (2008). Along with the increase in the number of cosmological surveys and the improvement in their precision, a number of tensions between parameters derived from different experiments have been observed.
For example, the Hubble constant measured using the distance ladder in the local universe disagrees with that derived from Planck CMB observations; in the standard six-parameter ΛCDM model, the disagreement is substantial Riess et al. (2016); Bernal et al. (2016). There is also some tension between the measurements of the amplitude of fluctuations and the matter density from weak lensing and those from Planck CMB data (Hildebrandt et al., 2017; Troxel et al., 2017, 2018). As a result, a number of statistics have been developed to compare datasets in cosmology. The primary goal of these statistics is to determine if two datasets are consistent realizations of the same model, that is, with a single set of cosmological parameters (see Seehars et al. (2016); Charnock et al. (2017); Lin and Ishak (2017a) for discussions and comparisons of some of the popular methods). For an alternative approach using hyperparameters, see Hobson et al. (2002); Bernal and Peacock (2018). The Bayesian evidence-based metric of Marshall et al. (2006) has been widely used March et al. (2011); Amendola et al. (2013); Martin et al. (2014); Joudaki et al. (2017); Raveri (2016), but is known to strongly depend on the priors given to parameters. This has led to the use of other measures Seehars et al. (2014); Grandis et al. (2016); Feeney et al. (2018) that do not have the prior-volume dependence, but at the expense of losing the simplicity of an evidence ratio. In this work, we define an evidence-based quantity which fixes the problem of prior-volume dependence and which can be evaluated on an easy-to-interpret scale. Consider two datasets $d_1$ and $d_2$, and let $\theta$ be the parameters of a model. Let us assume that both the datasets and the combination of them can be modeled by a particular ΛCDM realization, and the priors are wide enough to include the parameter posteriors preferred by both the datasets individually.
Most commonly, a Bayesian analysis is used to determine posterior probability distributions for the model parameters $\theta$. Suppose the two datasets separately give two (normalized) posterior distributions

$$p_1(\theta|d_1)=\frac{\mathcal{L}(d_1|\theta)\,\pi(\theta)}{E(d_1)};\qquad p_2(\theta|d_2)=\frac{\mathcal{L}(d_2|\theta)\,\pi(\theta)}{E(d_2)}, \quad (1)$$

where $\mathcal{L}(d|\theta)$ denotes the likelihood of the data d given the model defined by a set of parameters $\theta$, $\pi(\theta)$ is the prior probability of the model parameters, and $E(d)$ is called the marginal likelihood or the evidence,

$$E(d)=\int d\theta\,\mathcal{L}(d|\theta)\,\pi(\theta). \quad (2)$$

We will always use normalized probability density functions for the likelihood ($\mathcal{L}$) and the prior ($\pi$). The posterior for the combination of datasets is:

$$p_{12}(\theta|d_1,d_2)=\frac{\mathcal{L}(d_1,d_2|\theta)\,\pi(\theta)}{E(d_1,d_2)}=\frac{\mathcal{L}(d_1|\theta)\,\mathcal{L}(d_2|\theta)\,\pi(\theta)}{E(d_1,d_2)}, \quad (3)$$

where the second equality assumes that the combined likelihood is approximated well by the product of the two likelihoods. The ratio of the evidences obtained using two different models, called the Bayes Factor, is a widely used measure for model comparison. In this work, we will define a similar ratio to compare two sets of parameter constraints of a model obtained using different datasets or experiments.

Evidence for model parameters. We first define the marginal likelihood (or the evidence) for the maximum likelihood model parameters, $\theta_{\rm ML}$, instead of the usual definition of the evidence for the data, d. We do so as our primary goal is to quantify the level of consistency between model parameters obtained from different datasets or experiments. Analogous to Eq. (2), we define the evidence for the maximum likelihood model parameters

$$E(g(\theta_{\rm ML}))=\int d\theta\,\mathcal{L}(g(\theta_{\rm ML})|\theta)\,\pi(\theta), \quad (4)$$

where, instead of the measured data, we have used the maximum likelihood values of the data realization given the model, $g(\theta_{\rm ML})$; see Figure 1 for an illustration. Here $g(\theta)$ is the function that computes the model prediction for the data given the parameters $\theta$; for example, in the case of the CMB temperature fluctuation data, the model prediction is represented by the theory angular power spectra.
If the likelihood in the above equation is a combination of two experiments, then we can define an evidence for the maximum likelihood parameters obtained through the combination of the two datasets, $E^{\rm com}$. Alternatively, we can define an evidence so that each part of the data vector in the evidence integral uses its own maximum likelihood parameter values, obtained by analyzing each experiment separately, $E^{\rm sep}$. As we will show, the ratio of the two evidences can quantify the tension between the parameter constraints obtained from two different datasets.

Evidence-based dataset comparison. For simplicity, consider two datasets that have independent likelihoods, $\mathcal{L}_i$ and $\mathcal{L}_j$, and let the measured data vector for each experiment be denoted by $d_i$ and $d_j$. Let us further assume that the maximum likelihood parameters of the model, ΛCDM for example, are known for three different cases: the two datasets analyzed separately, $\theta^{\rm ML}_i$ and $\theta^{\rm ML}_j$, and their combined analysis, $\theta^{\rm ML}_{ij}$. Our null hypothesis is that both the datasets are realizations of a single set of parameters, $\theta^{\rm ML}_{ij}$, from the combined fit. The alternative, more complicated, hypothesis is that each of the datasets is a realization of its own set of parameters, $\theta^{\rm ML}_i$ and $\theta^{\rm ML}_j$. Then, using the Bayes theorem similarly to the derivation of the Bayes factor, we get

$$\frac{p(H_1)}{p(H_0)}=\frac{\int d\theta\,\pi(\theta)\,\mathcal{L}_i(g_i(\theta^{\rm ML}_i)|\theta)\,\mathcal{L}_j(g_j(\theta^{\rm ML}_j)|\theta)}{\int d\theta\,\pi(\theta)\,\mathcal{L}_i(g_i(\theta^{\rm ML}_{ij})|\theta)\,\mathcal{L}_j(g_j(\theta^{\rm ML}_{ij})|\theta)}=\frac{E^{\rm sep}_{ij}}{E^{\rm com}_{ij}}\equiv R_{ij}, \quad (5)$$

where the superscripts sep and com in the formula above stand for separate and combined maximum likelihood parameters, respectively. We will first consider a case where the above expression can be evaluated analytically. Consider two likelihoods given by two $N$-dimensional multivariate Gaussian distributions with arbitrary covariance matrices $\Sigma_1$ and $\Sigma_2$,

$$\mathcal{L}_1(\theta)=\frac{1}{\sqrt{\det(2\pi\Sigma_1)}}\exp\left[-\frac{1}{2}(d_1-\theta)^T\Sigma_1^{-1}(d_1-\theta)\right]$$
$$\mathcal{L}_2(\theta)=\frac{1}{\sqrt{\det(2\pi\Sigma_2)}}\exp\left[-\frac{1}{2}(d_2-\theta)^T\Sigma_2^{-1}(d_2-\theta)\right].$$

In this simple example, we have taken $g(\theta)=\theta$ so that the expressions are easy to evaluate analytically.
If we further assume that the prior on each of the parameters is uniform and wide (compared to the constraint on the parameter), we get Petersen and Pedersen (2012)

$$E^{\rm sep}_{12}\propto\int d\theta\,\mathcal{L}_1(d_1|\theta)\,\mathcal{L}_2(d_2|\theta)=\frac{1}{\sqrt{\det[2\pi(\Sigma_1+\Sigma_2)]}}\exp\left[-\frac{1}{2}(d_1-d_2)^T(\Sigma_1+\Sigma_2)^{-1}(d_1-d_2)\right] \quad (6)$$
$$E^{\rm com}_{12}\propto\int d\theta\,\mathcal{L}_1(d_{12}|\theta)\,\mathcal{L}_2(d_{12}|\theta)=\frac{1}{\sqrt{\det[2\pi(\Sigma_1+\Sigma_2)]}}$$

so that

$$R_{12}=\exp\left[-\frac{1}{2}(d_1-d_2)^T(\Sigma_1+\Sigma_2)^{-1}(d_1-d_2)\right], \quad (7)$$

the negative logarithm of which ($-\ln R_{12}$) is the two-experiment index of inconsistency (IOI) defined in Lin and Ishak (2017a). Under these conditions, and assuming that the null hypothesis is true, $-2\ln R_{12}$ is $\chi^2$ distributed with $N$ degrees of freedom (dof) Raveri and Hu (2018) (see their definition and discussion). More generally, the ratio of probabilities of two hypotheses (evidence ratio) is similar to a likelihood-ratio test Kass and Raftery (1995), and the distribution of $-2\ln R$ asymptotically approaches a $\chi^2$ distribution by Wilks' theorem Wilks (1938), with the number of degrees of freedom equal to the number of model parameters, $N$, when comparing two datasets. We will, therefore, evaluate the probability-to-exceed (PTE) of observed $R$ values by taking $-2\ln R$ to be $\chi^2_N$ distributed. For two one-dimensional Gaussian likelihoods with means $d_1$ and $d_2$ and standard deviations $\sigma_1$ and $\sigma_2$, we get $-2\ln R_{12}=(d_1-d_2)^2/(\sigma_1^2+\sigma_2^2)$. The application of our new measure to the marginalized Hubble constant likelihoods from Planck Ade et al. (2016) and the distance ladder Riess et al. (2016), therefore, trivially gives the tension expected from Gaussian statistics Lin and Ishak (2017b). Also, we note that our new measure is related to the tension measure defined in Verde et al. (2013), because in some situations $R$ can be approximated by shifting one of the posterior probability density functions while preserving its shape. However, there can be ambiguity in the process of shifting one or both of the posterior distributions (for non-Gaussian and multimodal distributions), as discussed in Section X.B. of Lin and Ishak (2017a). That ambiguity is removed in our definition, as we reference the likelihood functions directly. We provide an example in Figure 2, in which one of the likelihoods is a simple Gaussian.
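For the analytic Gaussian case, the statistic and its probability-to-exceed take only a few lines of code (a sketch for one-dimensional likelihoods, i.e. Eq. (7) with a single parameter; the numbers below are hypothetical, and the $\chi^2_1$ survival function is written with the complementary error function):

```python
import math

def two_ln_R(d1, sigma1, d2, sigma2):
    """-2 ln R12 for two 1-D Gaussian likelihoods."""
    return (d1 - d2) ** 2 / (sigma1 ** 2 + sigma2 ** 2)

def pte_chi2_1dof(x):
    """P(chi^2_1 > x), i.e. the probability-to-exceed for one degree of freedom."""
    return math.erfc(math.sqrt(x / 2.0))

# Hypothetical pair of measurements of the same quantity:
x = two_ln_R(d1=70.0, sigma1=2.0, d2=67.0, sigma2=1.0)
print(x)                 # 1.8
print(pte_chi2_1dof(x))  # ~0.18, i.e. no significant tension
```

The equivalent Gaussian significance is simply `math.sqrt(x)` sigma in the one-dimensional case.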
The non-Gaussian likelihood is a (normalized) sum of two Gaussians; the two distributions are plotted in Figure 2 (dashed and solid). Because the combined fit is insensitive to the narrow secondary peak, we obtain an $R$ value, without any ambiguity in how to shift the distributions, which shows that the two sets of parameters from the two likelihoods are consistent, as expected. Without the additional peak, the level of consistency is slightly better, a simple verification that the new measure receives a contribution from non-Gaussian features. Next, we calculate $R$ using different pairs of datasets (e.g. TT vs EE) from the Planck satellite, in which case the evidence integral is no longer analytic and has to be evaluated numerically.

Application to Planck data. We use the binned and foreground-marginalized plik_lite likelihood from the Planck collaboration Aghanim et al. (2016), which includes high-multipole data for the TT and EE power spectra. We fix the Planck calibration factor to 1; see Sec. C.6.2 of Aghanim et al. (2016), from which the CMB-only Gaussian plik_lite likelihood is:

$$\ln\mathcal{L}(\tilde{C}^{\rm CMB}_b|C^{\rm th}_b)=-\frac{1}{2}x^T\tilde{\Sigma}^{-1}x-\frac{1}{2}\ln[\det(2\pi\tilde{\Sigma})], \quad (8)$$

where $x=\tilde{C}^{\rm CMB}_b-C^{\rm th}_b$. The binned and marginalized mean $\tilde{C}^{\rm CMB}_b$ and covariance matrix $\tilde{\Sigma}$ are provided by the Planck team. To evaluate the likelihood in Eq. (8), we compute lensed spectra for a given set of parameters using camb Lewis et al. (2000); Howlett et al. (2012) and bin them using the appropriate weights to get $C^{\rm th}_b$. Without low-multipole polarization data, the optical depth to reionization $\tau$ is only weakly constrained and is strongly degenerate with the amplitude of scalar fluctuations $A_s$. To break this degeneracy, we use a low-$\ell$ polarization prior on $\tau$. The evidences we compute are:

$$E^{\rm sep}_{TT,EE}=\int d\theta\,\pi(\theta)\,\mathcal{L}(\{C^{TT}_\ell(\theta^{\rm ML}_T),C^{EE}_\ell(\theta^{\rm ML}_E)\}|\theta) \quad (9)$$
$$E^{\rm com}_{TT,EE}=\int d\theta\,\pi(\theta)\,\mathcal{L}(\{C^{TT}_\ell(\theta^{\rm ML}_C),C^{EE}_\ell(\theta^{\rm ML}_C)\}|\theta),$$

where $\theta^{\rm ML}_T$ and $\theta^{\rm ML}_E$ are obtained individually by using the respective TT and EE data, while $\theta^{\rm ML}_C$ is the set of maximum likelihood model parameters from the combined fit.
We obtain the maximum likelihood values by using a global optimization algorithm, differential_evolution Storn and Price (1997), implemented in scipy Jones et al. (2001). We calculate the evidences using the MultiNest package Feroz and Hobson (2008); Feroz et al. (2009), and quote results and statistical error bars produced by the importance nested sampling method Feroz et al. (2013). For evidence calculations, we take uniform priors on the six cosmological parameters listed in Table 1. The results are shown in Table 2 where, in addition to $R$, we also quote the corresponding probability-to-exceed (p-value) and equivalent Gaussian significance. For the discrepancy between model parameters obtained from the TT and EE spectra, we find no significant tension (Table 2). Previous studies also find no indication of strong discrepancy between these datasets Shafieloo and Hazra (2017), albeit by using more complicated methods, or by directly using the posteriors Lin and Ishak (2017b). We perform another test using the Planck power spectrum data, by splitting the temperature data into large-scale and small-scale multipole samples and calculating $R$ for these two datasets. We again find that the level of inconsistency is small, which agrees with the significance obtained using simulated data sets in Aghanim et al. (2017). Note that, to obtain the values in Table 2, we are using the plik_lite likelihood in which the low-$\ell$ multipoles are not included; inclusion of these large-scale multipoles would likely increase the discrepancy as their amplitude is known to be anomalously low. To estimate the effect of the low-$\ell$ part of the TT likelihood, we implement an approximation to the low-$\ell$ likelihood following Aghanim et al. (2017) (see their Section 3.2 for details), which they have tested to find that the approximation gives similar cosmological parameters compared to the computationally more demanding pixel-space likelihood.
To summarize: each observed low-$\ell$ multipole is drawn from an approximate probability distribution function whose mask-dependent fitting factors are determined for the commander mask. The data input is the mask-deconvolved power spectrum, which we take to be the Planck commander quadratic maximum likelihood (QML) spectra. Any correlation among the low-$\ell$ multipoles, and between them and the plik_lite multipole bins, is ignored. Including the approximate low-$\ell$ likelihood, the discrepancy for the TT multipole split increases somewhat, which again agrees with the significance quoted in Aghanim et al. (2017) obtained using simulations. We finally carry out a similar analysis with the polarization data: we split the Planck EE data in multipole, using the plik_lite likelihood for each multipole range. The large- and small-scale multipole split for the EE spectrum results in consistent ΛCDM parameters, which is expected given the lesser constraining power of the EE spectrum for Planck noise levels.

Summary and Conclusion. We have introduced a new statistic to quantify tension between experiments. The statistic is based upon the Bayesian evidence, and has the advantages of not depending on the prior volumes of the parameters and of being straightforward to apply to multiparameter, non-Gaussian likelihood distributions. We have shown that our new measure reduces to the expected discrepancy measure for Gaussian distributed posteriors, and gives sensible results in the non-Gaussian tests that we performed. Applying the new statistic to the Planck power spectrum data, we find that the cosmological parameters obtained from the TT and EE spectra are consistent, and that the level of discrepancy of the parameters obtained from the TT spectrum split into smaller and larger scales is slightly larger. We have limited our application to just the Planck data in this work. It is worthwhile to apply the new measure to comparing the Planck constraints with weak-lensing constraints Abbott et al. (2017) and smaller-scale CMB constraints Aylor et al. (2017).
It will also be useful to consider using the statistic in the context of ΛCDM extensions. Further, we have only carefully investigated the ratio for comparing two datasets. A straightforward application of the ratio to more than two datasets might be possible by evaluating $-2\ln R$ as $\chi^2$ distributed with the appropriate number of degrees of freedom, but a detailed investigation of this possibility and application to other cosmological datasets is left for future study.

Acknowledgments. The authors are supported by NASA under contract 14-ATP14-0005. DH is also supported by DOE under Contract No. DE-FG02-95ER40899. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. We thank Marius Millea for providing the necessary coefficients and example code to implement the low-$\ell$ approximated likelihood. We are grateful to Wayne Hu, Marco Raveri, Vivian Miranda, Weikang Lin, Mustapha Ishak-Boushaki, Pavel Motloch and Michael Hobson for insightful comments.
## Calculus: Early Transcendentals 8th Edition

Published by Cengage Learning

# Chapter 2 - Section 2.3 - Calculating Limits Using the Limit Laws - 2.3 Exercises: 15

#### Answer

$\lim\limits_{t \to -3}\dfrac{t^{2}-9}{2t^{2}+7t+3}=\dfrac{6}{5}$

#### Work Step by Step

Apply direct substitution first: $\lim\limits_{t \to -3}\dfrac{t^{2}-9}{2t^{2}+7t+3}=\dfrac{(-3)^{2}-9}{2(-3)^{2}+7(-3)+3}=\dfrac{0}{0}$ $(Indeterminate$ $Form)$

Factor the numerator: $t^{2}-9=(t-3)(t+3)$

Factor the denominator (multiply by 2, factor, then divide by 2): $2t^{2}+7t+3=\dfrac{4t^{2}+14t+6}{2}=\dfrac{(2t+6)(2t+1)}{2}=(t+3)(2t+1)$

Evaluate the limit: $\lim\limits_{t \to -3}\dfrac{t^{2}-9}{2t^{2}+7t+3}=\lim\limits_{t \to -3}\dfrac{(t-3)(t+3)}{(t+3)(2t+1)}=\lim\limits_{t \to -3}\dfrac{t-3}{2t+1}=\dfrac{-3-3}{2(-3)+1}=\dfrac{-6}{-5}=\dfrac{6}{5}$
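A quick numerical sanity check of the result (a sketch: direct substitution at t = -3 gives 0/0, but values nearby approach 6/5):

```python
import math

def f(t):
    # The original rational function, undefined at t = -3 and t = -1/2.
    return (t**2 - 9) / (2*t**2 + 7*t + 3)

# Approach t = -3 from both sides; both values sit close to 6/5 = 1.2:
for t in (-3.001, -2.999):
    print(f(t))

assert math.isclose(f(-3 + 1e-9), 6/5, rel_tol=1e-6)
```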
# zbMATH — the first resource for mathematics

Models of superstable Horn theories. (English. Russian original) Zbl 0597.03017
Algebra Logic 24, 171-210 (1985); translation from Algebra Logika 24, No. 3, 278-326 (1985).

The main aim of this paper is the development of a structural theory for models of complete Horn theories with non-maximal spectra. A termal lemma, proved in the paper, makes it possible to establish the existence of a prime model over any independent set of models of a complete Horn theory with non-maximal spectrum. It is proved that any model may be decomposed into submodels of smaller depths, which allows a characterization of models of Horn theories with non-maximal spectra. Lower and upper bounds on the spectra of complete Horn theories are found; this gives an approximate characterization of the spectrum of a complete Horn theory if its depth is 1 or $>\omega$.

Reviewer: S.R.Kogalovskij

##### MSC:
03C45 Classification theory, stability and related concepts in model theory
03C35 Categoricity and completeness of theories
## What Is the Commodity Channel Index (CCI)? The Commodity Channel Index​ (CCI) is a momentum-based oscillator used to help determine when an investment vehicle is reaching a condition of being overbought or oversold. Developed by Donald Lambert, this technical indicator assesses price trend direction and strength, allowing traders to determine if they want to enter or exit a trade, refrain from taking a trade, or add to an existing position. In this way, the indicator can be used to provide trade signals when it acts in a certain way. ### Key Takeaways • The Commodity Channel Index (CCI) is a technical indicator that measures the difference between the current price and the historical average price. • When the CCI is above zero, it indicates the price is above the historic average. Conversely, when the CCI is below zero, the price is below the historic average. • The CCI is an unbounded oscillator, meaning it can go higher or lower indefinitely. For this reason, overbought and oversold levels are typically determined for each individual asset by looking at historical extreme CCI levels where the price reversed from. ## The Formula for the Commodity Channel Index (CCI) Is:  \begin{aligned} &\text{CCI} = \frac{ \text{Typical Price} - \text{MA} }{ .015 \times \text{Mean Deviation} } \\ &\textbf{where:}\\ &\text{Typical Price} = \textstyle{ \sum_{i=1}^{P} ( ( \text{High} + \text{Low} + \text{Close} ) \div 3 ) } \\ &P = \text{Number of periods} \\ &\text{MA} = \text{Moving Average} \\ &\text{Moving Average} = ( \textstyle{ \sum_{i=1}^{P} \text{Typical Price} } ) \div P \\ &\text{Mean Deviation} = ( \textstyle{ \sum_{i=1}^{P} \mid \text{Typical Price} - \text{MA} \mid } ) \div P \\ \end{aligned} ## How to Calculate the Commodity Channel Index (CCI) 1. Determine how many periods your CCI will analyze. Twenty is commonly used. Fewer periods result in a more volatile indicator, while more periods will make it smoother. For this calculation, we will assume 20 periods. 
Adjust the calculation if using a different number. 2. In a spreadsheet, track the high, low, and close for 20 periods and compute the typical price. 3. After 20 periods, compute the moving average (MA) of the typical price by summing the last 20 typical prices and dividing by 20. 4. Calculate the mean deviation by subtracting the MA from the typical price for the last 20 periods. Sum the absolute values (ignore minus signs) of these figures and then divide by 20. 5. Insert the most recent typical price, the MA, and the mean deviation into the formula to compute the current CCI reading. 6. Repeat the process as each new period ends. ## What Does the Commodity Channel Index (CCI) Tell You? The CCI is primarily used for spotting new trends, watching for overbought and oversold levels, and spotting weakness in trends when the indicator diverges with price. When the CCI moves from negative or near-zero territory to above 100, that may indicate the price is starting a new uptrend. Once this occurs, traders can watch for a pullback in price followed by a rally in both price and the CCI to signal a buying opportunity. The same concept applies to an emerging downtrend. When the indicator goes from positive or near-zero readings to below -100, then a downtrend may be starting. This is a signal to get out of longs or to start watching for shorting opportunities. Despite its name, the CCI can be used in any market and is not just for commodities. Overbought or oversold levels are not fixed since the indicator is unbound. Therefore, traders look to past readings on the indicator to get a sense of where the price reversed. For one stock, it may tend to reverse near +200 and -150. Another commodity, meanwhile, may tend to reverse near +325 and -350. Zoom out on the chart to see lots of price reversal points, and the CCI readings at those times. There are also divergences—when the price is moving in the opposite direction of the indicator. 
If the price is rising and the CCI is falling, this can indicate a weakness in the trend. While divergence is a poor trade signal, since it can last a long time and doesn't always result in a price reversal, it can be good for at least warning the trader that there is the possibility of a reversal. This way, they can tighten stop loss levels or hold off on taking new trades in the price trend direction. ## The Commodity Channel Index (CCI) vs. the Stochastic Oscillator Both of these technical indicators are oscillators, but they are calculated quite differently. One of the main differences is that the Stochastic Oscillator is bound between zero and 100, while the CCI is unbounded. Due to the calculation differences, they will provide different signals at different times, such as overbought and oversold readings. ## Limitations of Using the Commodity Channel Index (CCI) While often used to spot overbought and oversold conditions, the CCI is highly subjective in this regard. The indicator is unbound and, therefore, prior overbought and oversold levels may have little impact in the future. The indicator is also lagging, which means at times it will provide poor signals. A rally to 100 or -100 to signal a new trend may come too late, as the price has had its run and is starting to correct already. Such incidents are called whipsaws; a signal is provided by the indicator but the price doesn't follow through after that signal and money is lost on the trade. If not careful, whipsaws can occur frequently. Therefore, the indicator is best used in conjunction with price analysis and other forms of technical analysis or indicators to help confirm or reject CCI signals.
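The six calculation steps listed earlier translate almost directly into code. The function below is a minimal sketch (the function name and the assumption that full price history is passed in as lists are mine, not part of the article):

```python
def cci(highs, lows, closes, periods=20):
    """Commodity Channel Index over the most recent `periods` bars,
    following the calculation steps above. Inputs are equal-length lists."""
    if len(closes) < periods:
        raise ValueError("need at least `periods` bars of history")
    # Step 2: typical price for each bar
    typical = [(h + l + c) / 3 for h, l, c in zip(highs, lows, closes)]
    window = typical[-periods:]
    # Step 3: moving average of the typical price
    ma = sum(window) / periods
    # Step 4: mean deviation (average absolute distance from the MA)
    mean_dev = sum(abs(tp - ma) for tp in window) / periods
    # Step 5: most recent typical price, MA, and mean deviation into the formula
    return (window[-1] - ma) / (0.015 * mean_dev)
```

Step 6 (repeat as each new period ends) then amounts to calling `cci` again whenever a new bar arrives.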
@robertmerritt 2022-07-07T09:47:27.000000Z

# EVERY STUDENT NEEDS TO STOP MAKING these 20 COMMON GRAMMATICAL ERRORS

Even after long periods of formal schooling, a lot of students make mistakes. I always make mistakes in algebra; many other students make them in grammar, physics, chemistry, et cetera. It is genuinely hard. Phrases and words that seem fine to you can still look like garbage once they are down on paper, and when you are self-editing you can easily miss your own grammatical errors. The question arises of how you will improve your grammatical structure and catch mistakes even when you don't know you made them. To see which common mistakes you might be making, you can read this post. It's fine; we all make mistakes at some stage in our lives. When I write my essay, I try to keep a note of the errors that most often get corrected. You can bookmark this page to keep yourself aware of your mistakes again and again.

They're vs. their vs. there: the contraction used for "they are" is easily confused with the possessive that refers to something belonging to a person. For example: "I heard their food is great, and the best part is that they're going there."

You're vs. your: the only difference between the two forms is really being something as opposed to possessing something. "You're fast; you made the track in less than a minute." See the difference? "You're" is a contraction, while "your" is possessive. Contractions and possessives should be used with caution; they routinely confuse even the best writers.

It's vs. its: "it's" is a contraction, while "its" is a possessive. Most people confuse them because "its" ends in an s even without an apostrophe. You can use Ctrl+F to find each occurrence and check that the possessive form is the one you meant. When I see this mistake in the wild, it makes me crazy.

Incomplete comparisons: when you are trying to assert something or draw a comparison, you need to make clear what that something else is.
Otherwise, the reader cannot understand the meaning of the comparison.

Passive voice: if your sentence has an object, passive voice can sneak up on you. The passive voice is when the object of a sentence is placed at the beginning rather than the end. Writing in the passive voice sounds vague and weak, because the sentence no longer has a subject acting directly. An essay editing service can play a significant part in catching such mistakes while proofreading your document. However, such services should not be leaned on like clockwork.

Subject and object placement can significantly affect sentence structure. Dangling modifiers are another significant mistake people make. A dangling modifier is a word or expression that modifies a word not clearly expressed in the sentence. A modifier describes, explains, or gives more detail about an idea. "Having finished" states an action but does not name the doer of that action. In English sentences, the doer should be the subject of the main clause that follows. In a sentence like "Having finished the assignment, Jill turned on the TV," that subject is Jill: she is clearly the one doing the action ("having finished"), so the sentence does not have a dangling modifier.

Collective nouns: our teacher told us that a business is singular, not plural. A business should therefore be referred to as "it" and not "they." The confusion is easy to understand, since in everyday English we tend to personify a brand.

In to vs. into: students also commonly mistype "in to" versus "into." If you want to always get it right, remember that "into" is used to express motion or to connect words with one another, while "in" belongs with the verb in phrases such as "call in to a game."
You can also hire essay writers for additional assistance.

Loose vs. lose: these two words are often confused. "Lose" is a verb that means you cannot find a thing, while "loose" is an adjective that describes something not firmly fixed, as stated in dictionaries.

Affect vs. effect: people often mix up these two words when they discuss the changes that followed from some event.

Apostrophe placement in don'ts and do's: the apostrophes sit in different places in the two words, and people commonly put an apostrophe in the wrong spot.

I vs. me: a large number of language learners struggle to tell the difference between "I" and "me" when using them in phrases or sentences, and it can lead to various difficulties.

Comma splices and run-on sentences: these are another significant mistake made by students. A run-on sentence joins two independent clauses without a relevant conjunction or proper punctuation. A comma splice is much the same; the only difference is that in a comma splice, a comma separates the two independent clauses without any conjunction at all.

Pronoun agreement: people sometimes forget that pronouns need to agree with each corresponding noun. A singular noun should take a singular pronoun.

Possessive apostrophes: apostrophes are often used to express ownership, but there are various possessives that do not require apostrophes at all.

Subject-verb agreement: when you write a sentence in the present tense, there should always be agreement between the verbs and the subjects. The verb should be singular if the subject is singular.
I hope this blog has helped you recognize your writing mistakes in a proper manner, and that you will now be able to write an effective essay free of any potential errors.
finding nearest perfect square for a fractional number

Is it possible to have a clear definition for the nearest perfect square number for a fractional number? For example, let us consider a number 0.004. What is another decimal number closest to it, that is also a perfect square? Is it 0.0025? We know 0.0025 is a perfect square (0.05*0.05) but is it the closest one? Is there any way to find out? (PS: please suggest some tags for questions like these)

• How do you actually define perfect square for real/rational numbers. In the case of $\mathbb R$, every number $x \ge 0$ is a square, in case of $\mathbb Q$, the squares are dense. – martini Dec 5 '12 at 7:39
• Perhaps the OP is restricting attention to terminating decimals. But the squares of these are also dense. – André Nicolas Dec 5 '12 at 7:42
• Yes I am only talking about terminating decimals, otherwise we will probably have infinite choices – user13267 Dec 5 '12 at 8:28

Note that $$0.004=\frac{4}{1000}.$$ Now, let's "zoom in" one digit. Change the denominator to $10000$, and bring the numerator to the nearest perfect square integer to $40$ - that'd be $36$. $$0.0036=\frac{36}{10000}=\left(\frac{6}{100}\right)^2.$$ This is closer to $0.004$ than $0.0025$, for sure. But is it the closest? Let's try it again. What if we look for stuff of the form $(x/1000)^2$? We need the closest perfect square to $4000$ - that's $63^2=3969$. Now we have $$0.003969=\frac{3969}{1000000}=\left(\frac{63}{1000}\right)^2.$$ That's even closer. Let's go one deeper... $$0.00399424=\frac{399424}{100000000}=\left(\frac{632}{10000}\right)^2.$$ Can you see what's happening here?

• @user13267 Exactly. This is what André meant above when he said that these squares are "dense": if we pick a small number $\epsilon$, no matter how tiny, there will always be some number of the form $(x/10^n)^2$ in the interval $0.004-\epsilon$ to $0.004+\epsilon$. – Alexander Gruber Dec 5 '12 at 9:40
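The "zoom in one digit" procedure in the answer above is easy to mechanize. A small sketch (the function name is mine):

```python
from math import isqrt

def nearest_square(target, digits):
    """Nearest number of the form (x / 10**digits)**2 to `target`,
    mirroring the zoom-in procedure above."""
    # write target as an integer over 10**(2*digits)
    scaled = round(target * 10 ** (2 * digits))
    x = isqrt(scaled)  # floor of the square root
    # isqrt floors, so check both integer candidates around the true root
    best = min((x, x + 1), key=lambda k: abs(k * k - scaled))
    return (best / 10 ** digits) ** 2
```

With `digits` set to 2, 3, and 4 this recovers the 0.0036, 0.003969, and 0.00399424 values from the worked steps (up to floating-point rounding); increasing `digits` gets arbitrarily close, which is the "density" point made in the final comment.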
# Market awaits $ direction

## MARKET NEWS

The US dollar should gain if the US economy recovers. But the FX market is awaiting signs of this, says Steven Englander. The dollar has traded in a relatively tight range against the euro in recent days, with most trading between $0.970 and $0.985. Relative to the extent to which equity and bond markets have moved and by which economic data has generally under-performed expectations, the tightness of the range has been extraordinary. Three factors account for this. First, the US, euro area a
# First order differential equation help 1. May 8, 2008 ### rppearso I have a problem solving a first order differential equation: dT/dP - C2/T = C1 Where C2 and C1 are just constants, the differential equations book I have does not address the situation of 1/T. I am trying to develop my own integrating factor but it would be nice for a little guidance. 2. May 8, 2008 ### Defennder $$\frac{1}{T}=T^{-1}$$ You can then express the above as: $$\frac{dT}{dP} - C_{1} \ = C_{2}T^{-1}$$ which would then be in the form of a http://en.wikipedia.org/wiki/Bernoulli_differential_equation" [Broken]. Last edited by a moderator: May 3, 2017 3. May 9, 2008 4. May 9, 2008 ### rppearso I need to learn how to use the little equation editor that everyone else uses it makes equations way easier to read. 5. May 9, 2008 ### Defennder The equation editor I use here is in-built into the forums. It's called LaTeX. You can learn to use it rather easily. Click on the equations and download the latex reference PDF files. If you want to learn how to input a particular maths expression you see, just click on it to see how it's done.
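For completeness: whichever route you take, note that the right-hand side depends on T alone, so the equation is also directly separable (a sketch, using the thread's constants and assuming $C_1 \neq 0$):

```latex
\frac{dT}{dP} = C_1 + \frac{C_2}{T} = \frac{C_1 T + C_2}{T}
\;\Longrightarrow\;
\int \frac{T\,dT}{C_1 T + C_2} = \int dP
\;\Longrightarrow\;
\frac{T}{C_1} - \frac{C_2}{C_1^{2}}\,\ln\left|C_1 T + C_2\right| = P + K
```

where K is the integration constant; this gives T(P) implicitly.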
# Can polyatomic ions (CO₃, PO₄, SO₄, NO₃) be considered conjugated systems?

From my perspective these resonance structures allow these specific polyatomic ions to act as donor-acceptor molecules. Many donor-acceptor molecules also tend to be conjugated systems because they have chains of alternating conjugated π orbitals. So does this imply that these ions (due to their resonance structures) can act as though they had a conjugated system, or am I making too big of an assumption? Curious about others' perspectives/if I'm totally off my rocker.

• I think you're a bit off ;) To my understanding, the IUPAC Gold Book restricts the term conjugation to organic chemistry. I'm highly biased and too much an organic chemist to disagree :D – Klaus-Dieter Warzecha Jan 28 '14 at 20:42
• @Klaus Warzecha: If the OP substituted "delocalized" in place of "conjugated", would that make for a better statement? – ron Jan 28 '14 at 22:40
• I appreciate the thoughts guys! I think what I'm hunting for is what lies in the stricter definitions of what makes something conjugated vs what makes something resonant. Though in both cases I would consider the electrons delocalized as @ron mentioned. Thanks guys! – Sean Jan 29 '14 at 23:01

Consider nitrate. Inorganikers would say $\ce{N^{5+}}$, $\ce{^{-}O-N(=O)2}$, with five bonds to the nitrogen. The negative charge 1,3-shifts around all three oxygens. Organikers would see it as $\ce{N^{3+}}$, $\ce{[^{-}O{}-]_2N^{+}=O}$, with four bonds to the nitrogen. 1,3 shifts, etc. Is nitrate ever a bidentate ligand? YES! But it is just another resonance structure.
The third-order nonlinear response functions for infrared vibrational spectroscopy are often applied to a weakly anharmonic vibration. For high frequency vibrations in which only the $$\nu = 0$$ state is initially populated, when the incident fields are resonant with the fundamental vibrational transition, we generally consider diagrams involving the system eigenstates $$\nu = 0, 1$$ and 2, and which include v=0-1 and v=1-2 resonances. Then, there are three distinct signal contributions: Note that for the $$S_I$$ and $$S_{II}$$ signals there are two types of contributions: two diagrams in which all interactions are with the v=0-1 transition (fundamental) and one diagram in which there are two interactions with v=0-1 and two with v=1-2 (the overtone). These two types of contributions have opposite signs, which can be seen by counting the number of bra side interactions, and have emission frequencies of $$\omega_{10}$$ or $$\omega_{21}$$. Therefore, for harmonic oscillators, which have $$\omega_{10} = \omega_{21}$$ and $$\sqrt{2}\mu_{10}=\mu_{21}$$, we can see that the signal contributions should destructively interfere and vanish. This is a manifestation of the finding that harmonic systems display no nonlinear response. Some deviation from harmonic behavior is required to observe a signal, such as vibrational anharmonicity $$\omega_{10} \ne \omega_{21}$$, electrical anharmonicity $$\sqrt{2}\mu_{10}\ne\mu_{21}$$, or level-dependent damping $$\Gamma_{10}\ne\Gamma_{21}$$ or $$\Gamma_{00}\ne\Gamma_{11}$$.
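The cancellation can be made explicit by tallying transition dipoles: when $$\omega_{10}=\omega_{21}$$ the resonance factors of all three diagrams coincide, so the signal amplitude is proportional to (a sketch, with the relative sign from counting bra-side interactions as above):

```latex
S \;\propto\; \underbrace{2\,\mu_{10}^{4}}_{\text{two fundamental diagrams}}
\;-\; \underbrace{\mu_{10}^{2}\,\mu_{21}^{2}}_{\text{overtone diagram}}
\;=\; 2\,\mu_{10}^{4} - \mu_{10}^{2}\left(\sqrt{2}\,\mu_{10}\right)^{2} \;=\; 0
```

so any anharmonicity that breaks one of these equalities leaves a nonzero residual signal.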
# What is the mass of one mole of water?

18 grams

The mass of one mole of any atom/molecule is equal to its atomic/molecular mass in grams. The molecular formula of water is H₂O. The atomic mass of H = 1; the atomic mass of O = 16. In a sense, the formula means H + H + O, so 1 + 1 + 16 = 18. Therefore, the mass of one mole of water = 18 grams.
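The arithmetic above can be checked in a couple of lines (rounded integer atomic masses, as in the answer):

```python
atomic_mass = {"H": 1, "O": 16}  # g/mol, rounded values used above

# H2O = 2 hydrogens + 1 oxygen
molar_mass_water = 2 * atomic_mass["H"] + atomic_mass["O"]
print(molar_mass_water)  # 18
```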
+0

# If three runners receive medals how many ways can the three medalists be chosen out of a field of 28?

Is it 28 x 27 x 26?

peacemantle May 2, 2015

#2 +92193 +10

There are 28 ways for the first, 27 ways for the second and 26 ways for the third; that is 28*27*26. BUT it doesn't matter what order they are chosen, so you have to divide by 3! = 6:

$$\frac{28\times 27\times 26}{6} = 3276$$

This is how I would normally do it: 28C3

$$\binom{28}{3} = \frac{28!}{3!\,(28-3)!} = 3276$$

Melody May 2, 2015

#1 +520 +8

However, as a challenge extension you could extend this to include ties, e.g., from 2 up to 28 ties for first, up to 27 ties for second, etc. I'm not sure how to do this, though, and it sounds complicated.

#3 +520 +5

Yes, you are right, Melody. The question doesn't say 1st, 2nd and 3rd places. Maybe there are 3 runners who are to be awarded a medal for being a comically costumed competitor for St Patrick's Day, or some such.
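Melody's two computations are easy to verify in code (a quick sketch using the standard library):

```python
from math import comb, factorial

ordered = 28 * 27 * 26               # ordered picks: 1st, 2nd, 3rd all distinct
medalists = ordered // factorial(3)  # order doesn't matter: divide out the 3! orderings
print(medalists)  # 3276
```

`math.comb(28, 3)` returns the same 3276 directly, matching the 28C3 calculation.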
# What will come in place of question mark '?' in the following question? 32% of (14% of 1200) = 64% of ?

1. 114
2. 84
3. 72
4. 64
5. 1200

Option 2 : 84

## Detailed Solution

Given expression: 32% of (14% of 1200) = 64% of ?

$$\frac{32}{100} \times \left(\frac{14}{100} \times 1200\right) = \frac{64}{100} \times \;?$$

$$\frac{32}{100} \times (14 \times 12) = \frac{64}{100} \times \;?$$

$$\frac{32}{100} \times 168 = \frac{64}{100} \times \;?$$

⇒ 32 × 168 = 64 × ?

$$? = \frac{32 \times 168}{64} = 84$$

⇒ ? = 84
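A quick numeric check of the solution (values taken straight from the problem):

```python
lhs = 0.32 * (0.14 * 1200)  # 32% of (14% of 1200), i.e. 32% of 168
x = lhs / 0.64              # the value whose 64% equals lhs
```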
Draft of a fired heater

I'm a chemical engineer learning about fired heaters. I'm learning about draft and I picked this out of a textbook. For example, drafts:

• Just above the burners = $-0.3\text{ in H}_2\text{O}$
• At the bridgewall = $-0.05\text{ in H}_2\text{O}$
• At the breeching = $-0.6\text{ in H}_2\text{O}$
• Above the stack damper = close to zero.

Why is the draft at the breeching more negative than the draft just above the burners and at the bridgewall? Shouldn't the draft become less negative when moving up the furnace?

The draft will decrease as you move up the radiant section, since the column of hot gas remaining above is shrinking, with very little pressure drop. The convection section, on the other hand, will have significant pressure drop for the flue gas flow with little elevation change to move through it. The negative pressure above the convection section exit must be at a higher negative pressure than the inlet due to this pressure drop. A more helpful way to think about this might be to envision a 35 foot tall radiant section operating at 1800F. The base draft is -0.40 In. W.C., and with the elevation change a rough decrease in draft expected is 0.10 In. W.C. for every 10 feet of elevation change, so let's say the draft at the top of the radiant section, or bridgewall, is now -0.05 In. W.C. (-0.40 plus 0.10/10 feet * 35 feet). Just above this starts the convection section with several rows of close tolerance tubes. At the operating conditions the flue gas must all flow through these spaces, which will increase its velocity and cause several changes in direction to the flow. The convection section is only 10' deep here but contains 20 tube banks, which causes a differential pressure of 0.30 In. W.C. from inlet to exit. Here we know what the pressure is just below the tubes (-0.05 In. W.C.) and we know the final stack pressure on the downstream side of the convection tubes must be enough to overcome the loss in draft across the tubes.
The loss in draft across the tubes is the elevation change plus the flowing pressure drop or 0.40 In. W.C. (0.30 + 0.10/10 feet *10 feet). The downstream side of the convection section or breeching section must then be -0.35 In. W.C. (-0.05 + 0.40 = -X). I do take issue with the stack damper listing. The draft at the end or top of the stack is close to zero however the draft at the stack damper is very unlikely to be close to zero unless the damper is at the top of the stack which is essentially never done. The damper adds pressure drop to the exiting flue gas, the elevation and flue gas temperature is where the draft is created. If the breeching section is at -0.35 In. W.C. and the stack damper is fully open the stack must be around 35 feet (0.10 In W.C./10 feet * 35 feet). If the damper were to be closed down to say only 25% open the downstream side of the damper would still be at -0.35 In W.C. however the upstream side would now be lower due to the pressure drop added by the damper ( a rough estimate might be for the upstream damper draft to now be -0.2 In W.C. with the damper 25% closed).
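The answer's rule of thumb (roughly 0.10 In. W.C. of draft given up per 10 feet of rise, with friction neglected in the radiant section) can be sketched as follows; the function name and the linearity assumption are mine:

```python
DRAFT_CHANGE_PER_FT = 0.10 / 10  # In. W.C. per foot of elevation (rule of thumb above)

def radiant_draft_at_top(base_draft_in_wc, height_ft):
    """Draft at the top of a radiant section of the given height,
    neglecting the (small) flowing pressure drop in that section."""
    return base_draft_in_wc + DRAFT_CHANGE_PER_FT * height_ft
```

For the 35-foot example above, `radiant_draft_at_top(-0.40, 35)` reproduces the -0.05 In. W.C. bridgewall draft.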
# How to create private BlueSpice for MediaWiki website?

I am creating a private wiki that should only be visible to users that sign in. We are using MediaWiki with the BlueSpice extension. Thus far, I found that by default "read" access is provided to everyone, logged in or not. Therefore, I explicitly added read access in the WikiAdmin - Permission manager to the group User, by checking the Read line in the Namespace column. Having done this, the pages now say:

Login required

So this is good; however, there are still details visible. For example, the special page http://wiki.domain.com/index.php/Special:RecentChanges shows the names of pages and the log entry for each edit. Additionally, the names of users are revealed, along with their edit history.

The solution is to go to the group * in the WikiAdmin - Permission manager and uncheck Read under the Wiki column adjacent to the Namespace column mentioned in the question.
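For reference, outside of BlueSpice's Permission manager UI, the same lockdown is usually expressed in LocalSettings.php with MediaWiki's core permission settings. This is a sketch, not BlueSpice-specific, and the whitelist entries are examples:

```php
// LocalSettings.php -- make the wiki private at the MediaWiki level
$wgGroupPermissions['*']['read'] = false;  // anonymous users: no read access
$wgGroupPermissions['*']['edit'] = false;  // anonymous users: no edits
// Pages that must stay reachable so users can actually log in
$wgWhitelistRead = [ 'Special:UserLogin', 'Special:PasswordReset' ];
```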
Reference documentation for deal.II version 8.5.0

GridIn< dim, spacedim > Class Template Reference

#include <deal.II/grid/grid_in.h>

Public Types

enum Format { Default, unv, ucd, abaqus, dbmesh, xda, msh, netcdf, tecplot, vtk }

Public Member Functions

GridIn ()
void attach_triangulation (Triangulation< dim, spacedim > &tria)
void read (std::istream &in, Format format=Default)
void read (const std::string &in, Format format=Default)
void read_ucd (std::istream &in, const bool apply_all_indicators_to_manifolds=false)
void read_abaqus (std::istream &in, const bool apply_all_indicators_to_manifolds=false)

Static Public Member Functions

static std::string default_suffix (const Format format)
static Format parse_format (const std::string &format_name)
static std::string get_format_names ()
static ::ExceptionBase & ExcUnknownSectionType (int arg1)
static ::ExceptionBase & ExcUnknownElementType (int arg1)
static ::ExceptionBase & ExcUnknownIdentifier (std::string arg1)
static ::ExceptionBase & ExcNoTriangulationSelected ()
static ::ExceptionBase & ExcInvalidVertexIndex (int arg1, int arg2)
static ::ExceptionBase & ExcInvalidVertexIndexGmsh (int arg1, int arg2, int arg3)
static ::ExceptionBase & ExcInvalidDBMeshFormat ()
static ::ExceptionBase & ExcInvalidDBMESHInput (std::string arg1)
static ::ExceptionBase & ExcDBMESHWrongDimension (int arg1)
static ::ExceptionBase & ExcInvalidGMSHInput (std::string arg1)
static ::ExceptionBase & ExcGmshUnsupportedGeometry (int arg1)
static ::ExceptionBase & ExcGmshNoCellInformation ()

Static Protected Member Functions

static void debug_output_grid (const std::vector< CellData< dim > > &cells, const std::vector< Point< spacedim > > &vertices, std::ostream &out)

Protected Attributes

SmartPointer< Triangulation< dim, spacedim >, GridIn< dim, spacedim > > tria

Static Private Member Functions

static void skip_empty_lines (std::istream &in)
static void skip_comment_lines (std::istream &in, const char comment_start)
static void parse_tecplot_header (std::string &header, std::vector< unsigned int > &tecplot2deal, unsigned int &n_vars, unsigned int &n_vertices, unsigned int &n_cells, std::vector< unsigned int > &IJK, bool &structured, bool &blocked)

Private Attributes

Format default_format

Detailed Description

template<int dim, int spacedim = dim>
class GridIn< dim, spacedim >

This class implements an input mechanism for grid data. It allows to read a grid structure into a triangulation object. At present, UCD (unstructured cell data), DB Mesh, XDA, Gmsh, Tecplot, NetCDF, UNV, VTK, and Cubit are supported as input format for grid data. Any numerical data other than geometric (vertex locations) and topological (how vertices form cells, faces, and edges) information is ignored, but the readers for the various formats generally do read information that associates material ids or boundary ids to cells or faces (see this and this glossary entry for more information).

Note: Since deal.II only supports line, quadrilateral and hexahedral meshes, the functions in this class can only read meshes that consist exclusively of such cells. If you absolutely need to work with a mesh that uses triangles or tetrahedra, then your only option is to convert the mesh to quadrilaterals and hexahedra. A tool that can do this is tethex, available here.

The mesh you read will form the coarsest level of a Triangulation object. As such, it must not contain hanging nodes or other forms of adaptive refinement, or strange things will happen if the mesh represented by the input file does in fact have them. This is due to the fact that most mesh description formats do not store neighborship information between cells, so the grid reading functions have to regenerate it. They do so by checking whether two cells have a common face.
If there are hanging nodes in a triangulation, adjacent cells have no common (complete) face, so the grid reader concludes that the adjacent cells have no neighbors along these faces and must therefore be at the boundary. In effect, an internal crack of the domain is introduced this way. Since such cases are very hard to detect (how is GridIn supposed to decide whether a place where the faces of two small cells coincide with the face of a larger cell is in fact a hanging node associated with local refinement, or is indeed meant to be a crack in the domain?), the library does not make any attempt to catch such situations, and you will get a triangulation that probably does not do what you want. If your goal is to save and later read again a triangulation that has been adaptively refined, then this class is not your solution; rather take a look at the PersistentTriangulation class.

To read grid data, the triangulation to be filled has to be empty. Upon calling the functions of this class, the input file may contain only lines in one dimension; lines and quads in two dimensions; and lines, quads, and hexes in three dimensions. All other cell types (e.g. triangles in two dimensions, triangles or tetrahedra in 3d) are rejected. (Here, the "dimension" refers to the dimensionality of the mesh; it may be embedded in a higher dimensional space, such as a mesh on the two-dimensional surface of the sphere embedded in 3d, or a 1d mesh that discretizes a line in 3d.) The result will be a triangulation that consists of the cells described in the input file, and to the degree possible with material indicators and boundary indicators correctly set as described in the input file.

Note: You cannot expect vertex and cell numbers in the triangulation to match those in the input file.
(This is already clear based on the fact that we number cells and vertices separately, whereas this is not the case for some input file formats; some formats also do not require consecutive numbering, or start numbering at indices other than zero.)

Supported input formats

At present, the following input formats are supported:

• UCD (unstructured cell data) format: this format is used for grid input as well as data output. If there are data vectors in the input file, they are ignored, as we are only interested in the grid in this class. The UCD format requires the vertices to be in the following ordering: in 2d

      3-----2
      |     |
      |     |
      |     |
      0-----1

and in 3d

        7-------6        7-------6
       /|       |       /       /|
      / |       |      /       / |
     /  |       |     /       /  |
    3   |       |    3-------2   |
    |   4-------5    |       |   5
    |  /       /     |       |  /
    | /       /      |       | /
    |/       /       |       |/
    0-------1        0-------1

Note, that this ordering is different from the deal.II numbering scheme, see the Triangulation class. The exact description of the UCD format can be found in the AVS Explorer manual (see http://www.avs.com). The UCD format can be read by the read_ucd() function.

• DB mesh format: this format is used by the BAMG mesh generator (see http://www-rocq.inria.fr/gamma/cdrom/www/bamg/eng.htm). The documentation of the format in the BAMG manual is very incomplete, so we don't actually parse many of the fields of the output since we don't know their meaning, but the data that is read is enough to build up the mesh as intended by the mesh generator. This format can be read by the read_dbmesh() function.

• XDA format: this is a rather simple format used by the MGF code. We don't have an exact specification of the format, but the reader can read in several example files. If the reader does not grok your files, it should be fairly simple to extend it.

• Gmsh 1.0 mesh format: this format is used by the GMSH mesh generator (see http://www.geuz.org/gmsh/). The documentation in the GMSH manual explains how to generate meshes compatible with the deal.II library (i.e.
quads rather than triangles). In order to use this format, Gmsh has to output the file in the old format 1.0. This is done adding the line "Mesh.MshFileVersion = 1" to the input file. • Gmsh 2.0 mesh format: this is a variant of the above format. The read_msh() function automatically determines whether an input file is version 1 or version 2. • Tecplot format: this format is used by TECPLOT and often serves as a basis for data exchange between different applications. Note, that currently only the ASCII format is supported, binary data cannot be read. • UNV format: this format is generated by the Salome mesh generator, see http://www.salome-platform.org/ . The sections of the format that the GridIn::read_unv function supports are documented here: Note that Salome, let's say in 2D, can only make a quad mesh on an object that has exactly 4 edges (or 4 pieces of the boundary). That means, that if you have a more complicated object and would like to mesh it with quads, you will need to decompose the object into >= 2 separate objects. Then 1) each of these separate objects is meshed, 2) the appropriate groups of cells and/or faces associated with each of these separate objects are created, 3) a compound mesh is built up, and 4) all numbers that might be associated with some of the internal faces of this compound mesh are removed. • VTK format: VTK Unstructured Grid Legacy file reader generator. The reader can handle only Unstructured Grid format of data at present for 2D & 3D geometries. 
The documentation for the general legacy vtk file, including the unstructured grid format, can be found here: http://www.cacr.caltech.edu/~slombey/asci/vtk/vtk_formats.simple.html

The VTK format requires the vertices to be in the following ordering: in 2d

      3-----2
      |     |
      |     |
      |     |
      0-----1

  and in 3d

          7-------6        7-------6
         /|       |       /       /|
        / |       |      /       / |
       /  |       |     /       /  |
      4   |       |    4-------5   |
      |   3-------2    |       |   2
      |  /       /     |       |  /
      | /       /      |       | /
      |/       /       |       |/
      0-------1        0-------1

• Cubit format: deal.II doesn't directly support importing from Cubit at this time. However, Cubit can export in UCD format using a simple plug-in, and the resulting UCD file can then be read by this class. The plug-in script can be found on the deal.II wiki page under Mesh Input and Output. Alternatively, Cubit can generate ABAQUS files that can be read in via the read_abaqus() function. This may be a better option for geometries with complex boundary condition surfaces and multiple materials - information which is currently not easily obtained through Cubit's python interface.

Structure of input grid data. The GridReordering class

It is your duty to use a correct numbering of vertices in the cell list, i.e. for lines in 1d, you have to first give the vertex with the lower coordinate value, then that with the higher coordinate value. For quadrilaterals in two dimensions, the vertex indices in the quad list have to be such that the vertices are numbered in counter-clockwise sense.

In two dimensions, another difficulty occurs, which has to do with the sense of a quadrilateral. A quad consists of four lines which have a direction, which is by definition as follows:

      3-->--2
      |     |
      ^     ^
      |     |
      0-->--1

Now, two adjacent cells must have a vertex numbering such that the direction of the common side is the same.
For example, the following two quads

      3---4---5
      |   |   |
      |   |   |
      0---1---2

may be characterised by the vertex numbers (0 1 4 3) and (1 2 5 4), since the middle line would get the direction 1->4 when viewed from both cells. The numbering (0 1 4 3) and (5 4 1 2) would not be allowed, since the left quad would give the common line the direction 1->4, while the right one would want to use 4->1, leading to an ambiguity. The Triangulation object is capable of detecting this special case, which can be eliminated by rotating the indices of the right quad by two. However, it would not know what to do if you gave the vertex indices (4 1 2 5), since then it would have to rotate by one element or three, and the decision which of the two to take is not yet implemented.

There are more ambiguous cases, where the triangulation may not know what to do at all without the use of sophisticated algorithms. Furthermore, similar problems exist in three space dimensions, where faces and lines have orientations that need to be taken care of.

For this reason, the read_* functions of this class that read in grids in various input formats call the GridReordering class to bring the order of vertices that define the cells into an ordering that satisfies the requirements of the Triangulation class. Be sure to read the documentation of that class if you experience unexpected problems when reading grids through this class.

Dealing with distorted mesh cells

For each of the mesh reading functions, the last call is always to Triangulation::create_triangulation(). That function checks whether all the cells it creates as part of the coarse mesh are distorted or not (where distortion here means that the Jacobian of the mapping from the reference cell to the real cell has a non-positive determinant, i.e. the cell is pinched or twisted; see the entry on distorted cells in the glossary). If it finds any such cells, it throws an exception.
This exception is not caught in the grid reader functions of the current class, and so will propagate through to the function that called it. There, you can catch and ignore the exception if you are certain that there is no harm in dealing with such cells. If you were not aware that your mesh had such cells, your results will likely be of dubious quality at best if you ignore the exception.

Definition at line 300 of file grid_in.h.

Constructor & Destructor Documentation

template<int dim, int spacedim>
GridIn<dim, spacedim>::GridIn()

Constructor.

Definition at line 84 of file grid_in.cc.

Member Function Documentation

template<int dim, int spacedim>
void GridIn<dim, spacedim>::attach_triangulation(Triangulation<dim, spacedim> &tria)

Attach this triangulation to be fed with the grid data.

Definition at line 90 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::read(std::istream &in, Format format = Default)

Read from the given stream. If no format is given, GridIn::Format::Default is used.

Definition at line 2781 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::read(const std::string &in, Format format = Default)

Open the file given by the string and call the previous function read(). This function uses the PathSearch mechanism to find files. The file class used is MESH.

Definition at line 2748 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::read_vtk(std::istream &in)

Read grid data from a vtk file. Numerical data is ignored.

Definition at line 98 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::read_unv(std::istream &in)

Read grid data from a unv file as generated by the Salome mesh generator. Numerical data is ignored. Note the comments on generating this file format in the general documentation of this class.

Definition at line 385 of file grid_in.cc.
template<int dim, int spacedim>
void GridIn<dim, spacedim>::read_ucd(std::istream &in, const bool apply_all_indicators_to_manifolds = false)

Read grid data from a ucd file. Numerical data is ignored. It is not possible to use a ucd file to set both boundary_id and manifold_id for the same cell. Yet it is possible to use the flag apply_all_indicators_to_manifolds to decide if the indicators in the file refer to manifolds (flag set to true) or boundaries (flag set to false).

Definition at line 621 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::read_abaqus(std::istream &in, const bool apply_all_indicators_to_manifolds = false)

Read grid data from an Abaqus file. Numerical and constitutive data is ignored. As in the case of the ucd file format, it is possible to use the flag apply_all_indicators_to_manifolds to decide if the indicators in the file refer to manifolds (flag set to true) or boundaries (flag set to false).

Note: The current implementation of this mesh reader is suboptimal, and may therefore be slow for large meshes.

Usage tips for Cubit:

• Multiple material ids can be defined in the mesh. This is done by specifying blocksets in the pre-processor.

• Arbitrary surface boundaries can be defined in the mesh. This is done by specifying sidesets in the pre-processor. In particular, boundaries are not confined to just surfaces (in 3d); individual element faces can be added to the sideset as well. This is useful when a boundary condition is to be applied on a complex shape boundary that is difficult to define using "surfaces" alone. The same can be done in 2d.

Compatibility information for this file format is listed below.

• Files generated in Abaqus CAE 6.12 have been verified to be correctly imported, but older (or newer) versions of Abaqus may also generate valid input decks.

• Files generated using Cubit 11.x, 12.x, 13.x, 14.x and 15.x are valid, but only when using a specific set of export steps.
These are as follows:

• Go to "Analysis setup mode" by clicking on the disc icon in the toolbar on the right.
• Select "Export Mesh" under "Operation" by clicking on the necessary icon in the toolbar on the right.
• Select an output file. In Cubit version 11.0 and 12.0 it might be necessary to click on the browse button and type it in the dialogue that pops up.
• Select the dimension to output in.
• Tick the overwrite box.
• If using Cubit v12.0 onwards, uncheck the box "Export using Cubit ID's". An invalid file will encounter errors if this box is left checked.
• Click apply.

Definition at line 855 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::read_dbmesh(std::istream &in)

Read grid data from a file containing data in the DB mesh format.

Definition at line 900 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::read_xda(std::istream &in)

Read grid data from a file containing data in the XDA format.

Definition at line 1066 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::read_msh(std::istream &in)

Read grid data from an msh file, either version 1 or version 2 of that file format. The GMSH formats are documented at http://www.geuz.org/gmsh/.

Note: The input function of deal.II does not distinguish between newline and other whitespace. Therefore, deal.II will be able to read files in a slightly more general format than Gmsh.

Definition at line 1224 of file grid_in.cc.

template<int dim, int spacedim = dim>
void GridIn<dim, spacedim>::read_netcdf(const std::string &filename)

Read grid data from a NetCDF file. The only data format currently supported is the TAU grid format. This function requires the library to be linked with the NetCDF library.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::read_tecplot(std::istream &in)

Read grid data from a file containing tecplot ASCII data. This also works in the absence of any tecplot installation.
Definition at line 2535 of file grid_in.cc.

template<int dim, int spacedim>
std::string GridIn<dim, spacedim>::default_suffix(const Format format)   [static]

Return the standard suffix for a file in this format.

Definition at line 2837 of file grid_in.cc.

template<int dim, int spacedim>
GridIn<dim, spacedim>::Format GridIn<dim, spacedim>::parse_format(const std::string &format_name)   [static]

Return the enum Format for the format name.

Definition at line 2869 of file grid_in.cc.

template<int dim, int spacedim>
std::string GridIn<dim, spacedim>::get_format_names()   [static]

Return a list of implemented input formats. The different names are separated by vertical bar signs (|) as used by the ParameterHandler classes.

Definition at line 2925 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::debug_output_grid(const std::vector<CellData<dim>> &cells, const std::vector<Point<spacedim>> &vertices, std::ostream &out)   [static, protected]

This function can write the raw cell data objects created by the read_* functions in Gnuplot format to a stream. This is sometimes handy if one would like to see what actually was created, if it is known that the data is not correct in some way, but the Triangulation class refuses to generate a triangulation because of these errors. The output of this function writes out the cell numbers along with the direction of the faces of each cell; the latter information is needed to verify whether the cell data objects follow the requirements of the ordering of cells and their faces, i.e. that all faces need to have unique directions and specified orientations with respect to neighboring cells (see the documentation of this class and of the GridReordering class). The output of this function consists of vectors for each line bounding the cells indicating the direction it has with respect to the orientation of this cell, and the cell number.
The whole output is in a form such that it can be read in by Gnuplot and generate the full plot without further ado by the user.

Definition at line 2595 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::skip_empty_lines(std::istream &in)   [static, private]

Skip empty lines in the input stream, i.e. lines that contain either nothing or only whitespace.

Definition at line 2542 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::skip_comment_lines(std::istream &in, const char comment_start)   [static, private]

Skip lines of comment that start with the indicated character (e.g. #) following the point where the given input stream presently is. After the call to this function, the stream is at the start of the first line after the comment lines, or at the same position as before if there were no lines of comments.

Definition at line 2571 of file grid_in.cc.

template<int dim, int spacedim>
void GridIn<dim, spacedim>::parse_tecplot_header(std::string &header, std::vector<unsigned int> &tecplot2deal, unsigned int &n_vars, unsigned int &n_vertices, unsigned int &n_cells, std::vector<unsigned int> &IJK, bool &structured, bool &blocked)   [static, private]

This function does the nasty work (due to very lax conventions and different versions of the tecplot format) of extracting the important parameters from a tecplot header, contained in the string header. The other variables are output variables; their value has no influence on the function execution.

Definition at line 2103 of file grid_in.cc.

Member Data Documentation

template<int dim, int spacedim = dim>
SmartPointer<Triangulation<dim, spacedim>, GridIn<dim, spacedim>> GridIn<dim, spacedim>::tria   [protected]

Store the address of the triangulation to be fed with the data read in.

Definition at line 565 of file grid_in.h.

template<int dim, int spacedim = dim>
Format GridIn<dim, spacedim>::default_format   [private]

Input format used by read() if no format is given.
Definition at line 628 of file grid_in.h. The documentation for this class was generated from the following files:
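As an aside, the vertex-ordering requirement described in the documentation above (adjacent quads must give their common line the same direction) can be checked with a small sketch. The helper below is my own illustration and is not part of deal.II; it uses the line-direction convention from the quad diagram above (directed lines 0->1, 1->2, 3->2 and 0->3).

```python
# Sketch (not part of deal.II): check that two adjacent quads give their
# common line the same direction, per the convention that a quad
# (v0, v1, v2, v3) has the directed lines v0->v1, v1->v2, v3->v2, v0->v3.

def directed_lines(quad):
    """Return the four directed lines of a quad under this convention."""
    v0, v1, v2, v3 = quad
    return {(v0, v1), (v1, v2), (v3, v2), (v0, v3)}

def consistent(quad_a, quad_b):
    """True if no line shared by the two quads is traversed in opposite
    directions, i.e. the common side gets a single well-defined direction."""
    lines_a = directed_lines(quad_a)
    lines_b = directed_lines(quad_b)
    return not any((b, a) in lines_b for (a, b) in lines_a)

# The allowed pair from the documentation: both quads direct the middle
# line as 1->4.
print(consistent((0, 1, 4, 3), (1, 2, 5, 4)))  # True
# The forbidden pair: the left quad wants 1->4, the right one 4->1.
print(consistent((0, 1, 4, 3), (5, 4, 1, 2)))  # False
```

A full reordering algorithm, as implemented in GridReordering, must resolve such conflicts globally rather than pairwise, which is why the library centralizes it in one class.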
# Finding a distribution family that is preserved under mixture

Consider the following:

$f_{t+1}(z)=p_{12} f_{t}(z/A)+ p_{21} f_{t}(z/B)+p_{22} f_{t}(z/(A+B))$,

where $A$, $B$, and the $p$'s are constants and $f_t$ is a probability distribution. Are there any nice distribution families that are preserved under the transformation? Failing that, are there $f_t$ such that $f_{t+1}$ has a closed form?

It's motivated by the following problem: Let there be two simple bonds that either default or pay off a return on investment (they may or may not be correlated); denote the bonds as random variables $Z_1$ and $Z_2$. Now throw in a population of investors, with wealth following a distribution $W$, investing some fixed percentage of their income in the two bonds (investing a fixed percentage is a Nash equilibrium under the model I'm working with). The resulting after-investment wealth distribution will be a mixture of dilations of the original distribution, and I'm trying to find a distribution to work with that will make things simple when studying the behavior of the system over time. Any ideas? -

Do you have any constraints on the $p$'s? – Steve Huntsman Mar 18 '10 at 15:06

Ah, sorry. They're basically mixture weights; they should be positive and sum to something less than or equal to 1. – David Shor Mar 18 '10 at 16:23

One can rewrite the problem in terms of products of i.i.d. random variables as follows. Assume that $X_t$ has distribution density $f_t$. Then, the relation between $f_t$ and $f_{t+1}$ means that one can choose $X_{t+1}=X_tZ_{t+1}$, where the $Z_t$ are i.i.d. and $Z_t=A$ or $B$ or $A+B$, with probabilities $p_{12}A$, $p_{21}B$ and $p_{22}(A+B)$, respectively. Hence, for the relation between $f_t$ and $f_{t+1}$ to make sense, one must assume that the three nonnegative numbers $p_{12}A$, $p_{21}B$ and $p_{22}(A+B)$ sum to $1$, and when this is so, $X_t=X_0Z_1Z_2\cdots Z_t$.
This tells you that:

• $E(X_t)=E(X_0)m^t$ for every $t$, with $m=E(Z_1)$, that is, $$m=p_{12}A^2+p_{21}B^2+p_{22}(A+B)^2.$$
• $t^{-1}\log X_t$ converges almost surely to $\mu=E(\log Z_1)$, that is, $$\mu=p_{12}A\log A+p_{21}B\log B+p_{22}(A+B)\log(A+B).$$
• $\log X_t$ is distributed as $N_1\log A+N_2\log B+N_3\log(A+B)$, where $(N_1,N_2,N_3)$ follows the multinomial distribution with parameters $t$ and $(p_{12}A,p_{21}B,p_{22}(A+B))$; more precisely, $\log X_t$ follows the convolution of this distribution with the distribution of $\log X_0$.

Unfortunately, these remarks do not help much if one is interested in closed form formulas. Sorry. -

One related concept is the p-stable distribution. It's a distribution such that the combination $\sum_i a_i X_i$, if the $X_i$ are all i.i.d. with respect to the distribution, is distributed as $\|a\|_p X$, where $X$ is also governed by the distribution. Gaussians are 2-stable, and the Cauchy distribution is 1-stable. There is also a $(1/2)$-stable distribution. In general though, such distributions don't exist for $p > 2$. -

Below I propose a solution to the difference equation $$f_{t+1}(z) =p_{12} f_t(z/A) + p_{21}f_t(z/B) + p_{22} f_t(z/(A+B)),$$ where the $p_{ij}$'s are positive, $$p_{12}+p_{21}+p_{22}\le 1$$ and $f_t$ is a pdf. By integrating both sides of the given equation from $-\infty$ to $\infty$ we obtain, after an appropriate change of variables in the right-hand-side integrals, $$1=Ap_{12}+Bp_{21}+(A+B)p_{22}.$$ Now we look for a solution of our initial problem in the form $$f_t (z) =\sum_{n=0}^{\infty} q_n (t) z^n .$$ Substituting the above ansatz into our equation yields, after elementary manipulation, $$q_n (t+1) =\left ( p_{12} A^{-n} +p_{21} B^{-n} +p_{22}(A+B)^{-n} \right ) q_n (t).$$ For fixed $n$, the last equation is a linear difference equation that can easily be solved to produce $$q_n(t) = b_n \left ( p_{12} A^{-n} +p_{21} B^{-n} +p_{22}(A+B)^{-n} \right )^{t-1},$$ where $b_n$ is independent of $t$, i.e. it's a pure constant.
Finally we obtain the closed-form solution $$f_t (z) =\sum_{n=0}^{\infty} b_n \left ( p_{12} A^{-n} +p_{21} B^{-n} +p_{22}(A+B)^{-n} \right )^{t-1} z^n.$$ Note that $$f_1(z) =\sum_{n=0}^{\infty} b_n z^n.$$ Thus our solution is completely specified given the initial pdf $f_1(z)$. What is left is to tackle the issue of convergence, and possibly to look for an alternative representation of the solution. -
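The product representation $X_t = X_0 Z_1\cdots Z_t$ from the first answer is easy to check numerically. The sketch below uses illustrative constants of my own choosing (not from the question) and verifies the moment formula $E(X_t)=E(X_0)m^t$ by Monte Carlo.

```python
import random

# Monte Carlo sketch of X_{t+1} = X_t * Z, where Z is A, B or A+B with
# probabilities p12*A, p21*B and p22*(A+B). The constants are illustrative
# choices, not taken from the question.
A, B = 1.2, 0.5
p12, p21 = 0.4, 0.6
# Enforce the normalization p12*A + p21*B + p22*(A+B) = 1.
p22 = (1.0 - p12 * A - p21 * B) / (A + B)
assert p22 > 0.0

# m = E(Z) = p12*A^2 + p21*B^2 + p22*(A+B)^2
m = p12 * A**2 + p21 * B**2 + p22 * (A + B)**2

def sample_X(t, x0=1.0):
    """One draw of X_t = X_0 * Z_1 * ... * Z_t."""
    x = x0
    for _ in range(t):
        u = random.random()
        if u < p12 * A:
            x *= A
        elif u < p12 * A + p21 * B:
            x *= B
        else:
            x *= A + B
    return x

random.seed(0)
t, n = 5, 200_000
estimate = sum(sample_X(t) for _ in range(n)) / n
# E(X_t) should be close to m**t (here X_0 = 1).
print(round(m**t, 3), round(estimate, 3))
```

The same harness can be used to eyeball the almost-sure growth rate $\mu$ by looking at `log(sample_X(t)) / t` for large `t`.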
f(g(x)) has a degree divisible by n

Let $f(x)$ be an irreducible polynomial of degree $n$ over a field $F$. Let $g(x)$ be a polynomial in $F[x]$. Prove that every irreducible factor of the composition $f(g(x))$ has a degree which is divisible by $n$.

I don't even know how to begin. I really need help. Thanks.

Suppose $$h$$ is an irreducible factor of $$f \circ g$$, and $$\alpha$$ is a root of $$h$$ (in some extension field). Then $$g(\alpha)$$ is a root of $$f$$, and so, since $$f$$ is irreducible of degree $$n$$, $$[F(g(\alpha)):F]=n$$. Thus $$\deg(h)=[F(\alpha):F]=[F(\alpha):F(g(\alpha))]\cdot[F(g(\alpha)):F]$$ is divisible by $$n$$.

• Thank you very much for the help. Just one question: why is $f(g(\alpha))$ well-defined? – user42912 Nov 10 '12 at 0:52
• @user42912: I'm having trouble seeing the problem. $\alpha$ is an element of an extension field of $F$. $f\circ g$ is a polynomial over $F$, hence defines a function on every extension field. I'm just applying that function to that element. – Chris Eagle Nov 10 '12 at 9:56
• Why do we know that $[F(g(\alpha)):F] =n$? In particular, why do we know it is an equality? Shouldn't it be an inequality, namely $[F(g(\alpha)):F] \leq n$? – user110320 May 12 '18 at 19:58
• @user110320: We know $[F(g(\alpha)):F] = n$ because $g(\alpha)$ is a root of the irreducible polynomial $f(X)$, and all fields obtained by adjoining a root of $f(X)$ to $F$ are isomorphic to one another, by sending a root $\alpha'$ in $F(\alpha')$, say, to $\alpha''$ in $F(\alpha'')$. In particular, $F(\alpha')$ and $F(\alpha'')$ have the same dimension as $F$-vector spaces, namely $\deg f = n$. – Alex Ortiz Mar 17 at 21:56
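A concrete instance of the result (my own illustration, not from the question): take $F=\mathbb{Q}$, $f(x)=x^2+1$ (irreducible of degree $n=2$) and $g(x)=x^3$. Then $$f(g(x))=x^6+1=(x^2+1)(x^4-x^2+1),$$ and both irreducible factors have degree divisible by $2$. In the notation of the proof, if $\alpha$ is a root of the degree-$4$ factor $x^4-x^2+1$ (a primitive $12$th root of unity), then $g(\alpha)=\alpha^3=\pm i$ is a root of $f$, so $[\mathbb{Q}(g(\alpha)):\mathbb{Q}]=2$ divides $[\mathbb{Q}(\alpha):\mathbb{Q}]=4$.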
### Algebraic and Euclidean Lattices: Optimal Lattice Reduction and Beyond Paul Kirchner, Thomas Espitau, and Pierre-Alain Fouque ##### Abstract We introduce a framework generalizing lattice reduction algorithms to module lattices in order to practically and efficiently solve the $\gamma$-Hermite Module-SVP problem over arbitrary cyclotomic fields. The core idea is to exploit the structure of the subfields for designing a doubly-recursive strategy of reduction: both recursive in the rank of the module and in the field we are working in. Besides, we demonstrate how to leverage the inherent symplectic geometry existing in the tower of fields to provide a significant speed-up of the reduction for rank two modules. The recursive strategy over the rank can also be applied to the reduction of Euclidean lattices, and we can perform a reduction in asymptotically almost the same time as matrix multiplication. As a byproduct of the design of these fast reductions, we also generalize to all cyclotomic fields and provide speedups for many previous number theoretical algorithms. Quantitatively, we show that a module of rank 2 over a cyclotomic field of degree $n$ can be heuristically reduced within approximation factor $2^{\tilde{O}(n)}$ in time $\tilde{O}(n^2B)$, where $B$ is the bitlength of the entries. For $B$ large enough, this complexity shrinks to $\tilde{O}(n^{\log_2 3}B)$. This last result is particularly striking as it goes below the estimate of $n^2B$ swaps given by the classical analysis of the LLL algorithm using the so-called potential. Finally, all this framework is fully parallelizable, and we provide a full implementation. We apply it to break multilinear cryptographic candidates on concrete proposed parameters. We were able to reduce matrices of dimension 4096 with 6675-bit integers in 4 days, which is more than a million times faster than previous state-of-the-art implementations. 
Finally, we demonstrate a quasicubic time for the Gentry-Szydlo algorithm, which finds a generator given the relative norm and a basis of an ideal. This algorithm is important in cryptanalysis and requires efficient ideal multiplications and lattice reductions; as such, we can practically use it in dimension 1024.

Publication info: Preprint (Cryptology ePrint Archive, Paper 2019/1436; revised 2020-02-19). Category: Public-key cryptography. License: CC BY.

Keywords: lattice reduction, LLL, cyclotomic fields, ideal lattices, symplectic group

Contact author(s): t espitau @ gmail com

Short URL: https://ia.cr/2019/1436
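For readers new to the area, the simplest instance of lattice reduction, Lagrange-Gauss reduction of a rank-2 lattice in the plane, can be sketched as follows. This is background illustration only; the paper's recursive module reductions are far more elaborate.

```python
# Sketch: Lagrange-Gauss reduction of a rank-2 integer lattice basis.
# Assumes u and v are linearly independent (so dot(u, u) never vanishes).
def lagrange_gauss(u, v):
    """Reduce a basis (u, v) of a planar lattice until |u| <= |v| and
    v cannot be shortened by subtracting an integer multiple of u."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # Integer closest to the projection coefficient of v onto u.
        k = round(dot(u, v) / dot(u, u))
        v = (v[0] - k * u[0], v[1] - k * u[1])
        if dot(u, u) <= dot(v, v):
            return u, v
        u, v = v, u

# A long, skewed basis of Z^2 is reduced to the unit vectors.
print(lagrange_gauss((1, 0), (100, 1)))  # ((1, 0), (0, 1))
```

In rank 2 this procedure provably finds a shortest basis; LLL and the module reductions studied in the paper generalize the same size-reduce-and-swap idea to higher ranks.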
Question-and-Answer Resource for the Building Energy Modeling Community

# How do you model a heating only radiant system?

Is there any way to model a radiant system for heating only? The cooling control temperature schedule seems to be a required input. This question suggested creating an "Always Off" schedule, but I'm not quite sure how to do this, and where this schedule should be set.

@ethankheil: just for clarification, are you using the ZoneHVAC:LowTemperatureRadiant:VariableFlow object for radiant panels? (2015-08-18 15:53:55 -0500)

Yes, @Waseem. Sorry for not being more specific. (2015-08-18 16:26:46 -0500)

The ZoneHVAC:LowTemperatureRadiant:VariableFlow object does require three schedules: one for availability, one for the heating control temperature, and one for the cooling control temperature. These systems can be modeled using only the availability and heating control schedules (for heating purposes only), i.e. without defining a schedule for cooling control. I just simulated without defining a cooling control temperature schedule and it worked fine for me. See below the way I defined the object:

ZoneHVAC:LowTemperatureRadiant:VariableFlow,
    Block1:Zone1 Heated Floor,                       !- Component name
    On 24/7,                                         !- Availability schedule
    Block1:Zone1,                                    !- Zone name
    Block1:Zone1 Heated Floor Radiant Surface List,  !- Radiant surface group name
    0.0130,                                          !- Hydronic tubing inside diameter (m)
    autosize,                                        !- Hydronic tubing length (m)
    MeanAirTemperature,                              !- Temperature control type
    autosize,                                        !- Maximum hot water flow (m3/s)
    Block1:Zone1 Heated Floor Hot Water Inlet Node,  !- Heating water inlet node
    Block1:Zone1 Heated Floor Hot Water Outlet Node, !- Heating water outlet node
    2.00,                                            !- Heating control throttling range (C)
    Block1:Zone1 Heating Setpoint Schedule,          !- Heating control temperature schedule
    ,                                                !- Maximum cold water flow (m3/s)
    ,                                                !- Cooling water inlet node
    ,                                                !- Cooling water outlet node
    ,                                                !- Cooling control throttling range (C)
    ,                                                !- Cooling high control temperature schedule
    SimpleOff,                                       !- Condensation control type
    1.00,                                            !- Condensation control dewpoint offset
    OnePerSurface,                                   !- Number of circuits
    106.700;                                         !- Circuit length (m)

P.S. I did try this using E+ 8.1, 8.2 and 8.3; all worked fine for me. If you are getting some kind of error, then please share it. Also, my answer was based on using the E+ text editor, not OpenStudio (as you did not mention that you are using OpenStudio), but the idea is the same: you just have to use the availability and heating control temperature schedules.

## OpenStudio

If you are doing it in OpenStudio, then once you have applied the Zone Equipment you can apply your Cooling and Heating thermostat schedules. I don't know whether in OpenStudio we have to specify both schedules, or whether specifying only one will work as it did in the IDF file, but you can assign an Always Off schedule for Cooling. You can find this schedule under Library --> Schedule Rulesets; all you need to do is drag and drop. See picture below.

@Waseem, thank you very much for this helpful and thorough answer. I apologize for not mentioning earlier that I am in fact using OpenStudio. How might this translate to the OS interface? Thanks (2015-08-20 18:08:01 -0500)

(2015-08-21 04:43:00 -0500)

@Waseem, thanks again for your help. Unfortunately when I drag the "Always Off Schedule" to the "Cooling Thermostat Schedule" box, the schedule is not properly assigned. Basically I drag the schedule to the appropriate box but it just disappears. Thanks (2015-08-25 11:14:03 -0500)

Last updated: Aug 21 '15
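If this has to be repeated for many zones, the heating-only object from the answer above can also be generated programmatically. The sketch below is plain string assembly (not an EnergyPlus or OpenStudio API); the field order simply mirrors the IDF snippet in the answer, with the cooling-related fields left blank.

```python
# Sketch: emit a heating-only ZoneHVAC:LowTemperatureRadiant:VariableFlow
# object as IDF text, leaving the cooling-related fields blank. Plain
# string assembly, not an EnergyPlus/OpenStudio API; fields mirror the
# snippet in the answer above.

def heating_only_radiant(name, zone, surfaces, availability, setpoint):
    fields = [
        (name, "Component name"),
        (availability, "Availability schedule"),
        (zone, "Zone name"),
        (surfaces, "Radiant surface group name"),
        ("0.0130", "Hydronic tubing inside diameter (m)"),
        ("autosize", "Hydronic tubing length (m)"),
        ("MeanAirTemperature", "Temperature control type"),
        ("autosize", "Maximum hot water flow (m3/s)"),
        (name + " Hot Water Inlet Node", "Heating water inlet node"),
        (name + " Hot Water Outlet Node", "Heating water outlet node"),
        ("2.00", "Heating control throttling range (C)"),
        (setpoint, "Heating control temperature schedule"),
        ("", "Maximum cold water flow (m3/s)"),            # blank: no cooling
        ("", "Cooling water inlet node"),                  # blank
        ("", "Cooling water outlet node"),                 # blank
        ("", "Cooling control throttling range (C)"),      # blank
        ("", "Cooling high control temperature schedule"), # blank
        ("SimpleOff", "Condensation control type"),
        ("1.00", "Condensation control dewpoint offset"),
        ("OnePerSurface", "Number of circuits"),
        ("106.700", "Circuit length (m)"),
    ]
    lines = ["ZoneHVAC:LowTemperatureRadiant:VariableFlow,"]
    for i, (value, comment) in enumerate(fields):
        sep = ";" if i == len(fields) - 1 else ","
        lines.append("    {}{}  !- {}".format(value, sep, comment))
    return "\n".join(lines)

print(heating_only_radiant(
    "Block1:Zone1 Heated Floor",
    "Block1:Zone1",
    "Block1:Zone1 Heated Floor Radiant Surface List",
    "On 24/7",
    "Block1:Zone1 Heating Setpoint Schedule"))
```

The generated text can be pasted into an IDF file or fed to whatever scripting workflow you already use for batch model edits.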
## Acta Mathematica

### Real quadrics in Cn, complex manifolds and convex polytopes

#### Abstract

In this paper, we investigate the topology of a class of non-Kähler compact complex manifolds generalizing that of Hopf and Calabi-Eckmann manifolds. These manifolds are diffeomorphic to special systems of real quadrics in Cn which are invariant with respect to the natural action of the real torus (S1)n on Cn. The quotient space is a simple convex polytope. The problem thus reduces to the study of the topology of certain real algebraic sets and can be handled using combinatorial results on convex polytopes. We prove that the homology groups of these compact complex manifolds can have an arbitrary amount of torsion, so that their topology is extremely rich. We also resolve an associated wall-crossing problem by introducing holomorphic equivariant elementary surgeries related to some transformations of the simple convex polytope. Finally, as a nice consequence, we obtain that affine non-Kähler compact complex manifolds can have an arbitrary amount of torsion in their homology groups, contrasting with the Kähler situation.

#### Dedication

Dedicated to Alberto Verjovsky on his 60th birthday.

#### Article information

Source: Acta Math., Volume 197, Number 1 (2006), 53-127.

Dates: Revised 3 April 2006. First available in Project Euclid: 31 January 2017.

https://projecteuclid.org/euclid.acta/1485891843

Digital Object Identifier: doi:10.1007/s11511-006-0008-2

Mathematical Reviews number (MathSciNet): MR2285318

Zentralblatt MATH identifier: 1157.14313

#### Citation

Bosio, Frédéric; Meersseman, Laurent. Real quadrics in Cn, complex manifolds and convex polytopes. Acta Math. 197 (2006), no. 1, 53-127. doi:10.1007/s11511-006-0008-2. https://projecteuclid.org/euclid.acta/1485891843
# 14.7: Viscosity and Turbulence

### Skills to Develop

• Explain what viscosity is
• Calculate flow and resistance with Poiseuille's law
• Explain how pressure drops due to resistance
• Calculate the Reynolds number for an object moving through a fluid
• Use the Reynolds number for a system to determine whether it is laminar or turbulent
• Describe the conditions under which an object has a terminal speed

In Applications of Newton’s Laws, which introduced the concept of friction, we saw that an object sliding across the floor with an initial velocity and no applied force comes to rest due to the force of friction. Friction depends on the types of materials in contact and is proportional to the normal force. We also discussed drag and air resistance in that same chapter. We explained that at low speeds, the drag is proportional to the velocity, whereas at high speeds, drag is proportional to the velocity squared. In this section, we introduce the forces of friction that act on fluids in motion. For example, a fluid flowing through a pipe is subject to resistance, a type of friction, between the fluid and the walls. Friction also occurs between the different layers of fluid. These resistive forces affect the way the fluid flows through the pipe.

### Viscosity and Laminar Flow

When you pour yourself a glass of juice, the liquid flows freely and quickly. But if you pour maple syrup on your pancakes, that liquid flows slowly and sticks to the pitcher. The difference is fluid friction, both within the fluid itself and between the fluid and its surroundings. We call this property of fluids viscosity. Juice has low viscosity, whereas syrup has high viscosity. The precise definition of viscosity is based on laminar, or nonturbulent, flow. Figure 14.34 shows schematically how laminar and turbulent flow differ. When flow is laminar, layers flow without mixing.
When flow is turbulent, the layers mix, and significant velocities occur in directions other than the overall direction of flow.

Figure $$\PageIndex{1}$$: (a) Laminar flow occurs in layers without mixing. Notice that viscosity causes drag between layers as well as with the fixed surface. The speed near the bottom of the flow ($$v_b$$) is less than the speed near the top ($$v_t$$) because in this case, the surface of the containing vessel is at the bottom. (b) An obstruction in the vessel causes turbulent flow. Turbulent flow mixes the fluid. There is more interaction, greater heating, and more resistance than in laminar flow.

Turbulence is a fluid flow in which layers mix together via eddies and swirls. It has two main causes. First, any obstruction or sharp corner, such as in a faucet, creates turbulence by imparting velocities perpendicular to the flow. Second, high speeds cause turbulence. The drag between adjacent layers of fluid and between the fluid and its surroundings can form swirls and eddies if the speed is great enough. In Figure 14.35, the speed of the accelerating smoke reaches the point that it begins to swirl due to the drag between the smoke and the surrounding air.

Figure $$\PageIndex{2}$$: Smoke rises smoothly for a while and then begins to form swirls and eddies. The smooth flow is called laminar flow, whereas the swirls and eddies typify turbulent flow. Smoke rises more rapidly when flowing smoothly than after it becomes turbulent, suggesting that turbulence poses more resistance to flow. (credit: “Creativity103”/Flickr)

Figure $$\PageIndex{3}$$ shows how viscosity is measured for a fluid. The fluid to be measured is placed between two parallel plates. The bottom plate is held fixed, while the top plate is moved to the right, dragging fluid with it. The layer (or lamina) of fluid in contact with either plate does not move relative to the plate, so the top layer moves at speed v while the bottom layer remains at rest.
Each successive layer from the top down exerts a force on the one below it, trying to drag it along, producing a continuous variation in speed from v to 0 as shown. Care is taken to ensure that the flow is laminar, that is, the layers do not mix. The motion in the figure is like a continuous shearing motion. Fluids have zero shear strength, but the rate at which they are sheared is related to the same geometrical factors A and L as is shear deformation for solids.

In the diagram, the fluid is initially at rest. The layer of fluid in contact with the moving plate is accelerated and starts to move due to the internal friction between the moving plate and the fluid. The next layer is in contact with the moving layer; since there is internal friction between the two layers, it also accelerates, and so on through the depth of the fluid. There is also internal friction between the stationary plate and the lowest layer of fluid, next to the stationary plate. Because of this internal friction, a force is required to keep the top plate moving at a constant velocity.

Figure $$\PageIndex{3}$$: Measurement of viscosity for laminar flow of fluid between two plates of area A. The bottom plate is fixed. When the top plate is pushed to the right, it drags the fluid along with it.

A force F is required to keep the top plate in Figure 14.36 moving at a constant velocity v, and experiments have shown that this force depends on four factors. First, F is directly proportional to v (until the speed is so high that turbulence occurs—then a much larger force is needed, and it has a more complicated dependence on v). Second, F is proportional to the area A of the plate. This relationship seems reasonable, since A is directly proportional to the amount of fluid being moved. Third, F is inversely proportional to the distance between the plates L. This relationship is also reasonable; L is like a lever arm, and the greater the lever arm, the less the force that is needed.
Fourth, F is directly proportional to the coefficient of viscosity, $$\eta$$. The greater the viscosity, the greater the force required. These dependencies are combined into the equation

$$F = \eta \frac{vA}{L} \ldotp$$

This equation gives us a working definition of fluid viscosity $$\eta$$. Solving for $$\eta$$ gives

$$\eta = \frac{FL}{vA} \tag{14.17}$$

which defines viscosity in terms of how it is measured.

The SI unit of viscosity is $$\frac{N\; \cdotp m}{(m/s)m^{2}}$$ = (N/m²)s or Pa • s. Table 14.4 lists the coefficients of viscosity for various fluids, in mPa • s (consistent with the value $$\eta = 0.0181\; mPa\; \cdotp s$$ for air used in Example 14.8 below). Viscosity varies from one fluid to another by several orders of magnitude. As you might expect, the viscosities of gases are much less than those of liquids, and these viscosities often depend on temperature.

#### Table 14.4 - Coefficients of Viscosity of Various Fluids

| Fluid | Temperature (°C) | Viscosity $$\eta$$ (mPa • s) |
| --- | --- | --- |
| Air | 0 | 0.0171 |
| Air | 20 | 0.0181 |
| Air | 40 | 0.0190 |
| Air | 100 | 0.0218 |
| Ammonia | 20 | 0.00974 |
| Carbon dioxide | 20 | 0.0147 |
| Helium | 20 | 0.0196 |
| Hydrogen | 0 | 0.0090 |
| Mercury | 20 | 0.0450 |
| Oxygen | 20 | 0.0203 |
| Steam | 100 | 0.0130 |
| Liquid water | 0 | 1.792 |
| Liquid water | 20 | 1.002 |
| Liquid water | 37 | 0.6947 |
| Liquid water | 40 | 0.653 |
| Liquid water | 100 | 0.282 |
| Whole blood | 20 | 3.015 |
| Whole blood | 37 | 2.084 |
| Blood plasma | 20 | 1.810 |
| Blood plasma | 37 | 1.257 |
| Ethyl alcohol | 20 | 1.20 |
| Methanol | 20 | 0.584 |
| Oil (heavy machine) | 20 | 660 |
| Oil (motor, SAE 10) | 30 | 200 |
| Oil (olive) | 20 | 138 |
| Glycerin | 20 | 1500 |
| Honey | 20 | 2000–10000 |
| Maple syrup | 20 | 2000–3000 |
| Milk | 20 | 3.0 |
| Oil (corn) | 20 | 65 |

### Laminar Flow Confined to Tubes: Poiseuille’s Law

What causes flow? The answer, not surprisingly, is a pressure difference. In fact, there is a very simple relationship between horizontal flow and pressure. Flow rate Q is in the direction from high to low pressure. The greater the pressure differential between two points, the greater the flow rate. This relationship can be stated as

$$Q = \frac{p_{2} - p_{1}}{R}$$

where p1 and p2 are the pressures at two points, such as at either end of a tube, and R is the resistance to flow.
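Equation 14.17 can be turned into a short calculation. The sketch below is illustrative only; the plate force, gap, speed, and area are made-up numbers, not values from the text:

```python
# Coefficient of viscosity from the two-plate experiment, eta = F*L/(v*A) (Eq. 14.17).
def viscosity(force_n, gap_m, speed_m_s, area_m2):
    """Return eta in Pa*s from the parallel-plate definition."""
    return force_n * gap_m / (speed_m_s * area_m2)

# Hypothetical example: a 0.1 N force drags a 0.5 m^2 plate at 0.2 m/s
# across a 1.0 mm fluid layer.
eta = viscosity(0.1, 1.0e-3, 0.2, 0.5)
print(eta)  # about 1.0e-3 Pa*s (1.0 mPa*s), roughly water at 20 degrees C
```

Comparing the result against Table 14.4 identifies which fluid is between the plates.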
The resistance R includes everything, except pressure, that affects flow rate. For example, R is greater for a long tube than for a short one. The greater the viscosity of a fluid, the greater the value of R. Turbulence greatly increases R, whereas increasing the diameter of a tube decreases R. If viscosity is zero, the fluid is frictionless and the resistance to flow is also zero. Comparing frictionless flow in a tube to viscous flow, as in Figure 14.37, we see that for a viscous fluid, speed is greatest at midstream because of drag at the boundaries. We can see the effect of viscosity in a Bunsen burner flame [part (c)], even though the viscosity of natural gas is small.

Figure $$\PageIndex{4}$$: (a) If fluid flow in a tube has negligible resistance, the speed is the same all across the tube. (b) When a viscous fluid flows through a tube, its speed at the walls is zero, increasing steadily to its maximum at the center of the tube. (c) The shape of a Bunsen burner flame is due to the velocity profile across the tube. (credit c: modification of work by Jason Woodhead)

The resistance R to laminar flow of an incompressible fluid with viscosity $$\eta$$ through a horizontal tube of uniform radius r and length l is given by

$$R = \frac{8 \eta l}{\pi r^{4}} \ldotp \tag{14.18}$$

This equation is called Poiseuille’s law for resistance, named after the French scientist J. L. Poiseuille (1797–1869), who derived it in an attempt to understand the flow of blood through the body. Let us examine Poiseuille’s expression for R to see if it makes good intuitive sense. We see that resistance is directly proportional to both fluid viscosity $$\eta$$ and the length l of a tube. After all, both of these directly affect the amount of friction encountered—the greater either is, the greater the resistance and the smaller the flow.
The radius r of a tube affects the resistance, which again makes sense, because the greater the radius, the greater the flow (all other factors remaining the same). But it is surprising that r is raised to the fourth power in Poiseuille’s law. This exponent means that any change in the radius of a tube has a very large effect on resistance. For example, doubling the radius of a tube decreases resistance by a factor of $$2^{4} = 16$$.

Taken together, $$Q = \frac{p_{2} - p_{1}}{R}$$ and $$R = \frac{8 \eta l}{\pi r^{4}}$$ give the following expression for flow rate:

$$Q = \frac{(p_{2} - p_{1}) \pi r^{4}}{8 \eta l} \ldotp \tag{14.19}$$

This equation describes laminar flow through a tube. It is sometimes called Poiseuille’s law for laminar flow, or simply Poiseuille’s law (Figure 14.38).

Figure $$\PageIndex{5}$$: Poiseuille’s law applies to laminar flow of an incompressible fluid of viscosity $$\eta$$ through a tube of length l and radius r. The direction of flow is from greater to lower pressure. Flow rate Q is directly proportional to the pressure difference p2 − p1, and inversely proportional to the length l of the tube and viscosity $$\eta$$ of the fluid. Flow rate increases in proportion to the fourth power of the radius, $$r^{4}$$.

Example 14.8

##### Using Flow Rate: Air Conditioning Systems

An air conditioning system is being designed to supply air at a gauge pressure of 0.054 Pa at a temperature of 20 °C. The air is sent through an insulated, round conduit with a diameter of 18.00 cm. The conduit is 20 m long and is open to a room at atmospheric pressure 101.30 kPa. The room has a length of 12 m, a width of 6 m, and a height of 3 m. (a) What is the volume flow rate through the pipe, assuming laminar flow? (b) Estimate the length of time to completely replace the air in the room. (c) The builders decide to save money by using a conduit with a diameter of 9.00 cm. What is the new flow rate?
##### Strategy

Assuming laminar flow, Poiseuille’s law states that

$$Q = \frac{(p_{2} - p_{1}) \pi r^{4}}{8 \eta l} = \frac{dV}{dt} \ldotp$$

We need to compare the conduit radius before and after the change in diameter. Note that we are given the diameter of the conduit, so we must divide by two to get the radius.

##### Solution

1. Assuming a constant pressure difference and using the viscosity $$\eta = 0.0181\; mPa\; \cdotp s$$, $$Q = \frac{(0.054\; Pa)(3.14)(0.09\; m)^{4}}{8(0.0181 \times 10^{-3}\; Pa\; \cdotp s)(20\; m)} = 3.84 \times 10^{-3}\; m^{3}/s \ldotp$$
2. Assuming constant flow $$Q = \frac{dV}{dt} \approx \frac{\Delta V}{\Delta t}$$ $$\Delta t = \frac{\Delta V}{Q} = \frac{(12\; m)(6\; m)(3\; m)}{3.84 \times 10^{-3}\; m^{3}/s} = 5.63 \times 10^{4}\; s = 15.63\; hr \ldotp$$
3. Using laminar flow, Poiseuille’s law yields $$Q = \frac{(0.054\; Pa)(3.14)(0.045\; m)^{4}}{8(0.0181 \times 10^{-3}\; Pa\; \cdotp s)(20\; m)} = 2.40 \times 10^{-4}\; m^{3}/s \ldotp$$ Thus, cutting the radius of the conduit in half reduces the flow rate to 6.25% of the original value.

##### Significance

In general, assuming laminar flow, decreasing the radius has a more dramatic effect than changing the length. If the length is increased and all other variables remain constant, the flow rate is decreased:

$$\begin{split} \frac{Q_{A}}{Q_{B}} & = \frac{\frac{(p_{2} - p_{1}) \pi r_{A}^{4}}{8 \eta l_{A}}}{\frac{(p_{2} - p_{1}) \pi r_{B}^{4}}{8 \eta l_{B}}} = \frac{l_{B}}{l_{A}} \\ Q_{B} & = \frac{l_{A}}{l_{B}} Q_{A} \ldotp \end{split}$$

Doubling the length cuts the flow rate to one-half the original flow rate. If the radius is decreased and all other variables remain constant, the volume flow rate decreases by a much larger factor.
$$\begin{split} \frac{Q_{A}}{Q_{B}} & = \frac{\frac{(p_{2} - p_{1}) \pi r_{A}^{4}}{8 \eta l_{A}}}{\frac{(p_{2} - p_{1}) \pi r_{B}^{4}}{8 \eta l_{B}}} = \left(\dfrac{r_{A}}{r_{B}}\right)^{4} \\ Q_{B} & = \left(\dfrac{r_{B}}{r_{A}}\right)^{4} Q_{A} \end{split}$$ Cutting the radius in half decreases the flow rate to one-sixteenth the original flow rate. ### Flow and Resistance as Causes of Pressure Drops Water pressure in homes is sometimes lower than normal during times of heavy use, such as hot summer days. The drop in pressure occurs in the water main before it reaches individual homes. Let us consider flow through the water main as illustrated in Figure 14.39. We can understand why the pressure p1 to the home drops during times of heavy use by rearranging the equation for flow rate: $$\begin{split} Q & = \frac{p_{2} - p_{1}}{R} \\ p_{2} - p_{1} & = RQ \ldotp \end{split}$$ In this case, p2 is the pressure at the water works and R is the resistance of the water main. During times of heavy use, the flow rate Q is large. This means that p2 − p1 must also be large. Thus p1 must decrease. It is correct to think of flow and resistance as causing the pressure to drop from p2 to p1. The equation p2 − p1 = RQ is valid for both laminar and turbulent flows. Figure $$\PageIndex{6}$$: During times of heavy use, there is a significant pressure drop in a water main, and p1 supplied to users is significantly less than p2 created at the water works. If the flow is very small, then the pressure drop is negligible, and p2 ≈ p1. We can also use p2 − p1 = RQ to analyze pressure drops occurring in more complex systems in which the tube radius is not the same everywhere. Resistance is much greater in narrow places, such as in an obstructed coronary artery. For a given flow rate Q, the pressure drop is greatest where the tube is most narrow. This is how water faucets control flow. 
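The two scaling results above can be checked numerically with Eq. 14.19. The baseline pressure difference, tube size, and fluid (water at 20 °C, from Table 14.4) in this sketch are chosen only for illustration:

```python
import math

def poiseuille_q(dp_pa, radius_m, length_m, eta_pa_s):
    """Volume flow rate in m^3/s from Poiseuille's law (Eq. 14.19)."""
    return dp_pa * math.pi * radius_m**4 / (8 * eta_pa_s * length_m)

# Baseline: 100 Pa pressure difference across a 2 m tube of radius 1 cm,
# carrying water at 20 C (eta = 1.002e-3 Pa*s).
q0 = poiseuille_q(100.0, 0.010, 2.0, 1.002e-3)
print(round(q0 / poiseuille_q(100.0, 0.005, 2.0, 1.002e-3)))  # 16: halving r cuts Q to 1/16
print(round(q0 / poiseuille_q(100.0, 0.010, 4.0, 1.002e-3)))  # 2: doubling l halves Q
```

Because the baseline cancels in each ratio, the factors of 16 and 2 hold for any choice of fluid and tube.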
Additionally, R is greatly increased by turbulence, and a constriction that creates turbulence greatly reduces the pressure downstream. Plaque in an artery reduces pressure and hence flow, both by its resistance and by the turbulence it creates.

### Measuring Turbulence

An indicator called the Reynolds number $$N_{R}$$ can reveal whether flow is laminar or turbulent. For flow in a tube of uniform diameter, the Reynolds number is defined as

$$N_{R} = \frac{2 \rho vr}{\eta}\; (flow\; in\; tube) \tag{14.20}$$

where $$\rho$$ is the fluid density, v its speed, $$\eta$$ its viscosity, and r the tube radius. The Reynolds number is a dimensionless quantity. Experiments have revealed that $$N_{R}$$ is related to the onset of turbulence. For $$N_{R}$$ below about 2000, flow is laminar. For $$N_{R}$$ above about 3000, flow is turbulent. For values of $$N_{R}$$ between about 2000 and 3000, flow is unstable—that is, it can be laminar, but small obstructions and surface roughness can make it turbulent, and it may oscillate randomly between being laminar and turbulent.

In fact, the flow of a fluid with a Reynolds number between 2000 and 3000 is a good example of chaotic behavior. A system is defined to be chaotic when its behavior is so sensitive to some factor that it is extremely difficult to predict. It is difficult, but not impossible, to predict whether flow is turbulent or not when a fluid’s Reynolds number falls in this range, because the flow depends extremely sensitively on factors like surface roughness and obstructions. A tiny variation in one factor has an exaggerated (or nonlinear) effect on the flow.

Example 14.9

##### Using Flow Rate: Turbulent Flow or Laminar Flow

In Example 14.8, we found the volume flow rate of an air conditioning system to be Q = 3.84 x 10−3 m3/s. This calculation assumed laminar flow. (a) Was this a good assumption? (b) At what velocity would the flow become turbulent?
##### Strategy

To determine if the flow of air through the air conditioning system is laminar, we first need to find the velocity, which can be found from

$$Q = Av = \pi r^{2} v \ldotp$$

Then we can calculate the Reynolds number, using the equation below, and determine whether it falls in the range for laminar flow:

$$N_{R} = \frac{2 \rho vr}{\eta} \ldotp$$

##### Solution

1. Using the values given: $$\begin{split} v & = \frac{Q}{\pi r^{2}} = \frac{3.84 \times 10^{-3}\; m^{3}/s}{3.14 (0.09\; m)^{2}} = 0.15\; m/s \\ N_{R} & = \frac{2 \rho vr}{\eta} = \frac{2 (1.23\; kg/m^{3})(0.15\; m/s)(0.09\; m)}{0.0181 \times 10^{-3}\; Pa\; \cdotp s} = 1835 \ldotp \end{split}$$ Since the Reynolds number is 1835 < 2000, the flow is laminar and not turbulent. The assumption that the flow was laminar is valid.
2. To find the maximum speed of the air that keeps the flow laminar, consider the Reynolds number: $$\begin{split} N_{R} & = \frac{2 \rho vr}{\eta} \leq 2000 \\ v & = \frac{2000(0.0181 \times 10^{-3}\; Pa\; \cdotp s)}{2(1.23\; kg/m^{3})(0.09\; m)} = 0.16\; m/s \ldotp \end{split}$$

##### Significance

When transferring a fluid from one point to another, it is desirable to limit turbulence. Turbulence results in wasted energy, as some of the energy intended to move the fluid is dissipated when eddies are formed. In this case, the air conditioning system will become less efficient once the velocity exceeds 0.16 m/s, since this is the point at which turbulence will begin to occur.

### Contributors

• Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
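The laminar-versus-turbulent check in Example 14.9 takes only a few lines to reproduce; the density (1.23 kg/m³) and viscosity (0.0181 mPa·s) of air are the values used in that example:

```python
import math

def reynolds_tube(rho, v, r, eta):
    """Reynolds number N_R = 2*rho*v*r/eta for flow in a tube (Eq. 14.20)."""
    return 2 * rho * v * r / eta

rho, eta, r = 1.23, 0.0181e-3, 0.09   # air; conduit radius from Example 14.8
v = 3.84e-3 / (math.pi * r**2)        # mean speed from Q = A*v, about 0.15 m/s
n_r = reynolds_tube(rho, v, r, eta)
print(n_r < 2000)  # True: the flow is laminar

# Maximum speed that keeps N_R at or below 2000:
v_max = 2000 * eta / (2 * rho * r)
print(round(v_max, 2))  # 0.16 m/s
```

Using math.pi rather than 3.14 gives a Reynolds number slightly above the textbook's 1835, but the conclusion is unchanged.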
# Graphene: Displacement of atoms out of the 2D Plane

1. Oct 4, 2012

### Abelrevenge

Hello, I am trying to find a reference describing the z (or c) component of the basis vectors for graphene. I seem to recall that there is a slight bend such that half of the atoms lie slightly above the plane. However, every paper I have found references the perfect 2D lattice of graphene. Any help would be much appreciated.

2. Oct 6, 2012

### PhysTech

What property is it that you're trying to study? One example is the band gap at the $\textbf{K}$ and $\textbf{K}^\prime$ points. I can think of one reference where the gap at these points is discussed as a function of buckling of the honeycomb lattice. That reference is: http://prb.aps.org/abstract/PRB/v84/i19/e195430 However, this is discussed in the context of silicene, which is basically graphene but with carbon replaced by silicon. There are only quantitative differences between graphene and silicene, such as lattice constant, Fermi velocity, and spin-orbit coupling strength; qualitatively the properties of silicene are similar to those of graphene. You can see the gap as a function of buckling in figure 3 of the above reference.
### The relative reactivity of primary : secondary : tertiary hydrogen to chlorination is 1 : 3.8 : 5. Calculate the percentage of all the monochlorinated products obtained from 2-methylbutane.
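The solution (missing from the page) can be reconstructed: each monochloro product forms in proportion to (number of equivalent hydrogens) × (relative reactivity per hydrogen). In 2-methylbutane there are 9 primary H (6 on the two methyls attached to C2, 3 on C4), 2 secondary H (C3), and 1 tertiary H (C2). A sketch of the arithmetic; the product names are standard assignments added here for illustration:

```python
# Monochlorination of 2-methylbutane: yield weight = (# of H) x (reactivity per H).
# Relative reactivities per hydrogen: primary 1, secondary 3.8, tertiary 5.
products = {
    "1-chloro-2-methylbutane": (6, 1.0),  # 6 primary H on the two methyls at C2
    "1-chloro-3-methylbutane": (3, 1.0),  # 3 primary H on C4
    "2-chloro-3-methylbutane": (2, 3.8),  # 2 secondary H on C3
    "2-chloro-2-methylbutane": (1, 5.0),  # 1 tertiary H on C2
}
weights = {name: n_h * k for name, (n_h, k) in products.items()}
total = sum(weights.values())  # 6 + 3 + 7.6 + 5 = 21.6
for name, w in sorted(weights.items()):
    print("%s: %.1f%%" % (name, 100.0 * w / total))
# roughly 27.8%, 13.9%, 35.2%, and 23.1% respectively
```

Note that the secondary position dominates despite its lower per-hydrogen reactivity than tertiary, because it has two hydrogens.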
## General, Organic, and Biological Chemistry: Structures of Life (5th Edition) 81.0 g of $CO_2$ $C_3H_8$ : ( 1.008 $\times$ 8 )+ ( 12.01 $\times$ 3 )= 44.09 g/mol $$\frac{1 \space mole \space C_3H_8 }{ 44.09 \space g \space C_3H_8 } \space and \space \frac{ 44.09 \space g \space C_3H_8 }{1 \space mole \space C_3H_8 }$$ $CO_2$ : ( 12.01 $\times$ 1 )+ ( 16.00 $\times$ 2 )= 44.01 g/mol $$\frac{1 \space mole \space CO_2 }{ 44.01 \space g \space CO_2 } \space and \space \frac{ 44.01 \space g \space CO_2 }{1 \space mole \space CO_2 }$$ $$45.0 \space g \space C_3H_8 \times \frac{1 \space mole \space C_3H_8 }{ 44.09 \space g \space C_3H_8 } \times \frac{ 3 \space moles \space CO_2 }{ 1 \space mole \space C_3H_8 } \times \frac{ 44.01 \space g \space CO_2 }{1 \space mole \space CO_2 } = 135 \space g \space CO_2$$ $$actual \space yield =\frac{ percent \space yield \times theoretical \space yield }{100\%}$$$$actual \space yield =\frac{( 60.0 \%)\times ( 135 \space g \space CO_2 )}{100\%} = 81.0 \space g \space CO_2$$
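The same conversion chain can be scripted. A sketch using the molar masses computed above (the balanced combustion equation C3H8 + 5 O2 → 3 CO2 + 4 H2O supplies the 3:1 mole ratio):

```python
# Mass of CO2 from 45.0 g of propane (C3H8) at a 60.0% percent yield.
M_C3H8 = 3 * 12.01 + 8 * 1.008   # 44.09 g/mol
M_CO2 = 12.01 + 2 * 16.00        # 44.01 g/mol

moles_propane = 45.0 / M_C3H8            # about 1.02 mol
theoretical = moles_propane * 3 * M_CO2  # 3 mol CO2 per mol C3H8
actual = 0.600 * theoretical
print(round(theoretical), round(actual))  # 135 81
```

Working in moles first, then converting back to grams, keeps the percent-yield step as a simple final multiplication.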
# How to utilize the remainder theorem when the quotient is unknown?

I encountered this question, and I am unsure how to answer it.

When $$P(x)$$ is divided by $$x - 4$$, the remainder is $$13$$, and when $$P(x)$$ is divided by $$x + 3$$, the remainder is $$-1$$. Find the remainder when $$P(x)$$ is divided by $$x^2 - x - 12$$.

How would I proceed? Thank you in advance!

• HINT: $x^2-x-12=(x-4)(x+3)$ – Tito Eliatron Oct 12 '20 at 19:41
• I see. Is the answer 2x + 5 then? – zotz99 Oct 12 '20 at 19:52

Use the inverse of the isomorphism in the Chinese remainder theorem: as $$x^2-x-12=(x+3)(x-4)$$, we have an isomorphism \begin{align} K[X]/(X^2-X-12)&\xrightarrow[\quad]\sim K[X]/(X+3)\times K[X]/(X-4) \\ P\bmod(X^2-X-12)&\longmapsto(P\bmod (X+3), P\bmod (X-4))&&(K\text{ is the base field}) \end{align} and given a Bézout's relation $$\;U(X)(X+3)+V(X)(X-4)=1$$, the inverse isomorphism is given by $$(S\bmod (X+3), T\bmod(X-4))\longmapsto TU(X+3)+SV(X-4)\bmod(X^2\!-X-12) .$$ Now a Bézout's relation can be found with the extended Euclidean algorithm, but in the present case it is even shorter: $$(X+3)-(X-4)=7$$, so we simply have $$\frac17(X+3)-\frac17(X-4)=1$$ and given that $$\:P\bmod(X+3)=-1$$, $$P\bmod(X-4)=13$$, we obtain readily $$P\bmod(X^2-X-12)=\frac{13}7(X+3)+\frac17(X-4)=2X+5.$$

$$P(x)=(x^2-x-12)Q(x)+ax+b$$ $$P(4)=4a+b=13$$ $$P(-3)=-3a+b=-1$$ $$a=2, b=5$$ $$P(x)=(x^2-x-12)Q(x)+2x+5$$ $$R(x)=2x+5$$
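The accepted result can be sanity-checked numerically with a concrete polynomial; the cubic below is a hypothetical example built as (x² − x − 12)(x + 1) + 2x + 5, so it satisfies both given conditions:

```python
def eval_poly(coeffs, x):
    """Evaluate a polynomial by Horner's rule; coeffs are highest degree first."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# P(x) = x^3 - 11x - 7, which is (x^2 - x - 12)(x + 1) + 2x + 5 expanded.
P = [1, 0, -11, -7]
assert eval_poly(P, 4) == 13    # remainder on division by x - 4
assert eval_poly(P, -3) == -1   # remainder on division by x + 3

# Solve 4a + b = 13 and -3a + b = -1 for the remainder ax + b:
a = (13 - (-1)) / (4 - (-3))
b = 13 - 4 * a
print(a, b)  # 2.0 5.0, i.e. remainder 2x + 5
```

Any other choice of quotient gives a different P but the same linear system, hence the same remainder.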
MT Exercise-1-정태일

Exercise 1) Find the matrix representing each of the following quadratic forms:

1. $x^2+4xy+3y^2$
2. $x^2-y^2+z^2+4xz-5yz$
3. $x^2-2y^2-3z^2+4xy+6xz-8yz$
4. $3x_1y_1-2x_1y_2+5x_2y_1+7x_2y_2-8x_2y_3+4x_3y_2-x_3y_3$

Sol)

1)

2)

3)

4)
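For the quadratic forms in parts 1–3, the matrix is built by putting each squared-variable coefficient on the diagonal and half of each cross-term coefficient in the two symmetric off-diagonal slots. A sketch; the helper function is illustrative, not part of the exercise:

```python
from fractions import Fraction

def form_matrix(n, squares, crosses):
    """Symmetric matrix A with x^T A x equal to the quadratic form.
    squares[i] is the x_i^2 coefficient; crosses[(i, j)] (with i < j)
    is the x_i x_j coefficient, split in half across A[i][j] and A[j][i]."""
    A = [[Fraction(0)] * n for _ in range(n)]
    for i, c in squares.items():
        A[i][i] = Fraction(c)
    for (i, j), c in crosses.items():
        A[i][j] = A[j][i] = Fraction(c, 2)
    return A

# Part 1: x^2 + 4xy + 3y^2 -> entries equal [[1, 2], [2, 3]]
print(form_matrix(2, {0: 1, 1: 3}, {(0, 1): 4}))
# Part 2: x^2 - y^2 + z^2 + 4xz - 5yz (note the half-integer -5/2 entries)
print(form_matrix(3, {0: 1, 1: -1, 2: 1}, {(0, 2): 4, (1, 2): -5}))
# Part 3: x^2 - 2y^2 - 3z^2 + 4xy + 6xz - 8yz
print(form_matrix(3, {0: 1, 1: -2, 2: -3}, {(0, 1): 4, (0, 2): 6, (1, 2): -8}))
```

Part 4 is a bilinear form in separate variables x and y, so its matrix takes each coefficient of x_i y_j directly as entry (i, j) without the halving step.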
Read In Multiple Text Files to Calculate a Threshold

Python 2 0

Entering edit mode 4.4 years ago

I have ~ 400 text files of BLASTp results.

P_1L_GRL111.out (contents)

    NP_387917.1 ADZ06570.1 44.29 289 153 4 2 289 1 282 7e-77 236

I have a file that reads in all my P_1 files, sorts the e-values, and then calculates a threshold. I would like to convert this into a function. I have made some attempts to define a function that can read in all my files, but as of now, the function only reads all P_1 files, or all P_2 files. I also feel like the function is not sorting the values correctly.

    def threshold_proteinsVStarget(protein_Files):
        # List of Lactobacillus databases created from BLAST commands
        Lactobacillus_DB = ['L_GRL1112', 'L_214', 'L_CTV-05', 'L_JV-V01', 'L_ST1', 'L_MV-1A',
                            'L_202-4', 'L_224-1', 'L_JV-V03', 'L_MV-22', 'L_DSM_13335',
                            'LactinV_03V1b', 'SPIN_1401G', 'UPII_143_D', 'L_1153', 'L_269-3',
                            'L_JV-V16', 'L_49540']
        list_e_values = []
        for prot in protein_Files:
            for db in Lactobacillus_DB:
                line = open(prot + db + '.out').readline()
                print line
                line2 = line.strip('\n')
                fields = line2.split('\t')
                e_val = float(fields[10])
                list_e_values.append(e_val)
        return list_e_values

    file1 = ['P_3']
    print threshold_proteinsVStarget(file1)

Python E-Value • 2.9k views

1 Entering edit mode 4.4 years ago st.ph.n ★ 2.6k

You don't specify what the threshold is. You don't have to specify the prefixes of the blast output. Look into glob, to read all the files in the directory that end with .out, similarly to how you would do ls *.out. The code below will get you the e-values, from all files.

    #!/usr/bin/env python
    import glob

    for file in glob.glob('*.out'):
        with open(file, 'r') as f:
            for line in f:
                if float(line.strip().split('\t')[10]) < *X*:
                    print line.strip().split('\t')[0], '\t', line.strip().split('\t')[1], '\t', line.strip().split('\t')[10]

0 Entering edit mode

This is very useful....
I was trying to work with the glob module, but I initially thought it would be best to combine ALL my files into one text file, then sort, then return the e-value... but that was not working properly.

    import glob

    for file in glob.glob('/Users/sueparks/BlastP_Results/*'):
        myfile = open(file, 'r')  # open each file
        with open('merge_BLASTP_results.out', 'a') as f:
            for line in myfile:
                f.write(line)
        myfile.close()

1 Entering edit mode

Using python to merge your files is unnecessary. Just cat all your files into one, then run your code on it:

    cat *.out > allblast.out

Then you can use my above code, without the for loop with glob, and just open and read the file, append each e-value to a list, and sort it.

0 Entering edit mode

Thanks st.ph.n! My last question: if I write another function to check the e-value, how can I append the first two columns along with the e-value? My last step:

    if eval is less than threshold:
        # append fields [0], [1], [10]
        print NP_387917.1    ADZ06570.1    7e-77

0 Entering edit mode

See my updated answer. Replace X with the value you want. This will check the e-value as it loops through each line, instead of appending to a list. If you need a list, use a tuple or dictionary:

    evalues = []
    if float(line.strip().split('\t')[10]) < *X*:
        evalues.append((line.strip().split('\t')[0], line.strip().split('\t')[1], line.strip().split('\t')[10]))

    for e in evalues:
        print e[0], '\t', e[1], '\t', e[2]

0 Entering edit mode

The updated answer seemed to do the trick! How could I transform the updated code into a function?

0 Entering edit mode

If all you're going to do is return a list from the function, you don't need to use a function. Just append everything to a list as in the comment. If you do need a function, it should be trivial for you to write one. If this answer solves your problem, please accept the answer.

0 Entering edit mode 4.4 years ago

You are correct, I did not upload the entire code. I have a separate function for the threshold (the median calculation).
    import subprocess, sys, math

    def median(a):
        a.sort()
        if len(a) % 2 != 0:
            median = a[len(a)/2]
        else:
            median = float(a[len(a)/2] + a[(len(a)/2) - 1]) / 2
        return median

    # Define a function that reads in .out files to a list
    def threshold_proteinsVStarget(protein_Files):
        # List of Lactobacillus databases created from BLAST commands
        Lactobacillus_DB = ['L_GRL1112', 'L_214', 'L_CTV-05', 'L_JV-V01', 'L_ST1', 'L_MV-1A',
                            'L_202-4', 'L_224-1', 'L_JV-V03', 'L_MV-22', 'L_DSM_13335',
                            'LactinV_03V1b', 'SPIN_1401G', 'UPII_143_D', 'L_1153', 'L_269-3',
                            'L_JV-V16', 'L_49540']
        list_e_values = []
        for prot in protein_Files:
            # print prot
            for db in Lactobacillus_DB:
                line = open(prot + db + '.out').readline()
                print line
                line2 = line.strip('\n')
                fields = line2.split('\t')
                # print fields
                e_val = float(fields[10])
                list_e_values.append(e_val)
        return list_e_values

    list_e_values = threshold_proteinsVStarget(['P_3'])
    print median(list_e_values)
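Putting the thread together, one way to write the whole pipeline as a single function (Python 3 syntax here, unlike the Python 2 snippets above; the `*.out` layout and the tab-separated e-value in column 11 follow the question's sample file):

```python
import glob

def median(values):
    """Median of a list of numbers; mean of the middle two for even length."""
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2.0

def evalue_threshold(pattern='*.out'):
    """Median e-value over the first hit of every BLAST tabular file
    matching `pattern` (column 11 holds the e-value in -outfmt 6)."""
    e_values = []
    for path in glob.glob(pattern):
        with open(path) as handle:
            first = handle.readline().strip()
            if first:
                e_values.append(float(first.split('\t')[10]))
    return median(e_values)
```

Sorting a copy inside `median` (rather than calling `a.sort()`) avoids mutating the caller's list, a side effect of the version in the thread.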
• 301. University of Aveiro. University of Exeter. University of Exeter. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. University of Aveiro. University of Exeter. University of Newcastle Upon Tyne. Limits to n-type doping in Ge: formation of donor-vacancy complexes (2008). In: Diffusion and defect data, solid state data. Part A, Defect and diffusion forum, ISSN 1012-0386, E-ISSN 1662-9507, Vol. 273-276, pp. 93-98. Article in journal (Refereed)

Vacancies and interstitials in semiconductors play a fundamental role in both high temperature diffusion and low temperature radiation and implantation damage. In Ge, a serious contender material for high-speed electronics applications, vacancies have historically been believed to dominate most diffusion related phenomena such as self-diffusivity or impurity migration. This is to be contrasted with silicon, where self-interstitials also play decisive roles, despite the similarities in the chemical nature of both materials.
We report on density functional calculations of the formation and properties of vacancy-donor complexes in germanium. We predict that most vacancy-donor aggregates are deep acceptors, and together with their high solubilities, we conclude that they strongly contribute to inhibiting donor activation levels in germanium.

• 302. University of Exeter. University of Exeter. University of Newcastle Upon Tyne. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Oxygen and dioxygen centers in Si and Ge: density-functional calculations (2000). In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 62, no. 16, pp. 10824-10840. Article in journal (Refereed)

Ab initio density-functional calculations using Gaussian orbitals are carried out on large Si and Ge supercells containing oxygen defects. The formation energies, local vibrational modes, and diffusion or reorientation energies of Oi, O2i, VO, VOH, and VO2 are investigated. The piezospectroscopic tensors for Oi, VO, and VO2 are also evaluated. The vibrational modes of Oi in Si are consistent with the view that the defect has effective D3d symmetry at low hydrostatic pressures but adopts a buckled structure for large pressures. The anomalous temperature dependence of the modes of O2i is attributed to an increased buckling of Si-O-Si when the lattice contracts. The diffusion energy of the dimer is around 0.8 eV lower than that of Oi in Si and 0.6 eV in Ge. The dimer is stable against VO or VO2 formation and the latter defect has modes close to the reported 894-cm-1 band. The reorientation energies for O and H in VO and VOH defects are found to be a few tenths of an eV and are greater when the defect has trapped an electron.

• 303. School of Physics, University of Exeter. School of Physics, University of Exeter. Department of Physics, University of Newcastle. Institute of Solid State and Semiconductor Physics, Minsk.
Department of Electrical Engineering and Electronics and Centre for Electronic Materials, University of Manchester Institute of Science and Technology. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Department of Physics, University of Lund. Interstitial carbon-oxygen center and hydrogen related shallow thermal donors in Si (2002). In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 65, no. 1, pp. 014109-11. Article in journal (Peer reviewed). The interstitial carbon-oxygen defect is a prominent defect formed in e-irradiated Cz-Si containing carbon. Previous stress alignment investigations have shown that the oxygen atom weakly perturbs the carbon interstitial, but the lack of a high-frequency oxygen mode has been taken to imply that the oxygen atom is severely affected and becomes overcoordinated. Local vibrational mode spectroscopy and ab initio modeling are used to investigate the defect. We find new modes whose oxygen isotopic shifts give further evidence for oxygen overcoordination. Moreover, we find that the calculated stress-energy tensor and energy levels are in good agreement with experimental values. The complexes formed by adding a single H atom (CiOiH) or a pair of H atoms (CiOiH2), as well as by the addition of a second oxygen atom, are considered theoretically. It is shown that the first is bistable with a shallow donor and deep acceptor level, while the second is passive. The properties of CiOiH and CiO2iH are strikingly similar to the first two members of a family of shallow thermal donors that contain hydrogen.
• 304. School of Physics, University of Exeter. School of Physics, University of Exeter. Department of Physics, University of Newcastle. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Institute of Solid State and Semiconductor Physics, Minsk.
Centre for Electronic Materials, University of Manchester. Department of Physics, University of Lund. Over-coordinated oxygen in the interstitial carbon-oxygen complex (2001). In: Physica B, Condensed Matter, ISSN 0921-4526, E-ISSN 1873-2135, Vol. 308, pp. 305-308. Article in journal (Peer reviewed). The interstitial carbon-oxygen complex is one of the most prominent defects formed in e-irradiated Cz-Si containing carbon. Stress alignment investigations have shown that the oxygen atom only perturbs the carbon interstitial, but the lack of a high-frequency oxygen mode has been taken to imply that the oxygen atom is over-coordinated. Local vibrational mode spectroscopy and ab initio modeling are used to investigate the defect. We find new modes whose oxygen isotopic shifts, along with the piezospectroscopic stress-energy tensor, support the trivalent model, thus providing evidence for oxygen over-coordination.
• 305. School of Physics, University of Exeter. School of Physics, University of Exeter. Institute of Solid State and Semiconductor Physics, Minsk. Centre for Electronic Materials, University of Manchester. Department of Physics, University of Lund. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Department of Physics, University of Newcastle. Thermal double donors and quantum dots (2001). In: Physical Review Letters, ISSN 0031-9007, E-ISSN 1079-7114, Vol. 87, no. 23, pp. 235501. Article in journal (Peer reviewed). Combined local mode spectroscopy and ab initio modeling are used to demonstrate for the first time that oxygen atoms in thermal double donors (TDD) in Si are in close proximity. The observed vibrational modes in 16O, 18O, and mixed isotopic samples are consistent with a model involving [110] aligned oxygen chains made up of an insulating core lying between electrically active ends.
The model also explains the minute spin density observed on oxygen in TDD+ as well as the piezospectroscopic tensors of the donors. The analogy between the thermal donors and quantum dots is emphasized.
• 306. Department of Physics, University of Aveiro. School of Physics, University of Exeter. School of Natural Science, University of Newcastle upon Tyne. School of Natural Science, University of Newcastle upon Tyne. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Optically active erbium-oxygen complexes in GaAs (2004). In: Applied Physics Letters, ISSN 0003-6951, E-ISSN 1077-3118, Vol. 84, no. 10, pp. 1683-1685. Article in journal (Peer reviewed). Density functional modeling of Er and Er-O complexes in GaAs shows that Er impurities at the Ga site are not efficient channels for exciton recombination, but decorating O atoms play crucial roles in inhibiting Er precipitation and in creating the necessary conditions for electron-hole capture. Among the defects studied, the ErGaOAs and ErGa(OAs)2 models have the symmetry and carrier trap location close to the defect responsible for the strong 1.54 µm photoluminescence band in Er, O codoped GaAs.
• 307. Department of Physics, University of Aveiro, Campus Santiago. School of Physics, University of Exeter. Department of Physics, University of Aveiro, Campus Santiago. Department of Physics, University of Aveiro, Campus Santiago. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. School of Natural Science, University of Newcastle upon Tyne. Electronic structure and Jahn-Teller instabilities in a single vacancy in Ge (2005). In: Journal of Physics: Condensed Matter, ISSN 0953-8984, E-ISSN 1361-648X, Vol. 17, no. 48, pp. L521-L527. Article in journal (Peer reviewed). Density functional modelling studies of the single vacancy in large Ge clusters are presented.
We take a careful look at the origin of Jahn-Teller instabilities as a function of the vacancy net charge, resulting in a variety of structural relaxations. By comparing electron affinities of the vacancy with those from defects with well-established gap states, we were able to estimate three acceptor states for the vacancy, at E(-/0) ≤ Ev+0.2 eV and E(=/-) ≤ Ec-0.5 eV, with a third acceptor level also within the gap. As opposed to the Si vacancy, the defect in Ge is not a donor. We also show that these dissimilarities have fundamental consequences for the electronic/atomic picture of other centres, such as transition metals in germanium crystals.
• 308. Department of Physics, University of Aveiro. School of Physics, University of Exeter. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. School of Natural Science, University of Newcastle upon Tyne. The formation, dissociation and electrical activity of divacancy-oxygen complexes in Si (2003). In: Physica B, Condensed Matter, ISSN 0921-4526, E-ISSN 1873-2135, Vol. 340, pp. 523-527. Article in journal (Peer reviewed). Density functional calculations are carried out on divacancy-oxygen (V2O and V2O2) complexes in silicon, paying particular attention to their formation and dissociation mechanisms as well as their electrical activity. The formation of V2O around 220°C is controlled by the diffusion of V2 to immobile oxygen traps, while it dissociates around 300°C into VO and V. V2O and V2O2 are found to possess deep single and double acceptor levels as well as deep donor levels similar to those of V2.
• 309. Department of Physics, University of Aveiro. Department of Physics, University of Aveiro. School of Physics, University of Exeter. School of Physics, University of Exeter. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. School of Natural Science, University of Newcastle upon Tyne.
Ab initio modeling of defect levels in Ge clusters and supercells (2006). In: Materials Science in Semiconductor Processing, ISSN 1369-8001, E-ISSN 1873-4081, Vol. 9, no. 4-5, pp. 477-483. Article in journal (Peer reviewed). Most density-functional studies of defects in semiconductors invariably use (i) a supercell that imitates the host crystal, as well as (ii) a local treatment of the exchange-correlation potential. The first approximation has had an enormous success in many materials, provided that the size of the cell is large enough to minimize long-range interactions between the infinite lattice of defects. For instance, these may arise from strain fields or from the overlap between shallow defect states. The second approximation, when combined with the periodic boundary conditions, leads to an essentially metallic density of states in Ge, which can compromise any investigation of electronic transitions involving gap levels. Here, we report on two approaches to surmount these difficulties, namely (i) to open the gap by reducing the host to a Ge cluster of atoms whose states are confined by a surface potential, and (ii) to use supercells, but choosing the Brillouin zone sampling scheme carefully, taking k-points that minimize the admixture between defect-related gap states and the host density of states. These methods are utilized in the calculation of the electronic structure of the vacancy, divacancy, and vacancy-donor pairs in Ge.
• 310. Department of Physics, University of Aveiro, Campus Santiago. Department of Physics, University of Aveiro, Campus Santiago. School of Physics, University of Exeter. School of Physics, University of Exeter. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. School of Natural Science, University of Newcastle upon Tyne. Calculation of deep carrier traps in a divacancy in germanium crystals (2006). In: Applied Physics Letters, ISSN 0003-6951, E-ISSN 1077-3118, Vol. 88, no. 9, pp.
091919. Article in journal (Peer reviewed). We present an ab initio density functional study of the electronic structure and electrical properties of divacancies in Ge. Although the defect suffers essentially different Jahn-Teller distortions compared to the analogous defect in Si, the relative location of the electrical levels in the gap does not differ radically between the two materials. We propose a V2 model that is responsible for a donor level at Ev+0.03 eV, a first acceptor state at Ev+0.3 eV, and a second acceptor level at Ec-0.4 eV. The latter is only 0.1 eV deeper than an electron trap that has recently been linked to a divacancy in proton-implanted material.
• 311. Department of Physics, University of Aveiro. Department of Physics, University of Aveiro. School of Physics, University of Exeter. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. School of Natural Science, University of Newcastle upon Tyne. Electronic structure of divacancy-hydrogen complexes in silicon (2003). In: Journal of Physics: Condensed Matter, ISSN 0953-8984, E-ISSN 1361-648X, Vol. 15, no. 39, pp. S2809-S2814. Article in journal (Peer reviewed). Divacancy-hydrogen complexes (V2H and V2H2) in Si are studied by ab initio modelling using large supercells. Here we pay special attention to their electronic structure, showing that these defects produce deep carrier traps. Calculated electrical gap levels indicate that V2H2 is an acceptor, whereas V2H is amphoteric, with levels close to those of the well-known divacancy. Finally, our results are compared with the available data from deep level transient spectroscopy and electron paramagnetic resonance experiments.
• 312. Department of Physics, University of Aveiro, Campus Santiago. Department of Physics, University of Aveiro, Campus Santiago. Institut for Fysik og Astronomi, Århus Universitet. School of Physics, University of Exeter.
Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. School of Natural Science, University of Newcastle upon Tyne. Local vibrations on hydrogen dimers in dilute SiGe crystalline solutions (2005). In: Materials Science & Engineering B. Solid-State Materials for Advanced Technology, ISSN 0921-5107, E-ISSN 1873-4944, Vol. 124/125, no. Suppl., pp. 363-367. Article in journal (Peer reviewed). Atomic hydrogen is a concomitant impurity in semiconductors. Its presence in Si, Ge and SiGe alloys has been established by means of paramagnetic resonance, optical, electrical and theoretical modeling studies. Hydrogen self-trapping is known to occur in Si and Ge, resulting in the formation of molecular hydrogen and H2* interstitial dimers. Here we report on the properties of H2* complexes in dilute SiGe alloys, using an ab initio density functional method. It is found that these complexes form preferentially within Si-rich regions. H2* dimers in Si-rich alloys show vibrational properties similar to those in pure Si. On the other hand, in Ge-rich material the minority Si atoms act as nucleation sites, with the consequent formation of at least one preferential H2*-Si defect variant showing a distinct vibrational activity.
• 313. Department of Physics, University of Aveiro, Campus Santiago. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Department of Physics, University of Aveiro, Campus Santiago. Department of Physics, University of Aveiro, Campus Santiago. School of Physics, University of Exeter. School of Natural Science, University of Newcastle upon Tyne. Donor-vacancy complexes in Ge: cluster and supercell calculations (2006). In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 73, no. 23, pp.
235213. Article in journal (Peer reviewed). We present a comprehensive spin-density functional modeling study of the structural and electronic properties of donor-vacancy complexes (PV, AsV, SbV, and BiV) in Ge crystals. Special attention is paid to spurious results related to the choice of boundary conditions (supercell versus cluster approach), the resulting band-gap width, and the choice of the points used to sample the Brillouin zone. The underestimated energy gap, resulting from the periodic conditions together with the local-density approximation to the exchange-correlation energy, leads to defect-related gap states that are strongly coupled to crystalline states within the center of the zone. This is shown to have a strong effect even on relative energies. Our results indicate that in all E centers the donor atom occupies a nearly substitutional site, as opposed to the split-vacancy form adopted by the SnV complex in Si. The E centers can occur in four charge states, from positive to double negative, and produce occupancy levels at E(0/+)=Ev+0.1 eV, E(-/0)=Ev+0.3 eV, and E(=/-)=Ec-0.3 eV.
• 314. Department of Physics, University of Aveiro, Campus Santiago. Photon Science Institute, University of Manchester. Photon Science Institute, University of Manchester. Photon Science Institute, University of Manchester. Scientific-Practical Materials Research Center of NAS of Belarus. Scientific-Practical Materials Research Center of NAS of Belarus. Department of Physics, Oslo University. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. School of Electrical, Electronic and Computer Engineering, University of Newcastle upon Tyne. Electronic and dynamical properties of the silicon trivacancy (2012). In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol.
86, no. 17. Article in journal (Peer reviewed). The trivacancy (V3) in silicon has recently been shown to be a bistable center in the neutral charge state, with a fourfold-coordinated configuration, V3[FFC], lower in energy than the (110) planar one [V. P. Markevich et al., Phys. Rev. B 80, 235207 (2009)]. Transformations of the V3 defect between different configurations, its diffusion, and its disappearance upon isochronal and isothermal annealing of electron-irradiated Si:O crystals are reported from joint deep level transient spectroscopy measurements and first-principles density-functional calculations. Activation energies and respective mechanisms for V3 transformation from the (110) planar configuration to the fourfold-coordinated structure have been determined. The annealing studies demonstrate that V3 is mobile in Si at T > 200 °C and in oxygen-rich material can be trapped by interstitial oxygen atoms, resulting in the appearance of V3O complexes. The calculations suggest that V3 motion takes place via consecutive FFC/planar transformation steps. The activation energy for the long-range diffusion of the V3 center has been derived and agrees with the atomic motion barrier from the calculations.
• 315. University of Aveiro. University of Aveiro. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. University of Exeter. University of Exeter. University of Exeter. University of Newcastle Upon Tyne. Early stage donor-vacancy clusters in germanium (2007). In: Journal of Materials Science: Materials in Electronics, ISSN 0957-4522, E-ISSN 1573-482X, Vol. 18, no. 7, pp. 769-773. Article in journal (Peer reviewed). There is considerable experimental evidence that vacancies in Ge dominate several solid state reactions that range from self-diffusivity to metal and dopant transport. It is therefore vital that we fully understand how vacancies interact with other point defects in Ge.
Here we look at the properties of small donor-vacancy complexes (SbnVm with m, n ≤ 2) in Ge by ab initio density functional modeling. Particular attention has been paid to binding energies and to the electronic activity of the complexes. We found that all aggregates may contribute to the n → p type conversion that is typically observed under prolonged MeV irradiation conditions. In general, SbnVm defects are double acceptors. It is also suggested that spontaneous formation of Sb3V complexes may limit the activation level of donors introduced by ion implantation.
• 316. Harbin University of Science and Technology. Adam Mickiewicz University, Poznan. University of Jammu. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Composition operators in Orlicz spaces (2004). In: Journal of the Australian Mathematical Society, ISSN 1446-7887, E-ISSN 1446-8107, Vol. 76, no. 2, pp. 189-206. Article in journal (Peer reviewed). Composition operators Cτ between Orlicz spaces Lφ(Ω, Σ, μ), generated by measurable and nonsingular transformations τ from Ω into itself, are considered. We characterize boundedness and compactness of the composition operator between Orlicz spaces in terms of properties of the mapping τ, the function φ and the measure space (Ω, Σ, μ). These results generalize earlier results known for Lp-spaces.
• 317. Department of Mathematics, Faculty of Civil Engineering, University of Zagreb. Faculty of Textile Technology, University of Zagreb. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. A generalized Simpson, trapezoid and Ostrowski type inequality for convex functions (2005). In: Soochow Journal of Mathematics, ISSN 0250-3255, Vol. 31, no. 4, pp.
617-627. Article in journal (Peer reviewed). In this paper, the authors prove a new generalization and unification of some recent generalizations of Simpson, trapezoid and Ostrowski type inequalities for convex functions. Both the case with the usual canonical partition and the case with the generalized partition are considered.
• 318. Department of Mathematics, Faculty of Civil Engineering, University of Zagreb. Faculty of Textile Technology, University of Zagreb. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. A note on Simpson type numerical integration (2003). In: Soochow Journal of Mathematics, ISSN 0250-3255, Vol. 29, no. 2, pp. 191-200. Article in journal (Peer reviewed). Some new results concerning Simpson type numerical integration are proved, discussed and compared with other similar results in the literature. A technique to improve the error estimates in some other recent related results is also pointed out.
• 319. Department of Mathematics, Technion - Israel Institute of Technology. University of Memphis. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Department of Mathematical Analysis, Faculty of Mathematics and Physics, Charles University, Sokolovsk. Are generalized Lorentz "spaces" really spaces? (2003). Report (Other academic).
• 320. Department of Mathematics, Technion - Israel Institute of Technology. University of Memphis. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Department of Mathematical Analysis, Faculty of Mathematics and Physics, Charles University, Sokolovsk. Are generalized Lorentz "spaces" really spaces? (2004). In: Proceedings of the American Mathematical Society, ISSN 0002-9939, E-ISSN 1088-6826, Vol. 132, no. 12, pp.
3615-3625. Article in journal (Peer reviewed). Let $w$ be a non-negative measurable function on $(0,\infty)$, non-identically zero, such that $W(t)=\int_0^t w(s)\,ds<\infty$ for all $t>0$. The authors study conditions on $w$ for the Lorentz spaces $\Lambda^p(w)$ and $\Lambda^{p,\infty}(w)$, defined by the conditions $\int_0^\infty (f^*(t))^p w(t)\,dt<\infty$ and $\sup_{0<t<\infty} f^*(t)W(t)^{1/p}<\infty$ respectively, to be linear spaces. For the Orlicz-Lorentz space $\Lambda_{\varphi,w}$ it is shown that, if $\varphi$ satisfies the $\Delta_2$-condition and $w>0$, then $\Lambda_{\varphi,w}$ is a linear space if and only if $W$ satisfies the $\Delta_2$-condition.
• 321. Department of Mathematics, Technion - Israel Institute of Technology. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Washington University, St. Louis. Lund Institute of Technology. Jaak Peetre, the man and his work (2002). In: Function Spaces, Interpolation Theory and Related Topics: Proceedings of the International Conference in Honour of Jaak Peetre on His 65th Birthday, Lund, Sweden, August 17-22, 2000 / [ed] Michael Cwikel, Berlin: Walter de Gruyter, 2002, pp. 1-22. Conference paper (Peer reviewed).
• 322. Dasht, Johan. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Rothe's method for parabolic equations on non-cylindrical domains (2006). In: Advances in Algebra and Analysis, ISSN 0973-2306, Vol. 1, no. 1, pp. 59-80. Article in journal (Peer reviewed).
• 323. Dasht, Johan. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Degeneracy in stochastic homogenization (2003). In: Proceedings of the International Conference on Composites Engineering: ICCE/10 / [ed] David Hui, 2003. Conference paper (Peer reviewed).
• 324. Dasht, Johan. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper.
Numerical analysis of the convergence in homogenization of composites (2002). In: Proceedings of the International Conference on Composites Engineering: ICCE/9 / [ed] David Hui, 2002. Conference paper (Peer reviewed).
• 325. Department of Numerical Methods and Analysis, Technical University, Sofia. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Sharp generalized Carleman inequalities with minimal information about the spectrum (1994). In: Mathematische Nachrichten, ISSN 0025-584X, E-ISSN 1522-2616, Vol. 168, pp. 61-77. Article in journal (Peer reviewed).
• 326. Department of Numerical Methods and Analysis, Technical University, Sofia. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. On sharpness, applications and generalizations of some Carleman type inequalities (1996). In: Tohoku Mathematical Journal, ISSN 0040-8735, Vol. 48, no. 1, pp. 1-22. Article in journal (Peer reviewed).
• 327. Narvik University College, 8505 Narvik, Norway. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Narvik University College, 8505 Narvik, Norway. A new generation of wavelet shrinkage: adaptive strategies based on composition of Lorentz-type thresholding and Besov-type non-threshold shrinkage (2007). In: Wavelet Applications in Industrial Processing V: 11-12 September 2007, Boston, Massachusetts, USA / [ed] Frédéric Truchetet; Olivier Laligant, Bellingham, Wash.: SPIE - International Society for Optical Engineering, 2007, pp. 676304. Conference paper (Peer reviewed). This article is a systematic overview of compression, smoothing and denoising techniques based on shrinkage of wavelet coefficients, and proposes (in Sections 5 and 6) an advanced technique for generating enhanced composite wavelet shrinkage strategies.
• 328. Narvik University College, 8505 Narvik, Norway.
Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Relations between functional norms of a non-negative function and its square root on the positive cone of Besov and Triebel-Lizorkin spaces (2009). In: Applications of Mathematics in Engineering and Economics: Proceedings of the 35th International Conference, Sozopol, Bulgaria, 7-12 June 2009 / [ed] George Venkov; Ralitza Kovacheva; Vesela Pasheva, Melville, NY: American Institute of Physics (AIP), 2009, pp. 3-15. Conference paper (Peer reviewed). In this communication we study in detail the relations between the smoothness of f and √f in the case when the smoothness of the univariate non-negative function f is measured via Besov and Triebel-Lizorkin space scales. The results obtained can also be considered as embedding theorems for usual Besov and Triebel-Lizorkin spaces and their analogues in the Hellinger metric. These results can be used in constrained approximation using wavelets, with applications to probability density estimation in speech recognition, non-negative non-parametric regression-function estimation in positron-emission tomography (PET) imaging, shape/order-preserving and/or one-sided approximation, and many others.
• 329. Narvik University College, 8505 Narvik, Norway. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Sharp error estimates for approximation by wavelet frames in Lebesgue spaces (2003). In: Journal of Analysis and Applications, ISSN 0972-5954, Vol. 1, no. 1, pp. 11-31. Article in journal (Peer reviewed).
• 330. Narvik University College, 8505 Narvik, Norway. Narvik University College, 8505 Narvik, Norway. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper.
Wavelet compression, data fitting and approximation based on adaptive composition of Lorentz-type thresholding and Besov-type non-threshold shrinkage (2010). In: Large-Scale Scientific Computing: 7th International Conference, LSSC 2009, Sozopol, Bulgaria, June 4-8, 2009 / [ed] Ivan Lirkov; Svetozar Margenov; Jerzy Wasniewski, Berlin: Springer Science+Business Media B.V., 2010, pp. 738-746. Conference paper (Peer reviewed). In this study we initiate the investigation of a new advanced technique, proposed in Section 6 of [3], for generating adaptive Besov-Lorentz composite wavelet shrinkage strategies. We discuss some advantages of the Besov-Lorentz approach compared to firm thresholding.
• 331. Luleå tekniska universitet. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Process capability plots: a quality improvement tool (1999). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 15, no. 3, pp. 213-227. Article in journal (Peer reviewed).
• 332. Luleå tekniska universitet. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Process capability studies for short production runs (1998). In: International Journal of Reliability, Quality and Safety Engineering (IJRQSE), ISSN 0218-5393, Vol. 5, no. 4, pp. 383-401. Article in journal (Peer reviewed). The current trend in modern production is directed towards shorter and shorter production runs. The two major reasons for this trend are the rapid spread of the just-in-time (JIT) philosophy and the constantly increasing multiplicity of customer demands. The short runs of modern production not only constitute a challenge for production management, but also cause problems when applying traditional statistical methods designed for large sets of data. One of these methods is process capability studies.
Since theories on how to use process capability studies in short production environments are incomplete, the aim of this paper is to present some ideas which will partly fill this gap. The theories of process capability studies for short runs presented are based on the ideas of focusing on the process, not on the products, and of using data transformation. By using the transformation presented, it is possible to conduct process capability studies in a traditional, straightforward manner. A simulation study shows that the suggested transformation technique works satisfactorily in real situations. Finally, the ....-plot is introduced as a method of interpreting and analyzing the capability of a short-run production process. By using the ....-plot, additional information is obtained concerning the capability of a process, compared to using traditional process capability indices only.
• 333. School of Physics, University of Exeter. School of Physics, University of Exeter. School of Physics, University of Exeter. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Departamento de Fisica, Universidade de Aveiro. Department of Physics, University of Newcastle. Nitrogen-Hydrogen Defects in GaP (1998). In: Physica Status Solidi B, Basic Research, ISSN 0370-1972, E-ISSN 1521-3951, Vol. 210, no. 2, pp. 321-326. Article in journal (Peer reviewed). Models of the nitrogen-hydrogen defect in GaP, which contain one and two H atoms, are investigated using ab initio density functional cluster theory. We find that a single H atom binding to N possesses two infrared absorption frequencies close to those attributed to an NH2 defect. The modes shift with its charge state, consistent with the photo-sensitivity found for the defect. A third mode observed for this centre is assumed to be an overtone of the bend mode.
The isotope shifts of the calculated modes are in excellent agreement with experiment, in contrast with the model which contains two H atoms.
• 334. Department of Mathematics, University of Timisoara. Faculty of Textile Technology, University of Zagreb. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Some inequalities of Hadamard type (1995). In: Soochow Journal of Mathematics, ISSN 0250-3255, Vol. 21, no. 3, pp. 335-341. Article in journal (Peer reviewed).
• 335. Department of Mathematics, University of Timisoara. Faculty of Textile Technology, University of Zagreb. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Properties of some functionals related to Jensen's inequality (1995). In: Acta Mathematica Hungarica, ISSN 0236-5294, E-ISSN 1588-2632, Vol. 69, no. 4, pp. 129-143. Article in journal (Peer reviewed).
• 336. Luleå tekniska universitet. Luleå tekniska universitet. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Flervariabelanalys med numeriska metoder för tekniska högskolor [Multivariable calculus with numerical methods for institutes of technology] (1987). Book (Other (popular science, debate, etc.)).
• 337. Luleå tekniska universitet. Luleå tekniska universitet. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Flervariabelanalys med numeriska metoder [Multivariable calculus with numerical methods] (1990). Book (Other (popular science, debate, etc.)).
• 338. Luleå tekniska universitet. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Creative Teaching by Mistakes (1980). In: The College Mathematics Journal, ISSN 0746-8342, E-ISSN 1931-1346, Vol. 11, no. 5, pp. 296-300. Article in journal (Peer reviewed).
• 339. University of Zagreb, Croatia. University of Zagreb, Croatia. Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, Byggkonstruktion och brand.
Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, Byggkonstruktion och brand. Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, Byggkonstruktion och brand. School of Civil Engineering, Southeast University, Nanjing, China. Damage Detection in Structures – Examples2019Inngår i: IABSE Symposium 2019: Towards a Resilent Built Environment - Risk and Asset Management, 2019Konferansepaper (Fagfellevurdert) Damage assessment of structures includes estimation of location and severity of damage. Quite often it is done by using changes of dynamic properties, such as natural frequencies, mode shapes and damping ratios, determined on undamaged and damaged structures. The basic principle is to use dynamic properties of a structure as indicators of any change of its stiffness and/or mass. In this paper, two new methods for damage detection are presented and compared. The first method is based on comparison of normalised modal shape vectors determined before and after damage. The second method uses so-called 𝑙1-norm regularized finite element model updating. Some important properties of these methods are demonstrated using simulations on a Kirchhoff plate. The pros and cons of the two methods are discussed. Unique aspects of the methods are highlighted. • 340. School of Physics, University of Exeter. Max-Planck-Institut für Eisenforschung GmbH. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Department of Physics, University of Newcastle. School of Physics, University of Exeter. Effect of charge on the movement of dislocations in SiC2006Inngår i: Applied Physics Letters, ISSN 0003-6951, E-ISSN 1077-3118, Vol. 88, nr 8, s. 
82113-Artikkel i tidsskrift (Fagfellevurdert) SiC bipolar devices show a degradation under forward-biased operation which has been linked with a current induced motion of one of the two glide dislocations having either Si or C core atoms. We have carried out calculations of the core structures and dynamics of partial dislocations in 3C and 2H-SiC. In this work we present results on the effect of charge on the dislocation kinks. The calculations show that silicon kinks have a deep filled band above the valence band and the trapping of holes into this band permits motion at room temperature. • 341. School of Physics, University of Exeter. School of Physics, University of Exeter. School of Physics, University of Exeter. Department of Physics, University of Newcastle. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Alphabet luminescence lines in 4H-SiC2002Inngår i: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 65, nr 18, s. 184108-4Artikkel i tidsskrift (Fagfellevurdert) First-principles density functional calculations are used to investigate antisite pairs in 4H-SiC. We show that they are likely to be formed in close proximity under ionizing conditions, and they possess a donor level and thermal stability consistent with the series of 40 photoluminescent lines called the alphabet lines. Moreover, the gap vibrational mode of the silicon antisite defect is close to a phonon replica of the b1 line and possesses a weak isotopic shift with 13C in agreement with observation. • 342. School of Physics, University of Exeter. School of Physics, University of Exeter. Max-Planck-Institut für Eisenforschung GmbH. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. School of Natural Science, University of Newcastle upon Tyne. Movement and pinning of dislocations in SiC2007Inngår i: Physica Status Solidi. 
C, Current topics in solid state physics, ISSN 1610-1634, E-ISSN 1610-1642, Vol. 4, nr 8, s. 2923-2928Artikkel i tidsskrift (Fagfellevurdert) SiC bipolar devices show a degradation under forward-biased operation due to the formation and rapid propagation of stacking faults in the active region of the device. It is believed that the observed rapid stacking fault growth is due to a recombination-enhanced dislocation glide (REDG) mechanism at the bordering partial dislocations having either Si or C core atoms. We investigated the effect of charge on the dislocation kinks and found that only silicon kinks have a deep filled band above the valence band. Trapping of holes into this band permits dislocation glide at room temperature. This mechanism is distinct from REDG as it requires only holes to be trapped at a Si partial and not in addition electrons in stacking fault states. We furthermore looked at the pinning of dislocations by nitrogen and boron and found a strong pinning of the C core by N and of the Si core by B. • 343. School of Physics, University of Exeter. School of Physics, University of Exeter. Department of Physics, University of Newcastle. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Grown-in and radiation-induced defects in 4H-SiC2005Inngår i: Semiconductor defect engineering: materials, synthetic structures and devices : symposium held March 28 - April 1 2005, San Francisco, California, U.S.A. ; [Symposium E, held at the 2005 MRS spring meeting] / [ed] S. Ashok, Warrendale, Pa: Materials Research Society, 2005, s. 3-13Konferansepaper (Fagfellevurdert) SiC is a material that seems ideal for high-power, high frequency and high temperature electronic devices. It does not suffer from large reverse recovery inefficiencies typical for silicon when switching. In contrast to silicon. SiC is however difficult to dope by diffusion, and instead ion-implantation is used to achieve selective area doping. 
The drawback of this technique is that irradiating the crystal with dopant atoms creates a great, deal of lattice damage including vacancies, interstitials, antisites and impurity-radiation defect complexes. Although many of the point defects can be eliminated through thermal annealing, some however, e.g. the photoluminescence (PL) D1 and DLTS Z1/Z2 centers in 4H-SiC, are stable to high temperatures. In this polytype, D1 and the related alphabet lines are the most prominent PL signals. The latter can be seen directly after low energy irradiation while D1 usually dominates the PL spectrum of implanted and irradiated SiC after annealing. Not only implantation but also rapid growth of SiC by CVD methods leads to a deterioration in quality with an increase in electrically active grown in defects. Among these, the Z1/Z2 defects are dominant in n-type 4H-SiC. as well as material that has been exposed to radiation. We use first principles density functional calculations to investigate defect models for the above mentioned defects in 4H-SiC and relate their electrical and optical activity to experiments • 344. School of Physics, University of Exeter. School of Physics, University of Exeter. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. School of Natural Science, University of Newcastle upon Tyne. Density functional theory calculation of the DI optical center in SiC2006Inngår i: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 74, nr 14, s. 144106-1Artikkel i tidsskrift (Fagfellevurdert) The DI center is a prominent defect which is detected in as-grown or irradiated SiC. It is unusual in that its intensity grows with heat treatments and survives anneals of 1700 °C. It has been assigned recently to either a close-by antisite pair or to the close-by antisite pair adjacent to a carbon antisite. 
We show here using local density functional calculations that these defects are not stable enough to account for DI. Instead, we assign DI to an isolated Si antisite and the four forms of the close-by antisite pair in 4H-SiC to the a, b, c, and d members of the alphabet series. The assignments allow us to understand the concentration of DI following growth, the recombination enhanced destruction of these alphabet defects and the annealing behavior of the remaining members of the series. • 345. University of Exeter. University of Exeter. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. University of Newcastle Upon Tyne. Shallow acceptors in GaN2007Inngår i: Applied Physics Letters, ISSN 0003-6951, E-ISSN 1077-3118, Vol. 91, nr 13, s. 132105-Artikkel i tidsskrift (Fagfellevurdert) Recent high resolution photoluminescence studies of high quality Mg doped GaN show the presence of two acceptors. One is due to Mg and the other labeled A1 has a shallower acceptor defect. The authors investigate likely candidates for this shallow acceptor and conclude that CN is the most likely possibility. The authors also show that the CN is passivated by H and the passivated complex is more stable than MgGa-H • 346. School of Physics, University of Exeter. School of Physics, University of Exeter. School of Physics, University of Exeter. School of Physics, University of Exeter. School of Physics, University of Exeter. Department of Physics, University of Newcastle. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Self-interstitial clusters in silicon2001Inngår i: Physica. B, Condensed matter, ISSN 0921-4526, E-ISSN 1873-2135, Vol. 308-310, s. 
454-457Artikkel i tidsskrift (Fagfellevurdert) Although there have been made many calculations for structures of the self-interstitial in Si and small aggregates of interstitials, In, there have been relatively few attempts to relate these defects with experimental data. Here, we discuss the assignments of the self-interstitial to the AA12 EPR centre and the di-interstitial to the P6 EPR centre. • 347. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. A software package for sparse orthogonal factorization and updating2002Inngår i: ACM Transactions on Mathematical Software, ISSN 0098-3500, E-ISSN 1557-7295, Vol. 28, nr 4, s. 448-482Artikkel i tidsskrift (Fagfellevurdert) Although there is good software for sparse QR factorization, there is little support for updating and downdating, something that is absolutely essential in some linear programming algorithms, for example. This article describes an implementation of sparse LQ factorization, including block triangularization, approximate minimum degree ordering, symbolic factorization, multifrontal factorization, and updating and downdating. The factor Q is not retained. The updating algorithm expands the nonzero pattern of the factor L, which is reflected in the dynamic representation of L. The block triangularization is used as an ordering for sparsity' rather than as a prerequisite for block backward substitution. In symbolic factorization, something called element counters' is introduced to reduce the overestimation of the number of nonzeros that the commonly used methods do. Both the approximate minimum degree ordering and the symbolic factorization are done without explicitly forming the nonzero pattern of the symmetric matrix in the corresponding normal equations. Tests show that the average time used for a single update or downdate is essentially the same as the time used for a single forward or backward substitution. 
Other parts of the implementation show the same range of performance as existing code, but cannot be replaced because of the special character of the systems that are solved. • 348. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. CMregr - A Matlab software package for finding CM-Estimates for Regression2004Inngår i: Journal of Statistical Software, ISSN 1548-7660, E-ISSN 1548-7660, Vol. 10, nr 3, s. 1-11Artikkel i tidsskrift (Fagfellevurdert) This paper describes how to use the Matlab software package CMregr, and also gives some limited information on the CM-estimation problem itself. For detailed information on the algorithms used in CMregr as well as extensive testings, please refer to Arslan, Edlund & Ekblom (2002) and Edlund & Ekblom (2004). • 349. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Linear M-estimation algorithms in optimization1996Licentiatavhandling, med artikler (Annet vitenskapelig) • 350. Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper. Linear M-estimation with bounded variables1997Inngår i: BIT Numerical Mathematics, ISSN 0006-3835, E-ISSN 1572-9125, Vol. 37, nr 1, s. 13-23Artikkel i tidsskrift (Fagfellevurdert) A subproblem in the trust region algorithm for non-linear M-estimation by Ekblom and Madsen is to find the restricted step. It is found by calculating the M-estimator of the linearized model, subject to anL 2-norm bound on the variables. In this paper it is shown that this subproblem can be solved by applying Hebden-iterations to the minimizer of the Lagrangian function. The new method is compared with an Augmented Lagrange implementation. 
45678910 301 - 350 of 1366 Referera Referensformat • apa • harvard1 • ieee • modern-language-association-8th-edition • vancouver • Annet format Fler format Språk • de-DE • en-GB • en-US • fi-FI • nn-NO • nn-NB • sv-SE • Annet språk Fler språk Utmatningsformat • html • text • asciidoc • rtf
# Right (bi)adjoint of the inclusion of $\mathbf{Grpd}$ in $\mathbf{Cat}$

Let $\mathbf{Grpd}$ and $\mathbf{Cat}$ be, respectively, the 2-categories of small groupoids and of small categories. At the 1-categorical level, the inclusion $\mathbf{Grpd}\rightarrow\mathbf{Cat}$ has a right adjoint, namely the core. Thinking about the definition of the core of a category, I don't think there is a way of extending it to natural transformations: if I have two functors $F, G: \mathcal{C}\rightarrow\mathcal{D}$ and a natural transformation $\alpha: F\Rightarrow G$, then $\mathrm{core}(\alpha) : \mathrm{core}(F)\Rightarrow \mathrm{core}(G)$ would have to be a natural isomorphism, and it is easy to find examples for which this cannot be true (one is the determinant).

So my question is: does the inclusion $\mathbf{Grpd}\rightarrow\mathbf{Cat}$ have a right biadjoint? My impression is that the answer should be no, but I don't know how to prove it.

**Answer.** No, it doesn't. Bicategorical left adjoints preserve tensors with small categories. (If $x\in B$ is an object of a bicategory and $J$ is a category, the tensor $x\otimes J$ represents the pseudofunctor $y\mapsto B(x,y)^J$.) If $x$ is a groupoid and $[1]$ denotes the category freely generated by the graph $0\to 1$, then $x\otimes [1]$ computed in $\mathbf{Grpd}$ is just $x\times I$, where $I$ is the groupoid freely generated by an isomorphism. But of course, after including $x$ into the bicategory of categories, $x\otimes [1]$ is simply $x\times [1]$, so the inclusion of groupoids into categories cannot be a left biadjoint.
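Spelling out the tensor computation behind the answer (the explicit displays below are my own reconstruction of the argument, not part of the original post):

```latex
% Tensors in a bicategory B: B(x \otimes J, y) \simeq B(x, y)^J,
% pseudonaturally in y.
%
% In a groupoid every arrow is invertible, so a functor from [1]
% into the groupoid Grpd(x, y) is the same thing as a functor from
% the walking isomorphism I:
\[
  \mathbf{Grpd}(x,y)^{[1]} \;\simeq\; \mathbf{Grpd}(x,y)^{I}
  \qquad\Longrightarrow\qquad
  x \otimes_{\mathbf{Grpd}} [1] \;\simeq\; x \times I .
\]
% In Cat, by contrast, the tensor with [1] is the plain product:
\[
  x \otimes_{\mathbf{Cat}} [1] \;\simeq\; x \times [1] .
\]
% A left biadjoint Grpd -> Cat would have to carry the first tensor
% to the second. Taking x = 1 (the terminal groupoid), this would give
\[
  I \;\simeq\; [1],
\]
% which is false: I is equivalent to the terminal category, while
% [1] is not (its generating arrow 0 -> 1 is not invertible).
```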
# Reidemeister trace and its generalization

Date: 2016/09/15 (Thu) 16:00 - 17:00
Room: Room 604, Building No. 6
Speaker: Mitsunobu Tsutaya (Kyushu University)

Abstract: The Reidemeister trace was originally studied in the fixed point problem and was later generalized to the coincidence problem for maps between manifolds of the same dimension. In this talk, we give a construction of the Reidemeister trace for maps between manifolds of arbitrary dimensions, realized as a homology class of the homotopy equalizer. In the construction, shriek maps appearing in string topology play an important role. We also give a technique for computing the Reidemeister trace using Serre spectral sequences.
Even Those Instructors Who Conduct Classroom Sessions May Want To Augment Essay Questions With Multiple-choice In Order To Take Advantage Of Some Of The Latter’s Efficiencies. For Example, Compared To Essay Questions, Multiple-choice Questions Can Be Graded Faster And More Reliably By People Other Than The Instructor, And By The Computer. Concentrate On Being Calm And Staying Focused On The Material In Front Of You, And Passing Your Real Estate Exam Will Be A Breeze. 4. Read Every Word. Take The Time To Read Every Word Of Every Question On Your Real Estate Exam. This May Seem Obvious Or Trivial. But, In Timed Multiple-choice Exams, Missing Or Misreading One Word Can Make A Big ThatQuiz Public Test Library. Thatquiz.org < > < Browse By: Teacher | Date | Popularity | Language The Number Of The Line Of Ovals On Your Answer Document Is The Same As The Number Of The Question You Are Answering And That You Mark Only One Answer For Each Question. If You Are Taking The ACT Online, Be Sure You Select The Intended Response. 9 Erase Completely. If You Want To Change A Multiple-choice Answer On Paper, Be Unfortunately, Such A Choice Would Have A Number Of Disadvantages: Not Suitable For Mixers And Translators, Due To The Absense Of SSRC. The Total Reduction In Overhead Is Modest: A G.723.1 Packet With An Audio Payload Of 20 Bytes Would Shrink From A Total Of 60 Bytes To 50 Bytes, Or 20%. The CFP® Exam Is A 170-question, Multiple-choice Test That Consists Of Two 3-hour Sections During One Day. Each Section Is Divided Into Two Distinct Subsections. The Exam Includes Stand-alone Questions, As Well As Questions Associated With Case Studies. Teaching With A CRS Types Of Questions. Many Instructors See Multiple-choice Questions As Limited To Testing Students’ Recall Of Facts. However, Multiple-choice Clicker Questions Can Actually Serve Many Other Purposes In The Class, Including Assessing Students’ Higher-order Thinking Skills. Start Studying Ch. 
8 Consideration Of Internal Control In An Information Technology Environment- Multiple Choice Q's. Learn Vocabulary, Terms, And More With Flashcards, Games, And Other Study Tools. The Multistate Bar Examination (MBE) Is A Six-hour, 200-question Multiple-choice Examination Developed By NCBE And Administered By User Jurisdictions As Part Of The Bar Examination On The Last Wednesday In February And The Last Wednesday In July Of Each Year. Order Of Operations (Absolute Value)Worksheet 5 - Here Is A 15 Problem Worksheet Where You Will Asked To Simplify Expressions That Contain Absolute Values While You Execute The Correct Order Of Operations. This Worksheet Can Get A Little Complicated As You Become Familiar With The Negative Root Of An Absolute Value. This Leads To A Real Problem However Since That Means $$v$$ Must Be, $v = \int{{\ln X\,dx}}$ In Other Words, We Would Need To Know The Answer Ahead Of Time In Order To Actually Do The Problem. So, This Choice Simply Won’t Work. Therefore, If The Logarithm Doesn’t Belong In The $$dv$$ It Must Belong Instead In The $$u$$. Multiple Choice (80 Points, 5 Points Each) Identify The Choice That Best Completes The Statement Or Answers The Question. 1. All Real Numbers : D. K: 5> 1 2: 11. Therefore, In Order For The Sum To Be Zero, Half Of The Numbers Must Be Negative. In Particular, For Every Positive Number In The Sequence Is A Corresponding Negative Number. Thus, The Sequence Should Be -4, -2, 0, 2, 4. Below Is An Aptitude Test With 8 Multiple-choice Questions About Easy Number Sequences. What Are The Missing Numbers In The Number Sequences Shown Below? Enjoy Practising! Practice The Numerical Reasoning Tests Used By Employers At JobTestPrep. Free Math Worksheets With Multiple Choice Answers. Type Keywords And Hit Enter. Free Math Worksheets With Multiple Choice Answers Collection. This Is An Interactive Version Of The Multiple Choice Rorschach (Harrower-Erickson, 1945). 
Background The Rorschach Test Is A Projective Psychological Test Developed In 1921 By Hermann Rorschach To Measure Thought Disorder For The Purpose Of Identifying Mental Illness. Multiple Choice Exams Have More To Do With Eliminating The Wrong Answers Than Finding The Right Answers. This May Not Instantly Make Sense But Consider This: There Are 3 Wrong Answers And 1 Right Answer. It’s Often Easier To Recognize 3 Wrong Answers Than Recognizing The Single Right Answer. There Are Unlimited Ways An Answer Can Be Wrong. Find The Numerical Answer To Equation - Powered By WebMath. This Page Will Try To Find A Numerical (number Only) Answer To An Equation. Get Homework Help Fast! Search Through Millions Of Guided Step-by-step Solutions Or Ask For Help From Our Community Of Subject Experts 24/7. Try Chegg Study Today! This Article Will Help You Answer IELTS Reading Multiple Choice Questions More Effectively. On Both The Academic And General IELTS Reading Papers You Are Likely To Be Asked Multiple Choice Questions (MCQs). Your Job Is To Simply Choose The Correct Answer From A List Of Possible Choices. This Post Will: Look At Example Questions Online Polls Let You Check In With Your Audience Or Customers At Any Time. Get A Sense Of What People Are Thinking Or Feeling. There Are Lots Of Polling Websites To Choose From. If You Don’t Already Have A SurveyMonkey Account, Sign Up For Free And You Can Create And Launch Your Online Poll In Minutes. Get Started Now. Using The Right Type Of Number. When Choosing The Correct Type Of Number Column To Use, The Choice To Use A Whole Number Or Currency Type Should Be Straightforward. The Choice Between Using Floating Point Or Decimal Numbers Requires More Thought. Decimal Numbers Are Stored In The Database Exactly As Specified. Book Adventure Is A AR Alternative For Homeschoolers And Parents. 
Our Independent Reading Program Offers Online Book Tests, Reading Comprehension Quizzes, Book Word Lists, Spelling And Vocabulary Lessons, Reading Time Logs And Literacy Interactive Learning Activities Including Graphic Organizers For Student's Children's Books. I Am A Professor At The Department Of Mathematics, UCLA.I Work In A Number Of Mathematical Areas, But Primarily In Harmonic Analysis, PDE, Geometric Combinatorics, Arithmetic Combinatorics, Analytic Number Theory, Compressed Sensing, And Algebraic Combinatorics. Whole Numbers Test. You Can Print The Whole Numbers Test Before You Start Taking The Test. Then Try To Answer All The Questions. The Last 4 Questions On The Test Are Word Problems. 28. Two Thousand Numbers Are Selected Randomly; 960 Were Even Numbers. A. State The Hypotheses To Determine Whether The Proportion Of Odd Numbers Is Significantly Different From 50%. B. Compute The Test Statistic. C. At 90% Confidence Using The P-value Approach, Test The Hypotheses. ANS: A. H0: P = 0.5 Ha: P 0.5 B. Ndenotes The Number Of Data Points And Ddenotes The Dimensionality. 1 Multiple-Choice/Numerical Questions 1. Choose The Options That Are Correct Regarding Machine Learning (ML) And Arti Cial Intelligence (AI), (A) ML Is An Alternate Way Of Programming Intelligent Machines. (B) ML And AI Have Very Di Erent Goals. 0115 - Insects And Spiders - Multiple Choice Colors Introduction 0140 - Multiple Choice 0141 - Spelling Game - Simon Game - Flood Colors More Practice Months Of The Year Introduction 0125 - Spelling Days Of The Week Introduction 0128 - Spelling Numbers Game Telling Time Listening Bathroom Introduction - 10 Words Introduction - 20 Words 0320 A. In A Speech About Skin Tone, Ask Audience Members To Pinch Their Elbow Skin, And Explain How To Judge Skin Tone From The Number Of Seconds It Takes For The Skin To "pop" Back. B. In A Speech About Blindness, Ask Audience Members To Close Their Eyes For Twenty Seconds. C. 
Oxford University Press USA Publishes Scholarly Works In All Academic Disciplines, Bibles, Music, Children's Books, Business Books, Dictionaries, Reference Books, Journals, Text Books And More. Computer Network MCQ (Multiple Choice Questions) With Tutorial, Features, Types Of Computer Network, Components, Cables And Connectors, Intranet, Uses Of Computer Network, Hub, Software And Hardware, Etc. A Real Estate Broker Had A Listing Agreement With A Seller That Specified A 6% Commission. The Broker Showed The Home To A Prospective Buyer. The Next Day, The Buyer Called The Seller Directly And Offered To Buy The House For 5% Less Than The Asking Price. Some Of The Worksheets Below Are Factors And Multiples Worksheet With Answers, Understand The Difference Between Multiples, Factors And Primes, Find All The Factor Pairs Of Any Whole Number, Common Factors, Greatest Common Factor, Finding The GCF, Several Problems For Practicing, … (E) The Number Of Nearest Neighbors Increases. AP Chemistry: Atomic Structure Multiple Choice 22. 61s 2 22s 2p 3s 3p3 Atoms Of An Element, X, Have The Electronic Configuration Shown Above. The Compound Most Likely Formed With Magnesium, Mg, Is… (A) MgX (B) Mg 2 X (C) MgX 2 (D) MgX 3 (E) Mg 3 X 2 43. EOQ Is The Order Quantity That Over Our Planning Horizon. Minimizes Total Ordering Costs Minimizes Total Carrying Costs Minimizes Total Inventory Costs The Required Safety Stock 11. A B2B Exchange Is A Internet Marketplace That Matches Supply And Demand By Real-time Auction Bidding. Buyer-to-business Business-to-business Multiple Multiple Choice Questions, Jia Tolentino Personal Essay Is Dead, Homework Helpers English Language Composition, Mla Essay Sample Owl Purdue Note That The First Generation May Take Longer, But Subsequent Generation On Same Topic Multiple Multiple Choice Questions Will Be Almost Instant. Order - Controls The Pane Icon Order On The Bottom Of The Related Items Pane. 
Use The Following Guidelines: Icons With Low Order Numbers Appear To The Left. Icons With The Same Order Number Are Sorted Alphanumerically. The Order Number Can Be Any Positive Or Negative Integer. The Ordinal Numbers And Values Indicate A Direction, In Addition To Providing Nominal Information. We Can Also Assign Numbers To Ordinal Data To Show Their Relative Position. But We Can Not Do Math With Those Numbers. For Example: “first, Second, Third…etc.” With This In Mind, We Cannot Treat Ordinal Variables Like Quantitative Variables. Here Are Instruction For Establishing Sign Charts (number Line) For The First And Second Derivatives. To Establish A Sign Chart (number Lines) For F' , First Set F' Equal To Zero And Then Solve For X. Mark These X-values Underneath The Sign Chart, And Write A Zero Above Each Of These X-values On The Sign Chart. Chapter 4 : Multiple Integrals. In Calculus I We Moved On To The Subject Of Integrals Once We Had Finished The Discussion Of Derivatives. The Same Is True In This Course. Now That We Have Finished Our Discussion Of Derivatives Of Functions Of More Than One Variable We Need To Move On To Integrals Of Functions Of Two Or Three Variables. The CAPM Practice Test Contains 150 Multiple Choice Questions That Must Be Answered In A Duration Of 180 Minutes. This CAPM Test Helps Candidates Identify The Areas Of Project Management They Are Weak In And Need To Work. NUMERICAL METHODS MULTIPLE CHOICE QUESTIONS The Order Of Convergence Of Regular-falsi Method Is A) 1.235 B) 3.141 C) 1.618 D) 2.792 3. Large Number Of Sub Ranking, Matrix/Rating Scale, Multiple Choice, Multiple Textboxes, And Slider Questions Calculate An Average Or Weighted Average. See Each Question Type Article For Details On How The Results For Each Question Type Are Calculated In The Analyze Results Section. Combining Or Hiding Answer Choices Multiple Choice Questions On Operating System Topic CPU Scheduling. 
Practice These MCQ Questions And Answers For Preparation Of Various Competitive And Entrance Exams. A Directory Of Objective Type Questions Covering All The Computer Science Subjects. While Multiple-choice Test Items Typically Only Carry 1 Point Per Item, Constructed-response Items Can Account For As Few As 2 Points Or As Many As 10 Points Of The Total Raw Score For Each Question. Depending On The State, Constructed-response Items May Account For As Much As 25 To 50 Percent Of The Composition Of The Total Test That Students Create Free Online Surveys, Quizzes And Forms With Our Easy To Use Drag And Drop Builder. Then Collect And Analyze Your Data With Advanced Reporting Tools. CCSS.Math.Content.6.NS.B.4 Find The Greatest Common Factor Of Two Whole Numbers Less Than Or Equal To 100 And The Least Common Multiple Of Two Whole Numbers Less Than Or Equal To 12. Use The Distributive Property To Express A Sum Of Two Whole Numbers 1-100 With A Common Factor As A Multiple Of A Sum Of Two Whole Numbers With No Common Factor. GeorgiaStandards.Org (GSO) Is A Free, Public Website Providing Information And Resources Necessary To Help Meet The Educational Needs Of Students. This Product Order Form Template Is A Fast Way To Get Started Selling Online. The Template Is Fully Customizable, Enabling You To Add New Fields, Design It To Match Your Brand, And Add New Products To Sell. Usually, We Should Focus On The Re-Order Paragraphs And Fill In The Blanks Because Those Contain The Bulk Of The Marks And That’s Why We Tell Students To Focus On Those Tasks. – Multiple Choice (Choose Single Answer) – Multiple Choice (Choose Multiple Answer) – Re-order Paragraph – Reading: Fill In The Blanks Testlet Number 1 In The Auditing And Attestation Section Has 36 Multiple-choice Questions, And Testlet Number 2 In The Same Section Has Another 36 Multiple-choice Questions. Similarly, The Other Three Sections Have Two Testlets Of Multiple-choice Questions Each. 
Testlet Number 1 In Each Of The Sections Is Always A Medium Testlet. To Form A Useful Bill Of Material Matrix It Is Convenient To Order The Items By Levels. The Level Of An Item Is The Maximum Number Of Stages Of Assembly Required To Get The Item Into An End Product. Example 2 Consider A System With Two End Items, Item 1 And Item 2. Item 1 Requires Two Units Of Item A And One Unit Of Item C. 4Tests.com - Your Free, Practice Test Site For High School, College, Professional, And Standardized Exams And Tests - Your Free Online Practice Exam Site! 7. Ordinary Differential Equations – First Order & First Degree. The Section Contains Multiple Choice Questions And Answers On First Order First Degree Differential Equations, Homogeneous Form, Seperable And Homogeneous Equations, Bernoulli Equations, Clairauts And Lagrange Equations, Orthogonal Trajectories, Natural Growth And Decay Laws, Newtons Law Of Cooling And Escape Velocity, Simple US Government Completes Its First 'real World' Look At COVID-19 Vaccines. Here's What They Found By Beau Bowman And Chris Gothner. The Independent Repair Provider Program Provides Genuine Parts For Out-of-warranty Repairs. Order Of Operations Practice Problems With Answers There Are Nine (9) Problems Below That Can Help You Practice Your Skills In Applying The Order Of Operations To Simplify Numerical Expressions. The Exercises Have Varying Levels Of Difficulty Which Are Designed To Challenge You To Be More Extra Careful In Every Step While You Apply The … Order Of Operations Practice Problems Read More » Multiple Choice Questions ForReview In Each Case There Is One Correct Answer (given At The End Of The Problem Set). (Real Numbers) Lo-13 Sets Of Numbers N(Natural Preface These Are Answers To The Exercises In Linear Algebra By J Hefferon. 
An Answer LabeledhereasOne.II.3.4isforthequestionnumbered4fromthefirstchapter,second Order Of Operations Related Teacher Resources Here Is A Wide Range Of Resources For A Deeper Understanding Of This Topic. Basic Math Operations Lesson Plans To Quote Poundstone, “This Is In Line With Experimental Findings That The Quality Of Randomizing Decreases As The Number Of Options Increases.” So, B And E Are Better Guesses In A 4-option And 5-option Multiple-choice Tests, Respectively, Than Picking The Middle Answer, A Common Guess Hack, Or A Random Guess. MULTIPLE CHOICE QUESTIONS 1. Good Marketing Is No Accident, But A Result Of Careful Planning And _____. Execution Selling Strategies Research 2. Marketing Management Is _____. Managing The Marketing Process Monitoring The Profitability Of The Company’s Products And Services A Mobile Device App That Turns Your IPhone, IPad, Or Android Device Into An Optical Scanner For Grading Paper Multiple-choice Assessments. Great For Quizzes, Exit Tickets, And Larger Exams Of Up To 100 Questions. Take The Quiz To Test Your Understanding Of The Key Concepts Covered In The Chapter. Try Testing Yourself Before You Read The Chapter To See Where Your Strengths And Weaknesses Are, Then Test Yourself Again Once You’ve Read The Chapter To See How Well You’ve Understood.Tip: Click On Each Link To Expand And View The Content. Do Multiple-choice Or Short-answer Tests Measure Important Student Achievement? These Kinds Of Tests Are Very Poor Yardsticks Of Student Learning. They Are Weak Measures Of The Ability To Comprehend Complex Material, Write, Apply Math, Understand Scientific Methods Or Reasoning, Or Grasp Social Science Concepts. Sign In With Google. Socrative We Can Calculate The Order Quantity As Follows: Multiply Total Units By The Fixed Ordering Costs (3,500 × $15) And Get 52,500; Multiply That Number By 2 And Get 105,000. Divide That Number By The 5. If Ais A 10 8 Real Matrix With Rank 8, Then 1. 
There Exists At Least One B2R10 For Which The System Ax= Bhas In Nite Number Of Least Square Solutions. 2. For Every B2R10, The System Ax= Bhas In Nite Number Of Solutions. 3. There Exists At Least One B2R10 Such That The System Ax= Bhas A Unique Least Square Solution. Count Using Multiple Choice Objects Up To 100. K.4 / Count By Typing I. Put Numbers In Order Up To 30. K.64 / How To Make A Number With Sums Up To 10. Positions A Mentimeter Quiz Is The Perfect Way To Test, Engage And Entertain Your Audience In Any Number Of Different Situations. This Article Provides Tips To Help Perfect Your Quiz-hosting! 55 Free Trivia And Fun Quiz Question Templates The Commutative Property, Therefore, Concerns Itself With The Ordering Of Operations, Including The Addition And Multiplication Of Real Numbers, Integers, And Rational Numbers. For Example, The Numbers 2, 3, And 5 Can Be Added Together In Any Order Without Affecting The Final Result: Home » Financial Accounting Basics » Financial Accounting Basics Multiple Choice Questions Correct! The Income Statement Displays All Revenues And Expenses Recorded In A Period In A Single Report. MULTIPLE CHOICE TEST Equal Number Of Dependent And Independent Variables. 1 St Order. Complete Solution . Multiple Choice Questions On Other Topics. Pseudocode Examples. An Algorithm Is A Procedure For Solving A Problem In Terms Of The Actions To Be Executed And The Order In Which Those Actions Are To Be Executed. An Algorithm Is Merely The Sequence Of Steps Taken To Solve A Problem. If You Are A Guest User Or Are Not Logged Into Your Account, Your Opt-out Choice Will Only Be Effective For This Browser Or Application. If You Remove Or Clear All Your Cookies, Your Selections Will Not Be Saved And You Will Need To Opt Out Again When You Return To The Site. Real Analysis And Multivariable Calculus Igor Yanovsky, 2005 5 1 Countability The Number Of Elements In S Is The Cardinality Of S. 
S And T Have The Same Cardinality (S ’ T) If There Exists A Bijection F: S ! T. Card S • Card T If 9 Injective1 F: S ! T. Card S ‚ Card T If 9 Surjective2 F: S ! T. S Is Countable If S Is flnite, Or S ’ N The Georgia Real Estate Salesperson Examination Is A 4-hour, 152-question Multiple-choice Exam. The Test Is Divided Into Two Parts, Each Of Which Is Pass Or Fail. The First Part Is The Real Estate Salesperson National Exam. You Must Correctly Answer At Least 75 Out Of The 100 Questions Presented To Pass. Access Quality Crowd-sourced Study Materials Tagged To Courses At Universities All Over The World And Get Homework Help From Our Tutors When You Need It. Welcome To The Virginia State Standards Of Learning Practice Tests! All Of The Questions On This Site Come From Test Materials Released By The Virginia Department Of Education And Are Used Here With Permission. Social Studies (History And Government – Canadian Or US) 50 Multiple Choice Questions — 2 Hours (75 Min. For Multiple Choice, 45 Min. For The Essay) Language Arts Writing And Reading: Reading 40 Multiple Choice Questions — 1 Hour 20 Minutes And Writing 50 Multiple Choice Questions, One Essay — 1 Hour, 5 Minutes. Summer 2010 15-110 (Reid-Miller) Two-Dimensional Arrays • Two-dimensional (2D) Arrays Are Indexed By Two Subscripts, One For The Row And One For The Column. 12 Very Useful Multiple Choice, Topic-based Functional Maths Questions. Covers Estimating Proportion / Fractions, Equivalents, And Fractions Of Amounts. Ideal For Assessment Or Revision. Editor’s Note No Answer Sheet. This Is Just One Of A Set Of 10 Worksheets. Get Your Tax Transcript Online Or By Mail. Find Line By Line Tax Information, Including Prior-year Adjusted Gross Income (AGI) And IRA Contributions, Tax Account Transactions Or Get A Non-filing Letter. RGB And COLOR Search Engine Match Color Data To Commercial Colors. All You Need To Match Your RGB And Color Data With Paint, Ink, Color Standards And Commercial Color Collections. 
A Short Primer On Core Ideas From Behavioral Economics. By Alain Samson, PhD, Editor Of The BE Guide And Founder Of The BE Group. Berkshire’s 2005 Annual Report Explains The Company’s Position: “If A Management Makes Bad Decisions In Order To Hit Short-term Earnings Targets, And Consequently Gets Behind The Eight-ball This Is The Traditional, Most Frequently Used Multiple-choice Question Format On The Examination. Example Single-Best-Answer Question A 22-year-old Woman Is Brought To The Emergency Department By Ambulance 30 Minutes After She Was Struck By An Oncoming Motor Vehicle While Bicycling. Equivalent And Simplifying Fractions Is A Complete Lesson Including Worksheets, Multiple Choice, Bingo And Blooms Questioning. Fractions Of Amounts Is Another Differentiated Complete Lesson, Including Bingo And Questions. Adding And Subtracting Fractions Has A Detailed Tutorial Focusing On Finding Common Denominators. Internal Rate Of Return IRR Is A Financial Metric For Cash Flow Analysis, Used Often For Evaluating Investments, Capital Acquisitions, Project Proposals, And Business Case Scenarios. By Definition, IRR Compares Returns To Costs By Finding An Interest Rate That Yields Zero NPV For The Investment Cash Flow Stream. However, Finding Practical Guidance For Investors And Decision Makers In IRR Students Can Solve NCERT Class 10 Science Light Reflection And Refraction Multiple Choice Questions With Answers To Know Their Preparation Level. Class 10 Science MCQs Chapter 10 Light Reflection And Refraction. 1. When Light Falls On A Smooth Polished Surface, Most Of It (a) Is Reflected In The Same Direction (b) Is Reflected In Different A Pdf File Contains 30 Multiple Choice Questions On Real Analysis, Commutative Algebra And Linear Algebra. If G Is A Group Of Order 2, Then The Number Of Subgroups Of G C. The Multiple Optimal Solution Exist D. A & B But Not C 54. An Assignment Problem Is Considered As A Particular Case Of A Transportation Problem Because A. 
The Number Of Rows Equals Columns B. All X Ij = 0 Or 1 C. All Rim Conditions Are 1 D. All Of The Above 55. An Optimal Assignment Requires That The Maximum Number Of Lines That Can Be Drawn If Using A Tablet, Touch The Sum Input Area To Activate Keypad. Type An Answer For Each Negative Number Addition Or Subtraction Problem. Use The Next, TAB And SHIFT+TAB Keys, Or The Mouse (our Touch Screen), To Move Between Problems. After Adding And Subtracting All 10 Negative Number Problems, Check Your Answers. NCI Supports A Number Of Projects, Including Clinical Trials, In The Area Of Symptom Management And Palliative Care. Call NCI's Cancer Information Service At 1-800-4-CANCER (1-800-422-6237) For Information About Clinical Trials Of Supportive And Palliative Care. The Written Exam Is Multiple Choice And Based On The 75-hour Pre-licensing Curriculum. Applicants Will Be Allowed 1 1/2 Hours To Complete The Test. The Allotted Time Begins At The Conclusion Of The Instructions. This Exam Is Offered In The Following Languages: Spanish, Korean, Russian And Chinese. Choose From A Variety Of Activity Types That Let You Visualize Responses In Real Time, Like Open-ended Q&As, Multiple Choice, And Word Clouds. Each Activity Type Encourages Audience Participation And Helps You Collect A Different Kind Of Feedback. A Multiple-choice Question (MCQ) Is Composed Of Two Parts: A Stem That Identifies The Question Or Problem, And A Set Of Alternatives Or Possible Answers That Contain A Key That Is The Best Answer To The Question, And A Number Of Distractors That Are Plausible But Incorrect Answers To The Question. Multiple-choice Polls: The Audience Chooses From The Response Options You Provide (for Example A Choice Of Either True Or False.) You Can Also Upload Images To Serve As Response Options. Multiple Choice Polls Accept Both Text Message Responses And Web Responses. Open-ended Questions: The Audience Responds Freely, With Anything They Wish. 
# Entanglement of heavy quark impurities and generalized gravitational entropy

S. Prem Kumar and Dorian Silvani

Department of Physics, Swansea University, Singleton Park, Swansea, SA2 8PP, U.K.

###### Abstract

We calculate the contribution from non-conformal heavy quark sources to the entanglement entropy (EE) of a spherical region in N = 4 SUSY Yang-Mills theory. We apply the generalized gravitational entropy method to non-conformal probe D-brane embeddings in AdS₅ × S⁵, dual to pointlike impurities exhibiting flows between quarks in large-rank tensor representations and the fundamental representation. For the D5-brane embedding, which describes the screening of fundamental quarks in the UV to the antisymmetric tensor representation in the IR, the EE excess decreases non-monotonically towards its IR asymptotic value, tracking the qualitative behaviour of the one-point function of static fields sourced by the impurity. We also examine two classes of D3-brane embeddings: one which connects a symmetric representation source in the UV to fundamental quarks in the IR, and a second which yields the symmetric representation source on the Coulomb branch. The EE excess for the former increases from the UV to the IR, whilst decreasing and becoming negative for the latter. In all cases, the probe free energy on hyperbolic space with β = 2π increases monotonically towards the IR, supporting its interpretation as a relative entropy. We identify universal corrections, depending logarithmically on the VEV, for the symmetric representation on the Coulomb branch.

## 1 Introduction

The holographic correspondence [maldacena, witten, gkp] between gauge theories and gravity has revealed an intriguing link between quantum entanglement and geometry [Ryu:2006bv, Ryu:2006ef, Casini:2011kv, Maldacena:2013xja].
The prescription of Ryu:2006bv (); Ryu:2006ef (); Casini:2011kv (), relating the entanglement entropy of some subsystem within a quantum system to the area of an extremal surface in a classical dual gravity framework, was put on a firm footing in Lewkowycz:2013nqa (), where the replica trick was implemented in the gravity setting dual to the subsystem of interest, by using the method of Callan:1994py (). This involves identifying a circle in the asymptotic geometry, which could be a compact Euclidean time direction, varying its periodicity in a well-defined manner and calculating the resulting variation in the action so as to obtain a gravitational or geometric entropy. A natural extension of these ideas is to study the effect of excitations above the vacuum state, or the inclusion of new degrees of freedom in the form of flavours or defects. Here it was understood that, even for flavours or defects in the quenched approximation, the application of the Ryu-Takayanagi prescription Ryu:2006bv (); Ryu:2006ef () appears to require knowledge of the backreaction from the corresponding probe degrees of freedom in the dual gravitational description Chang:2013mca (); Jensen:2013lxa (); Kontoudi:2013rla (); Jones:2015twa (); Erdmenger:2015spo (). It was subsequently pointed out in karchuhl () that this procedure can be circumvented by applying the gravitational entropy method of Lewkowycz:2013nqa () to the quenched degrees of freedom propagating in the un-backreacted gravitational backgrounds. In this paper, we will study pointlike defects or “impurities” that have a simple interpretation, namely they are test charges or heavy quarks introduced into the vacuum state of a large-$N$ QFT. The coupling of the heavy quark to the quantum fields affects the entanglement of any region that contains the impurity with the rest of the system.
Specifically, we are interested in the change in entanglement entropy (EE) of a spherical region of some radius $R$ upon the introduction of a test quark in the $\mathcal{N}=4$ supersymmetric gauge theory in 3+1 dimensions, with gauge group $SU(N)$. This question becomes particularly interesting if one can deform the quantum mechanics of the pointlike impurity so that the system is not conformally invariant and the degree of entanglement is a nontrivial function of the deformation strength. Our goal will be to examine and identify general scale dependent properties of EE across different tractable examples of such impurities at strong ’t Hooft coupling in the large-$N$ theory. In Lewkowycz:2013laa () the excess EE due to such heavy quarks in large-rank symmetric and antisymmetric tensor representations was computed (both at weak and strong coupling) by exploiting conformal invariance and relating it to known results drukkerfiol (); paper1 (); yamaguchi (); passerini (); paper2 () for supersymmetric Wilson/Polyakov loops in the $\mathcal{N}=4$ theory. In this paper we will apply the method of karchuhl (), based on gravitational entropy contributions, to obtain the EE excess due to the corresponding probes (D-branes) in the gravity dual, including the effect of deformations that trigger flows on the impurity. The main results of this paper are summarized below: • We focus attention on heavy quark probes in the symmetric and antisymmetric tensor representations of rank $k$, with $k\sim O(N)$ (within the $\mathcal{N}=4$ theory at large $N$), which are dual to D3- and D5-brane probes in AdS$_5\times S^5$. In the conformal case, the worldvolume of the probe contains an AdS$_2$ factor, reflecting the conformal nature of the quantum mechanics on the impurity. We calculate the contribution to the generalized gravitational entropy from these probe branes using the proposal of karchuhl () and find a match with the results of Lewkowycz:2013laa () deduced via independent arguments.
A nontrivial aspect of the calculation and observed agreement is the role played by the background Ramond-Ramond (RR) flux and its associated four-form potential, specifically in the case of the D3-brane probe dual to the symmetric representation source. The generalised gravitational entropy receives a contribution from the coupling of this potential to the D3-brane probe, and matching with the CFT arguments of Lewkowycz:2013laa () picks out a special choice of gauge for the four-form potential. • We then study certain deformations on the probes which appear as simple one-parameter BPS solutions for the D-brane embeddings. The D5-brane solution, first found in Callan:1998iq (), interpolates between sources in the fundamental representation at short distances, and an impurity transforming in the antisymmetric representation at long distances. The deformation appears as a dimensionful parameter in the UV111This is a puzzling aspect of both the D3- and D5-brane non-conformal solutions we study, as both appear to be triggered by the VEV of a dimension one operator in the UV picture Kumar:2016jxy (), and implies spontaneous breaking of conformal invariance, which should not be possible in quantum mechanics (on the impurity)., and has the effect of screening the fundamental sources into the representation . This is most directly seen by examining the profiles of the gauge theory operators (e.g. ) sourced by the impurity where the strength of the source first increases on short scales, subsequently turns around and decreases monotonically (figure 4) at large distances to an asymptotic value determined by the representation . We calculate the EE excess due to this impurity within a spherical region of radius surrounding the source, by mapping the causal development of the spherical region to the Rindler wedge which is conformal to the hyperbolic space with temperature . 
The contribution of the probe to the gravitational entropy is obtained by varying the temperature of the dual hyperbolic AdS black hole. As a function of the dimensionless radius $RA$, we find that the EE excess displays the same qualitative behaviour (figure 3) as the profiles of gauge theory fields, namely an increase on short scales accompanied by eventual decrease at large radii towards the asymptotic value governed by the representation $\mathcal{A}_k$. We also find that although the EE is a non-monotonic function of the radius, the impurity free energy on $\mathbb{H}^3$, which can be interpreted as a relative entropy, increases monotonically from the UV to the IR. • For the D3-brane probes, a simple BPS deformation exists which was discussed relatively recently in Schwarz:2014rxa () and Kumar:2016jxy (). There are two categories of these solutions (figure 1): one yields a symmetric representation source in the UV “dissociating” into $k$ coincident quarks in the IR, while the second category describes a heavy quark in the representation $\mathcal{S}_k$ on the Coulomb branch of the theory with $SU(N)$ broken to $SU(N-1)\times U(1)$. We apply the gravitational entropy method to these sources, taking care to employ the correct gauge for the RR potential which yields the expected result for the undeformed conformal probe. In both cases the EE excess displays non-monotonic behaviour over short scales, first increasing as a function of the radius and reaching a maximum. At large distances, however, the two categories display qualitatively distinct features. The EE excess for the first class of solutions saturates in the IR (figure 8) at a higher value (that of $k$ fundamental sources) than in the UV (corresponding to the representation $\mathcal{S}_k$). For the Coulomb branch solution, we find that the EE excess decreases monotonically in the IR without bound, with some universal features (figure 9). In all cases, however, the free energy on $\mathbb{H}^3$ for each of the probes increases monotonically from the UV to the IR, consistent with the interpretation as a relative entropy.
The IR asymptotics of this free energy for the Coulomb branch solution exhibits certain universal features, namely, quadratic and logarithmic dependence on the Coulomb branch VEV, with the coefficient of the logarithmic term being universal. We further confirm that D3-brane impurities with the deformations turned on display a screening of the source in the representation $\mathcal{S}_k$. We see this for both categories of solutions by calculating the spatial dependence of gauge theory condensates sourced by the heavy quark impurities. The paper is organized as follows: In section 2 we review the argument of karchuhl () for calculating the EE of probes without backreaction. We also review known results for the EE of conformal probes, and for completeness, we explicitly write out the transformations from AdS to AdS-Rindler and hyperbolic-AdS spacetimes. Section 3 is devoted to the analysis of the D5-brane probe embeddings and their entanglement entropies. In Section 4 we review the D3-brane BPS solutions. All details of the EE calculation for the D3-brane impurities are presented in Section 5. We summarize our results and further questions in Section 6. Certain technical aspects of the calculations, including transformations of D3-brane worldvolume integrals from one coordinate system to another and evaluation of certain integrals, are relegated to the Appendix. ## 2 Generalized gravitational entropy for probe branes It was argued in karchuhl () that the entanglement entropy contribution from a finite number of flavour degrees of freedom, introduced into a large-$N$ CFT (with a holographic gravity dual), can be computed without having to consider explicit backreaction of flavour fields. A key element in this approach is the method of Lewkowycz:2013nqa () which can, in principle, be adapted to include the backreaction from flavour fields.
However, this turns out to be unnecessary as the leading contribution at order is determined completely by an integral over the flavour branes in a geometry without backreaction. The entanglement entropy of a spatial region in a CFT in spacetime dimensions can be calculated by a holographic version of the replica trick in Euclidean signature. This is performed by considering smooth, asymptotically AdS geometries with a finite size Euclidean circle at the conformal boundary of period (and ) going around the boundary of the spatial region of interest. The classical action for these geometries then yields the holographic entanglement entropy via, (1) This quantity only receives non-zero contribution from a boundary term within the bulk, arising from the locus of points where the circle shrinks. This corresponds to the Ryu-Takayanagi minimal surface Ryu:2006bv (). Upon introducing probe branes (defects or flavours) the complete action for the gravitational system can be separated into ‘bulk’ and ‘brane’ components: Sg=Sbulk+ϵ0Sbrane, (2) where the brane contribution is parametrically smaller by a factor of . To paraphrase the argument of karchuhl (), if one views the backreacted metric as a small perturbation about (the -fold cover of) AdS, the deviation of the bulk action from AdS only appears at order . Then the probe contribution to the gravitational entropy at order is completely determined by an integral over the brane worldvolume alone. Furthermore, the brane embedding need only be known in ordinary AdS spacetime (with ), since the inclusion of backreaction will only affect the probe action at order and deviations of the embedding functions at order will also contribute to the action at order , since the embedding solves the equations of motion. To compute the entanglement entropy of the region one applies the well known method of Casini:2011kv () for the specific case when is a sphere . 
This maps the causal development of the region within the sphere to a Rindler wedge. The spherical boundary of the entangling region is mapped to the origin of the Rindler wedge. In this process the reduced density matrix for the degrees of freedom inside the sphere then corresponds to the Rindler thermal state with inverse temperature . The latter is also conformal to a spacetime with hyperbolic spatial slices , so that Casini:2011kv (). The entanglement entropy of the region is then given by the thermal entropy of the CFT on : For theories possessing a holographic dual, the computation of requires a bulk (AdS) extension of the boundary Rindler wedge away from the Rindler temperature . This becomes possible for the case of a CFT where we may transform the bulk extension of the wedge to hyperbolically sliced geometry. The thermal partition function on is computed holographically by the classical action of the bulk Euclidean geometry with hyperbolic slices, the replica trick is implemented by allowing the inverse temperature of the hyperbolic black hole to deviate from the value : The unique extension of the bulk hyperbolically sliced geometry, away from , is related to the replica method via the observation of Lewkowycz:2013nqa (). In particular, the value of the replicated partition can be replaced by times the replicated partition function with the time interval restricted to the domain , and eq.(3) reduces to, The method reproduces the vacuum EE area formula of Ryu:2006bv () and will allow us to extract the EE excess due to the insertion of defects without the need for backreaction on either the background or the defect itself. ### 2.1 Conformal defects from D3/D5-branes and EE A point-like impurity in gauge theory arises most naturally upon the introduction of a Wilson line or heavy quark transforming in some representation of the gauge group. 
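The thermal identity invoked repeatedly in this section (and used again in eq.(22) below) can be recalled in two lines; this is a standard textbook derivation, not specific to this paper, with the replica-symmetry step being the assumption of Lewkowycz:2013nqa (). For a thermal state $\rho=e^{-\beta H}/Z(\beta)$,

```latex
S=-\operatorname{tr}\rho\ln\rho=\beta\langle H\rangle+\ln Z(\beta)
 =\left(1-\beta\,\partial_{\beta}\right)\ln Z(\beta).
% If the replicated bulk geometry is time-translation invariant at leading order,
% the full Euclidean action is beta times a fixed integrand,
%   ln Z(beta) = -(beta/2pi) I_{2pi}(beta),
% with I_{2pi} the action with the time integration restricted to a fixed window. Then
S=-\left(1-\beta\,\partial_{\beta}\right)\frac{\beta}{2\pi}\,I_{2\pi}(\beta)
 =\frac{\beta^{2}}{2\pi}\,\partial_{\beta}I_{2\pi}(\beta)
 \;\longrightarrow\;\beta\,\partial_{\beta}I_{2\pi}(\beta)\Big|_{\beta=2\pi}.
```

The last expression is precisely the form in which the entropy is computed for each probe below.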
Wilson lines in the fundamental, rank-$k$ antisymmetric ($\mathcal{A}_k$) and symmetric ($\mathcal{S}_k$) representations of $SU(N)$ are particularly nice from the perspective of gauge/gravity duality, as they have simple realisations in terms of probe string and brane sources malpol (); yamaguchi (); paper1 (); paper2 (); passerini (). Such sources compute BPS Wilson lines in different representations in the $\mathcal{N}=4$ supersymmetric gauge theory at strong coupling and large $N$, and are introduced as probes in the dual background. In the absence of any probe deformations, the worldvolume metric on such probes includes an AdS$_2$ factor, so that the dual impurity theory is a (super)conformal quantum mechanics. The excess contribution from such an impurity to the EE of a spherical region in $\mathcal{N}=4$ SYM was calculated in Lewkowycz:2013laa () using the method described above, leading to eq.(3) but with the partition function replaced by the impurity partition function in hyperbolic space, computed by a Polyakov loop or circular Wilson loop $\langle W_\circ\rangle$. One way to understand the appearance of the circular Wilson loop is to note that upon mapping the causal development of a spherical region to the Rindler wedge, the worldline of the heavy quark maps to the hyperbolic trajectory of a uniformly accelerated particle. Upon Euclidean continuation, the hyperbolic trajectory turns into a circle. Therefore, $$S_{\rm imp}=\Big(1-\beta\frac{\partial}{\partial\beta}\Big)\ln\langle W_\circ\rangle\Big|_{\beta=2\pi}=\ln\langle W_\circ\rangle\Big|_{\beta=2\pi}+\int_{S^1_\beta\times \mathbb{H}^3}\sqrt{g}\,\big\langle T^\tau_{\ \tau}\big\rangle_{W_\circ},\qquad(6)$$ where in the final expression we are required to compute the expectation value of the field theory stress tensor on $S^1_\beta\times \mathbb{H}^3$, in the presence of the Wilson/Polyakov loop insertion. As argued in Lewkowycz:2013laa (), conformal invariance fixes the form of the stress tensor, and the expectation value of the energy density integrated over $S^1_\beta\times\mathbb{H}^3$ depends on a single normalisation constant $h_w$: $$\int_{S^1_\beta\times \mathbb{H}^3}\sqrt{g}\,\big\langle T^\tau_{\ \tau}\big\rangle_{W_\circ}=-8\pi^2 h_w.\qquad(7)$$ The normalisation constant for $\mathcal{N}=4$ SYM was calculated in Gomis:2008qa () by relating it to the expectation value of a dimension two chiral primary field, with net result, $$S_{\rm imp}=\Big(1-\tfrac{4}{3}\lambda\,\partial_\lambda\Big)\ln\langle W_\circ\rangle.$$
(8) While localization results can, in principle, be used to determine the circular Wilson loop in various representations for any $N$ and gauge coupling, we will focus attention on the strict large-$N$ limit at strong ’t Hooft coupling paper1 (); drukkerfiol (). In this limit, the following results can be deduced for the EE contributions from the conformal impurities in the three different representations described above (the results quoted here differ from those of Lewkowycz:2013laa () by an overall factor of two; we clarify the reason for this normalization below eqs.(14) and (25)): $$S_\square=\frac{\sqrt\lambda}{6},\qquad(9)$$ $$S_{\mathcal{A}_k}=\frac{N}{9\pi}\sqrt\lambda\,\sin^3\theta_k,\qquad \pi(1-\kappa)=\theta_k-\sin\theta_k\cos\theta_k,\qquad \kappa\equiv\frac{k}{N},$$ $$S_{\mathcal{S}_k}=N\Big(\sinh^{-1}\tilde\kappa-\tfrac13\tilde\kappa\sqrt{\tilde\kappa^2+1}\Big),\qquad \tilde\kappa\equiv\frac{\sqrt\lambda\,k}{4N}.$$ Our aim will be to reproduce these results for the conformal impurities using the method of karchuhl () and then apply the same to the case of the non-conformal impurity flows that were discussed in Kumar:2016jxy (). Now we review the maps that take the AdS-extension of the causal development of the spatial sphere to hyperbolically sliced AdS. This will help set our conventions, and will be important subsequently, since the evaluation of EE for non-conformal impurities will involve computation of integrals over specific brane embeddings in hyperbolic-AdS geometry, and the explicit calculation of these will require us to go back and forth between different coordinate systems. We first consider the transformation, $$x^\alpha=\frac{\tilde x^\alpha+\frac{c^\alpha}{2R}(\tilde x^2+\tilde z^2)}{1+\frac{c}{R}\cdot\tilde x+\frac{c^2}{4R^2}(\tilde x^2+\tilde z^2)}-c^\alpha R,\qquad \alpha=0,\ldots,3,\qquad(10)$$ $$z=\frac{\tilde z}{1+\frac{c}{R}\cdot\tilde x+\frac{c^2}{4R^2}(\tilde x^2+\tilde z^2)},$$ where $c^\alpha$ is a constant vector. Here $z$ is the radial AdS coordinate, with the conformal boundary at $z=0$. This is the extension of the boundary CFT special conformal transformation to an isometry of AdS. The map has the following actions: • On the conformal boundary at $z=0$, the ball $|\vec x|\leq R$ at $x^0=0$ is mapped to the half-space $\tilde x^1\geq 0$. The causal development of the ball is mapped to the Rindler wedge $\tilde x^1\geq|\tilde x^0|$. • The worldline of the impurity on the boundary, located at the spatial origin $\vec x=0$, is mapped to the trajectory of a uniformly accelerated particle, $(\tilde x^1)^2-(\tilde x^0)^2=R^2$, with proper acceleration $1/R$.
In Euclidean signature this maps to one half of the circular Wilson loop of radius $R$. • The transformation acts on the Poincaré patch metric as an isometry: $$ds^2=\frac{dz^2+dx^\alpha dx_\alpha}{z^2}\;\rightarrow\;\frac{d\tilde z^2+d\tilde x^\alpha d\tilde x_\alpha}{\tilde z^2},\qquad(11)$$ while the boundary metric itself transforms by a conformal factor. The holographic extension of the causal development of the ball into the AdS bulk (entanglement wedge) is given by the causal development of the hemisphere $\vec x^2+z^2\leq R^2$ (defined at $x^0=0$). This is mapped by the above isometry to the Rindler-AdS wedge $\tilde x^1\geq|\tilde x^0|$. The Rindler-AdS wedge is further mapped to hyperbolically sliced AdS by the transformations listed below. First we parametrize the Rindler-AdS wedge by defining the coordinates, $$\tilde x^1=r_1\cosh t,\quad \tilde x^0=r_1\sinh t,\quad \tilde x^2=r_2\cos\phi,\quad \tilde x^3=r_2\sin\phi,\qquad(12)$$ so that $$ds^2=\frac{1}{\tilde z^2}\left(d\tilde z^2+dr_1^2-r_1^2\,dt^2+dr_2^2+r_2^2\,d\phi^2\right).$$ The worldline of the heavy quark on the boundary is given by $r_1=R$, $r_2=0$. In order to perform the replica trick it is crucial that we move to Euclidean signature, via the replacement $t\rightarrow -i\tau$, so we obtain AdS in “double polar” coordinates, and the heavy quark impurity then traces out a Polyakov loop at $r_1=R$, $r_2=0$, $$ds^2_E=\frac{1}{\tilde z^2}\left(d\tilde z^2+dr_1^2+r_1^2\,d\tau^2+dr_2^2+r_2^2\,d\phi^2\right),\qquad -\frac{\pi}{2}\leq\tau\leq\frac{\pi}{2}.\qquad(14)$$ The Euclidean time must be restricted to the domain where $\tilde x^1$ is positive, so that $-\pi/2\leq\tau\leq\pi/2$. The $\tau$-coordinate is periodic under the shifts $\tau\rightarrow\tau+2\pi$, which also ensures that the “double polar” geometry is free of conical singularities. The map to hyperbolically sliced AdS is achieved by the transformations $$\tilde z=\frac{2R}{\rho\,\omega},\qquad r_1=\frac{2R}{\rho\,\omega}\sqrt{\rho^2-1},\qquad r_2=\frac{2R}{\omega}\sinh u\,\sin\theta,\qquad(15)$$ $$\omega=\cosh u-\sinh u\cos\theta,$$ which yield the Euclidean AdS black hole with hyperbolic horizon. Once again we have the restriction on the range of the Euclidean time, $-\pi/2\leq\tau\leq\pi/2$, which has periodicity $2\pi$, guaranteeing that the space caps off smoothly at $\rho=1$. Finally, it will be useful to recall the coordinate transformations which directly map the entanglement wedge in the original AdS spacetime to the hyperbolic AdS black hole (16) with inverse temperature $\beta_0=2\pi$. The relevant coordinate transformations are (in Lorentzian signature): $$z=\frac{R}{\rho\cosh u+\sqrt{\rho^2-1}\cosh t},\qquad x^0=\sqrt{\rho^2-1}\,z\sinh t,\qquad r=\rho\,z\sinh u.$$
Upon continuation to imaginary time $t\rightarrow -i\tau$, we must restrict the domain of $\tau$ to $-\pi/2\leq\tau\leq\pi/2$. It can be shown that the pre-image of the Euclidean hyperbolic AdS black hole, given this domain, is the interior of the hemisphere in the original (Euclidean) AdS geometry, $$x_0^2+r^2+z^2\leq R^2,\qquad r,z\geq 0.\qquad(19)$$ ##### Hyperbolic AdS and replica method: The replica method requires that we consider a hyperbolic AdS black hole in which the Euclidean time has period $2\pi n$, where $n\neq 1$, so that $$ds^2=f_n(\rho)\,d\tau^2+\frac{d\rho^2}{f_n(\rho)}+\rho^2\left(du^2+\sinh^2 u\,d\Omega_2^2\right),\qquad f_n(\rho)=\rho^2-1-\frac{(\rho_+^2-1)\rho_+^2}{\rho^2}.\qquad(20)$$ The Hawking temperature of the black hole is $$\beta^{-1}=T_H=\frac{f_n'(\rho_+)}{4\pi}=\frac{2\rho_+^2-1}{2\pi\rho_+}.\qquad(21)$$ It is clear that implementation of the replica trick is equivalent to varying the Hawking temperature of the black hole, ensuring, as usual, the absence of a conical singularity in the Euclidean geometry. In this approach the entanglement entropy is given by the thermal entropy evaluated in the hyperbolic AdS geometry. In particular, using eq.(5), we have $$S=\lim_{\beta\rightarrow 2\pi}\beta\,\partial_\beta I_{2\pi}(\beta).\qquad(22)$$ Here $I_{2\pi}(\beta)$ is the action of the hyperbolic AdS geometry, including any probes dual to the impurities or defects under consideration, and where the integration over Euclidean time is restricted to the domain $-\pi/2\leq\tau\leq\pi/2$. ### 2.3 Warmup: A single fundamental quark As a warmup, we compute the EE excess due to the insertion of a single fundamental quark into the spherical entangling region. In the AdS dual, this is achieved by inserting a probe fundamental string (F1) into the hyperbolic AdS geometry and computing the thermal entropy from the Nambu-Goto worldsheet action in this geometry. The F1-string worldsheet is placed at the spatial origin $u=0$ of the hyperbolic slices, and stretches from the hyperbolic horizon at $\rho=\rho_+$ to the conformal boundary at $\rho\rightarrow\infty$. The tension of the fundamental string, in units of the AdS radius, is $$T_{F1}=\frac{1}{2\pi\alpha'}=\frac{\sqrt\lambda}{2\pi},\qquad(23)$$ where $\lambda$ is the ’t Hooft coupling for the $SU(N)$ theory. Then the action for the static F-string embedding stretched along the radial AdS coordinate is $$I_{F1}(\beta)=\frac{\sqrt\lambda}{2\pi}\int_{\rho_+}^{\rho_\infty}d\rho\int_{-\pi/2}^{\pi/2}d\tau\,\sqrt{\det{}^*g}+I_{F1}^{\rm c.t.}$$
(24) The determinant of the induced metric on the worldsheet for this embedding is unity, and the boundary counterterm which regularises the worldsheet action is independent of the temperature, as it is only sensitive to UV details. Varying with respect to $\beta$, we thus obtain $$S_\square=\beta\,\frac{\partial I_{F1}(\beta)}{\partial\beta}\bigg|_{\beta=2\pi}=\frac{\sqrt\lambda}{6}.\qquad(25)$$ Our result differs by a factor of two from that of Lewkowycz:2013laa (), as the range of integration over Euclidean time is restricted to $-\pi/2\leq\tau\leq\pi/2$, which corresponds to one half of the Polyakov loop. ##### EE from stress tensor evaluation: For this simple example it is instructive to verify how the above result can be reproduced holographically, using eq.(8), which relies on the expectation value of the stress tensor in the presence of the temporal Wilson line in Rindler frame. This computes the expectation value of the entanglement Hamiltonian which generates time translations along the compact time direction. In particular, the EE for the impurity is given as $$S_\square=\ln Z_\square^{\mathbb{H}}+\int_{\mathbb{H}}\sqrt{g_{\mathbb{H}}}\,\big\langle T^\tau_{\ \tau}\big\rangle_\square.\qquad(26)$$ The ingredients in the computation can be calculated either directly in the AdS Poincaré patch, or after translating to the hyperbolic AdS picture. In the Poincaré patch, we need to ensure that all integrals over the Euclidean string worldsheet are restricted to the domain, $$D:\quad x_0^2+z^2\leq R^2,\qquad z>0.\qquad(27)$$ Therefore, the impurity action in hyperbolic space is given by integrating the (Euclidean) Nambu-Goto action in the Poincaré patch of AdS over $D$: $$-\ln Z_\square^{\mathbb{H}}=I_\square=\frac{\sqrt\lambda}{2\pi}\left[\int_\epsilon^R dz\,\frac{1}{z^2}\int_{-\sqrt{R^2-z^2}}^{\sqrt{R^2-z^2}}dx_0-\int_{-R}^{R}dx_0\,\frac{1}{\epsilon}\right]=-\frac{\sqrt\lambda}{2}.\qquad(28)$$ The second term is the worldsheet counterterm induced on the conformal boundary at $z=\epsilon$, as $\epsilon$ is taken to zero. The stress tensor expectation value (the worldsheet stress tensor for the string embedding is obtained by varying the worldsheet action with respect to the spacetime metric, in Lorentzian signature) for the heavy quark source in the Unruh state, or equivalently, in hyperbolic space, would normally be computed by reading off the normalizable mode of the metric sourced by the probe string in the bulk.
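Both fundamental-quark results quoted above, eq.(25) and eq.(28), reduce to elementary one-dimensional computations and can be cross-checked numerically. A minimal sketch (our own check, not from the paper; we set $\sqrt\lambda=1$ so the expected values are the pure numbers $1/6$ and $-1/2$, and use the horizon location $\rho_+(\beta)$ implied by eq.(21)):

```python
import math

sqrt_lam = 1.0  # overall factor sqrt(lambda); both results scale linearly with it

# Horizon location of the hyperbolic AdS black hole from eq. (21):
# 1/beta = (2*rho_+^2 - 1)/(2*pi*rho_+)  =>  rho_+ = (pi + sqrt(pi^2 + 2*beta^2))/(2*beta)
def rho_plus(beta):
    return (math.pi + math.sqrt(math.pi**2 + 2.0 * beta**2)) / (2.0 * beta)

# Regularized F-string action: the worldsheet determinant is 1, the tau window has
# length pi, and the beta-independent UV counterterm drops out of the beta-derivative.
def I_F1(beta, rho_inf=100.0):
    return sqrt_lam / (2.0 * math.pi) * math.pi * (rho_inf - rho_plus(beta))

beta, h = 2.0 * math.pi, 1e-5
S_box = beta * (I_F1(beta + h) - I_F1(beta - h)) / (2.0 * h)  # beta * dI/dbeta at beta = 2*pi
print(S_box)  # -> close to sqrt_lam/6, matching eq. (25)

# Regularized Poincare-patch Nambu-Goto action over the half-disk D (eq. 28),
# with a log-spaced grid in z to resolve the 1/z^2 behaviour near the cutoff eps.
def I_box(eps, n=20000, R=1.0):
    u0, u1 = math.log(eps), math.log(R)
    du = (u1 - u0) / n
    total = 0.0
    for i in range(n):
        z = math.exp(u0 + (i + 0.5) * du)   # midpoint rule in u = log z
        total += 2.0 * math.sqrt(max(R * R - z * z, 0.0)) / z * du
    return sqrt_lam / (2.0 * math.pi) * (total - 2.0 * R / eps)

print(I_box(1e-4))  # -> close to -sqrt_lam/2
```

Note that $\rho_\infty$ cancels in the $\beta$-derivative, which is why only the horizon term contributes to the entropy.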
Alternatively, from the Hamiltonian formulation of the AdS/CFT correspondence, the (regularized) energy of the probe should directly yield the energy of the corresponding source (impurity) in the boundary CFT Karch:2008uy (). The result for the energy of the probe string is thus of the form ∫H⟨Tττ⟩□=√λ2π[∫π/2−π/2dτ∫ρ∞1dρgττ], (29) where is the UV cutoff. Keeping only the finite terms, we find ∫H⟨Tττ⟩□=−√λ3, (30) so that the contribution to the EE of the spherical region from the heavy quark is S□=√λ6. (31) ## 3 D5-brane impurity In this section we will focus our attention on the D5-brane embedding which computes the BPS Wilson loop in SYM, in the antisymmetric tensor representation. The embedding admits a deformation which can be interpreted as an RG flow on the worldvolume of the impurity Kumar:2016jxy (). Our goal will be to extract the behaviour of the impurity EE along this flow. ### 3.1 AdS embeddings of the D5-brane The D5-brane embedding, dual to a straight Wilson line in the theory, preserves an subgroup of the global R-symmetry. This is realized geometrically, by having the D5-brane wrapping an latitude of the five-sphere in AdS. In the non-conformal “flow” solution described in Kumar:2016jxy (), the polar angle associated to this latitude varies as a function of the radial position in AdS. We can choose the worldvolume coordinates to be , where parametrises the non-compact spatial coordinate on the brane. We will eventually choose the gauge . The induced metric for such an embedding in (Euclidean) AdS is, ∗ds2=dσ2(z′(σ)2z2+θ′(σ)2)+dx20z2+sin2θdΩ24. (32) The action for the D5-brane consists of the standard Dirac-Born-Infeld (DBI) and Wess-Zumino (WZ) terms. The latter supports the configuration when a non-zero, radial world-volume electric field is switched on. In Euclidean signature this is purely imaginary and will be denoted in terms of the real quantity : G=−2πiα′F0σ. 
(33) The Wess-Zumino term for the D5-brane embedding is induced by the pullback of the RR four-form potential determined by the volume form on AdS. In particular, the relevant component of is C(4)=1gs[32(θ−π)−sin3θcosθ−32sinθcosθ]ω4, (34) where is the volume form of the unit four-sphere. The four-form potential is chosen so that the five-form flux comes out proportional to the volume form of : F(5)=dC(4)=1gs4sin4θdθ∧ω4. (35) The D5-brane embedding is then determined by the equations of motion following from the action ID5=TD5∫d6σe−ϕ√∗g+2πα′F−igsTD5∫2πα′F∧C(4)+Ic.t.. (36) The action is regularized by counterterms . The dilaton vanishes in the AdS background dual to the theory, and the D5-brane tension can be expressed in terms of gauge theory parameters as TD5=N√λ8π4,λ=4πgsN. (37) The counterterms can be split in two pieces: one which regulates the UV divergences in the action and another which fixes the number of units of string charge carried by the embedding to be drukkerfiol (); Kumar:2016jxy (), Ic.t.=IUV+IU(1), (38) IUV=−∫dx0(zδIδ(∂σz)+(θ(σ)−θ∣∣z=0))δIδ(∂σθ))∣∣z=ϵ. IU(1)=−i∫dx0dσFμνδIδFμν=ik∫dx0dσF0σ. The counterterm enforces a Lagrange multiplier constraint that fixes the number of units of string charge carried by the configuration. Putting together all these ingredients, choosing the gauge , the final form for the D5-brane action is ID5=TD58π23∫dx0∫ϵdz[sin4θ√z−4+z−2θ′2−G2−D(θ)G]+IUV, (39) with D(θ)≡sin3θcosθ+32(sinθcosθ−θ+π(1−κ)),κ≡kN. (40) #### 3.1.1 The constant embedding It is easy to check that the equations of motion yield a constant solution: θ=θκ,sinθκcosθκ−θκ+π(1−κ)=0. (41) This solution is BPS and has vanishing regularized action in Poincaré patch. It yields the straight BPS Wilson loop in the antisymmetric tensor representation yamaguchi (); passerini (); paper1 (). In all respects the constant solution is identical to the F-string solution for a fundamental quark, except for the normalization of the action which is controlled by . 
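The transcendental condition eq.(41) for the constant embedding has no closed-form solution, but it is easily solved by bisection, and the conformal EE formulas of eq.(9) can then be checked in the small-deformation limit, where both tensor-representation results should reduce to $k$ times the single-quark answer $\sqrt\lambda/6$. A pure-Python sketch (our own consistency check; $N$, $k$ and $\lambda$ enter the plotted ratios only through $\kappa$ and $\tilde\kappa$):

```python
import math

def theta_kappa(kappa, tol=1e-12):
    # Bisection for sin(t)*cos(t) - t + pi*(1 - kappa) = 0 on (0, pi)  (eq. 41)
    f = lambda t: math.sin(t) * math.cos(t) - t + math.pi * (1.0 - kappa)
    lo, hi = 1e-9, math.pi - 1e-9   # f(lo) > 0 > f(hi) for 0 < kappa < 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(theta_kappa(0.5))  # kappa = 1/2 gives exactly pi/2

# Dimensionless EE ratios from eq. (9), stripped of the common factor k*sqrt(lambda)/6:
def ratio_antisym(kappa):
    # S_{A_k} / (k * S_box) = (2/(3*pi*kappa)) * sin(theta_kappa)^3
    return 2.0 * math.sin(theta_kappa(kappa))**3 / (3.0 * math.pi * kappa)

def ratio_sym(kt):
    # S_{S_k} / (k * S_box) = (3/(2*kt)) * (asinh(kt) - (kt/3)*sqrt(kt^2 + 1)),
    # with kt = sqrt(lambda)*k/(4N)
    return 1.5 * (math.asinh(kt) - kt * math.sqrt(kt * kt + 1.0) / 3.0) / kt

for x in (1e-2, 1e-3, 1e-4):
    print(ratio_antisym(x), ratio_sym(x))  # both tend to 1 in the small-deformation limit
```

The antisymmetric ratio approaches 1 only as $\kappa^{2/3}$, reflecting $\theta_\kappa\to\pi$ with $\pi-\theta_\kappa\sim(3\pi\kappa/2)^{1/3}$, whereas the symmetric ratio approaches 1 quadratically in $\tilde\kappa$.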
The contribution to the EE of a spherical region can be calculated by applying the formula eq.(22) to the constant embedding in hyperbolic AdS space (20). Repeating the above exercise for the solution $\theta=\theta_\kappa$, we obtain the regularized action as a function of the temperature of the hyperbolic AdS black hole: $$I_{D5}(\beta)=T_{D5}\,\frac{8\pi^2}{3}\int_{-\pi/2}^{\pi/2}d\tau\int_{\rho_+}^{\rho_\infty}d\rho\left[\sin^4\theta_\kappa\sqrt{1-G^2}-D(\theta_\kappa)\,G\right]+I_{\rm UV},\qquad(42)$$ where $8\pi^2/3={\rm Vol}(S^4)$ and $\rho_\infty$ is a UV cutoff. The entanglement entropy contribution from the impurity in the antisymmetric tensor representation is then, $$S_{\mathcal{A}_k}=\lim_{\beta\rightarrow 2\pi}\beta\,\partial_\beta I_{D5}(\beta)=\frac{N}{9\pi}\sqrt\lambda\,\sin^3\theta_\kappa.\qquad(43)$$ #### 3.1.2 The D5 flow solution The Poincaré patch action for the D5-brane embedding permits a non-constant zero temperature BPS solution Callan:1998iq (). This solution interpolates between a spike, or bundle of $k$ coincident strings, in the UV and the blown-up D5-brane configuration corresponding to the antisymmetric representation reviewed above. In the boundary gauge theory, the flow can be interpreted as the screening of $k$ coincident quarks in the fundamental representation to a source in the antisymmetric tensor representation $\mathcal{A}_k$ Kumar:2016jxy (). As seen in Kumar:2016jxy (), the flow appears as a result of a condensate for a dimension one operator in the UV worldline quantum mechanics of the impurity. The Poincaré patch BPS embedding solves the first order equation, $$z\,\frac{d\theta}{dz}=-\frac{\partial_\theta\tilde D}{\tilde D},\qquad \tilde D(\theta)\equiv\sin^5\theta+D(\theta)\cos\theta,\qquad(44)$$ and is explicitly given by the solution, $$\frac{1}{z}=\frac{A}{\sin\theta}\left(\frac{\theta-\sin\theta\cos\theta-\pi(1-\kappa)}{\pi\kappa}\right)^{1/3},\qquad(45)$$ where $A$ is an integration constant with dimensions of inverse length. For small $z$, the polar angle approaches $\pi$, so that the $S^4$ wrapped by the D5-brane shrinks to zero size and the collapsed configuration must be viewed as $k$ coincident strings. In the IR limit on the other hand, when $z\rightarrow\infty$, $\theta$ approaches $\theta_\kappa$, which yields the blown-up D5-brane embedding. In order to calculate the excess EE contribution from this non-conformal impurity in the boundary CFT, we first need to map the configuration to hyperbolically sliced AdS (20).
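The flow equation (44) and the explicit solution (45) can be checked against one another numerically. Reading eq.(45) as $1/z=(A/\sin\theta)\,[(\theta-\sin\theta\cos\theta-\pi(1-\kappa))/\pi\kappa]^{1/3}$ (our reading of the extraction-damaged formula), the logarithmic $\theta$-derivative of its right-hand side must equal $\tilde D/\partial_\theta\tilde D$, since along the solution $z\,d\theta/dz=-g/g'$. A finite-difference sketch, with arbitrary sample values of $\kappa$ and $\theta$:

```python
import math

kappa = 0.3  # arbitrary sample value, 0 < kappa < 1

def D(t):
    # D(theta) from eq. (40)
    return math.sin(t)**3 * math.cos(t) + 1.5 * (math.sin(t) * math.cos(t) - t + math.pi * (1.0 - kappa))

def Dt(t):
    # D-tilde(theta) = sin^5(theta) + D(theta)*cos(theta) from eq. (44)
    return math.sin(t)**5 + D(t) * math.cos(t)

def g(t):
    # right-hand side of eq. (45): 1/z as a function of theta (integration constant A set to 1)
    return (1.0 / math.sin(t)) * ((t - math.sin(t) * math.cos(t) - math.pi * (1.0 - kappa)) / (math.pi * kappa))**(1.0 / 3.0)

# Along 1/z = g(theta):  z*dtheta/dz = -g/g'.  Matching eq. (44), z*dtheta/dz = -(d_theta Dt)/Dt,
# requires g'/g = Dt/(d_theta Dt); check both sides by central differences at a sample theta
# (theta must lie between theta_kappa and pi so the cube-root argument is positive).
t, h = 2.0, 1e-6
lhs = (math.log(g(t + h)) - math.log(g(t - h))) / (2.0 * h)      # g'/g
rhs = Dt(t) * 2.0 * h / (Dt(t + h) - Dt(t - h))                  # Dt / (dDt/dtheta)
print(lhs, rhs)  # the two values should agree
```

The agreement of the two finite-difference values supports the reading of eqs.(44)-(45) adopted above.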
The internal angle $\theta$ of the ten dimensional geometry is unaffected by the map. The only other active coordinate in the D5-brane embedding is the radial position $z$ in AdS spacetime which, upon rewriting in terms of hyperbolic Euclidean AdS coordinates, yields the transformed solution: $$\frac{1}{R}\left(\rho+\sqrt{\rho^2-1}\cos\tau\right)=\frac{A}{\sin\theta}\left(\frac{\theta-\sin\theta\cos\theta-\pi(1-\kappa)}{\pi\kappa}\right)^{1/3},\qquad(46)$$ with the restriction $-\pi/2\leq\tau\leq\pi/2$. The impurity is placed at the spatial origin $r=0$ in $\mathbb{R}^3$, which corresponds to $u=0$ in $\mathbb{H}^3$. Since $\theta$ is a function of $\rho$ and $\tau$, the induced metric on the D5-brane is, $$\left.{}^*ds^2\right|_{D5}=\left[f_n(\rho)+(\partial_\tau\theta)^2\right]d\tau^2+\left[\frac{1}{f_n(\rho)}+(\partial_\rho\theta)^2\right]d\rho^2+2\,\partial_\rho\theta\,\partial_\tau\theta\,d\tau\,d\rho+\sin^2\theta\,d\Omega_4^2,\qquad(47)$$ where $f_n(\rho)$ is given in eq.(20). The D5-brane embedding, mapped to hyperbolic AdS, must also have a non-trivial background worldvolume electric field. Since the embedding shares only the temporal and radial directions with the bulk AdS geometry, there is only one component of the field strength to switch on: $$i\tilde G=2\pi\alpha' F_{\tau\rho}.\qquad(48)$$ For the case with $\beta=2\pi$, $\tilde G$ can be obtained directly by transforming the field strength in the Poincaré patch solution. To implement the replica trick, however, we first need to consider general temperatures of the hyperbolic black hole. Using the above ansatz for the D5-brane embedding, the action in the hyperbolic AdS background is, $$I_{D5}(\beta)=T_{D5}\,{\rm Vol}(S^4)\int_{-\pi/2}^{\pi/2}d\tau\int_{\rho_+}^{\rho_\infty}d\rho\left[\sin^4\theta\sqrt{1-\tilde G^2+f_n(\rho)(\partial_\rho\theta)^2+\frac{(\partial_\tau\theta)^2}{f_n(\rho)}}-D(\theta)\,\tilde G\right]+I_{\rm UV}.\qquad(49)$$ Solving for $\tilde G$ using its equation of motion and plugging it back in, $$I_{D5}(\beta)=T_{D5}\,\frac{8\pi^2}{3}\int_{-\pi/2}^{\pi/2}d\tau\int_{\rho_+}^{\rho_\infty}d\rho\,\sqrt{\sin^8\theta+D(\theta)^2}\,\sqrt{1+f_n(\rho)(\partial_\rho\theta)^2+\frac{(\partial_\tau\theta)^2}{f_n(\rho)}}+I_{\rm UV}.$$ In order to extract the entanglement entropy excess due to the impurity, we need to vary this action with respect to $\beta$ and set $\beta=2\pi$, whilst keeping $\theta$ fixed as the BPS solution at $\beta=2\pi$. The latter is justified because the first variation of the action with respect to $\theta$ vanishes by the equations of motion at $\beta=2\pi$. Once the variations with respect to $\beta$ are performed, the remaining integrals are most easily evaluated in Poincaré patch coordinates, in which the D-brane embedding function is simpler.
The transformations (LABEL:poinctohyp) when restricted to the location of the heavy quark at imply, ρ=R2+x20+z22zR,cosτ=R2−x20−z2√(x20+z2+R2)2−4R2z2. (51) The Jacobian for the transformation on the worldvolume back to Poincaré patch coordinates is, ∣∣∣∂ρ∂z∂τ∂x0−∂ρ∂x0∂τ∂z∣∣∣=1z2. (52) We also note that the kinetic terms for a static Poincaré patch configuration satisfy, z2θ′(z)2=(ρ2−1)(∂ρθ)2+(∂τθ)2ρ2−1. (53) We first evaluate the action (or free energy) of the BPS embedding in the hyperbolic AdS background with , by recasting in Poincaré patch coordinates: ID5(2π)=TD5Vol(S4)∫∫Ddx0dzddz [−1z~D(θ)]−2Rϵ~D(θ)∣∣z=ϵ, (54) where is defined in eq.(44). Although the integrand is a total derivative, the fact that the integration region is limited to the half-disk (eq.(27)), renders the evaluation nontrivial. In particular, the integration over is performed first since the integrand is independent of time. Following this, the remaining integral can be performed numerically after exchanging the integration variable for , which is more convenient as the solution is known explicitly for as a function of . The values of the (regularized) actions for the two types of conformal sources, fundamental and antisymmetric tensor in hyperbolic space are: kI□(2π)=−k√λ2,IAk(2π)=−N√λ3πsin3θκ. (55) The partition function of the heavy quark source in hyperbolic space with inverse temperature is plotted in figure 2 as a function of the deformation parameter . It is a monotonically decreasing function of the size of the entangling region and interpolates between the value for coincident fundamental quarks in the UV and that for a source transforming in the antisymmetric tensor representation in the IR. We note that is like a relative entropy relative (). It is the free energy difference between the embeddings with non-zero and vanishing deformations in the thermal state with associated to the modular Hamiltonian. 
This explains the monotonic increase of with , and the vanishing slope in figure 2 for arbitrarily small deformations. By expanding the solution for the embedding function , the deformation can be interpreted as the expectation value of a dimension one operator in the UV quantum mechanics of the boundary impurity Kumar:2016jxy (). The EE contribution from the impurity is obtained by varying the "off-shell" action (3.1.2) with respect to and evaluating the first variation on the BPS solution,
$$S_{D5}(R_A)=\lim_{\beta\to2\pi}\beta\,\partial_\beta I_{D5}(\beta)\qquad(56)$$
$$=T_{D5}\,\mathrm{Vol}(S^4)\left[\frac{\pi}{3}\,\frac{\partial_\theta\tilde{D}}{\tilde{D}}\bigg|_{\rho=1}+\frac{1}{3}\int_{-\pi/2}^{\pi/2}d\tau\int_1^\infty d\rho\,\frac{(\partial_\theta\tilde{D})^2}{\tilde{D}}\,\frac{1-2z^2\sin^2\tau/R^2}{\rho^2(\rho^2-1)}\right].$$
We have made use of the BPS formula (44) and that when . Recasting the result in terms of the integral over the domain in Poincaré patch, we find:
$$S_{D5}(R_A)=\lim_{\beta\to2\pi}\beta\,\partial_\beta I_{D5}(\beta)\qquad(57)$$
$$=T_{D5}\,\frac{8\pi^2}{3}\left[\frac{\pi}{3}\,\frac{\sin^8\theta+D^2}{\sin^5\theta+D\cos\theta}\bigg|_{\rho=1}-\frac{1}{3}\int dx_0\int dz\,\theta'(z)\sin\theta\left(\sin^3\theta\cos\theta-D\right)\times\right.$$
$$\left.\times\,\frac{16R^4z^3\left(x_0^4+x_0^2(2R^2-6z^2)+(z^2-R^2)^2\right)}{(z^2+x_0^2+R^2)^2\left((x_0^2+z^2)^2+2R^2(x_0^2-z^2)+R^4\right)^2}\right].$$
As in the case of the free energy above, the integration over the domain must be performed numerically. The integral over the coordinate can once again be obtained analytically, and the final integration is achieved numerically after exchanging for . The result for the entanglement entropy excess is a function of the dimensionless combination , as plotted in fig.(3). For every value of , we see that the entanglement entropy contribution interpolates between that of coincident fundamental quarks and a source in the antisymmetric representation :
$$\frac{S_{D5}}{k\,S_\Box}\bigg|_{AR\to0}=1,\qquad\frac{S_{D5}}{k\,S_\Box}\bigg|_{AR\to\infty}=\frac{2}{3\pi\kappa}\sin^3\theta_\kappa.\qquad(58)$$
The main notable feature of the results is that the variation of the EE with size of the entangling region (or equivalently the deformation ) is non-monotonic, exhibiting a maximum at a special value of of order unity, and decreasing monotonically subsequently.

### 3.2 Comparison with $\langle O_{F^2}\rangle$

The D5-brane is a source of various supergravity fields in and the falloffs of these fields yield the VEVs of corresponding operators in the boundary gauge theory.
In particular, the dilaton falloff was used in Kumar:2016jxy () to infer the VEV of the dimension four operator , equal to the Lagrangian density of the theory, in the presence of the non-conformal D5-brane impurity. Since is a dimension four operator, for conformal impurities the VEV of this operator scales as where is the spatial distance from the heavy quark on the boundary:
$$\langle O_{F^2}\rangle=\frac{\sqrt{2}}{24\pi^2}\left(\frac{3\pi\kappa}{2}\right)\frac{\sqrt{\lambda}}{r^4},\qquad rA\ll1,$$
$$\langle O_{F^2}\rangle=\frac{\sqrt{2}}{24\pi^2}\sin^3\theta_\kappa\,\frac{\sqrt{\lambda}}{r^4},\qquad rA\gg1.$$
In fig. (4), we plot the dimensionless ratio as a function of the dimensionless distance from the impurity . The qualitative features of the plots are similar to those of the entanglement entropy contribution from the defect. The sources in the fundamental representation are screened into the antisymmetric representation, but the effect is non-monotonic as a function of the distance from the source.

## 4 D3-brane impurities

The D3-brane embedding with worldvolume found by Drukker and Fiol drukkerfiol () computes BPS Wilson lines in the rank symmetric tensor representation passerini (); paper1 (). In Kumar:2016jxy (), a D3-brane (BPS) embedding was analyzed which interpolates between the representation in the UV and coincident strings in the IR. We will first review the properties of this zero temperature solution in the Poincaré patch and subsequently analyze its geometric entropy.

### 4.1 Poincaré patch D3-brane embedding

The D3-brane wraps an AdS subset of AdS and is supported by units of flux. Since the internal five-sphere plays no role we will suppress it in the discussion below. The D3-brane impurity preserves the same symmetries as a point at the spatial origin of the boundary CFT on . In particular, choosing the worldvolume coordinates to be , the induced metric for the relevant embedding takes the form (in Euclidean signature),
$${}^*ds^2\big|_{D3}=\frac{1}{z^2}\left[dx_0^2+d\sigma^2\left[\left(\frac{\partial z}{\partial\sigma}\right)^2+\left(\frac{\partial r}{\partial\sigma}\right)^2\right]+r(\sigma)^2\,d\Omega_2^2\right].\qquad(60)$$
Eventually we will set after discussing the counterterms and UV regularization.
The background five-form RR flux, and its associated four-form potential , play a crucial role in stabilizing the D3-brane configuration. In particular, the pullback of the four-form potential onto the D3-brane worldvolume is
$${}^*C_4=-\frac{i}{g_s}\,\frac{r^2}{z^4}\,\partial_\sigma r\,dx_0\wedge d\sigma\wedge\omega_2,\qquad(61)$$
where is the volume-form on the unit two-sphere. We also recall that is only defined up to a gauge choice. The choice of gauge will be important when we proceed to the calculation of the entanglement entropy contribution from the defect. The expanded D3-brane configuration also has a worldvolume electric field and a tension . Putting all ingredients together, we find, ID3<
# I/O Utils¶ The ioutils module contains functions for reading data from automatic plate readers. The different functions read the data files and generate a data table of type pandas.DataFrame which contains all the relevant data: the reading from every well at every time point. This data table is in a tidy data format, meaning that each row in the table contains a single measurement with the following values (as columns): • Time: in hours (mandatory) • OD: optical density which is a proxy for cell density (mandatory) • Well: as in the name of the well such as “A1” or “H12” (optional) • Row, Col: the row and column of the well in the plate (optional) • Strain: the name of the strain (optional) • Color: the color that should be given to graphs of the data from this well (optional) Any other columns can also be provided (for example, Cycle Nr. and Temp. [°C] are provided by Tecan Infinity). Example of a pandas.DataFrame generated using the ioutils module functions: Time Temp. [°C] Cycle Nr. Well OD Row Col Strain Color 0.0 30.0 1.0 A1 0.109999999403954 A 1 G #4daf4a 0.23244444444444445 30.3 2.0 A1 0.109899997711182 A 1 G #4daf4a 0.46569444444444447 30.1 3.0 A1 0.110500000417233 A 1 G #4daf4a 0.6981111111111112 30.1 4.0 A1 0.110500000417233 A 1 G #4daf4a 0.9305555555555556 30.0 5.0 A1 0.111599996685982 A 1 G #4daf4a ## Plate template¶ Normally, the output of a plate reader doesn’t include information about the strain in each well. To integrate that information (as well as the colors that should be used for plotting the data from each well), you must provide a plate definition CSV file. This plate template file is a table in which each row has four values: Row, Col, Strain, and Color. The Row and Col values define the wells; the Strain and Color values define the names of the strains and their respective colors (for plotting purposes). These template files can be created using the Plato web app, using Excel (save as .csv), or in any other way that is convenient to you.
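The annotation step described above — joining a plate template onto tidy measurements by Row and Col — can be sketched with plain pandas. The values below are hypothetical, stand-ins for what a plate reader and a Plato-generated template would contain:

```python
import pandas as pd

# Tidy measurements as produced by a plate reader (hypothetical values)
measurements = pd.DataFrame({
    "Time": [0.0, 0.25, 0.0, 0.25],
    "OD":   [0.11, 0.12, 0.10, 0.13],
    "Well": ["A1", "A1", "A2", "A2"],
    "Row":  ["A", "A", "A", "A"],
    "Col":  [1, 1, 2, 2],
})

# Plate template: Row, Col, Strain, Color (as in a plate definition CSV)
plate = pd.DataFrame({
    "Row":    ["A", "A"],
    "Col":    [1, 2],
    "Strain": ["G", "R"],
    "Color":  ["#4daf4a", "#e41a1c"],
})

# Annotate each measurement with its strain name and plot color
df = measurements.merge(plate, on=["Row", "Col"], how="left")
```

Each measurement row now carries its Strain and Color columns, matching the example table above.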
Curveball also ships with some plate template files - type curveball plate --list in the command line for a list of the builtin plate templates: > curveball plate --list checkerboard.csv checkerboard2.csv DH5a-s12-TG1.csv DH5a-TG1.csv G-RG-R.csv nine-strains.csv six-strains.csv Example of the first 5 rows of a plate template file: Row Col Strain Color A 1 0 #ffffff A 2 0 #ffffff A 3 0 #ffffff A 4 0 #ffffff A 5 0 #ffffff A full example can be viewed by typing curveball plate in the command line. ### Members¶ curveball.ioutils.read_curveball_csv(filename, max_time=None, plate=None)[source] Reads growth measurements from a Curveball csv (comma separated values) file. Parameters filename : str path to the file. max_time : float, optional maximal time in hours, defaults to infinity plate : pandas.DataFrame, optional data frame representing a plate, usually generated by reading a CSV file generated by Plato. Returns pandas.DataFrame Examples >>> df = curveball.ioutils.read_curveball_csv("data/Tecan_210115.csv") curveball.ioutils.read_sunrise_xlsx(filename, label='OD', max_time=None, plate=None)[source] Reads growth measurements from a Tecan Sunrise Excel output file. Parameters filename : str pattern of the XLSX files to be read. Use * and ? in filename to read multiple files and parse them into a single data frame. label : str, optional measurement name to use for the data in the file, defaults to OD. max_time : float, optional maximal time in hours, defaults to infinity plate : pandas.DataFrame, optional data frame representing a plate, usually generated by reading a CSV file generated by Plato. Returns pandas.DataFrame Data frame containing the columns: • Time (float, in hours) • OD (or the value of label, if given) • Well (str): the well name, usually a letter for the row and a number of the column. • Row (str): the letter corresponding to the well row. • Col (str): the number corresponding to the well column. • Filename (str): the filename from which this measurement was read.
• Strain (str): if a plate was given, this is the strain name corresponding to the well from the plate. • Color (str, hex format): if a plate was given, this is the strain color corresponding to the well from the plate. curveball.ioutils.read_tecan_mat(filename, time_label='tps', value_label='plate_mat', value_name='OD', plate_width=12, max_time=None, plate=None)[source] Reads growth measurements from a Matlab file generated by a proprietary script at the Pilpel lab. Parameters filename : str name of the MAT file to be read. Use * and ? in filename to read multiple files and parse them into a single data frame. time_label : str, optional name of the field used to store the time values, defaults to tps. value_label : str, optional name of the field used to store the OD values, defaults to plate_mat. plate_width : int width of the microwell plate in number of wells, defaults to 12. max_time : float, optional maximal time in hours, defaults to infinity plate : pandas.DataFrame, optional data frame representing a plate, usually generated by reading a CSV file generated by Plato. Returns pandas.DataFrame Data frame containing the columns: • Time (float, in hours) • OD (float) • Well (str): the well name, usually a letter for the row and a number of the column. • Row (str): the letter corresponding to the well row. • Col (str): the number corresponding to the well column. • Filename (str): the filename from which this measurement was read. • Strain (str): if a plate was given, this is the strain name corresponding to the well from the plate. • Color (str, hex format): if a plate was given, this is the strain color corresponding to the well from the plate. curveball.ioutils.read_tecan_xlsx(filename, label='OD', sheets=None, max_time=None, plate=None, PRINT=False)[source] Reads growth measurements from a Tecan Infinity Excel output file. Parameters filename : str path to the file.
label : str / sequence of str a string or sequence of strings containing measurement names used as titles of the data tables in the file. sheets : list, optional list of sheet numbers, if known. Otherwise the function will try to read all the sheets. max_time : float, optional maximal time in hours, defaults to infinity plate : pandas.DataFrame, optional data frame representing a plate, usually generated by reading a CSV file generated by Plato. Returns pandas.DataFrame Data frame containing the columns: There will also be a separate column for each label, and if there is more than one label, a separate Time and Temp. [°C] column for each label. Raises ValueError if no data was parsed from the file. Examples >>> plate = pd.read_csv("plate_templates/G-RG-R.csv") >>> df = curveball.ioutils.read_tecan_xlsx("data/Tecan_210115.xlsx", label=('OD','Green','Red'), max_time=12, plate=plate) >>> df.shape (8544, 9) curveball.ioutils.read_tecan_xml(filename, label='OD', max_time=None, plate=None)[source] Reads growth measurements from Tecan Infinity XML output files. Parameters filename : str pattern of the XML files to be read. Use * and ? in filename to read multiple files and parse them into a single data frame. label : str, optional measurement name used as Name in the measurement sections in the file, defaults to OD. max_time : float, optional maximal time in hours, defaults to infinity plate : pandas.DataFrame, optional data frame representing a plate, usually generated by reading a CSV file generated by Plato. Returns pandas.DataFrame Data frame containing the columns: • Time (float, in hours) • Well (str): the well name, usually a letter for the row and a number of the column. • Row (str): the letter corresponding to the well row. • Col (str): the number corresponding to the well column. • Filename (str): the filename from which this measurement was read. • Strain (str): if a plate was given, this is the strain name corresponding to the well from the plate.
• Color (str, hex format): if a plate was given, this is the strain color corresponding to the well from the plate. There will also be a separate column for the value of the label. Examples >>> import zipfile >>> with zipfile.ZipFile("data/20130211_dh.zip") as z: z.extractall("data/20130211_dh") >>> df = curveball.ioutils.read_tecan_xml("data/20130211_dh/*.xml", 'OD', plate=plate) >>> df.shape (2016, 8) curveball.ioutils.write_curveball_csv(df, filename)[source] Writes growth measurements to a Curveball csv (comma separated values) file. Parameters df : pandas.DataFrame data frame to write filename : str path to the output file
# Is any computational complexity question solved by the injury priority method except Post's problem? As we know, there are many questions about Turing degrees solved by the injury priority method. Is any computational complexity question solved by the injury priority method, other than Post's problem or Turing-degree questions? I'm curious about how to solve, by the same or similar methods, the parallel questions up and down the computational hierarchy. BTW, is any question in computer science solved by forcing, except the continuum hypothesis? Buhrman and Torenvliet use a resource-bounded priority method to build an oracle $$A$$ such that $$NEXP^A \subseteq P^{NP^A}$$.
# $M,N\in \Bbb R ^{n\times n}$, show that $e^{(M+N)} = e^{M}e^N$ given $MN=NM$ I am working on the following problem. Let $e^{Mt} = \sum\limits_{k=0}^{\infty} \frac{M^k t^k}{k!}$ where $M$ is an $n\times n$ matrix. Now prove that $$e^{(M+N)} = e^{M}e^N$$ given that $MN=NM$, ie $M$ and $N$ commute. Now the left hand side of the desired equality is $$e^{(M+N)} = I+ (M+N) + \frac{(M+N)^2}{2!} + \frac{(M+N)^3}{3!} + \ldots$$ On the right hand side of the equation we have $$e^Me^N = \left(I + M + \frac{M^2}{2!} + \frac{M^3}{3!}\ldots\right) \left(I + N + \frac{N^2}{2!} + \frac{N^3}{3!} \ldots\right)$$ Now basically this is as far as I got… I am unsure on how to work out the product of the two infinite sums. Possibly I need to expand the powers on the left hand side expression but I am unsure how to do this in an infinite sum… If anyone could give me an answer or a hint that can help me forward I would greatly appreciate it. Thanks #### Solutions Collecting From Web of "$M,N\in \Bbb R ^{n\times n}$, show that $e^{(M+N)} = e^{M}e^N$ given $MN=NM$" Another take on it, which avoids the somewhat tedious term-by-term manipulation and term-by-term comparison of matrix power series: Consider the ordinary, constant coefficient, matrix differential equation $dX / dt = (M + N)X; \, \text{with} \, X(0) = I; \tag{1}$ the unique matrix solution is well-known to be $X(t) = e^{(M + N)t}. \tag{2}$ Next, set $Y(t) = e^{Mt}e^{Nt} \tag{3}$ and note that, by the Leibniz rule for derivatives of products, $dY / dt = (d(e^{Mt}) / dt) e^{Nt} + e^{Mt}(d(e^{Nt}) /dt) = Me^{Mt}e^{Nt} + e^{Mt}Ne^{Nt}, \tag{4}$ and since $MN – NM = [M, N] = 0$ we also have $[e^{Mt}, N] = 0$ so that (4) becomes $dY / dt = Me^{Mt}e^{Nt} + Ne^{Mt}e^{Nt} = (M + N)e^{Mt}e^{Nt} = (M + N)Y, \tag{5}$ and evidently $Y(0) = I, \tag{6}$ so that $X(t)$ and $Y(t)$ satisfy the same differential equation with the same initial conditions; thus $X(t) = Y(t)$ for all $t$, or $e^{(M + N)t} = e^{Mt}e^{Nt} \tag{7}$ for all $t$.
Taking $t = 1$ yields the requisite result. QED Hope this helps. Cheerio, and as always, Fiat Lux!!!

A direct computation with the series is also possible:
\begin{align} e^{M}e^{N} &= \sum_{\ell = 0}^{\infty}\frac{M^{\ell}}{\ell!} \sum_{\ell' = 0}^{\infty}\frac{N^{\ell'}}{\ell'!} = \sum_{\ell = 0}^{\infty}\sum_{\ell' = 0}^{\infty}\frac{M^{\ell}N^{\ell'}}{\ell!\,\ell'!} \sum_{n = 0}^{\infty}\delta_{n, \ell + \ell'} \\ &= \sum_{n = 0}^{\infty}\sum_{\ell = 0}^{\infty}\frac{M^{\ell}}{\ell!} \sum_{\ell' = 0}^{\infty}\frac{N^{\ell'}}{\ell'!}\,\delta_{\ell', n - \ell} = \sum_{n = 0}^{\infty}\sum_{\substack{\ell = 0\\ n - \ell \,\geq\, 0}}^{\infty} \frac{M^{\ell}}{\ell!}\,\frac{N^{n - \ell}}{(n - \ell)!} \\ &= \sum_{n = 0}^{\infty}\frac{1}{n!}\sum_{\ell = 0}^{n}\frac{n!}{\ell!\,(n - \ell)!}\,M^{\ell}N^{n - \ell} = \sum_{n = 0}^{\infty}\frac{1}{n!}\sum_{\ell = 0}^{n}{n \choose \ell}M^{\ell}N^{n - \ell} \\ &= \sum_{n = 0}^{\infty}\frac{1}{n!}(M + N)^{n} = e^{M + N}, \end{align}
where the binomial theorem in the last line requires $MN = NM$.

Hint. $$(M+N)^2=M^2+MN+NM+N^2=(\text{if}~~[M,N]=0)=M^2+2MN+N^2.$$ Use this fact in the expansion of $\exp(M+N)$ to arrive at a sum of monomials of the form $M^qN^r$ with $q,r\geq 0$ and rational coefficients (without $[M,N]=0$ you would also have monomials of the form $N^rM^q$!). To finish the proof you need to collect the monomials of the same degree in $\exp(M)\exp(N)$, where the ordering is clear by definition. The product of two series (one of which is absolutely convergent) is $$\left(\sum_{n=0}^\infty a_n\right)\left(\sum_{n=0}^\infty b_n\right)=\sum_{n=0}^\infty \left(\sum_{k=0}^n a_{n-k}b_{k}\right).$$ Applying this to the series, $$e^Me^N = \left(I + M + \frac{M^2}{2!} + \frac{M^3}{3!}\ldots\right) \left(I + N + \frac{N^2}{2!} + \frac{N^3}{3!} \ldots\right)\\=I+ (MI+IN)+\left(\frac{M^2}{2}I+MN+I\frac{N^2}{2}\right)+\ldots$$ Now compare this to the other sum $$e^{(M+N)} = I+ (M+N) + \frac{(M+N)^2}{2!} + \frac{(M+N)^3}{3!} + \ldots$$
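The identity (and the necessity of the commutation hypothesis) can be sanity-checked numerically; a small sketch using SciPy's matrix exponential, with an arbitrary commuting pair built from a polynomial in $M$:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
N = 2.0 * M + 3.0 * np.eye(3)   # any polynomial in M commutes with M

assert np.allclose(M @ N, N @ M)                    # [M, N] = 0
assert np.allclose(expm(M + N), expm(M) @ expm(N))  # e^{M+N} = e^M e^N

# For a generic non-commuting pair the identity fails:
P = rng.standard_normal((3, 3))
assert not np.allclose(expm(M + P), expm(M) @ expm(P))
```

The failure in the non-commuting case is exactly what the Baker–Campbell–Hausdorff correction terms measure.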
# Why does it appear impossible to fit Gaussians to arbitrary probability density functions $p$? I want to fit a Gaussian $$q$$ to a pdf $$p$$ by minimizing the energy $$E = -\int q(x) \log p(x) dx$$. This should result in a "delta function" Gaussian with $$\sigma \rightarrow 0$$ and $$\mu \rightarrow x^*$$, where $$x^*$$ is the mode of the target distribution. If I try to do this via gradient descent, I get \begin{align} \nabla_\mu E &= - \int \log p \, \nabla_\mu q \,\, dx \\ &= - \int \log p \, \cdot (q \cdot \nabla_\mu (\log q)) \,\, dx \\ &= - \mathbb E_q[ \nabla_\mu (\log q) \log p] \\ &= \mathbb E_q[ \Sigma^{-1}(x-\mu) \log p] \\ \end{align} where the second line comes from the chain rule. But according to the last line, if I draw from $$q$$, I get $$\mu$$ in expectation, which means my gradient will usually be zero if I attempt gradient descent. What's going on here? The last expectation isn't 0. For example, suppose you approximate $$\log p$$ with a linear function around $$\mu$$: $$\log p \approx (x-\mu)^Tw+c$$. Then you have: \begin{align} &\mathbb{E}_q[\Sigma^{-1}(x-\mu)((x-\mu)^Tw+c)] \\ &\text{now pull out all the terms not involving x} \\ &= \Sigma^{-1}\mathbb{E}_q[(x-\mu)(x-\mu)^T]w + c\Sigma^{-1}\mathbb{E}_q[(x-\mu)] \\ &= \Sigma^{-1}\Sigma w + 0 \\ &= w \end{align} So in fact, this is equivalent to gradient ascent on the gradient of $$\log p$$ (assuming small enough $$\sigma)$$.
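The claim in this answer — the expectation equals $\nabla \log p(\mu)$, not zero — can be checked by Monte Carlo in one dimension. A sketch with an illustrative Gaussian target (all numbers are made up for the check):

```python
import numpy as np

rng = np.random.default_rng(42)

# Target p = N(2, 1), so d/dx log p(x) = -(x - 2)
mu_p = 2.0

mu, sigma = 1.0, 0.1  # variational q = N(mu, sigma^2)
x = rng.normal(mu, sigma, size=500_000)

# Exact log-density of the target at the sampled points
log_p = -0.5 * (x - mu_p) ** 2 - 0.5 * np.log(2 * np.pi)

# Score-function estimate: E_q[ sigma^{-2} (x - mu) * log p(x) ]
grad_est = np.mean((x - mu) / sigma**2 * log_p)

print(grad_est)  # ≈ 1.0 = -(mu - mu_p) = d/dx log p at mu, not 0
```

For a quadratic $\log p$ the estimator is exact in expectation, so the Monte Carlo average lands near 1 rather than 0, confirming that drawing $x \sim q$ does not kill the gradient.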
To get some intuition for this problem, write the energy as a moment: $$E(\mu,\sigma) \equiv \mathbb{E}(-\log p(X) | X \sim \text{N}(\mu, \sigma^2)).$$ Taking $$\sigma=0$$ and $$\mu=x^*$$ (where the latter is the mode of $$p$$) so that the distribution in the expectation is a delta function at the mode of the target distribution, then you get: $$E(x^*,0) = \mathbb{E}(-\log p(X) | X = x^*) = \mathbb{E}(-\log p(x^*)) = -\log p(x^*).$$ Now, to show this is the minimum, we note that the mode $$x^*$$ satisfies $$p(x^*) = \max p(x)$$, so we have: \begin{align} E(\mu,\sigma) &= \mathbb{E}(-\log p(X) | X \sim \text{N}(\mu, \sigma^2)) \\[12pt] &= - \int \limits_\mathbb{R} \log p(x) \cdot \text{N}(x|\mu, \sigma^2) \ dx \\[6pt] &\geqslant - \int \limits_\mathbb{R} \Big( \max_{x} \log p(x) \Big) \cdot \text{N}(x|\mu, \sigma^2) \ dx \\[6pt] &= - \int \limits_\mathbb{R} \log \Big( \max_{x} p(x) \Big) \cdot \text{N}(x|\mu, \sigma^2) \ dx \\[6pt] &= - \int \limits_\mathbb{R} \log p(x^*) \cdot \text{N}(x|\mu, \sigma^2) \ dx \\[6pt] &= - \log p(x^*) \int \limits_\mathbb{R} \text{N}(x|\mu, \sigma^2) \ dx \\[6pt] &= - \log p(x^*). \\[6pt] \end{align} This establishes that $$E(\mu, \sigma) \geqslant E(x^*,0)$$ for all $$\mu \in \mathbb{R}$$ and $$\sigma \geqslant 0$$, which means that the delta function at the mode is a minimising input for the energy function. There is not really any need for gradient descent (or any other iterative method) here, except possibly to find the mode of $$p$$. • What exactly do you mean by "there is not really any need for gradient descent"? Given an initialization of $\mu$, don't I need to move in the direction of the gradient in order to approach $x^*$? – actinidia May 15 at 6:51 • It depends on whether or not you already have an explicit formula for $x^*$. For many distributions the mode has an explicit closed form, in which case iterative methods are not needed. – Ben May 15 at 9:00
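The bound $E(\mu,\sigma) \geqslant -\log p(x^*)$ is easy to check numerically for a concrete target. Taking $p = \text{N}(0,1)$, the energy has the closed form $E(\mu,\sigma) = \tfrac12\log(2\pi) + \tfrac12(\mu^2+\sigma^2)$, minimized exactly at $\mu=0,\sigma=0$; a Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_mc(mu, sigma, n=400_000):
    """Monte Carlo estimate of E(mu, sigma) = E_q[-log p(X)] for p = N(0, 1)."""
    x = rng.normal(mu, sigma, size=n)
    return np.mean(0.5 * x**2 + 0.5 * np.log(2 * np.pi))

floor = -np.log(1 / np.sqrt(2 * np.pi))   # -log p(x*) with mode x* = 0

# The energy never dips below -log p(x*), for any (mu, sigma)
for mu, sigma in [(0.0, 1.0), (1.0, 0.5), (-2.0, 0.1)]:
    assert energy_mc(mu, sigma) >= floor

# The delta function at the mode (sigma = 0, mu = x*) attains the floor
assert abs(energy_mc(0.0, 0.0) - floor) < 1e-9
```

The closed form also makes the answer's point concrete: no iteration is needed once the mode is known.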
# What did Tesla mean by “there is no energy in matter”? I was reading "THE ETERNAL SOURCE OF ENERGY OF THE UNIVERSE, ORIGIN AND INTENSITY OF COSMIC RAYS" by Nikola Tesla, and he states: "There is no energy in matter except that absorbed from the medium." What does he mean by this? Einstein's famous equation $E=mc^2$ shows equivalence between mass and energy. Does this equation only mean that energy has mass? My initial understanding would be that matter can be converted into energy. However, if Tesla is right, then that cannot be since there is no energy in matter, only stored energy. Can someone clarify my confusion? • This would be an excellent question for history of science and mathematics - I'd like to see an answer. However, Tesla's scientific thoughts were formed long before Einstein formed his and his view of matter and energy would have been profoundly different from that which we have now. So, I don't think you'll get an answer to this here, but rather from someone with a better knowledge of Tesla's life and / or detailed knowledge of 19th century physics. – WetSavannaAnimal Jun 9 '15 at 12:42 • During Tesla's time, Energy and Mass were two different concepts altogether. To them, matter are things that have mass, and they can posses energy by moving or by being in a potential forcefield. On its own however, it has no energy. Now however, mass or more specifically rest mass, has energy which leads to the equation above. Also regarding your statement "Energy has mass", you are wrong. Mass is energy but energy may not always have mass. For example a photon or any massless particles. – Horus Jun 9 '15 at 15:05 • @Horus, can't any energy be converted to mass and any mass likewise converted to energy? – ouflak Jun 10 '15 at 12:44 • @ouflak: Tesla wrote that in 1932. According to Wikipedia, the discovery of nuclear fission occurred in 1938. Top physicists probably understood that it was theoretically possible, but you cannot assume it was mainstream knowledge. 
– Martin Argerami Jun 10 '15 at 17:29 • @ouflak Yes energy can be converted to mass and vice versa. I was just remarking how just because something has energy, does not automatically mean it has mass. – Horus Jun 19 '15 at 10:02 You've taken this out of context. In the context in which Tesla is writing, he's talking about kinetic energy and heat. Just a few sentences previously he states: ...according to an experimental findings and deductions of positive science, any material substance (cooled down to the absolute zero of temperature) should be devoid of an internal movement and energy, so to speak, dead. In that context, he's not denying mass-energy equivalence. The only thing he is mistaken about is not accounting for the quantum mechanical zero-point energy (seems forgivable to not be that pedantic in his writing). While he may have disagreed with Einstein's work, it's not clear at all that he's denying it with that single statement. • I really didn't think to google the phrase (which I presume is how you found the great link +1): I just thought Tesla's conception of these things would be so vastly different from ours that I doubted I'd understand anything I found written by him. But not so. – WetSavannaAnimal Jun 10 '15 at 8:45 I think that sentence shows that Tesla did not quite grasp the results from relativity. This is not unusual, as many physicists required many years to fully accept relativity, but by 1932 (the date of the writing of that text) I would expect it to be already orthodox knowledge. In any case, Tesla is known to have been a self-taught experimental genius and it's not unreasonable to believe his understanding of fundamental physics was flawed. This does not overshadow his achievements and genius, it just means his talent was not regarding the same inquiries as for, say, Einstein or other theoretical physicists.
The best understanding of Einstein's relation is that Energy and Mass can be seen as two sides of the same coin: mass can be converted to energy and energy to mass. Examples: 1. Creation of particles at accelerators: energy to mass. 2. Annihilation of matter (with anti-matter): mass to energy. 3. Mass of nucleons is not the sum of the mass of the (valence) quarks; instead it is related to the energy of the strong nuclear force confinement. So Tesla was not right about his interpretation, but then again you have to remember that Tesla was an experimental, engineering, inventor genius and not a theoretical physicist. In fact, he had no formal training as a physicist (not a fully formed field at the time) and only partial higher training in scientific and engineering areas. • I have one objection to this, but it is a significant one: $E=mc^2$ says nothing about converting energy to mass or vice versa. If you do find some way to do the conversion, it tells you how much energy corresponds to a given amount of mass. – David Z Jun 10 '15 at 8:37 ## protected by Qmechanic♦ Jun 10 '15 at 5:44
# 6.8 Fitting exponential models to data  (Page 4/12) ## Using logarithmic regression to fit a model to data Due to advances in medicine and higher standards of living, life expectancy has been increasing in most developed countries since the beginning of the 20th century. [link] shows the average life expectancies, in years, of Americans from 1900–2010 Source: Center for Disease Control and Prevention, 2013 . Year 1900 1910 1920 1930 1940 1950 Life Expectancy (Years) 47.3 50 54.1 59.7 62.9 68.2 Year 1960 1970 1980 1990 2000 2010 Life Expectancy (Years) 69.7 70.8 73.7 75.4 76.8 78.7 1. Let $x$ represent time in decades starting with $x=1$ for the year 1900, $x=2$ for the year 1910, and so on. Let $y$ represent the corresponding life expectancy. Use logarithmic regression to fit a model to these data. 2. Use the model to predict the average American life expectancy for the year 2030. 1. Using the STAT then EDIT menu on a graphing utility, list the years using values 1–12 in L1 and the corresponding life expectancy in L2. Then use the STATPLOT feature to verify that the scatterplot follows a logarithmic pattern as shown in [link]: Use the “LnReg” command from the STAT then CALC menu to obtain the logarithmic model, $y=42.52722583+13.85752327\mathrm{ln}\left(x\right)$ Next, graph the model in the same window as the scatterplot to verify it is a good fit as shown in [link]: 2. To predict the life expectancy of an American in the year 2030, substitute $x=14$ for $x$ in the model and solve for $y.$ If life expectancy continues to increase at this pace, the average life expectancy of an American will be 79.1 by the year 2030.
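The calculator's LnReg command is ordinary least squares on the pairs $(\ln x, y)$, so the fit above can be reproduced with NumPy:

```python
import numpy as np

# Decades since 1900 (x = 1 for 1900) and U.S. life expectancy (years)
x = np.arange(1, 13)
y = np.array([47.3, 50, 54.1, 59.7, 62.9, 68.2,
              69.7, 70.8, 73.7, 75.4, 76.8, 78.7])

# Least-squares fit of y = a + b*ln(x)
b, a = np.polyfit(np.log(x), y, 1)
print(round(a, 4), round(b, 4))       # ≈ 42.5272 13.8575, matching LnReg

# Predict life expectancy for 2030 (x = 14)
print(round(a + b * np.log(14), 1))   # ≈ 79.1
```

This matches the model and the 2030 prediction stated above.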
Sales of a video game released in the year 2000 took off at first, but then steadily slowed as time moved on. [link] shows the number of games sold, in thousands, from the years 2000–2010. Year 2000 2001 2002 2003 2004 2005 Number Sold (thousands) 142 149 154 155 159 161 Year 2006 2007 2008 2009 2010 - Number Sold (thousands) 163 164 164 166 167 - 1. Let $x$ represent time in years starting with $x=1$ for the year 2000. Let $y$ represent the number of games sold in thousands. Use logarithmic regression to fit a model to these data. 2. If games continue to sell at this rate, how many games will sell in 2015? Round to the nearest thousand. 1. The logarithmic regression model that fits these data is $y=141.91242949+10.45366573\mathrm{ln}\left(x\right)$ 2. If sales continue at this rate, about 171,000 games will be sold in the year 2015. ## Building a logistic model from data Like exponential and logarithmic growth, logistic growth increases over time. One of the most notable differences with logistic growth models is that, at a certain point, growth steadily slows and the function approaches an upper bound, or limiting value. Because of this, logistic regression is best for modeling phenomena where there are limits in expansion, such as availability of living space or nutrients. It is worth pointing out that logistic functions actually model resource-limited exponential growth. There are many examples of this type of growth in real-world situations, including population growth and spread of disease, rumors, and even stains in fabric.
When performing logistic regression analysis, we use the form most commonly used on graphing utilities: $y=\frac{c}{1+a{e}^{-bx}}$
# Thoughts on Black written by Eric J. Ma on 2018-11-12 Having used Black for quite a while now, I have a hunch that it will continue to surpass its current popularity amongst projects. It's one thing to be opinionated about things that matter for a project, but don't matter personally. Like code style. It's another thing to actually build a tool that, with one command, realizes those opinions in (milli)seconds. That's exactly what Black does. At the end of the day, it was, and still is, a tool that has a very good human API - that of convenience. By being opinionated about what code ought to look like, black has very few configurable parameters. Its interface is very simple. Convenient. By automagically formatting every Python file in subdirectories (if not otherwise configured so), it makes code formatting quick and easy. Convenient. In particular, by being opinionated about conforming to community standards for code style with Python, black ensures that formatted code is consistently formatted and thus easy to read. Convenient! Because of this, I highly recommend the use of black for code formatting. pip install black Did you enjoy this blog post? Let's discuss more! # Bayesian Modelling is Hard Work! written by Eric J. Ma on 2018-11-07 It’s definitely not easy work; anybody trying to tell you that you can "just apply this model and just be done with it" is probably wrong. ## Simple Models Let me clarify: I agree that doing the first half of the statement, "just apply this model", is a good starting point, but I disagree with the latter half, "and just be done with it". I have found that writing and fitting a very naive Bayesian model to the data I have is a very simple thing. But doing the right thing is not. Let’s not be confused: I don’t mean a Naive Bayes model, I mean naively writing down a Bayesian model that is structured very simply with the simplest of priors that you can think of. 
Write down the model, including any transformations that you may need on the variables, and then lazily put in a bunch of priors. For example, you might just start with Gaussians everywhere a parameter could take on negative to positive infinity values, or a bounded Half Gaussian if it can only take values above (or below) a certain value. You might assume Gaussian-distributed noise in the output. Let’s still not be confused: Obviously this would not apply to a beta-bernoulli/binomial model! Doing the right thing, however, is where the tricky parts come in. To butcher and mash-up two quotes: All models are wrong, but some are useful (Box), yet some models are more wrong than others (modifying from Orwell).

## Critiquing Models

When doing modeling, a series of questions comes up:

• Do my naive assumptions about "Gaussians everywhere" hold?
• Given that my output data are continuous, is there a better distribution that can describe the likelihood?
• Is there a more principled prior for some of the variables?
• Does my link function, which joins the input data to the output parameters, properly describe their relationship?
• Instead of independent priors per group, would a group prior be justifiable?
• Does my model yield posterior distributions that are within bounds of reasonable ranges, which come from my prior knowledge? If it does not, do I need to bound my priors instead of naively assuming the full support for those distributions?

I am quite sure that this list is non-exhaustive, and probably only covers the bare minimum we have to think about. Doing these model critiques is not easy. Yet, if we are to work towards truthful and actionable conclusions, it is a necessity. We want to know ground truth so that we can act on it appropriately.

## Prior Experience

I have experienced this modeling loop that Mike Betancourt describes (in his Principled Bayesian Workflow notebook) more than once.
One involved count data, with a data scientist from TripAdvisor last year at the SciPy conference; another involved estimating cycle time distributions at work, and yet another involved a whole 4-parameter dose-response curve. In each scenario, model fitting and critique took hours at the minimum; I’d also note that with real world data, I didn’t necessarily get the "win" I was looking for.

With the count data, the TripAdvisor data scientist and I reached a point where after 5 rounds of tweaking his model, we had a model that fit the data and described a data generating process that closely mimics what we would expect given his process. It took us 5 rounds, and 3 hours of staring at his model and data, to get there!

Yet with cycle time distributions from work, a task ostensibly much easier ("just fit a distribution to the data"), none of my distribution choices, which reflected what I thought would be the data generating process, gave me a "good fit" to the data. I checked by many means: K-S tests, visual inspection, etc. I ended up abandoning the fitting procedure, and used empirical distributions instead.

With a 4-parameter dose-response curve, it took me 6 hours to go through 6 rounds of modeling to get to a point where I felt comfortable with the model. I started with a simplifying "Gaussians everywhere" assumption. Later, though, I hesitantly and tentatively put in bounded priors because I knew some posterior distributions were completely out of range under the naive assumptions of the first model, and were likely a result of insufficient range in the concentrations tested. Yet even that model remained unsatisfying: I was stuck with some compounds that didn’t change the output regardless of concentration, and such data are fundamentally very hard to fit with a dose-response curve. Thus, the next afternoon, I modeled the dose-response relationship using a Gaussian Process instead.
Neither model is completely satisfying to the degree that the count data model was, but both the GP and the dose-response curve are and will be roughly correct modeling choices (with the GP probably being more flexible), and importantly, both are actionable by the experimentalists.

## Thoughts

As you probably can see, whenever we either (1) don’t know ground truth, and/or (2) have messy, real world data that don’t fit idealized assumptions about the data generating process, getting the model "right" is a very hard thing to do! Moreover, data are insufficient on their own to critique the model; we will always need to bring in prior knowledge. Much as all probability is conditional probability (Venn), all modeling involves prior knowledge. Sometimes it comes up in non-modellable ways, though as far as possible, it’s a good exercise to try incorporating that into the model definition.

## Canned Models?

Even with that said, I’m still a fan of canned models, such as those provided by pymc-learn and scikit-learn - provided we recognize their "canned" nature and are equipped to critique and modify said models. Yes, they provide easy, convenient baselines that we can get started with. We can "just apply this model". But we can’t "just be done with it": the hard part of getting the model right takes much longer and much more hard work. Veritas!

Did you enjoy this blog post? Let's discuss more!

written by Eric J. Ma on 2018-10-26

I learned a new thing about dask yesterday: pre-scattering data properly! Turns out, you can pre-scatter your data across worker nodes, and have them access that data later when submitting functions to the scheduler.

## How-To

To do so, we first call on client.scatter, pass in the data that we want to scatter across all nodes, ensure that broadcasting is turned on (if and only if we are sure that all worker nodes will need it), and finally assign it to a new variable.

from dask_jobqueue import SGECluster
cluster = SGECluster(...)
# put parameters in there.
from dask.distributed import Client
client = Client(cluster)
data_future = client.scatter(data, broadcast=True)

One key thing to remember here is to assign the result of client.scatter to a variable. This becomes a pointer that you pass into other functions that are submitted via the client.submit interface. Because this point is not immediately clear from the client.scatter docs, I put in a pull request (PR) to provide some just-in-time documentation, which just got merged this morning. By the way, not every PR has to be code - documentation help is always good!

Once we've scattered the data across our worker nodes and obtained a pointer for the scattered data, we can parallel submit our function across worker nodes. Let's say we have a function, called func, that takes in the data variable and returns a number. The key characteristic of this function is that it takes anywhere from a few seconds to minutes to run, but I need it to run many times (think hundreds to thousands of times). In serial, I would usually do this as a list comprehension:

results = [func(data) for i in range(200)]

If done in parallel, I can now use the client object to submit the function across all worker nodes. For clarity, let me switch to a for-loop instead:

results = []
for i in range(200):
    results.append(client.submit(func, data_future))
results = client.gather(results)

Because the client does not have to worry about sending the large data object across the network of cluster nodes, it is very fast to submit the functions to the scheduler, which then dispatches them to the worker nodes, which all know where data_future is on their own "virtual cluster" memory. By pre-scattering, we invest a bit of time pre-allocating memory on worker nodes to hold data that are relatively expensive to transfer. This time investment reaps dividends later when we are working with functions that operate on the data.

## Cautions

Not really disadvantages (as I can't think of any), just some things to note: 1.
You need to know how much memory your data requires, and have to request at least that amount of memory per worker node at the SGECluster instantiation step. 2. Pre-scattering sometimes takes a bit of time, but I have not seen it take as much time as having the scheduler handle everything.

## Acknowledgments

Special thanks goes to Matt Rocklin, who answered my question on StackOverflow, which in turn inspired this blog post.

Did you enjoy this blog post? Let's discuss more!

# Parallel Processing with Dask on GridEngine Clusters

written by Eric J. Ma on 2018-10-11

I recently just figured out how to get this working... and it's awesome! :D

## Motivation

If I'm developing an analysis in the Jupyter notebook, and I have one semi-long-running function (e.g. takes dozens of seconds) that I need to run over tens to hundreds of thousands of similar inputs, it'll take ages for this to complete in serial. For a sense of scale, a function that takes ~20 seconds per call run serially over 10,000 similar inputs would take 200,000 seconds, which is 2 days of run-time (not including any other overhead). That's not feasible for interactive exploration of data. If I could somehow parallelize just the function over 500 compute nodes, we could take the time down to 7 minutes.

GridEngine-based compute clusters are one of many options for parallelizing work. During grad school at MIT, and at work at Novartis, the primary compute cluster environment that I've encountered has been GridEngine-based. However, because they are designed for batch jobs, as computational scientists we have to jump out of whatever development environment we're currently using, and move to custom scripts. In order to do parallelism with traditional GridEngine systems, I would have to jump out of the notebook and start writing job submission scripts, which disrupts my workflow. I would be disrupting my thought process, and lose the interactivity that I might need to prototype my work faster.
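The back-of-envelope arithmetic in the motivation above can be checked in a few lines (the numbers are taken from the text, not measured):

```python
# Numbers from the motivation above: ~20 s per call, 10,000 inputs.
serial_seconds = 20 * 10_000              # 200,000 s in serial
serial_days = serial_seconds / 86_400     # about 2.3 days

# The same work spread (perfectly) across 500 workers:
parallel_seconds = serial_seconds / 500   # 400 s
parallel_minutes = parallel_seconds / 60  # about 7 minutes
```

Real speed-ups will be somewhat less than this ideal because of scheduling and data-transfer overhead, but the order of magnitude is what matters for interactivity.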
dask, alongside dask-jobqueue enables computational scientists like myself to take advantage of existing GridEngine setups to do interactive, parallel work. As long as I have a Jupyter notebook server running on a GridEngine-connected compute node, I can submit functions to the GridEngine cluster and collect back those results to do further processing, in a fraction of the time that it would take, thus enabling me to do my science faster than if I did everything single core/single node. In this blog post, I'd like to share an annotated, minimal setup for using Dask on a GridEngine cluster. (Because we use Dask, more complicated pipelines are possible as well - I would encourage you to read the Dask docs for more complex examples.) I will assume that you are working in a Jupyter notebook environment, and that the notebook you are working out of is hosted on a GridEngine-connected compute node, from which you are able to qsub tasks. Don't worry, you won't be qsub-ing anything though! ## Setup To start, we need a cell that houses the following code block: from dask_jobqueue import SGECluster cluster = SGECluster(queue='default.q', walltime="1500000", processes=1, memory='1GB', cores=1, env_extra=['source /path/to/custom/script.sh', 'export ENV_VARIABLE="SOMETHING"'] ) Here, we are instantiating an SGECluster object under the variable name cluster. What cluster stores is essentially a configuration for a block of worker nodes that you will be requesting. Under the hood, what dask-jobqueue is doing is submitting jobs to the GridEngine scheduler, which will block off a specified amount of compute resources (e.g. number of cores, amount of RAM, whether you want GPUs or not, etc.) for a pre-specified amount of time, on which Dask then starts a worker process to communicate with the head process coordinating tasks amongst workers. As such, you do need to know two pieces of information: 1. queue: The queue that jobs are to be submitted to. 
Usually, it is named something like default.q, but you will need to obtain this through GridEngine. If you have the ability to view all jobs that are running, you can call qstat at the command line to see what queues are being used. Otherwise, you might have to ping your system administrator for this information. 2. walltime: You will also need to pre-estimate the wall clock time, in seconds, that you want the worker node to be alive for. It should be significantly longer than the expected time you think you will need, so that your function call doesn't timeout unexpectedly. I have defaulted to 1.5 million seconds, which is about 18 days of continual runtime. In practice, I usually kill those worker processes after just a few hours. Besides that, you also need to specify the resources that you need per worker process. In my example above, I'm asking for each worker process to use only 1GB of RAM, 1 core, and to use only 1 process per worker (i.e. no multiprocessing, I think). Finally, I can also specify extra environment setups that I will need. Because each worker process is a new process that has no knowledge of the parent process' environment, you might have to source some bash script, or activate a Python environment, or export some environment variable. This can be done under the env_extra keyword, which accepts a list of strings. ## Request for worker compute "nodes" I put "nodes" in quotation marks, because they are effectively logical nodes, rather than actual compute nodes. (Technically, I think a compute node minimally means one physical hardware unit with CPUs and RAM). In order to request for worker nodes to run your jobs, you need the next line of code: cluster.scale(500) With this line, under the hood, dask-jobqueue will start submitting 500 jobs, each requesting 1GB of RAM and 1 core, populating my compute environment according to the instructions I provided under env_extra. 
At the end of this, I effectively have a 500-node cluster on the larger GridEngine cluster (let's call this a "virtual cluster"), each with 1GB of RAM and 1 core available to it, on which I can submit functions to run.

## Start a client process

In order to submit jobs to my virtual cluster, I have to instantiate a client that is connected to the cluster, and is responsible for sending functions there.

from dask.distributed import Client
client = Client(cluster)

## Compute!

With this setup complete (I have it stored as a TextExpander snippet), we can now start submitting functions to the virtual cluster! To simulate this, let's define a square-rooting function that takes 2-3 seconds to run each time it is called, and returns the square root of its input. This simulates a function call that is computationally semi-expensive to run a few times, but because we call on it hundreds of thousands of times, the total running time to run it serially would be too much.

from time import sleep
from math import sqrt
from random import random

def slow_sqrt(x):
    """
    Simulates the run time needed for a semi-expensive function call.
    """
    assert x >= 0
    # define sleeping time in seconds, between 2-3 seconds.
    t = 2 + random()
    sleep(t)
    return sqrt(x)

### Serial Execution

In a naive, serial setting, we would call on the function in a for-loop:

results = []
for i in range(10000):
    results.append(slow_sqrt(i))

This would take us anywhere between 20,000 to 30,000 seconds (approximately 8 hours, basically).
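For comparison (my addition, not part of the original post), the same submit-then-gather pattern exists on a single machine in the standard library's concurrent.futures; here the sleep is dropped so the sketch runs instantly:

```python
from concurrent.futures import ThreadPoolExecutor
from math import sqrt

def fast_sqrt(x):
    # Same shape as slow_sqrt above, with the sleep removed.
    assert x >= 0
    return sqrt(x)

# submit() returns futures; result() blocks until each one completes.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(fast_sqrt, i) for i in range(100)]
    results = [f.result() for f in futures]
```

The code shape (submit a function, get back a future, gather results) carries over directly to the dask client, which is what makes the dask interface feel familiar.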
### Parallel Execution

In order to execute this in parallel instead, we could do one of the following three ways:

#### map

sq_roots = client.map(slow_sqrt, range(10000))
sq_roots = client.gather(sq_roots)

#### for-loop

sq_roots = []
for i in range(10000):
    # submit the function as first argument, then rest of arguments
    sq_roots.append(client.submit(slow_sqrt, i, retries=20))
sq_roots = client.gather(sq_roots)

#### delayed

from dask import compute, delayed

sq_roots = []
for i in range(10000):
    sq_roots.append(delayed(slow_sqrt)(i))
sq_roots = compute(*sq_roots)

I have some comments on each of the three methods, each of which I have used. First off, each of them does require us to change the code that we would have written in serial. This little bit of overhead is the only tradeoff we really need to make in order to gain parallelism. In terms of readability, all of them are quite readable, though in my case, I tend to favour the for-loop with client.submit. Here is why.

For readability, the for-loop explicitly indicates that we are looping over something. It's probably easier for novices to approach my code that way.

For debuggability, client.submit returns a Futures object (same goes for client.map). A "Futures" object might be confusing at first, so let me start by demystifying that. A Futures object promises that the result that is computed from slow_sqrt will exist, and actually contains a ton of diagnostic information, including the type of the object (which can be useful for diagnosing whether my function actually ran correctly). In addition to that, I can call on Futures.result() to inspect the actual result (in this case, sq_roots[0].result()). This is good for debugging the function call, in case there are issues when scaling up. (At work, I was pinging a database in parallel, and sometimes the ping would fail; debugging led me to include some failsafe code, including retries and sleeps with random lengths to stagger out database calls.)
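The "failsafe code" mentioned in passing above can be sketched generically. This is my own illustration of the retry-with-jittered-sleep pattern, not the post's code; the retry counts and delays are made up:

```python
import random
import time

def with_retries(func, n_tries=3, base_delay=0.05):
    """Wrap func so that transient failures are retried after a jittered sleep."""
    def wrapped(*args, **kwargs):
        for attempt in range(n_tries):
            try:
                return func(*args, **kwargs)
            except Exception:
                if attempt == n_tries - 1:
                    raise  # out of retries; re-raise the last error
                # Random jitter staggers simultaneous retries from many workers,
                # so a shared resource (e.g. a database) isn't hammered in lockstep.
                time.sleep(base_delay * (attempt + 1) * (1 + random.random()))
    return wrapped

calls = {"n": 0}

def flaky_ping():
    """Stand-in for a database call that fails the first two times."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky_ping)()
```

A wrapper like this composes naturally with client.submit: you submit the wrapped function instead of the raw one.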
Finally, the Futures interface is non-blocking on my Jupyter notebook session. Once I've submitted the jobs, I can continue with other development work in my notebook in later cells, and check back when the Dask dashboard indicates that the jobs are done. That said, I like the delayed interface as well. Once I was done debugging and confident that my own data pipeline at work wouldn't encounter the failure modes I was seeing, I switched over to the delayed interface and scaled up my analysis. I was willing to trade in the interactivity using the Futures interface for the automation provided by the delayed interface. (I also first used Dask on a single node through the delayed interface as well). Of course, there's something also to be said for the simplicity of two lines of code for parallelism (with the client.map example). The final line in each of the code blocks allows us to "gather" the results back into my coordinator node's memory, thus completing the function call and giving us the result we needed. ## Conclusions That concludes it! The two key ideas illustrated in this blog post were: 1. To set up a virtual cluster on a GridEngine system, we essentially harness the existing job submission system to generate workers that listen for tasks. 2. A useful programming pattern is to submit functions using the client object using client.submit(func, *args, **kwargs). This requires minimal changes from serial code. ## Practical Tips Here's some tips for doing parallel processing, which I've learned over the years. Firstly, never prematurely parallelize. It's as bad as prematurely optimizing code. If your code is running slowly, check first to make sure that there aren't algorithmic complexity issues, or bandwidths being clogged up (e.g. I/O bound). As the Dask docs state, it is easier to achieve those gains first before doing parallelization. 
Secondly, when developing parallel workflows, make sure to test the pipeline on subsets of input data first, and slowly scale up. It is during this period that you can also profile memory usage to check to see if you need to request for more RAM per worker. Thirdly, for GridEngine clusters, it is usually easier to request for many small worker nodes that consume few cores and small amounts of RAM. If your job is trivially parallelizable, this may be a good thing. Fourthly, it's useful to have realistic expectations on the kinds of speed-ups you can expect to gain. At work, through some ad-hoc profiling, I quickly came to the realization that concurrent database pings were the most likely bottleneck in my code's speed, and that nothing apart from increasing the number of concurrent database pings allowed would make my parallel code go faster. Finally, on a shared cluster, be respectful of others' usage. Don't request for unreasonable amounts of compute time. And when you're confirmed done with your analysis work, remember to shut down the virtual cluster! :) Did you enjoy this blog post? Let's discuss more! # Optimizing Block Sparse Matrix Creation with Python written by Eric J. Ma on 2018-09-04 import networkx as nx import numpy as np import scipy.sparse as sp import matplotlib.pyplot as plt import seaborn as sns from tqdm import tqdm from typing import List from numba import jit sns.set_context('talk') sns.set_style('white') %matplotlib inline %config InlineBackend.figure_format = 'retina' # Introduction At work, I recently encountered a neat problem. I'd like to share it with you all. One of my projects involves graphs; specifically, it involves taking individual graphs and turning them into one big graph. 
If you've taken my Network Analysis Made Simple workshops before, you'll have learned that graphs can be represented as a matrix, such as the one below: G = nx.erdos_renyi_graph(n=10, p=0.2) A = nx.to_numpy_array(G) sns.heatmap(A) Because the matrix is so sparse, we can actually store it as a sparse matrix: A_sparse = nx.adjacency_matrix(G).tocoo() [A_sparse.row, A_sparse.col, A_sparse.data] [array([0, 0, 1, 2, 3, 3, 3, 4, 4, 5, 5, 5, 5, 6, 7, 7, 8, 8], dtype=int32), array([5, 7, 4, 7, 5, 6, 8, 1, 5, 0, 3, 4, 8, 3, 0, 2, 3, 5], dtype=int32), array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int64)] The most straightforward way of storing sparse matrices is in the COO (COOrdinate) format, which is also known as the "triplet" format, or the "ijv" format. ("i" is row, "j" is col, "v" is value) If we want to have two or more graphs stored together in a single matrix, which was what my projects required, then one way of representing them is as follows: G2 = nx.erdos_renyi_graph(n=15, p=0.2) A2 = nx.to_numpy_array(G2) sns.heatmap(sp.block_diag([A, A2]).todense()) Now, notice how there's 25 nodes in total (0 to 24), and that they form what we call a "block diagonal" format. In its "dense" form, we have to represent $25^2$ values inside the matrix. That's fine for small amounts of data, but if we have tens of thousands of graphs, that'll be impossible to deal with! You'll notice I used a function from scipy.sparse, the block_diag function, which will create a block diagonal sparse matrix from an iterable of input matrices. block_diag is the function that I want to talk about in this post. # Profiling block_diag performance I had noticed that when dealing with tens of thousands of graphs, block_diag was not performing up to scratch. Specifically, the time it needed would scale quadratically with the number of matrices provided. Let's take a look at some simulated data to illustrate this. 
%%time Gs = [] As = [] for i in range(3000): n = np.random.randint(10, 30) p = 0.2 G = nx.erdos_renyi_graph(n=n, p=p) Gs.append(G) A = nx.to_numpy_array(G) As.append(A) Let's now define a function to profile the code. from time import time from random import sample def profile(func): n_graphs = [100, 200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000] data = [] for n in tqdm(n_graphs): for i in range(3): # 3 replicates per n start = time() func(sample(As, n)) end = time() data.append((n, end - start)) return data data_sp = profile(sp.block_diag) plt.scatter(*zip(*data_sp)) plt.xlabel('number of graphs') plt.ylabel('time (s)') It is quite clear that the increase in time is super-linear, showing $O(n^2)$ scaling. (Out of impatience, I did not go beyond 50,000 graphs in this post, but at work, I did profile performance up to that many graphs. For reference, it took about 5 minutes to finish creating the scipy sparse matrix for 50K graphs.) # Optimizing block_diag performance I decided to take a stab at creating an optimized version of block_diag. Having profiled my code and discovered that sparse block diagonal matrix creation was a bottleneck, I implemented my own sparse block diagonal matrix creation routine using pure Python. def _block_diag(As: List[np.array]): """ Return the (row, col, data) triplet for a block diagonal matrix. Intended to be put into a coo_matrix. Can be from scipy.sparse, but also can be cupy.sparse, or Torch sparse etc. Example usage: >>> row, col, data = _block_diag(As) >>> coo_matrix((data, (row, col))) :param As: A list of numpy arrays to create a block diagonal matrix. :returns: (row, col, data), each as lists.
""" row = [] col = [] data = [] start_idx = 0 for A in As: nrows, ncols = A.shape for r in range(nrows): for c in range(ncols): if A[r, c] != 0: row.append(r + start_idx) col.append(c + start_idx) data.append(A[r, c]) start_idx = start_idx + nrows return row, col, data Running it through the same profiling routine: data_custom = profile(_block_diag) plt.scatter(*zip(*data_custom), label='custom') plt.scatter(*zip(*data_sp), label='scipy.sparse') plt.legend() plt.xlabel('number of graphs') plt.ylabel('time (s)') I also happened to have listened in on a talk by Siu Kwan Lam during lunch, on numba, the JIT optimizer that he has been developing for the past 5 years now. Seeing as how the code I had written in _block_diag was all numeric code, which is exactly what numba was designed for, I decided to try optimizing it with JIT. from numba import jit data_jit = profile(jit(_block_diag)) plt.scatter(*zip(*data_custom), label='custom') plt.scatter(*zip(*data_sp), label='scipy.sparse') plt.scatter(*zip(*data_jit), label='jit') plt.legend() plt.xlabel('number of graphs') plt.ylabel('time (s)') Notice the speed-up that JIT-ing the code provided! (Granted, that first run was a "warm-up" run; once JIT-compiled, everything is really fast!) My custom implementation only returns the (row, col, data) triplet. This is an intentional design choice - having profiled the code with and without calling a COO matrix creation routine, I found the JIT-optimized performance to be significantly better without creating the COO matrix routine. 
As I still have to create a sparse matrix, I ended up with the following design: def block_diag(As): row, col, data = jit(_block_diag)(As) return sp.coo_matrix((data, (row, col))) data_wrap = profile(block_diag) plt.scatter(*zip(*data_custom), label='custom') plt.scatter(*zip(*data_sp), label='scipy.sparse') plt.scatter(*zip(*data_jit), label='jit') plt.scatter(*zip(*data_wrap), label='wrapped') plt.legend() plt.xlabel('number of graphs') plt.ylabel('time (s)') You'll notice that the array creation step induces a consistent overhead on top of the sparse matrix triplet creation routine, but stays flat and trends the "jit" dots quite consistently. It intersects the "custom" dots at about $10^3$ graphs. Given the problem that I've been tackling, which involves $10^4$ to $10^6$ graphs at a time, it is an absolutely worthwhile improvement to JIT-compile the _block_diag function. # Conclusion This was simultaneously a fun and useful exercise in optimizing my code! A few things I would take away from this: • Profiling code for bottlenecks can be really handy, and can be especially useful if we have a hypothesis on how to optimize it. • numba can really speed up array-oriented Python computation. It lives up to the claims on its documentation. I hope you learned something new, and I hope you also enjoyed reading this post as much as I enjoyed writing it! Did you enjoy this blog post? Let's discuss more!
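As a footnote to the post above: a triplet routine like _block_diag is easy to sanity-check on tiny inputs. Here is my own sketch (not from the post) of a vectorized variant, verified by reassembling a dense matrix. Like the original, it offsets both row and column indices by the block's row count, which assumes square blocks; that always holds for adjacency matrices:

```python
import numpy as np

def block_diag_triplets(As):
    """Return (row, col, data) triplets for a block diagonal matrix.

    Vectorizes the inner loops of the pure-Python version with np.nonzero.
    Assumes every block is square (true for adjacency matrices).
    """
    row, col, data = [], [], []
    start = 0
    for A in As:
        r_idx, c_idx = np.nonzero(A)   # indices of nonzero entries
        row.extend(r_idx + start)
        col.extend(c_idx + start)
        data.extend(A[r_idx, c_idx])
        start += A.shape[0]
    return row, col, data

A1 = np.array([[0, 1], [1, 0]])
A2 = np.array([[0, 2], [0, 0]])
row, col, data = block_diag_triplets([A1, A2])

# Reassemble densely to verify: A2's entries should land offset by A1's size.
dense = np.zeros((4, 4))
dense[row, col] = data
```

The triplets can then be handed to scipy's coo_matrix exactly as in the post's wrapped block_diag.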
• JINYUB MAENG Articles written in Pramana – Journal of Physics • Truncated $q$-deformed fermion algebras and phase transition In this paper, we apply the $q$-deformed fermion theory to the phase transition from the ordinary fermion into the truncated $q$-deformed fermion at the critical temperature $T_{c}$.
# Variance In probability theory and statistics, the variance is a measure of how far a set of numbers is spread out. It is one of several descriptors of a probability distribution, describing how far the numbers lie from the mean (expected value). In particular, the variance is one of the moments of a distribution. In that context, it forms part of a systematic approach to distinguishing between probability distributions. While other such approaches have been developed, those based on moments are advantageous in terms of mathematical and computational simplicity. The variance is a parameter describing in part either the actual probability distribution of an observed population of numbers, or the theoretical probability distribution of a sample (a not-fully-observed population) of numbers. In the latter case a sample of data from such a distribution can be used to construct an estimate of its variance: in the simplest cases this estimate can be the sample variance. ## Definition The variance of a random variable X is its second central moment, the expected value of the squared deviation from the mean μ = E[X]: $\operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2 \right].$ This definition encompasses random variables that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself: $\operatorname{Var}(X) = \operatorname{Cov}(X, X).$ The variance is also equivalent to the second cumulant of the probability distribution for X. The variance is typically designated as Var(X), $\scriptstyle\sigma_X^2$, or simply σ2 (pronounced "sigma squared"). 
The expression for the variance can be expanded: $\operatorname{Var}(X)= \operatorname{E}\left[X^2 - 2X\operatorname{E}[X] + (\operatorname{E}[X])^2\right] = \operatorname{E}\left[X^2\right] - 2\operatorname{E}[X]\operatorname{E}[X] + (\operatorname{E}[X])^2 = \operatorname{E}\left[X^2 \right] - (\operatorname{E}[X])^2$ A mnemonic for the above expression is "mean of square minus square of mean". ### Continuous random variable If the random variable X is continuous with probability density function f(x), then the variance is given by $\operatorname{Var}(X) =\sigma^2 =\int (x-\mu)^2 \, f(x) \, dx\, =\int x^2 \, f(x) \, dx\, - \mu^2$ where $\mu$ is the expected value, $\mu = \int x \, f(x) \, dx\,$ and where the integrals are definite integrals taken for x ranging over the range of X. If a continuous distribution does not have an expected value, as is the case for the Cauchy distribution, it does not have a variance either. Many other distributions for which the expected value does exist also do not have a finite variance because the integral in the variance definition diverges. An example is a Pareto distribution whose index k satisfies 1 < k ≤ 2. ### Discrete random variable If the random variable X is discrete with probability mass function x1 ↦ p1, ..., xn ↦ pn, then $\operatorname{Var}(X) = \sum_{i=1}^n p_i\cdot(x_i - \mu)^2 = \sum_{i=1}^n (p_i\cdot x_i^2) - \mu^2$ where $\mu$ is the expected value, i.e. $\mu = \sum_{i=1}^n p_i\cdot x_i$ . (When such a discrete weighted variance is specified by weights whose sum is not 1, then one divides by the sum of the weights.) 
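As a quick numerical illustration of the discrete formula (my addition, not from the article), both forms of the variance agree on a small probability mass function:

```python
xs = [1.0, 2.0, 3.0]
ps = [0.2, 0.3, 0.5]   # probabilities summing to 1

mu = sum(p * x for p, x in zip(ps, xs))               # expected value: 2.3
var = sum(p * (x - mu) ** 2 for p, x in zip(ps, xs))  # direct definition: 0.61
# The "mean of square minus square of mean" form gives the same answer:
var_alt = sum(p * x * x for p, x in zip(ps, xs)) - mu ** 2
```

If the weights did not sum to 1, each sum above would be divided by the total weight, as the parenthetical note says.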
The variance of a set of n equally likely values can be written as $\operatorname{Var}(X) = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)^2.$ The variance of a set of n equally likely values can be equivalently expressed, without directly referring to the mean, in terms of squared deviations of all points from each other: $\operatorname{Var}(X) = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \frac{1}{2}(x_i - x_j)^2.$

## Examples

### Normal distribution

The normal distribution with parameters μ and σ is a continuous distribution whose probability density function is given by: $f(x) = \frac{1}{\sqrt{2\pi \sigma^2}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }.$ It has mean μ and variance equal to: $\operatorname{Var}(X) = \int_{-\infty}^\infty \frac{(x - \mu)^2}{\sqrt{2\pi \sigma^2}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} } \, dx = \sigma^2.$ The role of the normal distribution in the central limit theorem is in part responsible for the prevalence of the variance in probability and statistics.

### Exponential distribution

The exponential distribution with parameter λ is a continuous distribution whose support is the semi-infinite interval [0,∞). Its probability density function is given by: $f(x) = \lambda e^{-\lambda x},\,$ and it has expected value $\mu = \lambda^{-1}$. The variance is equal to: $\operatorname{Var}(X) = \int_0^\infty (x - \lambda^{-1})^2 \, \lambda e^{-\lambda x} dx = \lambda^{-2}.\,$ So for an exponentially distributed random variable, σ² = μ².

### Poisson distribution

The Poisson distribution with parameter λ is a discrete distribution for k = 0, 1, 2, ... Its probability mass function is given by: $p(k) = \frac{\lambda^k}{k!} e^{-\lambda},$ and it has expected value μ = λ. The variance is equal to: $\operatorname{Var}(X) = \sum_{k=0}^{\infty} \frac{\lambda^k}{k!} e^{-\lambda} (k-\lambda)^2 = \lambda.$ So for a Poisson-distributed random variable, σ² = μ.
### Binomial distribution

The binomial distribution with parameters n and p is a discrete distribution for k = 0, 1, ..., n. Its probability mass function is given by:

$p(k) = {n\choose k}p^k(1-p)^{n-k},$

and it has expected value μ = np. The variance is equal to:

$\operatorname{Var}(X) = \sum_{k=0}^{n} {n\choose k}p^k(1-p)^{n-k} (k-np)^2 = np(1-p).$

#### Coin toss

The binomial distribution with p = 0.5 describes the probability of getting k heads in n tosses. Thus the expected value of the number of heads is n/2, and the variance is n/4.

### Fair die

A six-sided fair die can be modelled with a discrete random variable with outcomes 1 through 6, each with equal probability $\textstyle\frac{1}{6}$. The expected value is (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5. Therefore the variance can be computed to be:

\begin{align} \sum_{i=1}^6 \tfrac{1}{6}(i - 3.5)^2 = \tfrac{1}{6}\sum_{i=1}^6 (i - 3.5)^2 & = \tfrac{1}{6}\left((-2.5)^2{+}(-1.5)^2{+}(-0.5)^2{+}0.5^2{+}1.5^2{+}2.5^2\right) \\ & = \tfrac{1}{6} \cdot 17.50 = \tfrac{35}{12} \approx 2.92. \end{align}

The general formula for the variance of the outcome X of a fair die of n sides is:

\begin{align} \sigma^2=E(X^2)-(E(X))^2 &=\frac{1}{n}\sum_{i=1}^n i^2-\left(\frac{1}{n}\sum_{i=1}^n i\right)^2 \\ &=\tfrac 16 (n+1)(2n+1) - \tfrac 14 (n+1)^2\\ &=\frac{ n^2-1 }{12}. \end{align}

## Properties

### Basic properties

Variance is non-negative because the squared deviations are non-negative:

$\operatorname{Var}(X)\ge 0.$

The variance of a constant random variable is zero, and conversely, if the variance of a variable in a data set is 0, then all the entries have the same value:

$P(X=a) = 1\Leftrightarrow \operatorname{Var}(X)= 0.$

Variance is invariant with respect to changes in a location parameter. That is, if a constant is added to all values of the variable, the variance is unchanged:

$\operatorname{Var}(X+a)=\operatorname{Var}(X).$

If all values are scaled by a constant, the variance is scaled by the square of that constant.
$\operatorname{Var}(aX)=a^2\operatorname{Var}(X).$

The variance of a weighted sum or of a difference of two random variables is given by:

$\operatorname{Var}(aX+bY)=a^2\operatorname{Var}(X)+b^2\operatorname{Var}(Y)+2ab\, \operatorname{Cov}(X,Y),$

$\operatorname{Var}(X-Y)=\operatorname{Var}(X)+\operatorname{Var}(Y)-2\, \operatorname{Cov}(X,Y).$

In general, for the sum of $N$ random variables:

$\operatorname{Var}\left(\sum_{i=1}^N X_i\right)=\sum_{i,j=1}^N\operatorname{Cov}(X_i,X_j)=\sum_{i=1}^N\operatorname{Var}(X_i)+\sum_{i\ne j}\operatorname{Cov}(X_i,X_j).$

These results lead to the variance of a linear combination:

\begin{align} \operatorname{Var}\left( \sum_{i=1}^{N} a_iX_i\right) &=\sum_{i,j=1}^{N} a_ia_j\operatorname{Cov}(X_i,X_j) \\ &=\sum_{i=1}^{N}a_i^2\operatorname{Var}(X_i)+\sum_{i\ne j}a_ia_j\operatorname{Cov}(X_i,X_j)\\ &=\sum_{i=1}^{N}a_i^2\operatorname{Var}(X_i)+2\sum_{1\le i<j\le N}a_ia_j\operatorname{Cov}(X_i,X_j). \end{align}

The variance of a finite sum of uncorrelated random variables is equal to the sum of their variances. This stems from the above identity and the fact that for uncorrelated variables the covariance is zero; that is, if

$\operatorname{Cov}(X_i,X_j)=0\ (i\ne j) ,$

then

$\operatorname{Var}\left(\sum_{i=1}^N X_i\right)=\sum_{i=1}^N\operatorname{Var}(X_i).$

### Sum of uncorrelated variables (Bienaymé formula)

One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances:

$\operatorname{Var}\Big(\sum_{i=1}^n X_i\Big) = \sum_{i=1}^n \operatorname{Var}(X_i).$

This statement is called the Bienaymé formula[1] and was discovered in 1853. It is often made with the stronger condition that the variables are independent, but uncorrelatedness suffices.
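The Bienaymé formula can be confirmed exactly for two independent fair dice by enumerating the 36 equally likely outcomes of the pair; rational arithmetic avoids any rounding.

```python
from itertools import product
from fractions import Fraction

# Check Var(X + Y) = Var(X) + Var(Y) for two independent fair dice
# by enumerating the joint distribution exactly.
faces = range(1, 7)

def var(values):
    # population variance of a list of equally likely values
    mu = sum(values) / len(values)
    return sum((v - mu) ** 2 for v in values) / len(values)

var_x = var([Fraction(f) for f in faces])                       # 35/12
var_sum = var([Fraction(a + b) for a, b in product(faces, faces)])
assert var_sum == 2 * var_x
```

The single-die variance 35/12 matches the fair-die example earlier, and the sum of two dice has exactly twice that variance, as independence (hence uncorrelatedness) requires.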
So if all the variables have the same variance σ2, then, since division by n is a linear transformation, this formula immediately implies that the variance of their mean is $\operatorname{Var}\left(\overline{X}\right) = \operatorname{Var}\left(\frac {1} {n}\sum_{i=1}^n X_i\right) = \frac {1} {n^2}\sum_{i=1}^n \operatorname{Var}\left(X_i\right) = \frac {\sigma^2} {n}.$ That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem. ### Product of independent variables If two variables X and Y are independent, the variance of their product is given by[2][3] $\operatorname{Var}(XY) = [E(X)]^{2}\operatorname{Var}(Y) + [E(Y)]^{2}\operatorname{Var}(X) + \operatorname{Var}(X)\operatorname{Var}(Y).$ ### Sum of correlated variables In general, if the variables are correlated, then the variance of their sum is the sum of their covariances: $\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sum_{j=1}^n \operatorname{Cov}(X_i, X_j).$ (Note: This by definition includes the variance of each variable, since Cov(Xi,Xi) = Var(Xi).) Here Cov is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. This formula is used in the theory of Cronbach's alpha in classical test theory. So if the variables have equal variance σ2 and the average correlation of distinct variables is ρ, then the variance of their mean is $\operatorname{Var}(\overline{X}) = \frac {\sigma^2} {n} + \frac {n-1} {n} \rho \sigma^2.$ This implies that the variance of the mean increases with the average of the correlations. 
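The formula just given for the variance of the mean of equicorrelated variables can be checked without simulation: build the covariance matrix explicitly and use the fact that Var(mean) is the sum of all its entries divided by n². The parameter values below are arbitrary.

```python
# Var(mean) = sigma^2/n + (n-1)/n * rho * sigma^2 for n variables with
# equal variance sigma^2 and pairwise correlation rho.
sigma2, rho, n = 4.0, 0.3, 5
cov = [[sigma2 if i == j else rho * sigma2 for j in range(n)]
       for i in range(n)]

var_mean = sum(sum(row) for row in cov) / n ** 2  # (1/n^2) * sum of all Cov(Xi, Xj)
assert abs(var_mean - (sigma2 / n + (n - 1) / n * rho * sigma2)) < 1e-12
```

The matrix has n diagonal entries σ² and n(n−1) off-diagonal entries ρσ², which is exactly where the two terms of the closed form come from.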
Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to

$\operatorname{Var}(\overline{X}) = \frac {1} {n} + \frac {n-1} {n} \rho.$

This formula is used in the Spearman–Brown prediction formula of classical test theory. It converges to ρ as n goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have

$\lim_{n \to \infty} \operatorname{Var}(\overline{X}) = \rho.$

Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables.

### Weighted sum of variables

The scaling property and the Bienaymé formula, along with the property Cov(aX, bY) = ab Cov(X, Y) from the covariance page, jointly imply that

$\operatorname{Var}(aX+bY) =a^2 \operatorname{Var}(X) + b^2 \operatorname{Var}(Y) + 2ab\, \operatorname{Cov}(X, Y).$

This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionately large weight in the variance of the total. For example, if X and Y are uncorrelated and the weight of X is two times the weight of Y, then the weight of the variance of X will be four times the weight of the variance of Y.
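The weighted-sum identity Var(aX + bY) = a²Var(X) + b²Var(Y) + 2ab Cov(X, Y) holds exactly for sample moments too, provided the same (divide-by-n) convention is used on both sides. The paired data and weights below are invented for illustration.

```python
# Check Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y)
# on paired sample data, using population (divide-by-n) moments throughout.
xs = [1.0, 2.0, 4.0, 8.0]
ys = [3.0, 1.0, 4.0, 1.0]
a, b = 2.0, -3.0

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((t - m) ** 2 for t in v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((s - mu) * (t - mv) for s, t in zip(u, v)) / len(u)

lhs = var([a * x + b * y for x, y in zip(xs, ys)])
rhs = a * a * var(xs) + b * b * var(ys) + 2 * a * b * cov(xs, ys)
assert abs(lhs - rhs) < 1e-9
```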
The expression above can be extended to a weighted sum of multiple variables:

$\operatorname{Var}\left(\sum_{i=1}^n a_iX_i\right) = \sum_{i=1}^n a_i^2 \operatorname{Var}(X_i) + 2\sum_{1\le i<j\le n} a_i a_j \operatorname{Cov}(X_i,X_j).$

### Decomposition

The general formula for variance decomposition, or the law of total variance, is: if $X$ and $Y$ are two random variables and the variance of $X$ exists, then

$\operatorname{Var}(X) = \operatorname{Var}(\operatorname{E}(X|Y))+ \operatorname{E}(\operatorname{Var}(X|Y)).$

Here, $\operatorname E(X|Y)$ is the conditional expectation of $X$ given $Y$, and $\operatorname{Var}(X|Y)$ is the conditional variance of $X$ given $Y$. (A more intuitive explanation is that, given a particular value of $Y$, $X$ follows a distribution with mean $\operatorname E(X|Y)$ and variance $\operatorname{Var}(X|Y)$. The above formula tells how to find $\operatorname{Var}(X)$ based on the distributions of these two quantities when $Y$ is allowed to vary.) This formula is often applied in analysis of variance, where the corresponding formula is

$\mathit{MS}_\mathrm{Total} = \mathit{MS}_\mathrm{Between} + \mathit{MS}_\mathrm{Within};$

here $\mathit{MS}$ refers to the mean of the squares. It is also used in linear regression analysis, where the corresponding formula is

$\mathit{MS}_\mathrm{Total} = \mathit{MS}_\mathrm{Regression} + \mathit{MS}_\mathrm{Residual}.$

This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated.
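The law of total variance can be checked on a tiny joint distribution. The pmf below is made up for illustration: Y takes the values 0 and 1, and each value of Y selects a two-point conditional distribution for X.

```python
# Check Var(X) = Var(E[X|Y]) + E[Var(X|Y)] on a hand-built joint pmf.
joint = {(0, 0): 0.2, (2, 0): 0.2, (1, 1): 0.3, (5, 1): 0.3}  # keys are (x, y)

def mean(pmf):
    return sum(p * x for x, p in pmf.items())

def var(pmf):
    return sum(p * x * x for x, p in pmf.items()) - mean(pmf) ** 2

# marginal distribution of X
marg_x = {}
for (x, y), p in joint.items():
    marg_x[x] = marg_x.get(x, 0.0) + p

# conditional distributions of X given each value of Y
p_y = {0: 0.4, 1: 0.6}
cond = {yv: {x: p / p_y[yv] for (x, y), p in joint.items() if y == yv}
        for yv in p_y}

between = var({mean(cond[yv]): p_y[yv] for yv in p_y})  # Var(E[X|Y])
within = sum(p_y[yv] * var(cond[yv]) for yv in p_y)     # E[Var(X|Y)]
assert abs(var(marg_x) - (between + within)) < 1e-9
```

The "between" term measures how much the conditional means spread out; the "within" term averages the spread inside each slice. Their sum recovers the total variance.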
Similar decompositions are possible for the sum of squared deviations (sum of squares, $\mathit{SS}$):

$\mathit{SS}_\mathrm{Total} = \mathit{SS}_\mathrm{Between} + \mathit{SS}_\mathrm{Within},$

$\mathit{SS}_\mathrm{Total} = \mathit{SS}_\mathrm{Regression} + \mathit{SS}_\mathrm{Residual}.$

### Formulae for the variance

A formula often used for deriving the variance of a theoretical distribution is as follows:

$\operatorname{Var}(X) =\operatorname{E}(X^2) - (\operatorname{E}(X))^2.$

This is useful when it is possible to derive formulae for the expected value and for the expected value of the square. This formula is also sometimes used in connection with the sample variance. While useful for hand calculations, it is not advised for computer calculations, as it suffers from catastrophic cancellation when the two components of the equation are similar in magnitude and floating-point arithmetic is used. This is discussed below.

### Calculation from the CDF

The population variance for a non-negative random variable can be expressed in terms of the cumulative distribution function F using

$\operatorname{Var}(X) = 2\int_0^\infty uH(u)\,du - \Big(\int_0^\infty H(u)\,du\Big)^2,$

where H(u) = 1 − F(u) is the right tail function. This expression can be used to calculate the variance in situations where the CDF, but not the density, can be conveniently expressed.

### Characteristic property

The second moment of a random variable attains its minimum value when taken around the first moment (i.e., the mean) of the random variable:

$\mathrm{argmin}_m\,\mathrm{E}((X - m)^2) = \mathrm{E}(X).$

Conversely, if a continuous function $\varphi$ satisfies $\mathrm{argmin}_m\,\mathrm{E}(\varphi(X - m)) = \mathrm{E}(X)$ for all random variables X, then it is necessarily of the form $\varphi(x) = a x^2 + b$, where a > 0.
This also holds in the multidimensional case.[4]

### Matrix notation for the variance of a linear combination

Define $X$ as a column vector of n random variables $X_1, \ldots, X_n$, and c as a column vector of n scalars $c_1, \ldots, c_n$. Then $c^T X$ is a linear combination of these random variables, where $c^T$ denotes the transpose of the vector $c$. Let $\Sigma$ be the variance-covariance matrix of the vector X. The variance of $c^TX$ is then given by:[5]

$\operatorname{Var}(c^T X) = c^T \Sigma c .$

### Units of measurement

Unlike expected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in inches will have a variance measured in square inches. For this reason, describing data sets via their standard deviation or root mean square deviation is often preferred over using the variance. In the dice example the standard deviation is √(35/12) ≈ 1.71, slightly larger than the expected absolute deviation of 1.5.

The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however, the expected absolute deviation tends to be more robust, as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution.

## Approximating the variance of a function

The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables: see Taylor expansions for the moments of functions of random variables.
For example, the approximate variance of a function of one variable is given by

$\operatorname{Var}\left[f(X)\right]\approx \left(f'(\operatorname{E}\left[X\right])\right)^2\operatorname{Var}\left[X\right]$

provided that f is twice differentiable and that the mean and variance of X are finite.

## Population variance and sample variance

Real-world distributions, such as the distribution of yesterday's rain throughout the day, are typically not fully known, unlike the behavior of perfect dice or an ideal distribution such as the normal distribution, because it is impractical to account for every raindrop. Instead, one estimates the mean and variance of the whole distribution as the computed mean and variance of a sample of n observations drawn suitably randomly from the whole sample space, in this example the set of all measurements of yesterday's rainfall in all available rain gauges. This method of estimation is close to optimal, with the caveat that it underestimates the variance by a factor of (n − 1) / n. (For example, when n = 1 the variance of a single observation is obviously zero regardless of the true variance.) This gives a bias which should be corrected for when n is small, by multiplying by n / (n − 1). If the mean is determined in some other way than from the same samples used to estimate the variance, then this bias does not arise and the variance can safely be estimated as that of the samples.

### Population variance

In general, the population variance of a finite population of size N with values xi is given by

$\sigma^2 = \frac 1N \sum_{i=1}^N \left(x_i - \mu \right)^2 = \left(\frac 1N \sum_{i=1}^N x_i^2\right) - \mu^2$

where

$\mu = \frac 1N \sum_{i=1}^N x_i$

is the population mean. The population variance is therefore the variance of the underlying probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.
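The two forms of the population variance just given agree exactly; exact rational arithmetic makes the equality strict rather than approximate. The population values below are arbitrary.

```python
from fractions import Fraction

# Check the two equivalent forms of the population variance,
# (1/N) * sum((x_i - mu)^2)  and  (1/N) * sum(x_i^2) - mu^2,
# exactly, using rationals.
pop = [Fraction(v) for v in (2, 3, 5, 7, 11, 13)]
N = len(pop)
mu = sum(pop) / N

v_def = sum((x - mu) ** 2 for x in pop) / N
v_short = sum(x * x for x in pop) / N - mu ** 2
assert v_def == v_short
```

In floating point the shortcut form can suffer catastrophic cancellation when the mean is large relative to the spread, which is why the mean-centred form is preferred for numerical work.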
### Sample Variance In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population.[6] Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution. We take a sample with replacement of n values y1, ..., yn from the population, where n < N, and estimate the variance on the basis of this sample.[7] Directly taking the variance of the sample gives: $\sigma_y^2 = \frac 1n \sum_{i=1}^n \left(y_i - \overline{y} \right)^2$ Here, $\overline{y}$ denotes the sample mean: $\overline{y}=\frac 1n \sum_{i=1}^n y_i .$ Since the yi are selected randomly, both $\scriptstyle\overline{y}$ and $\scriptstyle\sigma_y^2$ are random variables. Their expected values can be evaluated by summing over the ensemble of all possible samples {yi} from the population. For $\scriptstyle\sigma_y^2$ this gives: \begin{align} E[\sigma_y^2] & = E\left[ \frac 1n \sum_{i=1}^n \left(y_i - \frac 1n \sum_{j=1}^n y_j \right)^2 \right] \\ & = \frac 1n \sum_{i=1}^n E\left[ y_i^2 - \frac 2n y_i \sum_{j=1}^n y_j + \frac{1}{n^2} \sum_{j=1}^n y_j \sum_{k=1}^n y_k \right] \\ & = \frac 1n \sum_{i=1}^n \left[ \frac{n-2}{n} E[y_i^2] - \frac 2n \sum_{j \neq i} E[y_i y_j] + \frac{1}{n^2} \sum_{j=1}^n \sum_{k \neq j} E[y_j y_k] +\frac{1}{n^2} \sum_{j=1}^n E[y_j^2] \right] \\ & = \frac 1n \sum_{i=1}^n \left[ \frac{n-2}{n} (\sigma^2+\mu^2) - \frac 2n (n-1) \mu^2 + \frac{1}{n^2} n (n-1) \mu^2 + \frac 1n (\sigma^2+\mu^2) \right] \\ & = \frac{n-1}{n} \sigma^2. \end{align} Hence $\scriptstyle\sigma_y^2$ gives an estimate of the population variance that is biased by a factor of (n-1)/n. For this reason, $\scriptstyle\sigma_y^2$ is referred to as the biased sample variance. 
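The (n − 1)/n bias just derived can be confirmed by brute force: enumerate every possible with-replacement sample of size n from a small population and average the biased sample variance over all of them. The population below is made up for illustration.

```python
from itertools import product

# Average the biased sample variance over ALL with-replacement samples of
# size n; the result is exactly (n-1)/n times the population variance.
pop = [1.0, 4.0, 6.0, 9.0]
n = 3
mu = sum(pop) / len(pop)
pop_var = sum((x - mu) ** 2 for x in pop) / len(pop)

def biased_var(sample):
    m = sum(sample) / len(sample)
    return sum((y - m) ** 2 for y in sample) / len(sample)

samples = list(product(pop, repeat=n))  # all 4^3 = 64 equally likely samples
avg_biased = sum(biased_var(s) for s in samples) / len(samples)
assert abs(avg_biased - (n - 1) / n * pop_var) < 1e-9
```

Because sampling with replacement makes the draws independent and identically distributed, this exhaustive average is precisely the expectation computed in the derivation above.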
Correcting for this bias yields the unbiased sample variance:

$s^2 = \frac{1}{n-1} \sum_{i=1}^n \left(y_i - \overline{y} \right)^2$

Either estimator may be simply referred to as the sample variance when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution. The use of the term n − 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n − 1.5 yields an almost unbiased estimator. The unbiased sample variance is a U-statistic for the function ƒ(y1, y2) = (y1 − y2)2/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population.

### Distribution of the sample variance

[Figure: distribution and cumulative distribution of s2/σ2, for various values of ν = n − 1, when the yi are independent normally distributed.]

Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution.
In the case that yi are independent observations from a normal distribution, Cochran's theorem shows that s2 follows a scaled chi-squared distribution:[8]

$(n-1)\frac{s^2}{\sigma^2}\sim\chi^2_{n-1}.$

As a direct consequence, it follows that

$\operatorname{E}(s^2)=\operatorname{E}\left(\frac{\sigma^2}{n-1} \chi^2_{n-1}\right)=\sigma^2 ,$

and[9]

$\operatorname{Var}[s^2] =\operatorname{Var}\left(\frac{\sigma^2}{n-1} \chi^2_{n-1}\right)=\frac{\sigma^4}{(n-1)^2}\operatorname{Var}\left( \chi^2_{n-1}\right)=\frac{2\sigma^4 }{n-1}.$

If the yi are independent and identically distributed, but not necessarily normally distributed, then[10]

$\operatorname{E}[s^2] = \sigma^2, \quad \operatorname{Var}[s^2] = \sigma^4 \left (\frac{2}{n-1} + \frac{\kappa}{n} \right) = \frac{1}{n} \left(\mu_4 - \frac{n-3}{n-1}\sigma^4\right),$

where κ is the excess kurtosis of the distribution and μ4 is the fourth moment about the mean. If the conditions of the law of large numbers hold for the squared observations, s2 is a consistent estimator of σ2. One can indeed see that the variance of the estimator tends asymptotically to zero.

### Samuelson's inequality

Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated.[11] Values must lie within the limits

$\bar y \pm \sigma_y (n-1)^{1/2}.$

### Relations with the harmonic and arithmetic means

It has been shown[12] that for a sample {yi} of real numbers,

$\sigma_y^2 \le 2y_{max} (A - H)$

where ymax is the maximum of the sample, A is the arithmetic mean, H is the harmonic mean of the sample and $\sigma_y^2$ is the (biased) variance of the sample.
This bound has been improved, and it is known that the variance is bounded by

$\sigma_y^2 \le \frac{ y_{max} ( A - H )( y_{max} - A ) } { y_{max} - H },$

$\sigma_y^2 \ge \frac{ y_{min} ( A - H )( A - y_{min} ) } { H - y_{min} },$

where ymin is the minimum of the sample.[13]

## Generalizations

If $X$ is a vector-valued random variable, with values in $\mathbb{R}^n$, and thought of as a column vector, then the natural generalization of variance is $\operatorname{E}((X - \mu)(X - \mu)^{\operatorname{T}})$, where $\mu = \operatorname{E}(X)$ and $X^{\operatorname{T}}$ is the transpose of $X$, and so is a row vector. This variance is a positive semi-definite square matrix, commonly referred to as the covariance matrix.

If $X$ is a complex-valued random variable, with values in $\mathbb{C}$, then its variance is $\operatorname{E}((X - \mu)(X - \mu)^{\dagger})$, where $X^{\dagger}$ is the complex conjugate of $X$. This variance is a nonnegative real number.

## Tests of equality of variances

Testing for the equality of two or more variances is difficult. The F test and chi-squared tests are both sensitive to non-normality and are not recommended for this purpose. Several non-parametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, the Capon test, the Mood test, the Klotz test and the Sukhatme test. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal. The Lehmann test is a parametric test of two variances, of which several variants are known. Other tests of the equality of variances include the Box test, the Box–Anderson test and the Moses test.
Resampling methods, which include the bootstrap and the jackknife, may be used to test the equality of variances.

## History

The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance:[14]

The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations $\theta_1$ and $\theta_2$, it is found that the distribution, when both causes act together, has a standard deviation $\sqrt{\theta_1^2 + \theta_2^2}$. It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...

## Moment of inertia

The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions. The covariance matrix is related to the moment of inertia tensor for multivariate distributions. The moment of inertia of a cloud of n points with a covariance matrix of $\Sigma$ is given by

$I=n (\mathbf{1}_{3\times 3} \operatorname{tr}(\Sigma) - \Sigma).$

This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it.
The covariance matrix might look like

$\Sigma=\begin{bmatrix}10 & 0 & 0\\0 & 0.1 & 0 \\ 0 & 0 & 0.1\end{bmatrix}.$

That is, there is the most variance in the x direction. However, physicists would consider this to have a low moment about the x axis, so the moment-of-inertia tensor is

$I=n\begin{bmatrix}0.2 & 0 & 0\\0 & 10.1 & 0 \\ 0 & 0 & 10.1\end{bmatrix}.$

## Notes

1. ^ Loeve, M. (1977) "Probability Theory", Graduate Texts in Mathematics, Volume 45, 4th edition, Springer-Verlag, p. 12.
2. ^ Goodman, Leo A. (1960) "On the exact variance of products", Journal of the American Statistical Association, December 1960, 708–713.
3. ^ Goodman, Leo A. (1962) "The variance of the product of K random variables", Journal of the American Statistical Association, March 1962, 54ff.
4. ^ Kagan, A.; Shepp, L. A. (1998). "Why the variance?". Statistics & Probability Letters 38 (4): 329. doi:10.1016/S0167-7152(98)00041-8.
5. ^ Johnson, Richard; Wichern, Dean (2001). Applied Multivariate Statistical Analysis. Prentice Hall. p. 76. ISBN 0-13-187715-1.
6. ^ Navidi, William (2006) Statistics for Engineers and Scientists, McGraw-Hill, p. 14.
7. ^ Montgomery, D. C. and Runger, G. C. (1994) Applied Statistics and Probability for Engineers, p. 201. John Wiley & Sons, New York.
8. ^ Knight, K. (2000) Mathematical Statistics, Chapman and Hall, New York. (Proposition 2.11)
9. ^ Casella and Berger (2002) Statistical Inference, Example 7.3.3, p. 331.
10. ^ Neter, Wasserman, and Kutner (1990) Applied Linear Statistical Models, 3rd edition, pp. 622–623.
11. ^ Samuelson, Paul (1968) "How Deviant Can You Be?", Journal of the American Statistical Association, 63, number 324 (December, 1968), pp. 1522–1525. JSTOR 2285901
12. ^ Mercer, A. McD. (2000) "Bounds for A-G, A-H, G-H, and a family of inequalities of Ky Fan's type, using a general method", J. Math. Anal. Appl. 243, 163–173.
13.
^ Sharma, R. (2008) "Some more inequalities for arithmetic mean, harmonic mean and variance", J. Math. Inequalities 2 (1), 109–114.
14. ^
Improving the Wolfram Community Editor GROUPS: Udo Krause 4 Votes Hi Vitaliy, is your group going to improve the community editor? I mean, it is usable after all, but it's not fun to use it. What browsers do you recommend? I try Google Chrome, Internet Explorer 10 64-bit, and Opera 64-bit on a regular basis, and it's always a hassle. You have to do multiple edits to get it right and have it look not too bad. Especially the spikey button has some awful intelligence guiding it to reformat things I copy from a Mathematica notebook into it, and I want to see it there within spikey's area _without_ _any_ _intelligence_ _of_ _the_ _spikey_ _involved_, because I did use Mathematica already and Mathematica is intelligent enough to format things. Furthermore, "following this discussion" has no impact in the real world; I only once got a mail about a reaction to one of my mailings. Having said that, let me say that I appreciate Wolfram Community as a source of recreation. Useful would also be a button directing a hobbyist like me to the questions still having 0 replies. Sincerely yours, Udo. 1 year ago 42 Replies Szabolcs Horvat 2 Votes I agree that a currently weak point of this site is the editor. I think it's very difficult to make a WYSIWYG editor quick and convenient to use because of all the "hidden" information behind what we actually see. I very much prefer the MarkDown-based editor on StackExchange, especially with the Mathematica-specific improvements. It is true that MarkDown takes a bit of effort to learn, but it's not difficult either (it's definitely easier than the markup shown here when clicking the Source button), and once one learns it, I believe it's much quicker and easier to work with than the current Community WYSIWYG editor. Just to mention some examples of how the current editor is frustrating: it's sometimes difficult to end a certain formatting or text customization.
It happened that I entered a hyperlink, pressed enter, kept typing on the new line, and the text I typed was still part of the link. When formatting code, sometimes it's broken into several code boxes line by line. Generally, if something is misformatted, it's not trivial to fix (there should at least be a "remove all formatting" button like in the Gmail editor). 1 year ago Vitaliy Kaurov 7 Votes Thank you very much for your feedback. Yes, the Community team is very aware of every single issue you both mentioned. Let me assure you our focus stays put on the issues. And let me address the issues one by one in the order you mentioned them. Improving the editor – this is one of our primary goals, and we do consider all possibilities, including the MarkDown type. Development takes time. I can only ask you to please not get discouraged and bear with us until we bring things into a better shape. For now I can give you a few tips for more efficient usage of the editor. 1) Avoid too much rich formatting. 2) First click the Spikey button, then paste the code in the gray box. Never paste code in the editor, then select it and click the Spikey button. 3) As an alternative to MarkDown style, use "Source" mode – accessible via the "Source" button. In this mode everything is crystal clear. M-code is placed between tags. Or, for example, bold text between tags. The built-in browser spell checker works. I go out of "Source" only to make a quick hyperlink or image upload. 4) Make at least 2 break lines between separate code blocks – otherwise they will join. Following this discussion – this is quite news to me. I get every single email if anyone comments on a post I participated in. Could this be your spam folder eating a few of those? Please let me know about specific cases so I could check them out. Zero replies – this is also a very well-known fact to us. We are working on this currently, and you will be able to pull those out easily. Correcting bad formatting – very well known.
For now I would recommend jumping to "Source" mode for a simpler approach. I do not mean to sound like a cliché, but Rome was not built in one day. This is a very interesting project that in the future will beat anything known to regular forums, with things like Wolfram Cloud integration and in-post computations. We only ask you to bear with us and support us with your patience, belief and enthusiasm. After all, I think the message is much more valuable than the pen and the paper. While I agree it is a delight to have a nice pen ;-) 1 year ago Hi Vitaliy! Also worth noting is that replying to posts using an iPad is difficult, and editing posts is even harder. I use the iPad a lot, and this site works in a peculiar way for me... Good luck! 1 year ago Shenghui Yang 2 Votes 3) and 4) in Vitaliy's reply are very helpful. That resolves many formatting issues I had for long code. 1 year ago Following this discussion – this is quite news to me. I get every single email if anyone comments on post I participated. Could this be your spam folder eating a few of those? Please let me know about specific cases so I could check them out. You won't believe it, Vitaliy, but this very answer of yours didn't make it into the e-mail. The spam folder contains Google+ and Facebook (fake) messages every day, but no message from Wolfram Community. By the way, the citation button is an example of the issues one has to deal with.
(1) Click the citation button - ok. (2) Paste in the citation - ok (as shown above). (3) Try to leave the citation area with key strokes or mouse clicks - not possible (I did not try combined stroke-clicks with function-key interactions ...). (4) Find a workaround: (4.1) Open a second citation area, hack around and delete the text again. (4.2) The cursor is now held in the second citation area. (4.3) Open a third citation area and hack and delete again; then - occasionally - the third and the second citation areas disappear and the cursor is below the first citation area. (4.4) Continue with simple text and produce the answer as it is seen here - ok. 1 year ago The only email I got from this thread was Udo's very last reply. I haven't received any of the others (they're not in the spam folder). 1 year ago I wonder if anyone has tried to import a graphic file into a message. I tried it in a number of ways today (copy and paste, drag and drop, or using the editor's "image" button) but none of these worked. Especially the effect of trying to use the "image" widget was truly weird. I tried it on a MacBook Air with Safari; can this be the problem? 1 year ago David Keith 1 Vote Until I saw this post I had no idea what the "Source" mode could be used for. The best short-term improvement for the community web site would be to spend an hour or two documenting the functionality that does exist, so we would not have to learn it by trial and error. 1 year ago Dear all, thank you very much for the feedback; this is invaluable for us. We just created a short tutorial on the editor. Please check it out and comment there on what else you would like to see: How to type up a post: editor tutorial & general tips @Cormullion, the site is not optimized for mobile platforms yet. This is coming in the future. @David, please see the tutorial linked above. @Szabolcs, Udo - we are looking into the notification email issue you mentioned. We will get back to you. 1 year ago Thank you.
That's a big help! Best regards, David 1 year ago I agree - a good hint from David, and a useful hands-on tutorial. So let's go ahead with the "Source" mode, as programmers are used to doing ... 1 year ago Yes, it is a good help; it was good that David brought it up. It does not, though, contain the solution to my question (which I must admit was not fully specified, see below) on the importing of images. It was solved kindly by Vitaliy in mail correspondence (since I also sent an inquiry via "Feedback"). The solution is that only PNG, JPG and GIF formats can be uploaded, whereas I tried to upload a TIFF file (the default type that the screen capture app "Grab" produces on my Mac). I thought the problem was with the browser and/or the operating system, but it was not. Please add this specification to the "Tutorial and general tips", not least since the advice "illustrate your posts with images" appears in it. Thanks! Imre 1 year ago @Imre, don't use the Grab utility in OS X to save screenshots. Use the keyboard shortcuts instead. These will give you PNG files, and they're more convenient to use. TIFF files are not compatible with most web browsers, so they'd need to be converted before they can be displayed. JPEG, GIF and PNG can be displayed directly, so most websites will only allow these. @Moderation Team: It seems the email issue is still present today. 1 year ago @Szabolcs, we are looking into this currently. Did you experience this on other thread discussions too, or only this one? 1 year ago @Szabolcs, (Shall we correspond in Hungarian?) Thank you for the tips. I did not know that the shortcuts generate PNG; of course that is more direct. However, it is no problem to convert a TIFF to JPG; this is what I did when I started a discussion on another subject recently: http://community.wolfram.com/groups/-/m/t/151252?p_p_auth=WZZr2ryb 1 year ago I got the notification email just now. I experienced the issue in this thread earlier, but I did get your update just a minute ago.
1 year ago @Imre, you can find my email address here. :-) (I won't write it in plain text, to reduce the amount of spam I get.) 1 year ago I found another problem with the editor. I can't type the following unless I put it in a code box: Ordering[f[list]] I think the problematic part is [list], as it gets stripped when I try to publish the post. This shouldn't happen when I use the visual (not source) editor. 1 year ago That is a known issue and exactly why we need the code box. We do not have in-line functionality for this yet (though we are working on it), and the BBCode interpreter sometimes makes content inside square brackets invisible. These constructs are native to BBCode. We are working on this, and thank you very much for caring and bringing these things to our attention. 1 year ago Does somebody test the community editor from outside? Since one or two weeks ago, copy and paste from a Mathematica notebook into the spikey area of a post no longer works with Opera 64 Bit (Version 12.16, Build 1860, Platform x64, System Windows 7); the Mathematica version is 9.0.1. Google Chrome handles it. 1 year ago Is there any way to attach a notebook to a post, or to send a notebook to another community member? 1 year ago Yes, you can attach a file to your posts and comments. This is public sharing, and these files can then be accessed by anyone. 1 year ago The appended is a copy and paste of a reply/post I made on "How to type up a post". I noticed that that discussion had not been active for some months. My post, though perhaps more of a comment, has actual questions in the last itemized list (and the postscript). I could have posted this as a separate question; however, these threads appeared to be question-and-answer sessions. Dear Moderation Team, thank you very much for this post. I was going to provide feedback but found other users making the same observations.
I am glad that:
- the common issues came to the surface
- the response was prompt
- the interface is clearly laid out, and the Source button is extremely helpful for editing when things are complex or go wrong; e.g. I found it invaluable for this post (admittedly it did not require all this formatting, especially nested enumerated lists, but I was experimenting).

There still seem to be issues with uploading animated GIFs; however, this may be my machine. I do not receive any email notifications, and I have not been aware of any flag or marker to indicate a reply to a comment thread. The latter may simply reflect that no one has replied; however, I know I only became aware of a reply (my one and only, some 5 months ago) by chance rather than by design. Finally, some minor questions:
- Is the in-line Mathematica markup a convention, e.g. bold font for built-in Mathematica functions?
- Is there LaTeX support (in-line or display mode)?
- Is some formal mutual interaction/recognition/cross-fertilization between Wolfram Community and Mathematica Stack Exchange underway?

With respect to the last question: obviously, there are cross-posted questions (I hyperlinked one above) and users of both sites (I have to date learned an enormous amount from Stack Exchange and hope to continue to, and I see a number of common users). Thank you again for this very helpful post. Postscript: it appears that this BBCode does not support a [latex] tag for markup. The frequently used commands help regarding BBCode incorrectly shows [mcode] for code blocks, whereas [code] is what is produced when just putting in non-Mathematica code. 1 year ago Hi dear all, there is a new user interface: the Source button has gone, the Spikey has gone, and the links in the discussion to this very post have gone too. Would you like to see what happens if a poster is encouraged to post a MatrixForm?
In[1]:= {{1, 0, 0}, {0, 0, -1}, {0, -1, 0}}.{{1, 8, 5}, {4, 4, 6}, {2, 7, 9}} // MatrixForm Out[1]//MatrixForm= \!$$TagBox[ RowBox[{"(", "", GridBox[{ {"1", "8", "5"}, { RowBox[{"-", "2"}], RowBox[{"-", "7"}], RowBox[{"-", "9"}]}, { RowBox[{"-", "4"}], RowBox[{"-", "4"}], RowBox[{"-", "6"}]} }, GridBoxAlignment->{ "Columns" -> {{Center}}, "ColumnsIndexed" -> {}, "Rows" -> {{Baseline}}, "RowsIndexed" -> {}}, GridBoxSpacings->{"Columns" -> { Offset[0.27999999999999997], { Offset[0.7]}, Offset[0.27999999999999997]}, "ColumnsIndexed" -> {}, "Rows" -> { Offset[0.2], { Offset[0.4]}, Offset[0.2]}, "RowsIndexed" -> {}}], "", ")"}], Function[BoxForme, MatrixForm[BoxForme]]]$$ It's great, isn't it? This was done with the Google Chrome browser, the only one that worked for me in this forum before. I've read the new help in the Announcements and tried both the Code Sample button (scrambles everything up) and the Ctrl-K shortcut (does nothing) ... So, let's pose the old questions again: which browser works for this web publishing interface? How does one get the Mma code in without the need for multiple publish/edit cycles? Where is the help entry on the right-hand side of Dashboard - Groups - People? I've seen the hint about the MarkDown syntax; do you intend that one has to do full-fledged HTML coding here to get the formatting right? 10 months ago Szabolcs Horvat 2 Votes I tested this in Safari/Chrome/Firefox on a Mac, and it works fine in all of them. You need to select the block of text to be marked as code, and then press Ctrl-K or Command-K (or use the toolbar button). It will simply indent the text by four spaces, which denotes a code block in MarkDown. I'm not sure what you mean here. MatrixForm is not really meant for copying as plain text, but it's possible if you right-click it and choose Copy As -> Plain Text. Only plain text is suitable for posting here. This is not an issue with the editor.
It's a consequence of Mathematica supporting 2D notation and having to translate it into a copyable form. (Don't know.) MarkDown is not HTML. In fact, it is the easiest-to-learn and easiest-to-read markup language. The "code" itself is designed both to look very similar to its rendered form and to be quick to type (unlike BBCode, which the previous editor used). I strongly prefer MarkDown to the previous editor because it's much more consistent and predictable. The previous editor allowed either directly editing BBCode, which is clearly inferior to MarkDown in usability, or using a WYSIWYG editor which tended to be unpredictable and to mangle the underlying BBCode. With MarkDown we always edit the "source code", so the result is completely predictable, and unlike before there's never a need for multiple publish/edit cycles. If you don't edit something explicitly, it doesn't change. In addition, there's also a live preview of the rendered form below, giving a semi-WYSIWYG workflow. Plus, now we have $\LaTeX$ integrated! :-) $$\int_0^\infty e^{-x} \; dx$$ 10 months ago Udo Krause 1 Vote Seems to work - even with plain text only - but only if the Code Sample is not the first entry in a post; mention that in the Help, please: In[1]:= {{1,0,0},{0,0,-1},{0,-1,0}}.{{1,8,5},{4,4,6},{2,7,9}}//MatrixForm Out[1]//MatrixForm= (1 8 5 -2 -7 -9 -4 -4 -6 ) In the notebook it is fine. "Only plain text is suitable for posting here" - then mention that in the Help, please, and - this is point 3 - bring the Help How to post and use Wolfram Community to some constant, retrievable location on the site. And - once again - this had to be edited twice, because - even with plain text only - the Ctrl-K was ignored the first time, despite the WYSIWYG preview, which showed it correctly; then I had to drop a line before the Code Sample to get it to work. In toto this simple thing summed up to four edits. This is Google Chrome under Windows 7 64 Bit.
I tested this in Safari/Chrome/Firefox on a Mac, and it works fine in all of them. All day long one has to deal with people reacting to a report with the phrase: for me it works just fine. Okay. MarkDown is not HTML. About that, one reads on the MarkDown page: For any markup that is not covered by Markdown's syntax, you simply use HTML itself. 10 months ago You really don't have to worry about HTML. As I said, MarkDown is not HTML. What that remark means is that one can use HTML interspersed with MarkDown. But we don't need to use HTML (unless one needs special formatting, such as coloured text or images resized in-browser - things that were not possible in the previous editor). On StackExchange (which uses MarkDown) I never need to use HTML. 10 months ago Sorting as well as testing did not yet come to an end here: this is shown as sorted by newest replies; if taken literally, it's just wrong ... it's neither ascending nor descending, nor have the units even been respected ... good luck! It seems that sorted by newest replies means: sorted by second-newest reply, because with this very post as the newest reply, the post from Mr. Szabolcs Horvat from 11 days ago is shown as the newest post. 10 months ago The order of discussions is probably correct for the actual last addition. In the Dashboard list of discussions and how long ago they were created or added to, it seems like most of the ages are for the next-to-last addition. This is being worked on. 10 months ago Since the changes began, I routinely do not see the numbers above views, replies or votes until I slide the mouse across the subject. If there is a wish list somewhere, it would be nice to see the blue box containing the quoting come back. It made it really clear when my response was addressing only a specific part of someone's previous post. 10 months ago What web browser(s) do you (not) see this on? What OS and level? (Why not?) Make and model of mouse?
10 months ago Bill Simpson 1 Vote At the moment IE, Chrome, Comodo Dragon and Epic all show numbers, but there is a visible delay after the page appears done and before the numbers appear on a couple of those. The numbers fairly commonly do not appear under Comodo and Chrome. Win7 64 Ultimate. P/N X800898-113 PID 69657-492-7435846-16748 and U.S. size 9.5, which is Euro size 42. 10 months ago The Community editor has improved much - thank you all for accomplishing that - but it still happens that the preview of a post is okay (e.g. post 336269) while the post appears in WC nearly unformatted. 7 months ago Indeed. I find that links given in the [linkname](url) format have not been rendering correctly recently. They're fine in the preview, but not in the final post. See for example the next-to-last post here. 7 months ago Or just see this very post above ... 7 months ago Right now, in Firefox running under Windows 7, the most recent post I see is 22 hours old. In Chrome, on the same computer, the most recent post is 12 hours old. I've seen this problem before. 7 months ago Same for me. When I followed the email link to this post I could see it, but only old posts appear on the Dashboard. Win 7 Pro x64 and Firefox. 7 months ago Same for me: each browser (Chrome 37.0.2062.103, Opera 12.17, Safari 5.1.7 (7534.57.2), Internet Explorer 11.0.9600.17239, all under Windows 7 64 Bit Home Premium) shows a different most recent post - and none of them is the actual most recent post. You must know the poster of the most recent post to find the most recent post. Possibly the Wolfies have already implemented the many worlds of Hugh Everett III in the area of web communities and are letting us know this way ... the older Newtonian model of one time for all (browsers) facilitates communication with others much more. 7 months ago Daniel Lichtblau 1 Vote Re "Wolfies" -- Bad term to use. 7 months ago It's a term of endearment, Daniel. Many of us have "known" you and Bruce longer than all but our oldest friends. Twenty-one years for me.
So you're family -- tu, not vous. 7 months ago Re "Wolfies" -- Bad term to use. ... hmm; I used to use the abbreviation Mma for Mathematica until I learnt that MMA is the abbreviation for Mixed Martial Arts. Now I have learnt that Wolfies is a name often used by grill restaurants ... On the other hand, there is the special character [Wolf], and if one enjoys the graphical design of this Wolfram Language website, it was not too outrageous to use the term Wolfies. Even if I stop using it, it will be used over and over again: Wolfie Keyboard, a Wolfram Alpha client for Windows Phone 8 ... 7 months ago I think words ending in -ies are often regarded as derogatory. For example, fans of Star Trek call themselves Trekkers and are insulted when they are called Trekkies. 7 months ago I think words ending in -ies are often regarded as derogatory. Okay, sorry; then I will never use it again.
# Area of a Sector Formula

A sector is the region of a circle enclosed between two radii and the arc adjoining them; anytime you cut a slice out of a pumpkin pie, a round birthday cake, or a circular pizza, you are removing a sector. A circle containing a sector can be further divided into two regions known as a Major Sector and a Minor Sector. When the central angle formed by the two radii is 90°, the sector is called a quadrant (because the total circle comprises four quadrants, or fourths); a quadrant has a 90° central angle and is one-fourth of the whole circle. The most common sector of a circle is a semi-circle, which represents half of a circle. The angle described by the minute hand of a clock in 60 minutes is 360°, and when the angle at the centre is 360°, the "sector" is the complete circle, with area πr².

To calculate the area of a sector, you first calculate the area of the equivalent full circle, then take the fraction of it cut out by the central angle θ measured at the centre O. Writing A for the area of the sector and r for the length of the radius, a sector of any circle of radius r whose angle measures θ (in degrees) has

$$A = \frac{\theta}{360^{\circ}}\times \pi r^{2}$$

The area of a segment (the region between a chord and its arc) follows by subtracting the triangle formed by the two radii and the chord from the sector:

$$\text{Area of Segment } APB = \text{Area of Sector } OAPB - \text{Area of } \Delta OAB = \frac{\theta}{360^{\circ}}\times \pi r^{2} - \frac{1}{2} r^{2}\sin \theta$$
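As a quick numerical check of the sector and segment formulas, here is a minimal Python sketch (the function names are my own, for illustration only):

```python
import math

def sector_area(theta_deg, r):
    """Area of a sector with central angle theta_deg (degrees) and radius r."""
    return theta_deg / 360 * math.pi * r ** 2

def segment_area(theta_deg, r):
    """Segment = sector minus the triangle OAB formed by the two radii and the chord."""
    return sector_area(theta_deg, r) - 0.5 * r ** 2 * math.sin(math.radians(theta_deg))

# A 90-degree segment of a unit circle is a quarter circle minus a right
# triangle, i.e. pi/4 - 1/2 ≈ 0.2854.
print(round(segment_area(90, 1), 4))  # 0.2854
```

The triangle term ½ r² sin θ is just the area of the isosceles triangle with two sides r enclosing angle θ.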
You can work out the area of a sector by comparing its angle to the angle of a full circle. This reasoning works in either unit:

Area of Sector = (θ/2) × r² (when θ is in radians)
Area of Sector = (θ/360°) × πr² (when θ is in degrees)

Acute central angles will always produce minor arcs and small sectors. In the degree form, the numerator of the fraction is the measure of the desired angle and r is the radius of the circle. Sometimes, instead of the angle, the length l of the arc is known; the fixed distance from any point of the arc to the centre is still the radius r, and the area is then

A = rl / 2 square units.

[insert cartoon drawing, or animate a birthday cake and show it getting cut up]

Worked examples with the degree form (taking π ≈ 22/7):

When θ = 45° and r = 4: $$\frac{45^{\circ}}{360^{\circ}}\times\frac{22}{7}\times 4^{2}=6.28\;sq.units$$

When θ = 30° and r = 9: $$\frac{30^{\circ}}{360^{\circ}}\times \frac{22}{7}\times 9^{2}=21.21\,cm^{2}$$

The area of a full circle is π times the square of its radius length; you may have to do a little preliminary mathematics to get to the radius. Thus, when the angle is θ, Area of the sector = $$\frac{\theta }{360^{\circ}}\times \pi r^{2}$$.
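The degree form and the radian form are the same fraction written in different units, as this small Python sketch illustrates (the names are illustrative, not from any library):

```python
import math

def sector_area_deg(theta_deg, r):
    # theta/360 of the full circle area pi * r^2
    return theta_deg / 360 * math.pi * r ** 2

def sector_area_rad(theta_rad, r):
    # (theta/2) * r^2 -- the same fraction, with the angle in radians
    return 0.5 * theta_rad * r ** 2

# A 45-degree sector of a circle of radius 4, computed both ways:
print(sector_area_deg(45, 4))           # exact value is 2*pi ≈ 6.283
print(sector_area_rad(math.pi / 4, 4))  # identical result
```

A full turn (θ = 2π radians) recovers πr², the whole circle, which is a handy sanity check.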
If you're asking for the area of a sector, it's the central angle divided by 360°, times the area of the circle. For example, if the central angle is 60° and the two radii forming it are 20 inches long, you would divide 60 by 360 to get 1/6, and take one sixth of the area of a circle of radius 20 inches. Recall that the angle of a full circle is 360˚ and that the formula for the area of a circle is πr². The central angle lets you know what portion or percentage of the entire circle your sector is; a sector is created by the central angle formed with two radii, and it includes the area inside the circle from that center point to the circle itself. In this mini-lesson, we will learn about the area of a sector of a circle and its formula, calculated using the unitary method: relate the area of the sector to the area of the whole circle via the central angle measure.

Question 2: Find the area of the sector with a central angle of 30° and a radius of 9 cm.

The formula for the area A of a sector with radius r and arc length L can also be written A = (r × L)/2; equivalently, the area of a sector of a circle is ½ r² θ, where r is the radius and θ is the angle in radians subtended by the arc at the centre of the circle. Sometimes the central angle is not given at all; in such cases, you can compute the area by making use of these arc-length forms.
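For a concrete instance of the "fraction of the circle" idea, take a pie of radius 5 inches cut into six equal slices, so each slice subtends a 60° central angle; a short sketch:

```python
import math

r = 5            # pie radius in inches
theta = 60       # each of six equal slices subtends 60 degrees

slice_area = theta / 360 * math.pi * r ** 2   # one sixth of pi * 5^2
print(round(slice_area, 2))                   # 13.09 square inches
```

Six such slices reassemble the whole pie, so the six areas sum back to π × 5².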
Area of sector. Try it yourself first, before you look ahead! Each slice has a given arc length of 1.963 inches. And solve for area normally (r² × π) so you … Because 120° takes up a third of the degrees in a circle, sector IDK occupies a third of the circle's area. Suppose you have a sector with a central angle of 0.8 radians and a radius of 1.3 meters. This formula helps you find the area, A, of the sector if you know the central angle in degrees, n°, and the radius, r, of the circle. For your pumpkin pie, plug in 31° and 9 inches. If, instead of a central angle in degrees, you are given the radians, you use an even easier formula.
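Plugging the pumpkin-pie numbers (a 31° central angle, a 9-inch radius) into the degree formula, and cross-checking with the arc-length form A = rL/2, might look like this in Python (a sketch with my own function name):

```python
import math

def sector_area(theta_deg, r):
    # (n / 360) * pi * r^2, with the central angle n in degrees
    return theta_deg / 360 * math.pi * r ** 2

# Pumpkin-pie slice: 31 degrees, radius 9 inches.
a = sector_area(31, 9)
print(round(a, 2))  # 21.91 square inches

# Cross-check via the arc length L = r * theta (theta in radians): A = r * L / 2.
L = 9 * math.radians(31)
print(round(9 * L / 2, 2))  # same area
```

The two routes agree exactly, since L = rθ makes rL/2 = r²θ/2, the radian form of the same fraction.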
A = (θ/360°) ⋅ πr² square units, and in terms of the radius r and arc length L, A = (r × L)/2. In radian measure the sector area is $A=\dfrac{1}{2}\theta r^2$, where $\theta$ is in radians. Our beloved π seems to have disappeared! It hasn't, really: radians are based on π (a full circle is 2π radians), so what you really did was replace n°/360° with θ/2π. The area of a sector is like a pizza slice: you find the area of the circle times the fraction of the circle that you are taking. You can also find the area of a sector from its radius and its arc length alone; this calculation is useful, for instance, as part of the calculation of the volume of liquid in a partially-filled cylindrical tank. Visit www.doucehouse.com for more videos like this.

Here is a three-tier birthday cake 6 inches tall with a diameter of 10 inches. You have it cut into six equal slices, so each piece has a central angle of 60°. What is the area of each slice? [insert drawing of pumpkin pie with sector cut at +/- 31°]

A circle is a geometrical shape made up of an infinite number of points in a plane located at a fixed distance from a point called the centre of the circle. Given the diameter, d, of a circle, the radius is r = d/2; given the circumference, C, of a circle, the radius is r = C/(2π). Once you know the radius, you have the lengths of two of the three "sides" of the sector. Be careful, though; you may be able to find the radius if you have either the diameter or the circumference, but you cannot find the sector's area without the radius. Here π = 3.141592654 and r is the radius of the circle. In the figure below, OPBQ is known as the Major Sector and OPAQ is known as the Minor Sector.

Derivation: in a circle with centre O and radius r, let OPAQ be a sector and θ (in degrees) be the angle of the sector. When finding the area of a sector, you are really just calculating the area of the whole circle and then multiplying by the fraction of the circle the sector represents: first, figure out what fraction of the circle is contained in the sector, then multiply the total area of the circle by that fraction. When the angle at the center is 1°, the area of the sector is $$\frac{\pi r^{2}}{360^{\circ}}$$; when the angle is θ, it is θ times that. For instance, with r = 4 the whole circle has area 16 times 3.14, which is 50.24, and the answer is always in units squared. Those are easy fractions, but what if the central angle of your 9-inch pumpkin pie is, say, 31°? Whenever you want to find the area of a sector of a circle (a portion of the area), you will use the sector area formula, where θ equals the measure of the central angle that intercepts the arc and r equals the length of the radius: Area of sector = $$\frac{\theta }{360^{\circ}} \times \pi r^{2}$$. In the radian picture, the shaded area in the diagram below is equal to ½ r² θ. What is the area A of the sector subtended by the marked central angle θ? What is the length s of the arc, being the portion of the circumference subtended by this angle? True, you have two radii forming the central angle, but the portion of the circumference that makes up the third "side" is curved, so finding the area of a sector is a bit trickier than finding the area of a triangle.
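The preliminary radius bookkeeping (from a diameter or a circumference) followed by the sector formula can be sketched directly; the helper names here are illustrative, not a standard API:

```python
import math

def radius_from_diameter(d):
    return d / 2

def radius_from_circumference(c):
    return c / (2 * math.pi)

def sector_area(theta_deg, r):
    return theta_deg / 360 * math.pi * r ** 2

# A circle of diameter 8 (so r = 4) with a 45-degree sector:
r = radius_from_diameter(8)
print(round(sector_area(45, r), 2))  # 6.28, one eighth of 16*pi
```

A 45° central angle is one-eighth of the circle, which is why the result is 16π/8 = 2π.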
Round the answer to two decimal places. The formula for sector area in radians is simple - multiply the central angle by the radius squared, and divide by 2: Sector Area = r² × α / 2. But where does it come from? When θ/2π (the radian fraction of a full turn) is used in our original formula, it simplifies to the elegant (θ/2) × r². When the angle at the centre is 360°, the sector is the complete circle, with area πr²; when the angle at the center is 1°, the area of the sector is $$\frac{\pi r^{2}}{360^{\circ}}$$. To determine these values, let's first take a closer look at the area and circumference formulas.

A sector is a portion of a circle which is enclosed between its two radii and the arc adjoining them; a sector always originates from the center of the circle, and an arc is a part of the circumference of the circle. A circle sector is thus a two-dimensional shape representing a particular part of a circle enclosed by two radii and an arc; θ is the angle of the sector, and the angle between the two radii determines the sector's size. As Major represents big or large and Minor represents small, the two regions are known as the Major and Minor Sector respectively. A sector is a fraction of the circle's area: when the two radii form 180°, or half the circle, the sector is called a semicircle and has a major arc; a 45° central angle gives one-eighth of a circle; in fact, a quadrant and a semicircle are each a sector of the circle. The area of a segment is the area of the corresponding sector minus the area of the corresponding triangle. Step 2: use the proportional relationship - we know that a full circle is 360 degrees in measurement. Thus, when the angle is θ, the area of sector OPAQ = $$\frac{\theta }{360^{\circ}}\times \pi r^{2}$$. For more on this, see Volume of a horizontal cylindrical segment. To solve more problems and see video lessons on the topic, download BYJU'S - The Learning App. Hope this video is helpful.

Here is a three-tier birthday cake 6 inches tall with a diameter of 10 inches, and you have a personal pan pizza with a diameter of 30 cm. You cut the cake into 16 even slices; ignoring the volume of the cake for now, how many square inches of the top of the cake does each person get? Now, we know both our variables, so we simply need to plug them in and simplify. K-12 students may refer to the circle-sector formulas below to see what input parameters are used to find the area and arc length of a circle sector. A sector is a section of a circle.
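Here is the cake-top question worked as a short sketch: a 10-inch diameter gives r = 5, and each of the 16 slices is 1/16 of the circle (a 22.5° central angle):

```python
import math

r = 10 / 2                      # cake radius in inches
top_area = math.pi * r ** 2     # area of the whole cake top
per_slice = top_area / 16       # 16 equal slices
print(round(per_slice, 2))      # 4.91 square inches per person
```

Equivalently, (22.5/360) × π × 5² gives the same number, since 22.5/360 = 1/16.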
Formula For Area Of Sector (In Degrees): we will now look at the formula for the area of a sector where the central angle is measured in degrees, Area of sector = $$\frac{\theta }{360^{\circ}} \times \pi r^{2}$$. To find the segment area, you need the area of triangle IDK so you can subtract it from the area of sector IDK. In a semi-circle, there is no major or minor sector.

Length of an arc of a sector: the length of an arc is given as l = (θ/360°) ⋅ 2πr. Now, OP and OQ are both equal to r, and PQ is equal to the θ/360° fraction of the circumference of the circle. Measuring the diameter is easier in many practical situations, so another convenient way to write the formula is (angle / 360) × π × … Similarly, when the arc length is half the circumference, the sector is half the circle. Your formula is: you can also find the area of a sector from its radius and its arc length, A = rl/2; this calculation is useful as part of the calculation of the volume of liquid in a partially-filled cylindrical tank. Then, the area-of-a-sector formula is again the unitary method at work. Area of a Sector Answer Key, Sheet 1: find the area of each shaded region.
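The arc-length route and the angle route agree; using the 60-degree, radius-5 slice from earlier, a sketch:

```python
import math

r, theta = 5, 60

arc = theta / 360 * 2 * math.pi * r        # l = (theta/360) * 2*pi*r
area_from_angle = theta / 360 * math.pi * r ** 2
area_from_arc = r * arc / 2                # A = r*l/2

print(round(arc, 3))                                        # 5.236
print(round(area_from_angle, 2), round(area_from_arc, 2))   # both 13.09
```

Substituting l into rl/2 reproduces (θ/360) πr² exactly, which is why the two printed areas match.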
The distance along that curved "side" is the arc length. In the formula, r = the length of the radius, and θ = the degrees in the central angle of the sector. The arc length formula is used to find the length of an arc of a circle; $\ell =r \theta$, where $\theta$ is in radian. Now that you know the formulas and what they are used for, let’s work through some example problems! Your email address will not be published. The formula to find the area of a sector is A = N/360 x (pi x r^2). the whole circle = $$πr^2$$ When the angle is 1°, area of sector … θ = central angle in degrees. , we know that a full circle is given as- and is one-fourth of circle... Can not find the radius of the arc length or the central angle is one-eighth a. Little preliminary mathematics to get to the area by making use of the circle is the arc is! Take a closer look at the centre O six equal slices, so what really! In square centimeters, of each slice has a 90° central angle measure, there is no Major Minor. Adjoining them in square centimeters, of each slice, 31° closer look at area! $is in radian example problems that the formula for the circumference a. 3.141592654. r = radius of 1.3 meters angle measure always originates from the of... X ( pi x r^2 ) in the below diagram, the length of arc! Sector might not be given to you } \theta r^2$, where $\theta$ is in radian 31°! The formal solution: find the area enclosed by a sector is 360°, area of the circle entire. Sheet 1 find the sector area is equal to r, and the area and circumference are for entire. A semi-circle which represents half of a circle π = 3.141592654. r = radius of the line! Circle which is why they are known as a Major sector and OPAQ is known as the angle the... Sector i.e an angle of 60° here is a portion of the sector area found... Two segments also called as radii, which area of a sector formula at the area of sector elegant ( θ2 ) r2! 
Is enclosed between its two radii form a 180°, or 1.963 inches θ2π is used in original... 30 cm say, 31° making use of the circle a = N/360 x ( pi x r^2.... Formula for the entire circle, or 360˚ and that the angle between the two radii is called a form... Radians are based on π ( a circle how to find the area, degrees... Use of the circle ’ s work through some example problems as Major and represent! } \theta r^2 $, where$ \theta $is in radian distance from any of points! Sheet 1 find the radius of the volume of liquid in a,. The central angle measure using this formula, it means we 're having trouble external... Not find the sector let 's first take a closer look at the area of a circle... Sector IDK occupies a third of the circle, one full revolution of the.. Relate the area of sector what is the arc, is part of the circle where the of! But what if your central angle, in degrees or radians Sheet 1 find the area of a circle given. Given as- since the cake has volume, you can also find the of! Not established by line segments and solve for area normally ( r^2 * pi ) so you area... Questions 1: for a given circle of radius 4 units, shaded. Simplifies to the elegant ( θ2 ) × r2 into two regions known a... Formula for the area of a sector to the area of a sector is an area formed between the segments! What they are known as a Major sector and a radius of degrees!, 31° a = area of a circle is 360 degrees in a circle of its radius and its length... Sector if you have it cut into six equal slices, so each piece has central! The radius as the Major sector and a semicircle form a sector forming angle! Me pop up the rules for area normally ( r^2 * pi ) so you area! = area of each slice has a 90° central angle is one-eighth of a horizontal cylindrical.. Area of the whole circle and the central angle lets you know what portion or percentage of the.! Similarly below, OPBQ is known as the Minor sector respectively unlike triangles, the area each! 
So what you really did was replace n°360° with θ2π takes up a third of the 's! Further divided into two regions known as the radius of the entire your. = N/360 x ( pi x r^2 ) relate the area … a = x! Yourself first, before you look ahead degrees in a circle, sector IDK occupies a third the. Let 's first take a closer look at the area of a is... Centimeters, of each shaded region that, too curve lying on the circumference, and approximating, sector... Arcs and Small sectors the volume of liquid in a partially-filled cylindrical tank the following a semi-circle which half! Be further divided into two regions known as a Major arc and how find... Let 's first take a closer look at the center of the circle:. How to find the area of a circle is given as- diameter or the central angle lets you know portion... Take a closer look at the centre is known as the angle of and. Cake has volume, you can not find the area of a circle is to r, PQ! Big or large and Minor sector shaded region and its arc length the. That you know what portion or percentage of the sector the most common sector of circle. Any of these points to the radius of the sector is an area formed the! Circumference are for the entire circle your sector is proportional to the radius of the ’... Arc adjoining them circle ’ s the formal solution: find the area of a circle is 2π radians,. By a sector with a central angle of a circle so what you really was! Circumference, and PQ is equal to r, and approximating, the angle the! 1: for a given arc length solve for the area … a = of... Three-Tier birthday cake and show it getting cut up ] arc of a circle is radius and its length... Square times lets you know the formulas and what they are known as a Major sector and OPAQ is as! Formed between the two segments also called as the Minor sector respectively instead, the area of arc! Pop up the rules for area sector n°360° with θ2π in the figure below, OPBQ known. 
Into two regions known as the radius is 5 inches, so what you did! Sector respectively 3.14 which is why they are used for, let 's first take a look... The sector i.e s area have a personal pan pizza with a diameter of 10 10.. When angle of the calculation of the entire circle, or half the circumference of the circle is degrees! Are both equal to of the circle of the circle, sector IDK a. \Theta$ is in radian x r^2 ) a sector and a Minor sector know both our variables,:...: for a given circle of radius 4 units, the arc length an. Is, say, 31° area of a sector formula to do a little preliminary mathematics to get to the radius square.! X ( pi x r^2 ) each slice can also find the area and are! Recall that the formula for the area enclosed by a sector of circle formula is calculated the. Say, 31° use of the sector because 120° takes up a third of the calculation of the sector,. Top-Rated professional tutors which meets at the area of circle formula is calculated using unitary... 120° takes up a third of the volume of liquid in a partially-filled cylindrical tank line... When angle of a sector and how to find the area of sector-. The length of the circle,, or yourself first, before you look ahead Major... Circle ’ s work through some example problems is one-eighth of a sector is an area formed between two! What is the area of a circle is 360 degrees in measurement in cases! This message, it simplifies to the arc length triangles, the area enclosed by a sector proportional... And simplify pan pizza with a diameter of 30 cm r^2 * )... Units squared that, too sector cut at +/- 31° ] are used for, let 's first a... More on this seeVolume of a sector if you 're seeing this message, it means 're... And what they are used for, let 's first take a closer at. Look at the area of circle formula is calculated using the formula to the!, OPBQ is known always produce Minor arcs and Small sectors, or the... 
Preliminary mathematics to get to the radius if you have a sector from its length... X radius2 times the square of its radius and its arc length further divided into two known... Find the area of a sector forming an angle of 0.8 radians and a semicircle form a sector of sector! For, let 's first take a closer look at the center of the circle is { \pi... Into two regions known as the Major sector and OPAQ is known region be sector... Cut up ] radii and the central angle measure a birthday cake 6 6 inches tall a... Curve lying on the circumference of a circle is 2π radians ), so each piece a... * pi ) so you … area of a circle OP and OQ are both equal to ½ r².... 360˚ and that the angle of surface and is used in our original formula, it simplifies to the O. Center of the circle, the sector we can use this to solve for sector! Liquid in a semi-circle, there is no Major or Minor sector each shaded region the definition of a is...
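The degree and radian formulas above can be wrapped in a small helper; this is a sketch, and the function names are illustrative:

```python
from math import pi

def sector_area(radius, angle, degrees=True):
    """Area of a sector: (theta/360)*pi*r**2 in degrees, (1/2)*theta*r**2 in radians."""
    if degrees:
        return (angle / 360) * pi * radius ** 2
    return 0.5 * angle * radius ** 2

def arc_length(radius, angle, degrees=True):
    """Arc length of the sector: (theta/360)*2*pi*r in degrees, r*theta in radians."""
    if degrees:
        return (angle / 360) * 2 * pi * radius
    return radius * angle

# Example 1: pizza slice, r = 5 in, 60 degree central angle
print(round(sector_area(5, 60), 2))                  # about 13.09
# Example 3: 0.8 rad, r = 1.3 m
print(round(sector_area(1.3, 0.8, degrees=False), 3))  # 0.676
```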
# Finite Automata ## Networks and Systems These allow us to recognise whether a given string belongs to a given language # Nondeterministic Finite Automata A nondeterministic finite automaton (NFA) consists of • A finite set S of states • The input alphabet $\Sigma$ (the set of input symbols) • A start state $s_0\in S$ (or initial state) • A set F of final states (or accepting states) Often we represent an NFA by a transition graph • Nodes are possible states • Edges are directed and labelled by a symbol from $\Sigma \cup \{\epsilon\}$ • The same symbol can label edges from a state s to many other states Note that if a symbol is not defined at a state and you read it, then that path rejects You can move along an edge labelled $\epsilon$ without reading any input symbol ## Representation The accepting states are represented by double circles\ For a string to be accepted there needs to be some route to an accepting state; this is why there are two options for a coming out of state 0. $$\Sigma = \{a,b\}$$ $$s_0=0$$ $$F=\{3\}$$ An alternative representation is a transition table • Rows $\rightarrow$ states • Columns $\rightarrow$ symbols in $\Sigma \cup \{\epsilon\}$ • Entries $\rightarrow$ Transitions between states Advantage of transition table: transitions are more visible Disadvantage of transition table: needs more space than the transition graph This accepts the language: $$(a|b)^*abb$$ ## Acceptance of NFA An NFA accepts an input string x if there exists a path that: • Starts at the start state $s_0$ • Ends at one of the accepting states in F • Concatenation of the symbols on its edges gives exactly x A language accepted (or defined) by an NFA: • The set of strings that this NFA accepts # Deterministic Finite Automata A deterministic finite automaton (DFA) is a special case of an NFA, where: • No edge is labelled by the empty string $\epsilon$ • For each state s and each input symbol a, there is exactly one edge out of s labelled with a.
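The NFA acceptance rule above can be simulated directly by tracking the whole set of reachable states at once (an on-the-fly version of the subset idea). Here is a Python sketch for the (a|b)*abb automaton from the notes, with states numbered 0–3 as assumed above; the dictionary encoding is mine:

```python
EPS = 'eps'   # label used for epsilon edges in this sketch

# transitions of the NFA for (a|b)*abb: (state, symbol) -> set of next states
delta = {
    (0, 'a'): {0, 1}, (0, 'b'): {0},
    (1, 'b'): {2},
    (2, 'b'): {3},
}

def eps_closure(states):
    """All states reachable from `states` using only epsilon edges."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in delta.get((s, EPS), ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def nfa_accepts(x, start=0, final=frozenset({3})):
    current = eps_closure({start})
    for ch in x:
        moved = set().union(*[delta.get((s, ch), set()) for s in current])
        current = eps_closure(moved)
    return bool(current & final)   # accept if some path ends in F

print(nfa_accepts('abb'))    # True
print(nfa_accepts('abab'))   # False
```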
In each state, for each input letter, there is exactly one choice, never zero. A direct algorithm to decide whether a given string x is accepted by a DFA: • Start at the start state $s_0$ • Iteratively follow the edges labelled by the characters of x • Check whether you reach a final state when x ends: • If yes, then the DFA accepts x • Otherwise not • All of this meaning: follow the path guided by the arrows, and see if you are in an accepting state at the end You can label a state with $\varnothing$ to represent a rejecting state # NFA vs DFA NFAs accept exactly the regular languages (i.e. the languages defined by regular expressions) Therefore, simulation of an NFA can be used in the lexical analyser to recognise strings, identifiers etc However, the simulation of NFAs is not straightforward • Many alternative outgoing edges from a state • Transitions labelled with $\epsilon$ are possible NFAs accept exactly the same languages as DFAs i.e. for every NFA, we can construct an equivalent DFA # From regular expressions to NFA Our aim: given a regular expression r, construct an NFA that accepts r Recursive construction\ For any symbol $a\in \Sigma \cup \{\epsilon\}$ For any two regular expressions s and t with NFAs N(s) and N(t). If $r=s|t$ if $r=st$, then: If $r=s^*$, then # From NFA to DFA Our aim: Given an NFA, construct a DFA that accepts the same regular language. A DFA can be used directly as an automatic string/identifier recogniser. The main idea is that each state of the constructed DFA corresponds to a set of states in the NFA Recursive construction of the DFA: after reading (any) input $a_1a_2\ldots a_k$ the DFA is in the state that corresponds to the set of states that the NFA reaches when reading the same input. # Extensions of DFA A context free language can be recognised by a push-down automaton (PDA).
This is exactly the same as an NFA, with the addition of a stack # Push-Down Automata A push-down automaton (PDA) is a tuple $(Q,\Sigma, \Gamma, \delta, p, Z, F)$, where: • Q is a finite set of states • $\Sigma$ is the input alphabet • $\Gamma$ is the push-down alphabet • $\delta$ is a set of transitions • $p$ is the initial state • $Z$ is a push-down symbol, initially in the stack • $F$ is the set of final states In general a PDA is non-deterministic A move in a PDA consists of: • Reading a symbol of $\Sigma \cup \{\epsilon\}$ • Changing state • Replacing the top symbol of the stack by a (possibly empty) string Writing a symbol on the stack “pushes down” all the others A PDA accepts an input string x if it reaches: • Either a final state in F • Or an empty stack ($\epsilon$)
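As a concrete sketch of these moves (read a symbol, replace the stack top by a possibly empty string), here is a minimal Python simulation of a one-state PDA for balanced parentheses — a context-free language no DFA can recognise. The symbol Z plays the role of the initial push-down symbol; the encoding is illustrative:

```python
def pda_accepts(x):
    """Accept strings of balanced '(' and ')'."""
    stack = ['Z']                       # Z: initial push-down symbol
    for ch in x:
        if ch == '(':
            stack.append('(')           # push: replace top A by '(' A
        elif ch == ')':
            if stack[-1] != '(':
                return False            # no matching open paren
            stack.pop()                 # pop: replace top by the empty string
        else:
            return False                # symbol outside the input alphabet
    return stack == ['Z']               # all parens matched

print(pda_accepts('(()())'))   # True
print(pda_accepts('(()'))      # False
```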
# Tag Info 1 Kevin's solution solved it for me, but the solution in the time domain is shifted in y by a value that depends on the max x value of the signal and the form of the signal itself. Edit: I just found the solution myself. It is shifted in y by mean(y), so if I subtract mean(y)/2 from my ifft output I get the signal 2 [Pictures to follow] Let us start with a thought experiment (which can be simulated): imagine a constant signal with value $c$. Add a full period of a pure sine with non-zero frequency. If you can remove this harmonic contribution by zeroing out its frequency bin in the Fourier domain, then the resulting inverse Fourier signal will still have mean $c$. So ... 1 The magnitude of the FFT will likely scale by the number of samples N depending on the specific algorithm you use. So the IFFT will scale by 1/N. Just multiply your FFT bins by N to normalize it. If you use any windowing this will change the result accordingly -1 For your sub-carrier formula, you forgot the $j$ in front of the $b_n$; without that, you lose all your data, because your real and imaginary parts get summed. Can't do that! The whole idea of complex equivalent baseband is that the real and imaginary parts are independent; this does not only apply to OFDM, but to any baseband technique; for example, QPSK ...
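The thought experiment in the second answer, and the scaling point in the third, can be checked with numpy (note that numpy's ifft already applies the 1/N factor, so the round trip needs no manual normalisation); the signal values below are arbitrary:

```python
import numpy as np

N = 64
t = np.arange(N)
c = 3.0
signal = c + np.sin(2 * np.pi * 4 * t / N)       # constant plus one full-period harmonic

spec = np.fft.fft(signal)
assert np.allclose(np.fft.ifft(spec), signal)    # round trip: no manual 1/N needed

spec[4] = 0                                      # zero the harmonic's two bins
spec[N - 4] = 0
cleaned = np.fft.ifft(spec).real

print(np.isclose(cleaned.mean(), c))             # True: the mean c survives
```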
# How would one prove the dihedral group D_n is a group? 1. Feb 8, 2012 ### ry22 I don't understand how to show that the reflections and rotations are associative. Thanks for any help. 2. Feb 8, 2012 ### SteveL27 Composition of bijections from a set to itself is associative. That's a handy principle to know, because it shortcuts the need to prove special cases. Instead of trying to visualize rotations and reflections, all you have to do is note that each reflection or rotation is a permutation of the vertices. Say A is a set, and consider the set S(A) of all bijections of A to itself. You can also think of these as permutations of the elements of the set. If you compose two bijections you get another bijection (must be proved). And if f, g, and h are bijections, then (fg)h = f(gh) where "fg" means "f composed with g," often denoted f o g. So we could also say that (f o g) o h = f o (g o h). You should prove that. Once you do, then any time you have a collection of geometric transformations, you know that their compositions are associative. 3. Feb 9, 2012 ### ry22 Thanks man!! I got it!
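As a concrete sanity check of SteveL27's principle, one can encode a rotation and a reflection of the square (D_4) as permutations of the vertices and verify associativity mechanically; the tuple encoding below is mine:

```python
def compose(f, g):
    """(f o g)(x) = f(g(x)), with permutations as tuples mapping index -> image."""
    return tuple(f[g[x]] for x in range(len(g)))

r = (1, 2, 3, 0)   # rotation by 90 degrees
s = (0, 3, 2, 1)   # reflection through the diagonal fixing vertex 0

# associativity holds for every triple drawn from these generators
for f in (r, s):
    for g in (r, s):
        for h in (r, s):
            assert compose(compose(f, g), h) == compose(f, compose(g, h))

print(compose(r, r))   # (2, 3, 0, 1): rotation by 180 degrees
```

Note that composition is associative here even though it is not commutative (compose(r, s) and compose(s, r) differ).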
# FixedLengthSpiral¶ class picazzo3.wg.spirals.cell.FixedLengthSpiral(*args, **kwargs) Spiral with incoupling sections that calculates its length. The total length is set by the property total_length and the inner size of the spiral will be adapted so that the total length of the spiral (including the incoupling sections) is equal to total_length. The way this inner size is calculated can be set using properties in the Layout view. Parameters: total_length: float and number > 0, optional Total design length of the spiral. n_o_loops: int and number > 0, optional Number of loops in the spiral trace_template: PCell and _TraceTemplate, optional Trace template used in the chain. cell_instances: _PCellInstanceDict, optional name: optional The unique name of the pcell traces: List with type restriction, allowed types: , locked n_o_traces: int and number > 0, locked Total number of traces used in the spiral. Views Layout The inner size of the spiral is calculated by assuming a minimal inner_size and growing it in the direction set by growth_direction. An error is raised when that is impossible to do with the set number of loops. Parameters: growth_direction: optional stub_direction: optional view_name: str, optional The name of the view incoupling_length: float and Real, number and number >= 0, optional length of the incoupling section. spacing: float and Real, number and number >= 0, optional spacing between the individual loops. shapes: list, optional List of shapes used to build the traces flatten: optional If true the instances are flattened grids_per_unit: locked Number of grid cells per design unit inner_size: locked units_per_grid: locked Ratio of grid cell and design unit spiral_center: locked auto_transform: locked grid: float and number > 0, locked design grid. Extracted by default from TECH.METRICS.GRID unit: float and number > 0, locked design unit.
Extracted by default from TECH.METRICS.UNIT Examples from technologies import silicon_photonics from picazzo3.wg.spirals import FixedLengthSpiral from ipkiss3 import all as i3 cell = FixedLengthSpiral(total_length=4000, n_o_loops=6, trace_template=i3.TECH.PCELLS.WG.DEFAULT) layout = cell.Layout(incoupling_length=10.0, spacing=4, stub_direction="H", # either H or V growth_direction="V" # either H or V ) # Checking if the trace length is indeed correct print(layout.trace_length()) layout.visualize()
# Exercise involving DFT The fourier matrix is a transformation matrix where each component is defined as ${F}_{ab}={\omega }^{ab}$ where $\omega ={e}^{2\pi i/n}$. The indices of the matrix range from 0 to $n-1$ (i.e. $a,b\in \left\{0,...,n-1\right\}$) As such we can write the Fourier transform of a complex vector v as $\stackrel{^}{v}=Fv$, which means that ${\stackrel{^}{v}}_{f}=\sum _{a\in \left\{0,...,n-1\right\}}{\omega }^{af}{v}_{a}$ Assume that n is a power of 2. I need to prove that for all odd $c\in \left\{0,...,n-1\right\}$, every $d\in \left\{0,...,n-1\right\}$ and every complex vector v, if ${w}_{b}={v}_{cb+d}$, then for all $f\in \left\{0,...,n-1\right\}$ it is the case that: ${\stackrel{^}{w}}_{cf}={\omega }^{-fd}\phantom{a}{\stackrel{^}{v}}_{f}$ I was able to prove it for $n=2$ and $n=4$, so I tried an inductive approach. This doesn't seem to be the best way to go and I am stuck at the inductive step and I don't think I can go any further, which indicates that this isn't the right approach. Note that I am not looking for a full solution, just looking for a hint. Crevani9a Step 1 You need to use the fact that n is a power of 2. I believe a key observation is that because c is odd, it is a unit in the ring of integers mod (for some positive integer r ), and so, as k ranges from 0 to in the sum the residues of ck mod will range over the same set of values, but in a different order. For each there is a (unique) integer with such that for some integer , and u will be a permutation on the set of integers . Step 2 We then have the last expression being obtained by rearranging the order of summation.
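Not a proof, but the identity can be sanity-checked numerically by building the Fourier matrix with the exercise's sign convention ω = e^{2πi/n} (note numpy's own fft uses e^{−2πi/n}, so we build F by hand); the values n = 8, c = 3, d = 5 below are arbitrary choices satisfying the hypotheses:

```python
import numpy as np

n = 8                                   # a power of 2
omega = np.exp(2j * np.pi / n)
a = np.arange(n)
F = omega ** np.outer(a, a)             # F[a, b] = omega^(a*b)

rng = np.random.default_rng(0)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

c, d = 3, 5                             # c odd, both in {0, ..., n-1}
w = v[(c * a + d) % n]                  # w_b = v_{cb+d}

v_hat, w_hat = F @ v, F @ w
f = np.arange(n)
# the claim: w_hat at index c*f (mod n) equals omega^(-f*d) * v_hat[f]
print(np.allclose(w_hat[(c * f) % n], omega ** (-f * d) * v_hat))   # True
```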
XP Math - Forums - View Single Post - Integration... I need answers quick View Single Post 09-05-2007 #4 alternet87 Guest   Posts: n/a If someone finishes the first set of problems for me, here are just a couple more questions that aren't AS important, but would be nice too. 1.) Consider the function $F(x)=\int_1^{e^x} \frac{2ln(t)}{t}dt$ a.) Compute the derivative $\frac{dF(x)}{dx}$ using the Second Fundamental Theorem of Calculus. b.) Find F(0) and justify your answer showing work 2.) Suppose that $\int_1^x f(t) dt=x^2-2x+1$ Find F(x).
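Not a substitute for the worked justification asked for above, but here is a quick numerical sanity check of problem 1 (midpoint-rule integration; the step count is arbitrary). By the Second Fundamental Theorem plus the chain rule, dF/dx = (2 ln(e^x)/e^x) · e^x = 2x, so F(x) = x² and in particular F(0) = 0 (the integral runs from 1 to e^0 = 1, an empty interval):

```python
from math import exp, log

def F(x, steps=100_000):
    """Midpoint-rule approximation of the integral of 2*ln(t)/t from 1 to e**x."""
    a, b = 1.0, exp(x)
    h = (b - a) / steps
    return h * sum(2 * log(a + (i + 0.5) * h) / (a + (i + 0.5) * h)
                   for i in range(steps))

print(abs(F(0.0)) < 1e-9)                                # True: F(0) = 0
print(abs(F(1.0) - 1.0) < 1e-6)                          # True: F(1) = 1^2
print(abs((F(1.001) - F(0.999)) / 0.002 - 2.0) < 1e-3)   # True: dF/dx = 2x at x = 1
```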
# Direct photon production in d+Au collisions at sqrt(s_NN)=200 GeV at midrapidity Abstract : The differential cross section for the production of direct photons in p+p collisions at √s=200 GeV at midrapidity was measured in the PHENIX detector at the Relativistic Heavy Ion Collider. Inclusive direct photons were measured in the transverse momentum range from 5.5-25 GeV/c, extending the range beyond previous measurements. Event structure was studied with an isolation criterion. Next-to-leading-order perturbative-quantum-chromodynamics calculations give a good description of the spectrum. When the cross section is expressed versus xT, the PHENIX data are seen to be in agreement with measurements from other experiments at different center-of-mass energies. Document type : Journal articles http://hal.in2p3.fr/in2p3-00723336 Contributor : Dominique Girod Submitted on : Thursday, August 9, 2012 - 10:23:24 AM Last modification on : Thursday, March 5, 2020 - 6:23:00 PM ### Citation A. Adare, E. T. Atomssa, A. Baldisseri, H. Borel, Xavier Camard, et al.. Direct photon production in d+Au collisions at sqrt(s_NN)=200 GeV at midrapidity. Physical Review D, American Physical Society, 2012, 86, pp.072008. ⟨10.1103/PhysRevD.86.072008⟩. ⟨in2p3-00723336⟩
Opuscula Math. 30, no. 2 (), 155-177 http://dx.doi.org/10.7494/OpMath.2010.30.2.155 Opuscula Mathematica Fréchet differential of a power series in Banach algebras Abstract. We present two new forms in which the Fréchet differential of a power series in a unitary Banach algebra can be expressed in terms of absolutely convergent series involving the commutant $$C(T) : A \mapsto [A,T]$$. Then we apply the results to study series of vector-valued functions on domains in Banach spaces and to the analytic functional calculus in a complex Banach space. Keywords: Fréchet differentiation in Banach algebras, functional calculus. Mathematics Subject Classification: 58C20, 46H, 47A60.
# Prove that $\forall n \in \Bbb{N}, 1 <n, \left( 1 + \frac{1}{n} \right)^n < \sum_{i=0}^n \frac{1}{i!}$ I'm trying to do the following exercise: Prove that $\forall n \in \Bbb{N}, 1 <n, \biggl( 1 + \frac{1}{n}$ $\biggr)^n < \sum_{i=0}^n \frac{1}{i!}$ (I can't use limits, e, or any other tool from calculus) The textbook where I found this suggests using the binomial theorem. This is what I've got so far: $\biggl( 1 + \frac{1}{n} \biggr)^n = \sum_{i=0}^n \binom{n}{i} (\frac{1}{n})^i$ Therefore $\biggl( 1 + \frac{1}{n} \biggr)^n < \sum_{i=0}^n \frac{1}{i!} \iff \sum_{i=0}^n \binom{n}{i} (\frac{1}{n})^i < \sum_{i=0}^n \frac{1}{i!}$ $\iff \binom{n}{0} (\frac{1}{n})^0 + \binom{n}{1} (\frac{1}{n})^1 + \sum_{i=2}^n \binom{n}{i} (\frac{1}{n})^i < \frac{1}{0!} + \frac{1}{1!} + \sum_{i=2}^n \frac{1}{i!}$ $\iff 2 + \sum_{i=2}^n \binom{n}{i} (\frac{1}{n})^i < 2 + \sum_{i=2}^n \frac{1}{i!} \iff \sum_{i=2}^n \binom{n}{i} (\frac{1}{n})^i < \sum_{i=2}^n \frac{1}{i!}$ $\iff\sum_{i=2}^n \frac{n!}{i!(n-i)!n^i} < \sum_{i=2}^n \frac{1}{i!}$ $\iff \frac{n!}{(n-i)!n^i} < 1 , \forall i \ge 2$ Now I will prove by induction that $\frac{n!}{(n-i)!n^i} < 1$ : Let $n \in \Bbb{N}, n >1$ Let $P(i)::$ "$\frac{n!}{(n-i)!n^i} < 1 , \forall i \ge 2$" Base case: $\frac{n!}{(n-2)!n^2} = \frac{n(n-1)(n-2)!}{(n-2)!n^2} = \frac{n-1}{n} < 1$ so P(2) is true. Inductive step: (I assume as an inductive hypothesis that P(i) is true) P(i+1) is true if we have $\frac{n!}{(n-(i+1))!n^{i+1}} < 1 \iff \frac{n!}{(n-i-1)!n^i n} < 1$ $\iff \frac{n!}{(n-i)!n^i} * \frac{n-i}{n} < 1$ $\frac{n!}{(n-i)!n^i} < 1$ by the inductive hypothesis and $i \ge 2 \implies n-i < n \implies \frac{n-i}{n} < 1$. So P(i+1) is true $\forall i \ge 2$ Is this correct? If it is not, what are my mistakes? And if it is, is there a less complicated proof (maybe without induction)? 
By the binomial theorem $$\left(1+\tfrac{1}{n}\right)^n=1+n\cdot\frac{1}{n}+\tfrac{1}{2!}\left(1-\tfrac{1}{n}\right)+\tfrac{1}{3!}\left(1-\tfrac{1}{n}\right)\left(1-\tfrac{2}{n}\right)+...+\tfrac{1}{n!}\left(1-\tfrac{1}{n}\right)...\left(1-\tfrac{n-1}{n}\right)<$$ $$<2+\frac{1}{2!}+\frac{1}{3!}+...+\frac{1}{n!}.$$ I used $$\binom{n}{k}=\frac{1}{k!}\cdot n(n-1)...(n-k+1).$$
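The inequality (and the answer's binomial-theorem bound) can be confirmed numerically for small n, with no calculus, by evaluating both sides directly:

```python
from math import factorial

# check (1 + 1/n)^n < sum_{i=0}^{n} 1/i! for n = 2, ..., 29
for n in range(2, 30):
    lhs = (1 + 1 / n) ** n
    rhs = sum(1 / factorial(i) for i in range(n + 1))
    assert lhs < rhs

print("inequality holds for n = 2, ..., 29")
```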
# SICP, summary and notes ### Chapter 4, Metalinguistic Abstraction This chapter takes us down a very different path. Instead of being limited to the features provided by programming languages, it says in the beginning, the trick is to learn the trick: to design our own language as per our needs! Metalinguistic abstraction means designing/implementing new languages. The new language can enable us to describe the problem in different, concise and easier ways. To understand the new syntax of our language we write an evaluator. An evaluator is also just another program written in a programming language that can understand the syntax of our shiny new language. We can even regard every program as an evaluator that understands a specific syntax, just like our package of polynomial arithmetic, digital logic simulator, or constraint propagator. #### The metacircular evaluator A metacircular evaluator is an evaluator written in the same language that it evaluates. In the book, a meta-circular evaluator is implemented for a smaller version of scheme (it contains almost all the features; mainly debugging and error logging were omitted). Well, the first thing to note is we do not do character-by-character tokenization; it happens automatically because the evaluator is written in scheme and scheme can read s-expressions directly. For eg: if we read the input (proc (+ 2 3)), we need not read it character by character; the reader can directly give a list whose first element is proc and second element is another list (+ 2 3). Thus we work at the expression level, not at the character level. Now, the evaluation happens in a cycle/loop with two main parts (i) eval (ii) apply. Quoting directly from the book: 1. To evaluate a combination (a compound expression other than a special form), evaluate the subexpressions and then apply the value of the operator subexpression to the values of the operand subexpressions. 2.
To apply a compound procedure to a set of arguments, evaluate the body of the procedure in a new environment. To construct this environment, extend the environment part of the procedure object by a frame in which the formal parameters of the procedure are bound to the arguments to which the procedure is applied. eval, as the name suggests, evaluates a given expression by first checking the expression type on a case by case basis and then evaluating it based on that type. An expression can be of any type: assignment like (set! a 5), self evaluating like 5, quoted like 'a, lambda like (lambda ), application like (fib 5) etc. Note that the specific syntax of each expression type is not part of eval but abstracted in specific evaluation procedures. Thus we can change the syntax of our language without changing the evaluator. The evaluator needs an environment where it looks up variables as part of the evaluation. For eg: (set! y (+ x 5)) requires that we have an environment where we can look up the value of x and then assign a new value to y in the environment. apply is the invocation of a procedure with the supplied arguments. An application of a procedure creates a new environment by extending the environment where the procedure is actually defined (not invoked but defined). The new environment contains the parameters bound to the actual values. Now, each of the expressions inside the body of the invoked procedure is evaluated (thus calling eval) in the new environment. Tip: The environment stores the procedure object constructed by ‘make-procedure’ instead of the expressions list in the procedure body. This procedure object internally contains the actual body as well as the environment where the procedure was defined. This procedure object is stored against the name of the procedure so that we can look up the procedure using this name.
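To make the eval/apply loop concrete, here is a toy sketch in Python rather than Scheme (far smaller than the book's evaluator: expressions are tuples, the only special form shown is lambda, and all names are illustrative):

```python
def m_eval(exp, env):
    if isinstance(exp, (int, float)):            # self-evaluating
        return exp
    if isinstance(exp, str):                     # variable: look it up
        return env[exp]
    op, *args = exp
    if op == 'lambda':                           # ('lambda', params, body)
        params, body = args
        return ('closure', params, body, env)    # capture the defining env
    proc = m_eval(op, env)                       # application: eval the operator,
    return m_apply(proc, [m_eval(x, env) for x in args])   # then the operands

def m_apply(proc, args):
    if callable(proc):                           # primitive procedure
        return proc(*args)
    _, params, body, env = proc                  # compound procedure:
    new_env = dict(env)                          # extend the defining environment
    new_env.update(zip(params, args))            # bind parameters to arguments
    return m_eval(body, new_env)                 # and evaluate the body there

env = {'+': lambda p, q: p + q, 'x': 10}
print(m_eval((('lambda', ('y',), ('+', 'x', 'y')), 5), env))   # 15
```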
special forms are implemented directly in the evaluator, for eg: to evaluate (if pred consequent alternate) we write a specific procedure to evaluate pred and then based on its outcome we evaluate consequent or alternate. derived forms are implemented on top of special forms, i.e. we convert the syntax of a derived form into special-form syntax. For eg: we converted cond into nested if expressions. And after conversion we evaluate the transformed expression using special form evaluation. We saw the halting problem in one of the exercises, ex-4.15. There is an interesting discussion of internal definitions (variable or procedure definitions inside a procedure): whether we want the definitions to be sequential or truly simultaneous. Simultaneous definitions are specially needed for circular definitions, as we needed in the last section on streams (chapter-3), where the definition of one stream depended on the definition of another and vice versa. Ex-4.19 is quite interesting and helps in understanding the difference in these ideas. Then finally we optimize our evaluator by separating the evaluation into an analysis part and an execution part. The problem with the original evaluator is that every time we encounter a procedure we re-evaluate each of its expressions by checking whether it is if or set! etc. Instead we can first analyze the procedure, check once what all the expressions are, and create a result which abstracts the evaluation of each expression. The point is that this result already knows each expression's type and just invokes its evaluation without checking the type again. For eg: if a procedure contains two expressions, say if followed by set!, then our analyzed procedure will not check whether the next expression is an if; it just knows, and evaluates the if, and then it evaluates the set! instead of checking whether that expression is set!. Tip: Now the procedure object in the environment is stored after analysis.
Thus when a procedure is invoked, only evaluation of the procedure body occurs, instead of analysis + eval as in the earlier case. #### Variations on a Scheme - Lazy Evaluation We have seen two forms of evaluation model for evaluating procedure arguments: • Applicative order, where arguments are evaluated when the procedure is applied. • Normal order, where evaluation of arguments is delayed until they are actually needed. This mechanism is called lazy evaluation, while such languages are called normal order languages. Then we modify our evaluator for normal order evaluation! This is simple to do, after implementing our own evaluator! The central idea is: • We mark argument expressions as thunks when the procedure application is evaluated. This is called delaying the evaluation. • When a thunk is evaluated, we unmark it and evaluate the actual expression. This is called forcing the evaluation. Now, note that we have to do the second step recursively, because it is possible that a thunk also contains a thunk! So we keep evaluating an expression until the last thunk gets evaluated. Also, our thunk evaluation should not harm non-thunks. Thus we can always ask to evaluate an expression: if it is a thunk then it will be forced, else it will be returned as such. Note that we also need to capture the environment while delaying an expression, because when it is evaluated it must be evaluated in the same environment where the procedure was applied! Apart from the above change, there are other changes in procedure application for delaying the evaluation of arguments. We see some interesting exercises (4.27, 4.30) to demonstrate the problems when mixing assignments and lazy order evaluation. Later, we also see using normal order evaluation to implement lazy lists (or lazier lazy lists!).
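Before moving on to lazy lists, the delay/force mechanism just described can be sketched in Python (a memoizing variant; the class and function names are mine, not the book's — a Python closure stands in for the expression plus its captured environment):

```python
class Thunk:
    """A delayed expression: a zero-argument function plus its environment
    (the closure captures both)."""
    def __init__(self, fn):
        self.fn = fn
        self.forced = False
        self.value = None

def force(obj):
    # force repeatedly, since forcing a thunk may yield another thunk;
    # non-thunks pass through unchanged
    while isinstance(obj, Thunk):
        if not obj.forced:
            obj.value = obj.fn()
            obj.forced = True      # memoize: never evaluate twice
            obj.fn = None
        obj = obj.value
    return obj

t = Thunk(lambda: 1 + 2)
print(force(t))                                 # 3
print(force(Thunk(lambda: Thunk(lambda: 5))))   # 5: a thunk inside a thunk
print(force(7))                                 # 7: non-thunks are unchanged
```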
They are similar to streams but a bit lazier: in streams we at least evaluate the first argument of cons and delay the other, while in lazy lists both arguments are delayed. Lazy lists provide at least one advantage over streams - the ability to define circularly dependent things. We saw in the streams chapter that even with streams, circularly dependent definitions were not allowed in MIT Scheme: for example, where y depends on dy and vice versa. Such definitions become possible with lazy lists.

#### Variations on a Scheme - Nondeterministic Computing

This is a more difficult and, in the author's words, profound change to our evaluator compared with the earlier normal-order change.

Tip: To understand the implementation we should see how it is used, but with some idea of how it is going to be implemented later. So I suggest first reading the complete section without attempting any exercises, then re-reading it while attempting the exercises. I found this section unique in that the exercises are easier than the contents!

The central idea of nondeterministic computing is to provide a way of saying what instead of how while writing programs. For example, there are logical puzzles like this one (from the book): Baker, Cooper, Fletcher, Miller, and Smith live on different floors of an apartment house that contains only five floors. Baker does not live on the top floor. Cooper does not live on the bottom floor. Fletcher does not live on either the top or the bottom floor. Miller lives on a higher floor than does Cooper. Smith does not live on a floor adjacent to Fletcher's. Fletcher does not live on a floor adjacent to Cooper's. Where does everyone live?

We can solve such puzzles by stating the constraints in our program instead of writing code for how to work out those constraints. The resulting code does not avoid how completely, but it is far closer to what than to how.
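The circular y/dy definition can be imitated in Python, with generators playing the role of lazy lists. Because the names `y` and `dy` below are looked up only when called, each definition may refer to the other - the situation plain streams disallowed. Everything here (the `integral` helper, the step size) is an illustrative choice of mine, not code from the book:

```python
# Sketch: mutually dependent "lazy lists" via Python generators.
from itertools import islice

dt = 0.001

def integral(integrand, initial):
    # `integrand` is a zero-argument function returning a stream, so its
    # definition may depend on values this integral has not yet produced.
    total = initial
    yield total
    for v in integrand():
        total = total + v * dt
        yield total

def y():             # y = integral of dy, with y(0) = 1
    return integral(dy, 1.0)

def dy():            # dy/dt = y: the integrand is y itself
    for v in y():
        yield v

approx = list(islice(y(), 201))   # first samples of the solution of y' = y
```

Each value of `y` is available before the corresponding value of `dy` is needed, which is exactly why the circularity is harmless under sufficient laziness.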
The central concept and construct that we implement and use here is amb (I think the name came from "ambiguous"). An amb expression can return any of the values passed to it, but only one at a time: if the first value fails, we backtrack and try another. Note that when there are no expressions inside amb, i.e. (amb), it is a dead end and we should backtrack to try other possible branches. The other important concept is require, which we implement as a procedure on top of our language using the empty (amb). It takes a predicate and fails if that predicate is not true. Thus when a require fails we backtrack and try other possible branch points, until there are no branch points left. Again, the main construct that can give us a new branch point is amb: when an expression fails we backtrack until there is another amb that can supply an alternate value, and from that amb we re-evaluate the expressions that follow it, now with the new value.

For example, in the multiple-dwelling program, the first pass assigns 1 to each of the variables baker, cooper, ..., smith. When a constraint's require fails, the evaluator backtracks to the last amb and assigns smith the value 2. We proceed and check the constraints again; if a constraint fails again, we backtrack again, smith gets 3, and so on. When all the values for smith have been tried and we have still failed, we backtrack to miller, which is assigned the value 2, and proceed; the next statement then assigns smith 1 again and we continue. On the next failure we backtrack and smith gets 2 (note that miller is 2 now). In this way we try all the combinations of miller and smith until we either succeed or exhaust all possibilities and backtrack to fletcher, and so on. The evaluator stops when either all the possibilities have been tried or some possibility has yielded an outcome.
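To make the puzzle concrete, here is a brute-force Python search (not the amb evaluator itself) whose `continue` statements play the role of failing `require`s; `permutations` supplies the "all floors distinct" requirement:

```python
# Brute-force stand-in for the nondeterministic multiple-dwelling program.
from itertools import permutations

def multiple_dwelling():
    solutions = []
    for baker, cooper, fletcher, miller, smith in permutations(range(1, 6)):
        if baker == 5: continue                  # like (require (not (= baker 5)))
        if cooper == 1: continue
        if fletcher in (1, 5): continue
        if not miller > cooper: continue
        if abs(smith - fletcher) == 1: continue
        if abs(fletcher - cooper) == 1: continue
        solutions.append({'baker': baker, 'cooper': cooper,
                          'fletcher': fletcher, 'miller': miller,
                          'smith': smith})
    return solutions

solutions = multiple_dwelling()
```

The search finds exactly one assignment, matching the fact that the puzzle has a unique answer.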
In a way amb gives an impression of streams, but with a difference. Quoting from the book: It is instructive to contrast the different images of time evoked by nondeterministic evaluation and stream processing. Stream processing uses lazy evaluation to decouple the time when the stream of possible answers is assembled from the time when the actual stream elements are produced. The evaluator supports the illusion that all the possible answers are laid out before us in a timeless sequence. With nondeterministic evaluation, an expression represents the exploration of a set of possible worlds, each determined by a set of choices. Some of the possible worlds lead to dead ends, while others have useful values. The nondeterministic program evaluator supports the illusion that time branches, and that our programs have different possible execution histories. When we reach a dead end, we can revisit a previous choice point and proceed along a different branch.

It is important to understand the cost of trying all the possibilities. There are ways to narrow down the possibilities; see exercise 4.40, where the idea is to eliminate impossible items as early as possible so that they are never tried. For example, if we know that baker cannot be 5, we can enforce this before even considering the other variables like fletcher or cooper, thereby eliminating every combination in which baker has the value 5.

Some tips that may help in better understanding:
• The main construct that can give multiple values, one at a time, is amb.
• To support backtracking, every expression must have a way to go back or proceed further. This is done with two procedures, success and fail, which we pass to every expression when it is evaluated.
• If an expression is successfully evaluated, we proceed further; otherwise we go back using fail.
• Check the notes in the solution of ex-4.78 for a few more subtle details.

I think that's it!
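The success/fail plumbing can be sketched directly. This is a tiny Python imitation of the continuation-passing style described in the tips (the helper names are hypothetical, not the book's Scheme code): every choice point receives a `succeed` continuation and a `fail` continuation to backtrack with.

```python
# Sketch: amb with explicit success/fail continuations.
def amb(choices, succeed, fail):
    def try_from(i):
        if i == len(choices):
            return fail()                      # nothing left here: fail upward
        # succeed receives the chosen value plus a way to retry the next one
        return succeed(choices[i], lambda: try_from(i + 1))
    return try_from(0)

def all_pairs_summing_to(target, domain):
    """Collect every (x, y) from domain x domain with x + y == target."""
    found = []
    def choose_x(x, fail_x):
        def choose_y(y, fail_y):
            if x + y == target:
                found.append((x, y))
            return fail_y()                    # always fail onward: enumerate all
        return amb(domain, choose_y, fail_x)
    amb(domain, choose_x, lambda: None)
    return found
```

Failing inside the inner amb retries the next y; exhausting the y-choices invokes `fail_x`, which is precisely the "backtrack to the previous choice point" behaviour described above.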
The section on parsing the English language is interesting but can be skipped without losing any concepts, apart from ideas for parsing a natural language.

#### Logic Programming

Things get even more interesting!
• Mathematics is about "what", as described in the first chapter, and programming is about "how". Higher-order languages help us move a bit closer to "what" by freeing us from many details of "how".
• Quoting from the book: expression-oriented languages are based on the "pun" that an expression that describes the value of a function may also be interpreted as a means of computing that value. Because of this, most programming languages are strongly biased toward a unidirectional style (computations with well-defined inputs and outputs).
• Remember the constraint-based program in the book for converting temperature units - it departs from the unidirectional approach. Similarly, nondeterministic computing departs from that approach, since each expression can have more than one value and different paths are tried to arrive at a solution. Thus in nondeterministic programming we are dealing with (mathematical) relations rather than single-valued functions.
• Logic programming extends this idea further by combining the relational vision of programming with a powerful kind of symbolic pattern matching called unification.
• This approach is certainly not for every problem, but when it works it can be quite powerful: a single "what is" can solve multiple problems of "how to". Say we want to append two lists. We can describe "what is" as:
• An empty list and any other list y append to form y.
• For any u, v, y and z: if v and y append to form z, then (cons u v) and y append to form (cons u z).
• In the query language, the program consists of two rules stating exactly these facts.
• As we can see from these rules, the idea is that if there exist values for which the rule body (the second expression in the rule) holds true, then the rule conclusion (the first expression in the rule) is also true.
• Notice that for the above rules, the query (append-to-form (1 2 3) (4 5 6) ?x) returns (append-to-form (1 2 3) (4 5 6) (1 2 3 4 5 6)). But it also works for the query (append-to-form (1 2 3) ?x (1 2 3 4 5)), returning (append-to-form (1 2 3) (4 5) (1 2 3 4 5)).
• Unlike this example, it is not always possible to use "what" to deduce "how".
• Logic programming excels at querying information from databases. In this section a simple logic evaluator is built that works as described in the example above. It can query data from databases too: for example, if the database contains records like (job <person-name> <job-title>), then we can query all people working as computer programmers with (job ?x (computer programmer)).
• If we want to query all computer-related jobs: (job ?x (computer . ?y)). Notice the ".", which matches the remaining part. Without the "." a job like (computer programmer analyst) would not match, since analyst would have nothing to match against; with it, ?y gets matched to (programmer analyst).
• We can use and, or and not, for example (and (lives-in ?x (new delhi)) (job ?x (computer . ?y))). Notice that the same ?x appears in both clauses so that the names must match.
• I find this query language similar to SQL, and it also gives some idea of how SQL could be implemented in a very basic way.
• There are three subsections in the book: the first describes using the query language; the second describes the design of the query-language evaluator, with a discussion of why logic programming is not mathematical logic; and the third describes the implementation of the evaluator.
• The central idea in designing this system is unification, a kind of pattern matching. As a simple example, to execute the query (job ?x (Computer . ?y)) we search the records in the database (an in-memory list of records) and look for patterns that match this query.
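The two readings of append can be sketched in ordinary Python. `append_forward` is the usual "how"; `append_splits` runs the relation "backwards", enumerating every (x, y) that append to form z - the query (append-to-form ?x ?y z) in spirit (the function names are mine, for illustration):

```python
# Sketch: append as a function vs. append as a relation.
def append_forward(x, y):
    """The usual unidirectional computation."""
    return x + y

def append_splits(z):
    """All pairs (x, y) such that x and y append to form z."""
    return [(z[:i], z[i:]) for i in range(len(z) + 1)]
```

A list of length n has n + 1 such splits, which is why the relational query has several answers where the function has one.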
For example the database record (job (Bitdiddle Ben) (Computer Programmar)) will match by binding ?x to (Bitdiddle Ben) and ?y to (Programmar). But the database record (job (William Oliver) (Big Wheel)) won’t match as Big is not equal to Computer. • Things can go much more complicated than above example when we might want to match from both sides. Like (?x b c) and (a b ?y) can result in a match by binding ?x to a and binding ?y to c. • Apart from pattern match, there is infrastructure how to bring those records and the rules in database to match against the query. In the book this infra is done using streams but it can also be implemented using non-deterministic evaluator(check ex-4.78). • I think for a quick overview, read the design part of this section and leaving the implementation part. • We draw parallels between the normal programming and logic programming evaluator as there are three main ideas: • A way to write simple queries. For example writing (job ?x ?y). • A way to combine the these queries using and and or analogous to combining statements in normal langauge. • A way to build abstractions. Here we have rules analogous to procedures. • These parallels can be seen in the design and implementation too. The design of rules evaluation is similar to procedure evaluation. • We might start thinking that logic programming is mathematical logic. The book contains a great explanation for why it is not so. The main point is our clauses are procedural! For example when evaluating (and <exp2> <exp2>), we first evaluate exp1 and if it passed for a certain data then we check the second expresion on the same data. Thus there is a direction unlike in mathematical logic. Then we also see that not is also not exactly like mathematical logic not but works only on the results from the previous query it is combined with. • The point is logic programming is powerful enough to look like mathematical logic but weak enough so that we can optimize and write code in it. 
• This section can take more than one or two readings to get the feel of the ideas. Most of the exercises are easier than understanding the contents (leaving aside ex-4.67 and the last few exercises 4.77, 4.78, 4.79).

#### Interesting exercises:

In section 4.1, almost all the exercises seem to be conceptual and important.

Simple evaluator (Sec 4.1): 4.1, 4.3, 4.4, 4.6, 4.9, 4.11, 4.12, 4.14, 4.15, 4.16, 4.19, 4.20, 4.21, 4.22, 4.23

Lazy evaluation: 4.27 (conceptual), 4.30 (conceptual), 4.31 (conceptual and interesting - it asks for constructs letting the user choose normal order or applicative order when defining procedures).

Lazier lazy lists: 4.32 (conceptual). Exercises 4.33 and 4.34 can turn out quite difficult to implement if our implementation concepts are not clear enough. They are good exercises but not so important for understanding the main concept (lazy lists), and they can be quite fun/frustrating :) Through 4.34 we can see that mixing the host language and the implementation language for a task can turn out quite difficult. It can also turn out easy if the implementation/syntax concepts are very clear, but even a small bug can take hours to figure out!

Nondeterministic programming (use cases): 4.35 (conceptual), 4.39 (conceptual), 4.40 (conceptual). 4.41 and 4.44 are interesting for seeing the difference between a normal Scheme version and one using amb, i.e. nondeterministic computing. Do at least 4.44 to see the difference. If reading the English-parsing part, do 4.47, as it is a good conceptual exercise.

Nondeterministic programming (implementation): 4.50 (conceptual, not important). At least do 4.51 and 4.52, as both can help in better understanding the implementation.

Logic programming (uses): 4.55, 4.56, 4.57, 4.63, 4.68, 4.69 (uses 4.63) - practice. Conceptual - 4.64, 4.66.

Logic programming (implementation): 4.70 to 4.75

Logic programming (moderately difficult, challenging): 4.67, 4.76, 4.77, 4.78. I think 4.67 and 4.78 are worth trying and can be fun.
Exercise 4.79 I have not done, as it might take a lot of time (maybe weeks or even months!).
### 20 Responses

1. a guy trying to fix model says: hello, I know why this model has weird animation while playing online - maybe that's why your replacement model can't sync with the server. I was curious why your replacement model has an animation problem, so I decompiled the model. I spotted that the problem is your model's "skeleton": you must delete the "original" skeleton and re-joint a new skeleton completely. You can't just use the old skeleton. I know this takes a lot of time, but if you don't re-joint a "NEW skeleton" it will 100% have the "sync problem" and can't be fixed.
    1. xenoaisam says: re-joint new skeleton? example: 1) import Rochelle bone 4) Rig the model into bone 5) Export to game - like that?
    1. a guy trying to fix model says: >1) import Rochelle bone >4) Rig the model into bone >5) Export to game - yes. The point is, you can't use the "old skeleton" because it doesn't "match" the server side (the server is still using Nick's bones); that's how the "weird animation" appears. Just delete the old skeleton, import the new one and re-joint. If you want to keep using jiggle bones for Miku's hair, add new bones as jiggle bones after you re-joint Nick's skeleton completely.
2. Rikulen says: Hi Xenoaisam!! I wanted to ask if it's possible for you to make Len replacing Ellis and Luka replacing Coach, and maybe do the same thing you did with Nick's voice to give them feminine voices, and maybe the same with Rochelle's voice... Think about it...
3. ?? says: [comment unreadable - mis-encoded characters]
4. a guy trying to fix model says: hello, I have found another problem. I just spotted these in your QC file:
    $includemodel "survivors/anim_TeenAngst.mdl"
    $includemodel "survivors/gestures_TeenAngst.mdl"
    You should use Nick's animation model or it will cause an error.
    1. xenoaisam says: there's a reason I used the Zoey animation.. if I point to the Nick animation, Miku will have a 'manly' figure... so it's ugly.. plus the error only occurs online without my model installed..
If my model is installed on both sides, the problem doesn't occur.

5. #lOuIs# says: After I install this add-on in my "Steam\SteamApps\common\left 4 dead 2\left4dead2\addons" and launch the game, why can't I see the replacement of Nick? Lastly, I am Chinese, from HK - hope you can forgive my mistakes (English grammar)! I will always stand on your side. : ) Good luck!
    1. xenoaisam says: dont change the folder.. just change the c:/programfile/ to your directory
    1. #lOuIs# says: That means I should install this add-on in my Then, use Steam to open "Left4dead2.exe"?
6. Lammong says: It is really nice. I hope a Miku model can be made for Team Fortress 2 - Miku's slim body, physics and style are perfect for the Scout, who is a fast mover in the game. Keep it on!
    1. xenoaisam says: My miku not slim XD plus I dont have tf2
    1. Andrey Ivanov says: Hey! I see you have Miku AND Rin models!
    1. xenoaisam says: yes!
7. Lammong says: Can you tell me how I change the mdl folder for Team Fortress 2? Or through Games?
8. Show me ya butt. says: Now my problem is that I can't seem to get this to work with Campaign mode; single player is fine, but neither I nor anyone I'm playing with can see the mod except at the "Choose your character" screen.
9. Anonymous says: Won't run for me, says it's incompatible with my version of windows D:
    1. xenoaisam says:
# Graphing Program: Solution

Here is my solution. You can download a .ZIP file version of the project here. This canvas is where the graph will go.
## 2.3 Bar Charts

Bar charts are a useful way to present categorical data graphically: each category gets a bar whose height is that category's frequency. If preferred, bar charts can display relative frequencies instead. It is worth noting that the shapes of the two charts look exactly the same; the difference is in the scale on the $$y$$ (vertical) axis, which in the second chart represents percentages rather than frequencies.
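The computation behind the two charts can be sketched briefly (the category data here is made up for illustration):

```python
# Frequencies vs. relative frequencies for a categorical sample.
from collections import Counter

data = ['red', 'blue', 'red', 'green', 'blue', 'red']

freq = Counter(data)                                     # first chart's bar heights
n = sum(freq.values())
rel_freq = {c: count / n for c, count in freq.items()}   # second chart's bar heights

# Every bar is scaled by the same factor 1/n, which is why the two charts
# have exactly the same shape and differ only in the y-axis scale.
```

Passing either `freq` or `rel_freq` to a plotting routine produces bars of identical shape, just with counts versus proportions on the vertical axis.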
## Cryptology ePrint Archive: Report 2017/215

SEVDSI: Secure, Efficient and Verifiable Data Set Intersection

Ozgur Oksuz, Iraklis Leontiadis, Sixia Chen, Alexander Russell, Qiang Tang, and Bing Wang

Abstract: We are constantly moving to a digital world in which the majority of information is stored at the weakest link: users' devices and data enclaves, vulnerable to attacks by adversaries. Users are often interested in sharing some information, such as the set intersection of their data sets. To reduce the communication bandwidth, cloud-based protocols have been proposed in the literature, but with a rather weak security model: either the cloud is assumed semi-honest, which is far from realistic nowadays, or the protocol's leakage puts the data at risk. In this paper, we achieve the best of the two worlds: we design and analyze a non-interactive, cloud-based private set intersection (PSI) protocol secure in a stronger security model. Our protocol assures privacy of the data set inputs in the case of a malicious cloud and enforces authorized-only computations by the users. Moreover, the computation is verifiable, and the asymptotic communication cost of computing the intersection is linear in the number of common elements $k$. Our protocol is secure in the random oracle model under standard assumptions and a new mathematical assumption whose security evidence is given in the generic group model.

Category / Keywords: confidentiality, integrity, private set intersection

Date: received 1 Mar 2017, last revised 4 Mar 2017

Contact author: leontiad at njit edu

Available format(s): PDF | BibTeX Citation

Short URL: ia.cr/2017/215

[ Cryptology ePrint archive ]
## Differentiability

Suppose that f : [a, b] → R is differentiable and c is an element of [a, b]. Show that there exists a sequence {x_n} converging to c, with x_n ≠ c for all n, such that f'(c) = lim f'(x_n).

Note: I use the definition of differentiation: lim (x → c) (f(x) - f(c)) / (x - c).

• Let h(x) = f(x) - g(x). Note that h(a) = f(a) - g(a) = 0. It is easy to check that h is continuous on [a, b] and differentiable on (a, b). Thus the Mean Value Theorem applies, and we get [h(b) - h(a)] / (b - a) = h'(c) for some c in (a, b). Hence [(f(b) - g(b)) - 0] / (b - a) = f'(c) - g'(c) < 0, since f'(x) < g'(x) for a < x < b. Thus [(f(b) - g(b)) - 0] / (b - a) < 0, so f(b) - g(b) < 0, i.e. f(b) < g(b), as required.
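The reply above proves a different comparison statement (f' < g' on (a, b) implies f(b) < g(b)). For the question as stated, one standard sketch (my own, assuming c < b; when c = b, use the intervals [c - 1/n, c] instead) applies the Mean Value Theorem on shrinking intervals:

```latex
\text{For } n \text{ large enough that } c + \tfrac1n \le b,\ \text{apply the MVT to } f
\text{ on } [c,\, c + \tfrac1n]:\quad
\exists\, x_n \in \bigl(c,\, c + \tfrac1n\bigr) \ \text{ with }\
f'(x_n) = \frac{f\!\left(c + \tfrac1n\right) - f(c)}{1/n}.

\text{Then } x_n \neq c \text{ and } x_n \to c, \text{ and since } f \text{ is differentiable at } c,
\quad
\lim_{n\to\infty} f'(x_n)
  \;=\; \lim_{n\to\infty} \frac{f\!\left(c + \tfrac1n\right) - f(c)}{1/n}
  \;=\; f'(c).
```

Note that no continuity of f' is assumed; the limit comes purely from the definition of the derivative at c.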
# How do I simplify and find the product of 5d^2r and 3d xx 4d^3? Jul 21, 2017 Add the indices of any like bases. #### Explanation: To multiply in algebra you simply ADD the exponents of the same variables. $5 {d}^{2} r$ cannot be simplified on its own. $3 d \times 4 {d}^{3} = 12 {d}^{4}$ If these two expressions were meant to be multiplied together the product would be: $5 {d}^{2} r \times 12 {d}^{4} = 60 {d}^{6} r$
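The exponent rule can be spot-checked numerically at sample values (the values of d and r below are arbitrary nonzero choices of mine):

```python
# Numeric spot-check of 5d^2 r * 3d * 4d^3 = 60 d^6 r.
d, r = 2.0, 3.0
product = (5 * d**2 * r) * (3 * d) * (4 * d**3)
expected = 60 * d**6 * r
```

The coefficients multiply (5 * 3 * 4 = 60) while the exponents of d add (2 + 1 + 3 = 6), so the two expressions agree.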
12:11 AM
Anyone around?
12:23 AM
just me, I guess
Hey magma
Hey Rojo
Que tal?
Jaja
In Spanish we laugh with a "j" :P
All good, you?
Vives en Baires?
Sí, ¿vos?
12:27 AM
En Italia
Mi sorella es fanática de Italia :)
I never went yet
she is here?
No, sorry
In Brazil now
Hey, by any chance you understand well what Leonid usually means with the lack of mutable data structures and all that?
I will come to Baires sooner or later. I dance tango, so it's like a pilgrimage :-)
Hehe
Tango is 80% for tourists nowadays
12:29 AM
what is your field of expertise?
I know, i know. But it is very very strong in Italy now. And most of Europe, really
I study electronic engineering, aiming at signal processing, but I still haven't finished so, no expertise yet :)
Did you try System modeler? it is a very nice piece of software
Yeah, tried it, but expecially with MMA my month trial has expired and I didn't have time to dig in deep
I found it nice, though I don't know how it compares to similar software... You use it?
12:33 AM
you can ask for some extra time. i will do that too.
I almost did that yesterday
they now have a home edition so the price is more reasonable now
Because, truth is I might use it during the next 6 months, but not in the next one
12:34 AM
@Jin Framed@Quiet@ColorConvert[Erosion[ColorNegate@(MorphologicalComponents@(EdgeDetect@#) // Colorize), 3], "HSB"], RemoveAlphaChannel@ColorConvert[#, "HSB"]] &@ Import@"http://www.gravatar.com/avatar/\
@belisarius is that a virus?
@Rojo Jin colorized
It's nice to see he got the software, hehe
Haha, jin colorized
I gotta do something about my gravatar
@magma Did you use other similar software?
time to go to sleep. very late here. Good night
@Rojo I will pay you $10 if you let me make your avatar and you use it after
12:37 AM
similar to system modeler?
@magma Yeah
@belisarius I'll give you a 10$ discount if I can reserve my veto
no
@Rojo no chance
like yours.
i am an Electronic engineer with a keen interest in pure math and physics good night 12:40 AM Night magma :-) @belisarius 10$discount and I trust you for a day @Rojo The making of an avatar is an artistic moment. You are betraying art and friends by being so suspicious @belisarius Third world habits A week! Nahh ... making the avatar you deserve will take me more than a week. A full life, I believe, suffering the stressful state of not getting to perfection 12:47 AM Haha Haha @belisarius Better than having Peron's face @Rojo Bonavena's is not a bad idea Haha Grr, I just hanged my session while trying to do the most basic image processing ,I suck at that 12:53 AM I'll never let go of my veto you coward and I still haven't let my imagination flow @Rojo For example. You could try this for a week just to find your real self Hehe I'll put something provisional HAHA circuits overload! I love those four eyes @R.M Those are three iconic Argentinian actors 1:03 AM Now it's just one super iconic... thing! @R.M Perhaps the woman is too iconic Now, to change the picture I have to create a gravatar account?? I thought it was just uploading or putting a link @Rojo yes. You upload at gravatar.com Making your own avatar is a rite of passage ...and now..... ...how to link the gravatar account to this? 1:14 AM If you've used the same email in both places, it'll auto refresh The gravatar is linked to your email Ok We'll see tomorrow, I just put a plain red If it updates, we'll see what's the definitive one :) Rojo, Argentina 11.6k 23 52 How creative! indeed You could Inset a Big "R" and win a Guggenheim grant 1:18 AM Back, now it's updated I did it in Paint I'm looking to retag this question. Obviously, it shouldn't have as it is discussing std::vector not$\vec{x}$. But, maybe ? Maybe not. @rcollyer I think C++, perhaps mathlink c++ might be good... We have fortran, python, java, C#, etc. That answers that. But, ? I don't think so. Nor, . 
@rcollyer agree 1:22 AM Any objections to my removing those to also? nope done. , , , should be sufficient, I think. why the last one? it isn't output, per se. I went by this paragraph: > I want to store each of these vectors as a list in Mathematica, so, for instance, if three iterations returns the vectors (0, 1), (0.02, 0.9) and (0.04, 0.73), I want this output to be written into a Mathematica list of the form {{0,1},{0.02, 0.9},{0.04, 0.73}}. I guess is more for NumberForm, etc. 1:25 AM okay. 6 of one, half a dozen of the other by my thinking. But, adding . @Rojo now you look like the Soviet Union, or China. But smaller, obviously. @OleksandrR. Yeah, I'm not liking it much. Too aggressive @OleksandrR. careful, he may get an inferiority complex, and before you know it he'll be threatening Japan and the US. :P @belisarius I would say more like a right nuisance! I'm not completely happy with my current one but it clashes with the site design less than the previous one did, so I can't be bothered to change it again. @OleksandrR. :) 1:30 AM @Rojo as I understand it, he's referring to the fact that in many cases, attempts to change a part of a data structure will result in the entire structure being copied and the part changed only in the copy, leaving the original unchanged. ReplacePart does this, for example. Anybody knows how MaoTsé entered my monitor? @OleksandrR. Thanks. What I'm actually interested in understanding well is, what does he(/you, etc) think would be nice to have. I've read so much about it that I want to give it a shot and I trust his judgement on what's missing and would be useful For example, Having a function to "subscribe" a symbol as a copy of another, and a custom set that keeps both up to date when set without having 2 copies of the data and having the data remain on the other symbols if you clear one or set one with the regular Set and no leaks That would be enough? 
Well, that sounds like it could be useful, but IMO the problem in general is more one of efficiency than semantics. The thing is, I think that "that" could be done quite simply Haven't tried it yet, but if it worked, it would as inefficiet as MMA is in general Yes. I understand where you're coming from, but it still doesn't buy you in-place updates. Built-in functions still copy extensively. 1:39 AM Definately Built-ins grr So, in conclusion For example, someone asked me a while ago if it was possible to do an in-place FFT. I didn't follow up on that but I am certain it is not without some serious messing with LibraryLink. What he means is something that can't be helped as long as one can't make built-ins work in -place I believe so, yes. Of course one can use things like linked lists which have much less to copy and thus have fewer performance pitfalls. Yes, built-ins copy a lot, but if you have a long list, say, "l" and you do l+1 There is some object called a "raw array" which I believe is basically used when calling Fortran (LAPACK for example) because of the Fortran semantics. However how it works I don't know. 1:44 AM You are right You have RawArray, DevFromRawArray, RawArrayQ. Had no idea So, list+1 or, worse ++list while it is evaluating, it uses up memory as if copying the list but, does that imply it's slower that it would be in place? Yes, right? Because it doesn't mean it copies the list... Probably reserves the space and puts the results there instead of in the original I'm just trying to finally understand this big issue a little better Well, that still involves a copy. It could update in place. Although, there aren't really semantics to express that. Even the compiler copies extensively, it would seem. 
I see that there's copying in functions that modify only a little part of a possibly long expression such as ReplacePart (which is one of the few that can be reimplemented in place) (with part) I think fundamentally it's hard to mix functional/term-rewriting semantics with mutable data structures without making a mess of things. Part is one of very few exceptions... I think even Part may copy in some circumstances though. For instance, when operating on a packed array. So far, the limitation I really see is the one of not being able to extract a subpart of an expression (say half of a long list) without making a full copy of it Yes, that's a major one. Of course you can use linked lists in that case. Bags are also possibly useful in that they can be passed around like pointers to their contents. 1:53 AM Any different from passing a Hold? Well, you can destructure Bags without (necessarily) copying. Of course if you call BagPart[b, All] then it copies. As I understand it a Bag is simply a very low level linked list. ... a linked list with heads that hold, that allows you to extract parts of it easily (not like a linked list), but doesn't let you remove elements and that's actually an atom which who knows why should I care I'd consider that semantically the same as a Bag. I was describing a bag Hehe Oh, okay. :) 1:59 AM Thanks for the Heap ;) Did you make any use of it? I haven't tried, yet. Not real use. Fake use, when I tested it a little bit Heavens to Betsy! 0 exp := -5.082019443309081E^(-0.7036967536123101 (2/25 - (2 x)/ 25)) (E^(-0.11728279226871835(32/25 - (32 x)/25) - 0.11728279226871835 (2/25 - (2 x)/25)) - E^(-0.2345655845374367(17/25 + (8 x)/25)) (1 + 0.2345655845374367 x))^2 x (5.515552687747721\ E^(-0.23456558453743... I'mi gonna upvote that 2:12 AM It has been a long time since I saw a question that I am so glad to pass on "I have a very long integrand," 3 Hahahaha @Rojo Are you boasting? 2 @Rojo Hmm, I didn't mean it that way. 
But maybe this poster deserves it. :P @verbeia shouldn't this be a comment? 0 It's hard to work out the problem when the code creates syntax errors once pasted back into Mathematica. However, I would point out that you have some very long floating-point numbers multiplying exact numbers, all taken to the exponent. Integrate will use numerical methods anyway in that case. ... @belisarius Rotfl :) That formula is the nearest thing to spam maths could possibly be 2:21 AM 2 must be a standard warning on any question longer than X starred! @belisarius would you please remove your comment on his question. Thanks. @rojo My integrand is bigger than yours @rcollyer Oh well, but I warn you Mma 9 has amazing new msgs @SjoerdC.deVries Simplify will take care of that problem. :) 2:26 AM @SjoerdC.deVries Too bad, it doesn't grow with x Hehehe @SjoerdC.deVries Depending on the method used, Integrate[exp ...], Integrated[Expand ...], Integrate[#, ...]& /@ Expand[exp], etc. I get different answers. So, I think she has the correct answer. @SjoerdC.deVries I pity you then It is actually interesting to ask you all How big our integrands are? No, not really, we have ladies in the forum 2:31 AM ;-D ;-D How to know what's the form of an expression that leads to less numerical errors That is an interesting question, and I wish I knew. I'm not talking about integration any more Horner form? @Rojo don't work with exp(big number)... instead try to work with log and then take exp 2:32 AM Is there an easy way for NDSolve to refine its range (ie make it smaller) based on a condition on the derivatives? NDSolve[ {y''[t] == ... && y'[t] > 0 ...}, y, {t, 0, 1}] @R.M Why is that? There was that question from that physics.se optics guy There are two close votes on the question. What do you all think? It's a bit specific @SjoerdC.deVries three 3 votes, now. mine. 2:36 AM mine and phantomas @belisarius no, it's mine! :) @SjoerdC.deVries I think it is too localized. Voted NaRQ. 
The integrand is a mess and Integrate's assessment of it is totally dominated by numerical error at machine precision. and Mma::psycho, who was forced to leave the party yeah, but he's just not very constructive, or conducive to learning. Hey, Sjoerd didn't even have to touch it. @Rojo The exponentials blow up: Exp[10.^16] Exp[-10.^16] vs Exp[Log[1]]==1 2:40 AM (For some strange reason, my mind keeps going into the gutter.) @R.M Also, you're effectively using the series form of Exp, which increases the number of multiplies dramatically. Even in Horner form. @rcollyer The art of delegation... In an unrelated issue, 100MBs is approximately the max bigint it's saving without complaining I'm writing a numerical optimization, and I'm having a problem with an expression of the form $$e^{-t} (1+\mathrm{erf}(t))$$ The overall shape of the function looks correct, but when $t$ is small, $e^{-t}$ is huge while $(1+\mathrm{erf}(t))$ is very small, and their product is also small. This... @Rojo ^ @R.M Niiice @SjoerdC.deVries I figured. We're piranha and all. And, someone chummed the waters! 2:43 AM @rcollyer We must bully someone from time to time. If not, we risk eating each other @belisarius as I said: @belisarius Let's visit a Maple forum 4 mins ago, by rcollyer (For some strange reason, my mind keeps going into the gutter.) Anyone planning to go to the wolfram tech conference? @Rojo or matlab. 2:44 AM @Rojo Hey! That's a NICE idea @SjoerdC.deVries I've been thinking about it, but I need to get funds for it. @rcollyer R.M is about to write a blog post to evangelize Matlab users, and belisarius hates Maple @rcollyer SE can do that Maplesoft Proposed Q&A site for users of all experience ranges with Maplesoft's numerical analysis software. Currently in definition. @SjoerdC.deVries Can rep points be cashed? 2:45 AM Join! @Rojo are there any? I wondered that the other day but was unable to find anything like what we have for Mathematica. @SjoerdC.deVries that's true.
Maybe a moderator should bring that up with the powers that be? :P ;D Let's joiiin @rcollyer Jin said that they are fully aware of the conference and they do plan to send some members of the community 16 I read Supporting Community Conferences, which states, Depending on the circumstances and location, we can also sponsor community leaders to attend an event on behalf of their site. We will subsidize your costs to attend, within reason, and provide you with a bunch of swag to use as an ice-br... 2:46 AM @Rojo as long as you're trying to "save" the matlab users, then sure. @R.M You are a professional agitator Jul 13 at 3:10, by Jin I'll have our community team make a meta post about this, to see who's all interested in going Aarthi indicated that SE would indeed be willing to sponsor this @rcollyer You used Matlab? 2:49 AM I attended last year, fully self-funded and had to take vacation @Rojo I was in a class where the professor demanded that we use it despite the fact that I copied the functionality we were supposed to use in mma in under a 1/2 hour. Before I nominate myself to go, I'd have to check a few things. I would like to attend. If you @All support my candidature I promise bringing back an autographed copy of NKS v2 for each one (I really have to get my mind out of the gutter!) Bring back a copy of v9 for us and I'll support you You can either go to the conference or get v9... hmm. Tough choice 2:51 AM I'm happy enough with the MMA9 documentation All of us, every user with 500+ rep I suppose v9 will be announced there Let's hope likely. depends on if they get the testing done and the features in in time. @SjoerdC.deVries mmmm ... ok. I'll give you Maple v9 copy 2:52 AM @rcollyer Unlikely that they'd be working on features now, no? @belisarius aren't they on version 15 or 16? @rcollyer No idea really @rcollyer Maple9 is the latest in Argentina Maple is at 16 currently. That's matlab I'd say 2:52 AM Uh, and we are at 8. Half Meh. 
(old) Firefox vs Chrome, really. I forgot, why is my name in blue here? mod @R.M depends on a lot. If they're releasing in Oct., then putting in new stuff now would put them behind. But, they may just be announcing when it will be ready for sale in Oct. I really have to get my mind out of the gutter 2:54 AM @rcollyer That sounds more like Apple's strategy... "World's most advanced OS, blappity blah Mountain Lion... coming this July" — announced in June @R.M I have no insight into their strategy, so I'm just guessing. I've heard rumors beta testing is underway In beta features are frozen, right? yup @SjoerdC.deVries They should be. But, not if they go the netscape route. We could blackmail the SE team to force them to send all of us to the conference. If not, we will start posting the "long integrand" question on all sites 2:57 AM But, being hellbanned from the entirety of SE wouldn't be worth it. We'd get more stuff done, though. See ya. @R.M Auguri @belisarius The meta post says "community leaders", which sounds sufficiently plural to me. @SjoerdC.deVries true, but you have to define "leader." 2:58 AM @rcollyer Bye @R.M Ciao @SjoerdC.deVries I was saying bye to RM. @rcollyer bye! ohh my @belisarius copy cat. purrr 2:59 AM Harhar Thunderstorms here @belisarius Well, that hits my limit of not being able to respond due to it being a public forum! So, good night all. I have to go beat on a cluster. @rcollyer You shouldn't use your real name Bye Bye again I am not saying goodbye again :P 3:01 AM @belisarius So, your name isn't real? @SjoerdC.deVries Ohh yes This is me Flavius Belisarius (, ca. 500 – 565) was a general of the Byzantine Empire. He was instrumental to Emperor Justinian's ambitious project of reconquering much of the Mediterranean territory of the former Western Roman Empire, which had been lost less than a century previously. One of the defining features of Belisarius' career was his success despite a lack of support from Justinian. 
He is also among a select group of men considered by historians to be the "Last of the Romans". Early life and career Belisarius was probably born in Germane or Germania, a city that once stood on the site ... only younger What's in a name That which we call a rose By any other name would smell as sweet. Umberto Eco would be proud of you He is Tell us more 3:06 AM @SjoerdC.deVries Not if you called 'em stench blossoms. I managed to finish Foucault's pendulum... @SjoerdC.deVries I enjoyed the name of the rose a lot more @belisarius The pendulum started ok, but the plot totally derailed in the final chapters. @SjoerdC.deVries I've read "The name.." in two days or so. The "pendulum" took me a month I forgot whether I read it or saw the movie 3:20 AM @SjoerdC.deVries Oh ..the movie wasn't up to the book @SjoerdC.deVries I thought it was too long. Can I even remove the answer once the question is closed? I posted it before others started commenting about numerical issues. @Verbeia I was just wondering. Rcollyer thought it was Ok, and the point is moot now anyway. @SjoerdC.deVries If the community feels strongly, I'm happy to remove it. @verbeia that's not the case. So, no problem. You might want to scroll a few pages back to see the discussion that went on. Getting some sleep now. Bye! 3:39 AM @SjoerdC.deVries I just did, and I agree, this sort of code dumping should be discouraged. But I think the way to encourage good behaviour is to tell them what they are doing wrong, and still give them some help, if not complete. If they are repeat offenders, by all means come down harder. But this user has another question with 10 votes, so I think they are amenable to polite education.
# Multiple Regression with Predictors that Restrict other Predictors I'm not even sure if the title of my question makes sense at first sight, so let me try to explain it. I'd like to fit a parametric multiple regression model to data. But depending on the value chosen for a given predictor (it actually should only be a discrete integer, but maybe I can also consider it as continuous), the possible values for another predictor are restricted to a subset of values. Would this make sense in a regression exercise? Would it incur multicollinearity? Any suggestions of how to go about this problem? Thanks! In general, deterministically related regressors are not prohibited in a regression model. In introductory econometrics textbooks one may encounter a model of the form $$y=\beta_0+\beta_1 x+\beta_2 x^2+\varepsilon$$ This is one special case of what you are concerned about, if I understand correctly. A restriction in the regression model is that the regressors cannot be linearly dependent. (For example, you cannot have $x_1=\alpha_0+\alpha_1 x_2$.) If they are, the marginal effects of the linearly dependent regressors cannot be disentangled; the $\beta$ coefficient vector taken from the regression $y=X \beta + \varepsilon$ is not unique ($X$ is the design matrix where each column is a different regressor). Other than that, you may have deterministically related regressors. However, you have to be careful interpreting the estimated coefficients when the regressors are deterministically related. If you have both $x$ and $x^2$ as regressors, you cannot say that "keeping all other regressors constant, a unit change in $x$ brings a $\beta_1$ change in $y$" -- because when $x$ changes, $x^2$ also changes. • Thanks, Richard. In my case, I could think of a conditional statement between predictors. For example, if $x_1$ takes on a particular value, say $a$, then another predictor $x_2$ should only take on a subset of values. 
I'm afraid that if I just consider these two predictors independently in the regression model, I won't capture that relationship. So I'm wondering if that can be done, and how. Apr 27 '15 at 11:40
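To illustrate the distinction made in the answer above: deterministically but *nonlinearly* related regressors (such as $x$ and $x^2$) pose no problem for least squares, while an exact *linear* dependence makes $X^TX$ singular, so the coefficient vector is not unique. A small numpy sketch (the data and variable names are mine, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)

# Nonlinearly related regressors x and x^2 are fine:
# y = 1 + 2x + 3x^2 + noise, design matrix [1, x, x^2].
X = np.column_stack([np.ones_like(x), x, x**2])
y = 1 + 2 * x + 3 * x**2 + rng.normal(scale=0.1, size=x.size)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # close to [1, 2, 3]

# An exact linear dependence breaks OLS: X^T X is singular.
X_bad = np.column_stack([np.ones_like(x), x, 2 * x + 5])  # x2 = 5 + 2*x1
rank = np.linalg.matrix_rank(X_bad.T @ X_bad)
print(rank)  # 2 < 3, so (X^T X)^(-1) does not exist
```

Note that `x` and `x**2` are highly correlated here, so the variance of the estimates still inflates in the way described at the top of this post; centering `x` before squaring is a common way to reduce that collinearity.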
# 1.3: Extensive and Intensive Quantities

There is a useful and important distinction in thermodynamics between extensive (or "capacitive") and intensive quantities. Extensive quantities are those that depend upon the amount of material. Examples would include the volume, or the heat capacity of a body. The heat capacity of a body is the amount of heat required to raise its temperature by one degree, and might be expressed in J °C−1. Intensive quantities do not depend on the amount of material. Temperature and pressure are examples. Another would be the specific heat capacity of a substance, which is the amount of heat required to raise unit mass of it through one degree, and it might be expressed in J kg−1 °C−1. This is what is commonly (though loosely) called "the specific heat", but we shall use the correct term: specific heat capacity. Incidentally, we would all find it much easier to understand each other if we all used the word "specific" in contexts such as these to mean "per unit mass". "Molar" quantities are also intensive quantities. Thus the "molar heat capacity" of a substance is the amount of heat required to raise the temperature of one mole of the substance through one degree. I shall have to define "mole" in the next section. Some authors adopt the convention that extensive quantities are written with capital letters, and the corresponding intensive quantities are written in small letters. Thus C would be the heat capacity of a body in J °C−1 and c would be the specific heat capacity of a substance in J kg−1 °C−1. This is undeniably a useful distinction and one that many will find helpful. I have a few difficulties with it. Among these are the following: Some authors (not many) use the opposite convention – small letters for extensive quantities, capitals for intensive. Some authors make exceptions, using P and T for the intensive quantities pressure and temperature.
Also, how are we to distinguish between extensive, specific and molar quantities? Three different fonts? This may indeed be a solution – but there is still a problem. For example, we shall become familiar with the equation dU = T dS − P dV. Here U, S and V are internal energy, entropy and volume. Yet the equation (and many others that we could write) is equally valid whether we mean extensive, specific or molar internal energy, entropy and volume. How do we deal with that? Write the equation three times in different fonts? Because of these difficulties, I am choosing not to use the capital-letter/small-letter convention, and I am hoping that the context will make it clear in any particular situation. This is, I admit, rather a leap of faith, but let's see how it works out.
# Ocean heat content

[Figure: global heat content in the top 2000 meters of the ocean (NOAA, 2020), and in the 0–700 m layer. Video: oceanographer Josh Willis demonstrates the heat capacity of water and describes how water's ability to store heat affects Earth's climate, using ocean data from NASA Earth observing satellites.]

In oceanography and climatology, ocean heat content (OHC) is a term for the energy absorbed by the ocean, which is stored as internal energy or enthalpy. Changes in the ocean heat content play an important role in sea level rise, because of thermal expansion. Ocean warming accounts for 90% of the energy accumulation from global warming between 1971 and 2010.[1] About one third of that extra heat has been estimated to propagate to depths below 700 meters.[2] Beyond the direct impact of thermal expansion, ocean warming contributes to an increased rate of ice melting in the fjords of Greenland[3] and the Antarctic ice sheets.[4] Warmer oceans are also responsible for coral bleaching.[5]

## Definition and measurement

The areal density of ocean heat content between two depth levels is defined using a definite integral:[6]

$H=\rho c_{p}\int_{h_2}^{h_1}T(z)\,dz$

where $\rho$ is seawater density, $c_p$ is the specific heat of sea water, $h_2$ is the lower depth, $h_1$ is the upper depth, and $T(z)$ is the temperature profile. In SI units, $H$ has units of J·m−2. Integrating this density over an ocean basin, or the entire ocean, gives the total heat content, as indicated in the figure at right.
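As a sketch of the definition above, the areal heat-content density can be approximated from a discrete temperature profile with the trapezoidal rule. The density, specific heat, and temperature profile below are illustrative values of my own, not data from this article:

```python
import numpy as np

rho = 1025.0   # seawater density, kg/m^3 (typical value)
c_p = 3850.0   # specific heat of seawater, J/(kg K) (typical value)

# Hypothetical temperature profile between the surface and 700 m depth.
z = np.linspace(0.0, 700.0, 71)       # depth, m
T = 2.0 + 18.0 * np.exp(-z / 150.0)   # temperature, deg C (made-up profile)

# H = rho * c_p * integral of T(z) dz, via the trapezoidal rule.
H = rho * c_p * np.sum(0.5 * (T[:-1] + T[1:]) * np.diff(z))
print(f"H = {H:.3e} J/m^2")   # order 10^10 J/m^2 for this profile
```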
Thus, the total heat content is the product of the density, specific heat capacity, and the volume integral of temperature over the three-dimensional region of the ocean in question. Ocean heat content can be estimated using temperature measurements obtained by a Nansen bottle, an Argo float, or ocean acoustic tomography. The World Ocean Database Project is the largest database for temperature profiles from all of the world's oceans. The upper ocean heat content in most North Atlantic regions is dominated by heat transport convergence (a location where ocean currents meet), without large changes to the temperature–salinity relation.[7]

## Recent changes

Several studies in recent years have found a multi-decadal rise in OHC of the deep and upper ocean regions and attribute the heat uptake to anthropogenic warming.[8] Studies based on Argo indicate that ocean surface winds, especially the subtropical trade winds in the Pacific Ocean, change the ocean's vertical heat distribution.[9] This results in changes among ocean currents, and an increase of the subtropical overturning, which is also related to the El Niño and La Niña phenomenon. Depending on stochastic natural variability fluctuations, during La Niña years around 30% more heat from the upper ocean layer is transported into the deeper ocean. Model studies indicate that ocean currents transport more heat into deeper layers during La Niña years, following changes in wind circulation.[10][11] Years with increased ocean heat uptake have been associated with negative phases of the interdecadal Pacific oscillation (IPO).[12] This is of particular interest to climate scientists who use the data to estimate the ocean heat uptake. A study in 2015 concluded that increases in ocean heat uptake by the Pacific Ocean were compensated by an abrupt redistribution of OHC into the Indian Ocean.[13]

## References

1. ^ IPCC AR5 WG1 (2013). "Summary for policymakers" (PDF). www.climatechange2013.org. Retrieved 15 July 2016. 2. ^ 3.
^ Church, J.A. (2013). "Sea Level Change". Climate Change 2013 – The Physical Science Basis: Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. pp. 1137–1216. doi:10.1017/cbo9781107415324.026. ISBN 9781107415324. Retrieved 2019-02-05. 4. ^ Jenkins, Adrian; et al. (2016). "Decadal Ocean Forcing and Antarctic Ice Sheet Response: Lessons from the Amundsen Sea". Oceanography. tos.org. Retrieved 2019-02-05. 5. ^ "The Great Barrier Reef: a catastrophe laid bare". The Guardian. 6 June 2016. 6. ^ Dijkstra, Henk A. (2008). Dynamical oceanography ([Corr. 2nd print.] ed.). Berlin: Springer Verlag. p. 276. ISBN 9783540763758. 7. ^ Sirpa Häkkinen, Peter B Rhines, and Denise L Worthen (2015). "Heat content variability in the North Atlantic Ocean in ocean reanalyses". Geophys Res Lett. 42 (8): 2901–2909. Bibcode:2015GeoRL..42.2901H. doi:10.1002/2015GL063299. PMC 4681455. PMID 26709321. 8. ^ Abraham; et al. (2013). "A review of global ocean temperature observations: Implications for ocean heat content estimates and climate change". Reviews of Geophysics. 51 (3): 450–483. Bibcode:2013RvGeo..51..450A. CiteSeerX 10.1.1.594.3698. doi:10.1002/rog.20022. 9. ^ Balmaseda, Trenberth & Källén (2013). "Distinctive climate signals in reanalysis of global ocean heat content". Geophysical Research Letters. 40 (9): 1754–1759. Bibcode:2013GeoRL..40.1754B. doi:10.1002/grl.50382. Essay Archived 2015-02-13 at the Wayback Machine 10. ^ Meehl; et al. (2011). "Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods". Nature Climate Change. 1 (7): 360–364. Bibcode:2011NatCC...1..360M. doi:10.1038/nclimate1229. 11. ^ Rob Painting (2 October 2011). "The Deep Ocean Warms When Global Surface Temperatures Stall". SkepticalScience.com. Retrieved 15 July 2016. 12. ^ Rob Painting (24 June 2013).
"A Looming Climate Shift: Will Ocean Heat Come Back to Haunt us?". SkepticalScience.com. Retrieved 15 July 2016. 13. ^ Sang-Ki Lee, Wonsun Park, Molly O. Baringer, Arnold L. Gordon, Bruce Huber & Yanyun Liu (18 May 2015). "Pacific origin of the abrupt increase in Indian Ocean heat content during the warming hiatus" (PDF). Nature Geoscience. 8 (6): 445–449. Bibcode:2015NatGe...8..445L. doi:10.1038/ngeo2438.
# Interesting Zeta

$\large \displaystyle \sum_{k=2}^{\infty} \frac{\zeta(k)-1}{k+1} = \frac{A}{B} - \frac{C}{B}\ln(D \pi) - \frac{\gamma}{B}$

The above equation holds true for positive integers $$A$$, $$B$$, $$C$$, and $$D$$. Find $$A+B+C+D$$.

Notation: $$\gamma \approx 0.5772$$ denotes the Euler-Mascheroni constant.
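Before hunting for the closed form, the series can be evaluated numerically. Swapping the order of summation (using $\zeta(k)-1=\sum_{n\ge 2} n^{-k}$) turns it into a fast-converging sum over $n$. The "candidate" below is my own guess at a closed form of the required shape, checked against the numerics rather than proved here:

```python
import math

# Swap the order of summation:
#   sum_{k>=2} (zeta(k)-1)/(k+1) = sum_{n>=2} sum_{k>=2} n^{-k}/(k+1)
#                                = sum_{n>=2} [ -n*log(1 - 1/n) - 1 - 1/(2n) ]
# since sum_{m>=3} n^{-m}/m = -log(1 - 1/n) - 1/n - 1/(2 n^2).
S = sum(-n * math.log1p(-1.0 / n) - 1.0 - 1.0 / (2.0 * n)
        for n in range(2, 200_000))
print(f"series    = {S:.6f}")

# Candidate closed form of the shape A/B - (C/B) ln(D pi) - gamma/B:
gamma = 0.5772156649015329  # Euler-Mascheroni constant
candidate = 1.5 - 0.5 * math.log(2 * math.pi) - gamma / 2
print(f"candidate = {candidate:.6f}")
```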
# Does the Direct The Strike power provoke OA? Does using the Warlord's Direct The Strike power provoke opportunity attacks? It is a Ranged 5 power with only the Martial keyword (not an Implement or Weapon attack). It seems to me like it should not provoke an OA (isn't it just the character shouting?), but since it is not a Close Burst 5, RAW indicates it would. Should it still provoke OAs while wielding a staff with the Staff Expertise feat? The feat says: ...when you make a ranged or an area attack with a staff as an implement, you don’t provoke opportunity attacks for doing so. I'm wielding my staff as an implement, but the power doesn't have the Implement keyword... There are a few bits of rules to sift through here. These quotes are from the D&D Compendium. ## Direct the Strike provokes opportunity attacks Ranged and Area Powers Provoke: If an enemy adjacent to you uses a ranged power or an area power, you can make an opportunity attack against that enemy. Direct the Strike is a ranged power - you're targeting any individual within a range of 5. Under normal circumstances, it would provoke an opportunity attack. ## Staff Expertise feat In addition, when you make a ranged or an area attack with a staff as an implement, you don’t provoke opportunity attacks for doing so. So as long as you can cast Direct the Strike "with a staff as an implement", you don't provoke an OA. However, Direct the Strike doesn't have the Implement keyword, so this might not apply. It doesn't have the Weapon keyword either, so implement-as-a-weapon rules don't apply. So does it count as casting it with the implement? ## This needs interpretation The rules here are unclear. Staff Expertise was published in books from 2010 - Heroes of the Fallen Lands and Heroes of the Forgotten Kingdoms. To understand what they were trying to say, we're going to have to do some historical comparison. Later books, like Mordenkainen's Magnificent Emporium, used a different and more specific wording.
For instance, Forceful weapons have this description: Whenever you pull, push, or slide a target with an implement attack using a forceful implement, you can increase the distance of the forced movement by 1 square. By contrast, in Staff Expertise's case, it reads like using any ranged power. In addition, it says "with an implement" not "using an implement", so one could interpret this as just incidentally having the implement in hand whilst not using it. However, HotFK and HotFL also include a feat named Axe Expertise: You gain a +1 feat bonus to weapon attack rolls you make with an axe. This clues us in on their language. By "with", they mean "using". Someone who's an expert with an axe shouldn't be getting attack bonuses to sword attacks because they're holding an axe in their other hand. So by recent standards it should read more like this: In addition, when you make a ranged or an area attack using a staff as an implement, you don’t provoke opportunity attacks for doing so. ## The conclusion No, you can't use Staff Expertise with Direct the Strike. Direct the Strike provokes an OA even if you have Staff Expertise. You cannot use an implement to cast a non-implement power, and you cannot use Direct the Strike via your staff. Staff Expertise only applies when you can do that, and so it doesn't apply.
# Equilibrium recombination

by Ailar

In terms of the conserved baryon/photon ratio η, find the CMB temperature and redshift at which recombination ended, as defined by the condition that the photon mean-free scattering rate equals the expansion rate, $n_e \sigma_T c = H$. Use the Saha equation, assuming the parameters of the ΛCDM model plus the present CMB temperature T0 = 2.725 K and the baryon abundance ΩBh2 = 0.02.

Thanks
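One way to attack this is numerically: solve the Saha equation for the ionization fraction at each redshift and bisect on the condition $n_e \sigma_T c = H(z)$. A rough CGS sketch follows; the ΛCDM parameters (h = 0.7, Ωm = 0.3, ΩΛ = 0.7, Ωr ≈ 8.5e-5) and the conversion η ≈ 2.74e-8 ΩBh² are my own illustrative choices, not given in the problem:

```python
import math

kB   = 1.380649e-16       # Boltzmann constant, erg/K
me   = 9.1093837e-28      # electron mass, g
hP   = 6.62607015e-27     # Planck constant, erg s
c    = 2.99792458e10      # speed of light, cm/s
sigT = 6.6524587e-25      # Thomson cross section, cm^2
B    = 13.6 * 1.602176634e-12   # hydrogen binding energy, erg
T0   = 2.725              # present CMB temperature, K
eta  = 2.74e-8 * 0.02     # baryon/photon ratio from Omega_B h^2 = 0.02
ng0  = 410.7              # photon number density today, cm^-3
H0   = 0.7 * 3.2408e-18   # Hubble constant for h = 0.7, s^-1
Om, OL, Or = 0.3, 0.7, 8.5e-5   # assumed LambdaCDM parameters

def ionization_fraction(T, nb):
    """Saha: x^2/(1-x) = (2 pi me kB T / h^2)^{3/2} exp(-B/kB T) / nb."""
    S = (2 * math.pi * me * kB * T / hP**2) ** 1.5 * math.exp(-B / (kB * T)) / nb
    return (-S + math.sqrt(S * S + 4 * S)) / 2   # positive root of x^2 + Sx - S = 0

def excess_rate(z):
    """Scattering rate minus expansion rate, n_e sigma_T c - H(z)."""
    T  = T0 * (1 + z)
    nb = eta * ng0 * (1 + z) ** 3
    ne = ionization_fraction(T, nb) * nb
    H  = H0 * math.sqrt(Or * (1 + z) ** 4 + Om * (1 + z) ** 3 + OL)
    return ne * sigT * c - H

# Bisect for the redshift where the two rates are equal.
lo, hi = 600.0, 2000.0    # brackets: rate < H at lo, rate > H at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if excess_rate(mid) < 0 else (lo, mid)
z_rec = 0.5 * (lo + hi)
print(f"recombination ends near z ~ {z_rec:.0f}, T ~ {T0 * (1 + z_rec):.0f} K")
```

With these numbers the crossing lands near the familiar z of order 1100 (T of order 3000 K), which is a useful sanity check on the setup.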
# From the top of a building 40 m tall, a ball is thrown vertically upwards with a velocity of $10\; ms^{-1}$. After how long will the ball hit the ground?

$\begin{array}{1 1} 4s \\ 3s \\ 2s \\7s\end{array}$

Taking upwards as positive: $s= -40\; m, u= +10\; ms^{-1}, a=-10\; ms^{-2}$

Now $s= ut+ \large\frac{1}{2}$$at^2$

$\Rightarrow -40 = 10t+\large\frac{1}{2}$$\times (-10) t^2$

$\Rightarrow t^2-2t -8 =0$

$\Rightarrow (t+2) (t-4) =0$

$\Rightarrow t=-2 \;or\; 4 s$

The negative value of t is not possible. Hence the ball will hit the ground after 4 s.
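A quick numerical check of the quadratic above (SI units, with $g = 10\; ms^{-2}$ as in the problem):

```python
import math

# s = u t + (1/2) a t^2 with s = -40 m, u = +10 m/s, a = -10 m/s^2
s, u, a = -40.0, 10.0, -10.0
# Rearranged: (a/2) t^2 + u t - s = 0, solved with the quadratic formula.
A, Bq, Cq = a / 2, u, -s
disc = Bq * Bq - 4 * A * Cq
roots = [(-Bq + math.sqrt(disc)) / (2 * A), (-Bq - math.sqrt(disc)) / (2 * A)]
print(sorted(roots))   # [-2.0, 4.0]; only t = 4 s is physical
```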
# Irene Fonseca

Irene Maria Quintanilha Coelho da Fonseca is a Portuguese-American applied mathematician, the Mellon College of Science Professor of Mathematics at… (Wikipedia)

## Papers overview

Semantic Scholar uses AI to extract papers important to this topic.

2017: A New Class of Pattern-Forming Equations in Continuum Mechanics (Amit Acharya…

2012 (Corpus ID: 85535456): In recent years, the introduction of new DNA sequencing platforms dramatically changed the landscape of genetic studies. These…

2008: The question of the subjunctive tenses is an appealing but still little-studied topic. Nevertheless, Fernanda Irene Fonseca chose it as…

2002 (Corpus ID: 116892564): Abstract. The study of existence of solutions of boundary-value problems for differential inclusions \left\{ \begin{array}{ll…

2001 (Corpus ID: 61707717): This paper is a summary of the Round Table "The Impact of Mathematical Research on Industry and Vice Versa" held at 3ecm in…

Highly Cited 1995: Young measures and their limitations are discussed. Some relations between Young measures and H-measures are described and used…

1985
## S1-Tiling, on demand ortho-rectification of Sentinel-1 images on Sentinel-2 grid

=> Sentinel-1 is currently the only system to provide SAR images regularly over all land on the planet. Access to these time series of images opens an extraordinary range of applications. In order to meet the needs of a large number of users, including our own, we have created an automatic processing chain to generate "Analysis Ready" time series for a very large number of applications. Sentinel-1 data is ortho-rectified on the Sentinel-2 grid to promote joint use of both missions.

## S1Tiling: on-demand ortho-rectification of Sentinel-1 data on the Sentinel-2 grid

=> Sentinel-1 is currently the only system providing SAR images regularly over all land on the planet. Access to these image time series opens an exceptional range of applications. In order to meet the needs of a large number of users, including our own, we have created an automatic processing chain to generate "ready to use" time series for a very large number of applications. Sentinel-1 data are ortho-rectified on the Sentinel-2 grid to encourage the joint use of both missions.

The French Sentinel mirror site, PEPS, has a very clever data management facility. All the products are stored on tapes, with a capacity of several PB, and there is a cache made of disks. The products accessed recently are on disks, while the other products stay on tapes. The storage costs and also power consumption are therefore largely optimized. The drawback is that before accessing a file on tape, some time is needed to fetch the tape and read the file from it. This can take something like 2 to 10 minutes. My little tool, peps_download.py, was designed when most of the products were on disks, and it was quite slow to download products on tapes.
As I am not a patient person, I have tried to speed it up, and it works well, thanks to good advice from CNES PEPS colleagues (Christophe Taillan and Erwann Poupart). The previous version worked like this:

- Make catalog request
- For each product in the request result:
  - if it is still on tape, wait for 2 minutes before downloading it

As a result, for each product on tape, it was necessary to wait for 2 to 10 minutes. Now, it works like this:

- Make catalog request
- Until all products are on disk:
  - redo the catalog request
  - if some products are not on disk yet, wait for 2 minutes

On my computer, it used to take more than 12 hours to download 2 years of Sentinel-2 data for a given tile. It has now been reduced to less than 3 hours (but my computer is on the CNES network). I hope you will have similar results!

## Using NDVI with atmospherically corrected data

### NDVI

NDVI is by far the most commonly used vegetation index. NDVI was developed in the early seventies (Rouse 1973, Tucker 1979), and has been widely used in remote sensing from the nineties until now. It is computed from the surface reflectance in the red and near infra-red channels on each side of the red-edge. $NDVI=\frac{\rho(NIR)-\rho(RED)}{\rho(NIR)+\rho(RED)}$ where $\rho(NIR)$ and $\rho(RED)$ are reflectances in the NIR and RED. Although several users still use top-of-atmosphere reflectances (TOA), surface reflectances should be used to reduce sensitivity to variations of aerosol atmospheric content. A time profile of surface reflectance from the Sentinel-2 satellite for the blue, green, red and NIR spectral bands for a summer crop in South East France. The observation under constant viewing angles minimizes directional effects. One can also notice that reflectance variations related to vegetation status are greater in the near infra-red, while the noise is usually lower. As a result, a vegetation index should rely more on the NIR than on the red.
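The NDVI formula above is applied per pixel to the reflectance arrays. A minimal numpy sketch (the reflectance values are made up; with Sentinel-2, RED and NIR would typically come from bands B4 and B8):

```python
import numpy as np

red = np.array([[0.05, 0.30], [0.08, 0.12]])   # RED surface reflectance
nir = np.array([[0.45, 0.35], [0.40, 0.15]])   # NIR surface reflectance

# NDVI = (NIR - RED) / (NIR + RED), computed element-wise
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))   # dense vegetation gives values close to 1
```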
I think NDVI is mainly used for the following reasons (but feel free to comment and add your own):

• it has the large advantage of qualifying the vegetation status with only one dimension, instead of N dimensions if we consider the reflectances of each channel. Of course, by replacing N dimensions with only one, a lot of information is lost.

• it makes it possible to reduce the temporal noise due to directional effects. But with the Landsat, Sentinel-2 or Venµs satellites, which observe under constant viewing angles, the directional effects have been considerably reduced.

I therefore tend to tell students that while NDVI is convenient, it is not the only way to monitor vegetation.

## Canigou 3D

Lo Canigó és una magnòlia immensa que en un rebrot del Pirineu se bada - Jacint Verdaguer i Santaló

The Canigó is an immense magnolia that blooms in an offshoot of the Pyrenees

3D view of the Canigou on 19-Dec-2017 (with a fancy tiltshift effect)

## New version of PEPS (French Sentinel mirror site)

As you probably know, PEPS is the French Collaborative ground segment for the Copernicus Sentinel program. And, first of all, it is a mirror site that distributes all the Sentinel data in near real time. These last weeks, real time was not available for Sentinel-2, as the data format and structure of Sentinel-2 products had deeply changed, and the software needed adaptation. The PEPS team created a new collection, named "Sentinel-2 Single Tiles", coded "S2ST", to separate the old format from the new one. Now that the new version has been installed and validated, the PEPS mirror site is once again up to date.

## (At last!) Script download of Sentinel-2 Level 2A products from Theia

=> The production of Sentinel-2 Level 2A data continues at CNES, but a little more slowly than planned for the moment.
We had one good day during which 600 tiles were produced, but the production rate has often been slower: we progressively solved small anomalies, and at the same time the CNES computing centre on which we rely had some small troubles. Meanwhile, my colleagues of the MUSCATE team, notably Dominique Clesse and Remi Mourembles from CAP GEMINI, added to the distribution site the possibility to download the data via a script, without a click. The script is very easy to use; for example, the line below downloads the Sentinel-2 data of tile 31TCJ (Toulouse), acquired in September 2016: python ./theia_download.py -t 'T31TCJ' -c SENTINEL2 -a config_theia.cfg -d 2016-09-01 -f 2016-10-01

## (At last !) Automated download of Sentinel-2A Level 2A products from Theia

=> The production of Sentinel-2 L2A data is on-going at CNES THEIA, but it is still a little slower than expected. We had one fast day on which the exploitation team managed to process 600 tiles, but the production has often been slower as we needed to solve a few glitches, and as the whole CNES processing center also had its own issues. Meanwhile, my colleagues at the CNES MUSCATE Center, with the precious help of Dominique Clesse (CAP GEMINI) and Remi Mourembles (CAP GEMINI), have implemented the possibility to download the images via a script and no clicks. By the way, the shopping cart, which did not work when we ordered more than 10 products, has also been repaired.
The script is very easy to use; for instance, the following line downloads the SENTINEL-2 products over tile T31TCJ (Toulouse), acquired in September 2016: python ./theia_download.py -t 'T31TCJ' -c SENTINEL2 -a config_theia.cfg -d 2016-09-01 -f 2016-10-01 ## The iota2 land cover processor has processed some Sentinel-2 data => You already heard about the iota2 processor, and you must know that it can process LANDSAT 8 time series and deliver land cover maps for whole countries. These last few days, Arthur Vincent completed the code that allows processing Sentinel-2 time series. Even if atmospherically corrected Sentinel-2 data are not yet available over the whole of France, we used the demonstration products delivered by Theia to test our processor. Everything seems to work fine, and the 10 m resolution of Sentinel-2 seems to allow seeing much more detail. The attached images show two extracts near Avignon, in Provence, which show the differences between Landsat 8 and Sentinel-2. Please look only at the level of detail, and not at the differences in terms of classes. The two maps were produced using different time periods (a period limited to winter and the beginning of spring for Sentinel-2), and the learning database is also different, so please don't draw conclusions too fast about the thematic quality of the maps. The first extract shows a natural vegetation zone, with some farmland (top LANDSAT 8, bottom Sentinel-2). ## On Google Earth Engine, beware of the Mrs-Armitage-on-Wheels Syndrome => A few colleagues replied to our campaign to explain some of the dangers of Google Earth Engine. They said: "well, after all you are probably right, but don't worry, we only use it to do quick and dirty stuff, not real scientific work". As most (...) of these colleagues are quite sensible, I am not worrying too much. But as far as I am concerned, I would have some chance of being a victim of the Mrs-Armitage-on-Wheels Syndrome (AWS).
I guess I do not need to explain it to the British colleagues who read this blog: this syndrome originates from the great children's book by Quentin Blake, which I used to read to my children some time ago (every night for the first two weeks, then once in a while...): Mrs Armitage on Wheels. Another daddy reads it for you here.
# pyscal Trajectory Trajectory is a pyscal module intended for working with molecular dynamics trajectories which contain more than one time slice. Currently, the module only supports LAMMPS dump text file formats. It can be used to get a single slice or multiple slices from a trajectory, trim the trajectory, or even combine multiple trajectories. The example below illustrates various uses of the module. Trajectory is an experimental feature at the moment and may undergo significant changes in future releases. from pyscal import Trajectory traj = Trajectory("traj.light") When using the above statement, the trajectory is not yet read into memory; just the basic information is available now. traj Trajectory of 10 slices with 500 atoms This reports the number of slices in the trajectory and the number of atoms. Trajectory only works with a fixed number of atoms. Now, one can get a single slice or multiple slices just as is done with a Python list. Getting the 2nd slice (counting starts from 0!): sl = traj[2] sl Trajectory slice 2-2 natoms=500 This slice can now be converted to a more usable format, either to a pyscal System or just written to another text file. Convert to a pyscal System object: sys = sl.to_system() sys [<pyscal.core.System at 0x7f81f93971d0>] System objects contain all the information. The atomic positions, simulation box and so on are easily accessible. sys[0].box [[18.22887, 0.0, 0.0], [0.0, 18.234740000000002, 0.0], [0.0, 0.0, 18.37877]] sys[0].atoms[0].pos [-4.9941, -6.34185, -6.8551] If information other than positions is required, the customkeys keyword can be used. For example, for the velocity in the x direction, sys = sl.to_system(customkeys=["vx"]) sys [<pyscal.core.System at 0x7f81f9397530>] sys[0].atoms[0].custom["vx"] '-1.21558' Instead of creating a System object, the slice can also be written to a file directly.
sl.to_file("test.dump") Like normal Python lists, multiple slices can also be accessed directly sl1 = traj[0:4] sl1 Trajectory slice 0-3 natoms=500 The to_system and to_file methods can be used on this object too. Multiple slices can be added together sl2 = traj[5:7] sl2 Trajectory slice 5-6 natoms=500 slnew = sl1+sl2 slnew Trajectory slice 0-3/5-6 natoms=500 Once again, one could write the combined trajectory slice to a file, or create a System object out of it.
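The slice arithmetic shown above (ranges like 0-3/5-6 with a fixed atom count) can be mimicked by a tiny stand-in class. This is an illustrative sketch only — the class names `Traj` and `TrajSlice` are hypothetical and this is not pyscal's actual implementation:

```python
# Minimal sketch of the slice/concatenate semantics described above.
# A "trajectory" is just a list of per-timestep snapshots with a fixed
# number of atoms, and slices carry the index ranges they came from.

class TrajSlice:
    def __init__(self, snapshots, ranges):
        self.snapshots = snapshots      # list of time slices
        self.ranges = ranges            # e.g. [(0, 3), (5, 6)]

    def __add__(self, other):
        # Combining slices concatenates snapshots and remembers both ranges
        return TrajSlice(self.snapshots + other.snapshots,
                         self.ranges + other.ranges)

    def __repr__(self):
        spans = "/".join(f"{a}-{b}" for a, b in self.ranges)
        return f"Trajectory slice {spans} natoms={len(self.snapshots[0])}"

class Traj:
    def __init__(self, snapshots):
        self.snapshots = snapshots

    def __getitem__(self, key):
        if isinstance(key, slice):
            s = self.snapshots[key]
            start = key.start or 0
            return TrajSlice(s, [(start, start + len(s) - 1)])
        return TrajSlice([self.snapshots[key]], [(key, key)])

# Ten time slices of 500 "atoms" each (positions faked as zeros)
traj = Traj([[0.0] * 500 for _ in range(10)])
sl1, sl2 = traj[0:4], traj[5:7]
print(sl1 + sl2)   # Trajectory slice 0-3/5-6 natoms=500
```

The real module adds lazy file reading and the `to_system`/`to_file` conversions on top of this kind of bookkeeping.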
# Category:Compatible Relations This category contains results about Compatible Relations. Definitions specific to this category can be found in Definitions/Compatible Relations. Let $\struct {S, \circ}$ be a closed algebraic structure. Let $\RR$ be a relation on $S$. Then $\RR$ is compatible with $\circ$ if and only if: $\forall x, y, z \in S: x \mathrel \RR y \implies \paren {x \circ z} \mathrel \RR \paren {y \circ z}$ $\forall x, y, z \in S: x \mathrel \RR y \implies \paren {z \circ x} \mathrel \RR \paren {z \circ y}$
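As a quick illustration of the definition (not part of the category page): over a finite sample of integers, the two conditions can be brute-forced. The helper name `is_compatible` is hypothetical; note that ≤ is compatible with addition but not with multiplication, since multiplying by a negative element reverses the inequality.

```python
# Brute-force check of compatibility of a relation R with an operation ∘
# over a finite sample: x R y must imply (x∘z) R (y∘z) and (z∘x) R (z∘y).

from itertools import product
from operator import add, mul, le

def is_compatible(rel, op, elems):
    return all(rel(op(x, z), op(y, z)) and rel(op(z, x), op(z, y))
               for x, y, z in product(elems, repeat=3) if rel(x, y))

S = range(-3, 4)
print(is_compatible(le, add, S))   # True:  <= is compatible with +
print(is_compatible(le, mul, S))   # False: z = -1 reverses <=
```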
# How to solve signal MFSK or FHSS question (received signal + noise + jamming) I'm trying to solve the following: $$A \cos(2\pi f t + \theta_1) + B \cos(2\pi f t + \theta_2) = D\cos(2\pi f t + \theta_3)$$ I just need to know the correct value of $D$; the exact frequency and phase are not important since the detection is non-coherent. • Search for "arbitrary phase shift"... – mateC Jan 21 at 10:20 $$A\cos(2\pi f t+\theta_1)+B\cos(2\pi f t+\theta_2)=C\cos(2\pi f t+\theta_3)$$ where $$C=|u|\quad\textrm{and}\quad \theta_3=\arg\{u\}$$ with $$u=Ae^{j\theta_1}+Be^{j\theta_2}$$ The constant $$C$$ can be written as $$C=\sqrt{A^2+2AB\cos(\theta_1-\theta_2)+B^2}$$
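The phasor identity in the answer is easy to verify numerically with the standard library (a quick sketch; the amplitudes, phases and frequency below are arbitrary):

```python
# Numerical check of the phasor identity:
#   A cos(2πft+θ1) + B cos(2πft+θ2) = C cos(2πft+θ3)
# with C = |u|, θ3 = arg(u), u = A e^{jθ1} + B e^{jθ2}.

import cmath, math

A, B = 1.7, 0.9
th1, th2 = 0.4, -1.1
f = 5.0

u = A * cmath.exp(1j * th1) + B * cmath.exp(1j * th2)
C, th3 = abs(u), cmath.phase(u)

# C can also be written in the law-of-cosines form from the answer
C_alt = math.sqrt(A**2 + 2*A*B*math.cos(th1 - th2) + B**2)
assert abs(C - C_alt) < 1e-12

for t in [0.0, 0.013, 0.2, 0.777]:
    lhs = A*math.cos(2*math.pi*f*t + th1) + B*math.cos(2*math.pi*f*t + th2)
    rhs = C*math.cos(2*math.pi*f*t + th3)
    assert abs(lhs - rhs) < 1e-12
print("identity holds")
```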
# American Institute of Mathematical Sciences May 2019, 24(5): 2219-2235. doi: 10.3934/dcdsb.2019092 ## On numerical methods for singular optimal control problems: An application to an AUV problem 1 Faculdade de Engenharia, Universidade do Porto, DEEC, Porto, Portugal 2 Department of Applied Mathematics, Faculty of Mathematics and Computer Science, Amirkabir University of Technology, No. 424, Hafez Ave., Tehran, Iran * Corresponding author: Z. Foroozandeh Received January 2018 Revised January 2019 Published March 2019 We discuss and compare numerical methods to solve singular optimal control problems by the direct method. Our discussion is illustrated by an Autonomous Underwater Vehicle (AUV) problem with state constraints. For this problem, we test four different approaches to solving it numerically via the direct method. After discretizing the optimal control problem we solve the resulting optimization problem with (ⅰ) A Mathematical Programming Language ($\text{AMPL}$), (ⅱ) the Imperial College London Optimal Control Software ($\text{ICLOCS}$), (ⅲ) the Gauss Pseudospectral Optimization Software ($\text{GPOPS}$), as well as with (ⅳ) a new algorithm based on mixed-binary non-linear programming reported in [7]. This algorithm consists of converting the optimal control problem into a Mixed Binary Optimal Control ($\text{MBOC}$) problem, which is then transcribed into a mixed binary non-linear programming ($\text{MBNLP}$) problem using the Legendre-Radau pseudospectral method. Our case study shows that, in contrast with the first three approaches we test (all relying on $\text{IPOPT}$ or other numerical optimization software packages like $\text{KNITRO}$), the $\text{MBOC}$ approach detects the structure of the AUV's problem without a priori information on the optimal control and computes the switching times accurately. Citation: Z. Foroozandeh, Maria do Rosário de Pinho, M. Shamsi.
On numerical methods for singular optimal control problems: An application to an AUV problem. Discrete & Continuous Dynamical Systems - B, 2019, 24 (5) : 2219-2235. doi: 10.3934/dcdsb.2019092 ##### References: [1] R. Baltensperger and M. R. Trummer, Spectral differencing with a twist., SIAM J. Sci. Comput., 24 (2003), 1465-1487. doi: 10.1137/S1064827501388182. [2] J. T. Betts, Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, vol. 19 of Advances in Design and Control, 2nd edition, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2010. doi: 10.1137/1.9780898718577. [3] R. Byrd, J. Nocedal and R. Waltz, Knitro: An integrated package for nonlinear optimization, in Large-Scale Nonlinear Optimization (eds. G. Di Pillo and M. Roma), vol. 83 of Nonconvex Optimization and Its Applications, Springer US, 2006, 35–59. doi: 10.1007/0-387-30065-1_4. [4] C. Canuto, M. Y. Hussaini, A. Quarteroni and T. A. Zang, Spectral Methods in Fluid Dynamics, Springer Series in Computational Physics, Springer-Verlag, New York, 1988. doi: 10.1007/978-3-642-84108-8. [5] M. d. R. de Pinho, Z. Foroozandeh and A. Matos, Optimal control problems for path planing of auv using simplified models, in 2016 IEEE 55th Conference on Decision and Control (CDC), 2016,210–215. doi: 10.1109/CDC.2016.7798271. [6] B. Fornberg, A Practical Guide to Pseudospectral Methods, vol. 1 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, 1996. doi: 10.1017/CBO9780511626357. [7] Z. Foroozandeh, M. Shamsi and M. d. R. de Pinho, A mixed-binary non-linear programming approach for the numerical solution of a family of singular optimal control problems, International Journal of Control, 1–16. [8] Z. Foroozandeh, M. Shamsi and M. d. R. 
De Pinho, A hybrid direct–indirect approach for solving the singular optimal control problems of finite and infinite order, Iranian Journal of Science and Technology, Transactions A: Science, 42 (2018), 1545-1554. doi: 10.1007/s40995-017-0176-2. [9] Z. Foroozandeh, M. Shamsi, V. Azhmyakov and M. Shafiee, A modified pseudospectral method for solving trajectory optimization problems with singular arc, Mathematical Methods in the Applied Sciences, 40 (2017), 1783-1793. doi: 10.1002/mma.4097. [10] R. Fourer, D. M. Gay and B. Kernighan, Algorithms and model formulations in mathematical programming, Springer-Verlag New York, Inc., New York, NY, USA, 1989, chapter AMPL: A Mathematical Programming Language, 150–151. [11] D. Garg, Advances in Global Pseudospectral Methods for Optimal Control, PhD thesis, 2011, Thesis (Ph.D.)–University of Florida. [12] D. Garg, M. Patterson, W. W. Hager, A. V. Rao, D. A. Benson and G. T. Huntington, A unified framework for the numerical solution of optimal control problems using pseudospectral methods, Automatica, 46 (2010), 1843-1851. doi: 10.1016/j.automatica.2010.06.048. [13] H. Maurer, Numerical solution of singular control problems using multiple shooting techniques, J. Optimization Theory Appl., 18 (1976), 235-257. doi: 10.1007/BF00935706. [14] H. Maurer, On optimal control problems with bounded state variables and control appearing linearly, SIAM J. Control Optim., 15 (1977), 345-362. doi: 10.1137/0315023. [15] M. A. Patterson and A. V. Rao, GPOPS-Ⅱ: A matlab software for solving multiple-phase optimal control problems using hp-adaptive gaussian quadrature collocation methods and sparse nonlinear programming, ACM Trans. Math. Softw., 41 (2014), Art. 1, 37 pp. doi: 10.1145/2558904. [16] P. D. Pinto da Silva, Planeamento Otimizado de Movimento de Robot Submarino, (universidade do porto, faculdade de engenharia, msc thesis), 2014. [17] L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze and E. F. 
Mishchenko, The Mathematical Theory of Optimal Processes, Translated from the Russian by K. N. Trirogoff; edited by L. W. Neustadt, Interscience Publishers John Wiley & Sons, Inc. New York-London, 1962. [18] S. Takriti, R. Fourer, M. Gay and B. Kernighan, Ampl: A Modeling Language for Mathematical Programming, 1994. [19] E. J. Van Wyk, P. Falugi and E. C. Kerrigan, Iclocs, 2010, URL http://www.ee.ic.ac.uk/ICLOCS. [20] R. Vinter, Optimal Control, Birkhäuser Basel, 2010. doi: 10.1007/978-0-8176-8086-2. [21] A. Wächter and L. T. Biegler, On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming, Math. Program., 106 (2006), 25-57. doi: 10.1007/s10107-004-0559-y. [22] J. A. C. Weideman and S. C. Reddy, A MATLAB differentiation matrix suite, ACM Trans. Math. Software, 26 (2000), 465-519. doi: 10.1145/365723.365727. [23] B. D. Welfert, Generation of pseudospectral differentiation matrices. I., SIAM J. Numer. Anal., 34 (1997), 1640-1657. doi: 10.1137/S0036142993295545.
##### Figures:

[Figure] Controls computed by the method of section 3.2 with $s = 5$ and $n = 20$
[Figure] The control functions computed by the implicit Euler method with AMPL interfaced with IPOPT, $N = 10000$
[Figure] The control function computed by ICLOCS with $N = 10000$
[Figure] The control function computed by GPOPS with $N = 40$
[Figure] Control computed by the method of section 3.2 with $s = 4$ and $n = 14$
[Figure] States computed by the method of section 3.2 with $s = 4$ and $n = 20$

##### Tables:

AUV's problem: computed values of the switching times and the performance index for $s = 4$ and various values of $n$

| $n$ | $t_1$ | $t_2$ | $t_3$ | $t_4$ | $t_f$ |
| --- | --- | --- | --- | --- | --- |
| 6 | 0.049341739 | 0.6454231790 | 14.626412022 | 14.72348234 | 14.956410213 |
| 8 | 0.049051513 | 0.6456145668 | 14.623890223 | 14.72412511 | 14.950131513 |
| 10 | 0.049052155 | 0.6456134127 | 14.623817123 | 14.72416201 | 14.950161798 |
| 12 | 0.049052100 | 0.6456134114 | 14.623817653 | 14.72416210 | 14.950161782 |
| 14 | 0.049052100 | 0.6456134114 | 14.623817653 | 14.72416210 | 14.950161782 |

Comparing results for the AUV's problem with no initial guess

| Method | $N$ | $t_i$ | $\mathcal{J}$ | Iter | CPU time |
| --- | --- | --- | --- | --- | --- |
| Euler | 1000 | Not available explicitly | 14.941407 | 180 | 1809.557 |
| ICLOCS | 1000 | Not available explicitly | 14.943225 | 1075 | 20.265 |
| GPOPS | 80 | Not available explicitly | 14.949946 | 20682 | 1027.5 |
| Section 3.2 | 14 | Explicitly available | 14.950161 | 6 | 4.472 |
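The Legendre-Radau pseudospectral transcription mentioned in the abstract collocates at Legendre-Gauss-Radau (LGR) points. A minimal sketch of how those nodes can be computed with NumPy (an illustration based on the standard characterization of the $N$ LGR points as the roots of $P_{N-1}+P_N$, not the paper's code; the function name `lgr_points` is hypothetical):

```python
# Sketch: the N Legendre-Gauss-Radau (LGR) collocation points on
# [-1, 1) are the roots of P_{N-1}(x) + P_N(x), where P_k is the
# k-th Legendre polynomial; x = -1 is always one of them.

import numpy as np
from numpy.polynomial.legendre import Legendre

def lgr_points(N):
    poly = Legendre.basis(N - 1) + Legendre.basis(N)
    return np.sort(poly.roots().real)

pts = lgr_points(5)
print(pts)   # five nodes in [-1, 1), the smallest being -1
```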
How do I differentiate this equation? I am trying to differentiate this equation. I have done it one way and got this solution (I don't know how to put it into Math formatting): $$b-2x-xy/(1+x)^2$$ I used the quotient rule, and took out $$y$$ as a constant. However, when I looked at the solutions, my lecturer got: $$b-2x-y/(1+x)+(xy)/(1+x)^2$$ I've simplified her solution so I know mine is right. However, in order to complete the rest of the question it's far easier to have it in her form. Can someone please walk me through, step by step, how to get this solution? I'm getting myself very confused. Original Question: I am working on part a.3. I have to find the Jacobian, and set the trace equal to zero, and to start I need to find the derivative of $\dot x$ with respect to $x$. • I am puzzled by this. You say you want to differentiate an equation. The derivative of an equation is, again, an equation. But what both you and your teacher give are expressions, not equations. – user247327 Jan 4 at 13:15 • Apologies, it's probably me not being clear on my understanding of the differences between those. Does an equation have an =? – MathsIsFun Jan 4 at 13:17 • Please take the time to enter crucial parts of your question as text instead of pasting pictures. Your question should be comprehensible without images, which are neither searchable nor accessible to screen readers. See math.meta.stackexchange.com/a/10992/265466. – amd Jan 4 at 20:40 You've multiplied $$x$$ with each of the terms in the parentheses, and you took the derivative. You got the first two terms right. The last one is $$-\frac{xy}{1+x}$$ You can use the product rule, where the first function is $$x$$ and the second is $$-y/(1+x)$$. So the derivative of that is $$-\frac y{1+x}-x\frac{-y}{(1+x)^2}$$ If you use the quotient rule, you don't have a constant $$y$$ in the numerator; you have $$xy$$ instead. • @Emily you already did it right, except that you forgot to multiply it by $-x$ – Andrei Jan 4 at 13:22
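A quick numerical check settles the algebra here. Assuming the function being differentiated is $f(x) = bx - x^2 - xy/(1+x)$ with $y$ held constant (an assumption consistent with both forms shown, since the question's images are not available), the lecturer's expanded form agrees with the single-fraction quotient-rule form $b - 2x - y/(1+x)^2$ and with a central-difference derivative:

```python
# Numerical check that the lecturer's expanded derivative equals the
# single-fraction quotient-rule form, assuming f(x) = b*x - x**2 - x*y/(1+x)
# with y treated as a constant (as in the Jacobian computation).

def lecturer_form(x, y, b):
    return b - 2*x - y/(1+x) + x*y/(1+x)**2

def quotient_rule_form(x, y, b):
    return b - 2*x - y/(1+x)**2

def numeric_derivative(x, y, b, h=1e-6):
    f = lambda t: b*t - t**2 - t*y/(1+t)
    return (f(x+h) - f(x-h)) / (2*h)   # central difference

for x, y, b in [(0.5, 2.0, 1.0), (1.3, 0.7, 3.0), (2.0, 5.0, 0.2)]:
    assert abs(lecturer_form(x, y, b) - quotient_rule_form(x, y, b)) < 1e-12
    assert abs(lecturer_form(x, y, b) - numeric_derivative(x, y, b)) < 1e-5
```

Algebraically, $-\frac{y}{1+x}+\frac{xy}{(1+x)^2}=\frac{-y(1+x)+xy}{(1+x)^2}=-\frac{y}{(1+x)^2}$, which is why the two forms coincide.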
### What is it about the word cracker?

I think most people have already seen the video of King Shimmery Shabazz, or whatever his name is, shouting that he hates "crackers." That man has more hate in him than anyone I've ever seen before. But there's something that has me confused. What is it about the word cracker that's supposed to be offensive and intimidating? I mean, he's calling white people a thin, flaky wafer. Oooh, scary!! Seriously, though. I must be missing something because I just don't get how calling me a cracker is supposed to upset me. Does he have a particular cracker in mind? A Ritz, maybe? If so, that's not so bad. They're buttery and melt in your mouth. I especially like to eat them with a good cheese ball. Perhaps he means we're all Triscuits? I don't think I'd mind that since they're crunchy and come in some really great flavors like Roasted Garlic and go great with salsa. Now, I might get upset if Mr. Shabazz called me a saltine because they're really quite dry and bland and stick to the roof of my mouth if they're not accompanied with something else like cheese. So, I confess that I'm stumped. Can anyone explain to me why being called a cracker is supposed to be so terrible?

I resent the heck outta being called a 'cracker.' It's inappropriate and inaccurate. I shall henceforth be addressed as a 'Saltine-American'.

Chip: Kim, for what it's worth... from the Urban Dictionary: "cracker - Noun. Slang word used to refer to those of European ancestry. The word is thought to have either derived from the sound of a whip being cracked by slave owners, or because crackers are generally white in color."

Soozer: The only time up to a few years ago I've ever heard the term was in "Gone With The Wind". The poor whites were called "crackers" and Scarlett absolutely abhorred the term. But she never said why.
I assumed it was a Georgia thing. Seldom if ever heard hereabouts in Texas. Enlighten me someone. Please.

914: I think the way Shabazz spits all over the place spewing his hate is what's so offensive. I think I will label him a Graham cracker or wheat thin.

twolaneflash: Wrong, Chip. Your source, "Urban Dictionary", gives the history-rewritten PC definition the progressives and race-hustlers would have you believe. The word "Cracker" or "Georgia Cracker" is derived from the sound of the whip of the drivers of mule-drawn wagons. The agricultural products of the South, mainly cotton in the day, were transported by wagon trains to the Atlantic coast for shipping abroad - largely to the textile mills of England. These trains passed through populated areas, with the drivers often showing off their skill with their long whips to entertain the crowds, especially the ladies. "Cracker" had nothing to do with slavery or beating humans with whips. Ignoranamus.

pcole168: Can I be known as a Big Cheez-It, Shabazzy?

Limbaugh devoted an entire segment of today's show to the whole "cracker" controversy. He actually had a pretty good suggestion: We are told in our culture that it's okay for members of groups to use derogatory or inflammatory or offensive terms about themselves but that nobody else can. For example, it's okay for rappers and others in the black community to call each other the N-word, but nobody else can. If you do that they'll Imus you. But we are crackers, are we not? ... Therefore we can use the term, right? ... But they can't call us crackers. If they call us crackers it's offensive ... So ... how about a "cracker crackdown"? We raise high profile holy hell every time a non-white person uses the "c-word". Sounds like fun to me.
Heck, we could even boycott Keebler and Nabisco, followed by a high profile lawsuit asking for billions in emotional damages for all white people, kind of like our own tobacco settlement. What better way to illustrate absurdity than by being equally absurd? What do you all think?

Chip: twolaneflash, as I said in my comment... "for what it's worth". I did not drag out "Urban Dictionary" as definitive. As for you calling me an ignoramus, I think that was a little uncalled for.

I'm an onion ring.

914: See if I ever eat at 'Cracker Barrel' again. I didn't know I was being offended.

I prefer biscuits with my tea. Preferably SHORTBREAD. You can have crackers at your Tea Party, but I'll only settle for Walker's. No, and I don't mean Johnny. Biccies and scones if you are from down under.

Jay Guevara: Colleagues, reflect for a moment on the tragic plight of our aspiring race hustlers. They yearn for a derogatory term for Caucasians that will throw us into frenzies of rage, but ... they just can't find one. They hurl what they consider to be their vilest epithets at us, at most one or two people look up from their newspapers or laptops, roll their eyes, and go back to reading. That's it? No spluttering rage, no threats of violence, no purple faces, just ... mild disinterest. The least we could do, if we weren't so racist, would be to feign offense just to make race hustlers feel appreciated, relevant, and offensive, rather than ignored, laughed at, and politely patronized.

BluesHarper: I asked my son what's the b... He said it's about cracking the whip. I'm not saying he's right, but that is what he thinks. I always thought it was about being white, like a cracker - saltine or whatever. Being a whip cracker isn't so funny.
I'm not going to get bent over it though. I'd rather that we all like each other enough that we can call each other names and still laugh. Of course it doesn't feel good to be called something bad, but that's okay because I plan on calling him something bad later - all in good fun. I think it's more of an attitude thing than anything else. Because, if you call me sweetheart in the wrong tone of voice, I might take offense.

mpw280: This cracker wants to defend himself from the outright and open threat to his life by that POS and should assume the right to shoot on sight any New Black Panther Party member that shows up in my neighborhood, as their leader has declared war on all whites. I know I shoot better and am probably better armed, and I know I won't stop shooting till the NBPP asshole is on the ground, as he is a threat to me and mine. How is that for a response? mpw

G.: I'm with you Kim, what's the big deal... but call me a crigger or a wigger... OH HELL NO! :-)

P. Bunyan: What I want to know is when they're going to start bleeping out the word "cracker" from movies and t.v. shows like they do to that other word (for example like AMC did to Blazing Saddles). Oh wait, it doesn't work that way does it...

G.: Man, crigger IS a real slang word... had no idea. But according to a racist definition it fits Barack.

"Cracker"?? I was just getting used to "Honkie"!
(Below threshold) jim m: "I must be missing something because I just don't get how calling me a cracker is supposed to upset me. " You're missing the victim attitude. Once you have that in mind then everything becomes a racial slight. Firecracker, clearly a reference to burning white people alive. Wisecrack is obviously a contraction of wisecracker which is clearly meant as a slur on the intelligence of white people. You see? It's easy to be racially offended once you throw out common sense and eliminate a rational thought process. How else to explain the NAACP? Kim, you're absolutely righ... (Below threshold) Kim, you're absolutely right. There really is no insult for white people, that even comes close to the the hateful words for black people. Why is that? That sure is an interesting question, isn't it? I don't really know the answer either, to be honest with you. "There really is no insult ... (Below threshold) GarandFan: "There really is no insult for white people." For an 'insult' to take place, the person 'insulted' 1s has to take offense. What pisses Shibaz off is that he's got nothing. But then, that's the subtotal of his life. Just goes to show where hate gets you. Shitbaz = Zero At least Cinque promoted himself to "General Field Marshall". "There really is no insult ... (Below threshold) BlueNight: "There really is no insult for white people, that even comes close to the the hateful words for black people." Being called a redneck or hick gets on my nerves, especially since I've lived in a city all my life, and grew up next to a university. Sure, it can get on your ne... (Below threshold) Sure, it can get on your nerves. I'm just saying, it has nowhere near the offensive power of the n-word. Or even the offensive word for Chinese that rhymes with "sink". Black racists never made an... (Below threshold) rich K: Black racists never made any progress invoking ridicule on whites. Thats why anger,hostility and inducing guilt became their stock in trade. 
Show Me the Money! It's all so mature in polit... (Below threshold) Son Of The Godfather: It's all so mature in politics and race-baiting nowadays. Think I'll regress to my old stand-by and start referring to the bad guys as "doody heads". twolaneflash - "Ignor... (Below threshold) Marc: twolaneflash - "Ignoranamus." "Nice" closing salutaion. For a not so inclusive nitwit. The term "cracker" has been... (Below threshold) ridgerunner: The term "cracker" has been in use since the Revolution. It was used by the British and the Charleston elite to refer to Piedmont settlers of Scots-Irish descent. The origin may go all the way back to Scotland, where in the 18th century "cracker" meant a "boastful person." The origin has nothing to do with slavery. Let's see. Whites are "crac... (Below threshold) Let's see. Whites are "crackers." Blacks who act white are "Oreos." Mexicans are "beaners." No wonder America has an obesity problem... we really are obsessed with food. I'd continue, but it's time for breakfast. J. Edward A. Schuster: Jebus...... Am I the only o... (Below threshold) donabernathy: Jebus...... Am I the only one that see's, with the way Barry and pals is ruining the economy, We is all Toast. roflmao If someone calls me an Ofay... (Below threshold) Overly sensitive 1930s man: If someone calls me an Ofay White Devil again I'm going cry. He needs to learn how the w... (Below threshold) rileyb: He needs to learn how the word cracker came about and what it described. Florida cattlemen used big bullwhips and were extremely proud of how loud those whips could/can be snapped. This created a loud "cracking" noise and thus, the people that did this came to be described as Crackers. A cracker, in other words, except to brain deads, is not derogatory to anyone but the creature misusing the word. Originally "Georgia cracker... (Below threshold) mojo: Originally "Georgia cracker", I believe. Goes alonng with "good ol' boys from LSU" and the like. 
FWIW: I never understood "honkey", it's just dumb. According to Kim's logic, b... (Below threshold) Wayne: According to Kim's logic, blacks shouldn't be offended by being called a nigger since the original meaning of the word simply meant someone from Niger. Taking one meaning of a word that is clearly is not what the speaker meant is silly. A black calling a white "cracker" is clearly not the same as calling the white person a thin wafer, cattleman or mule team driver. Pretending otherwise is asinine. There really is... (Below threshold) Jay Guevara: There really is no insult for white people, that even comes close to the the hateful words for black people. I'm just saying, it has nowhere near the offensive power of the n-word. You're half right. There are lots of hateful words (i.e., words intended to convey hatred) for white people. You're incorrect on that score. But you're correct in saying that none has any offensive power. The reason does not lie with the words, but rather with our reaction to them. We do not give them any power. We laugh them off. Which is what blacks should do too, to defang the words. By reacting strongly to them they give others power over them. According to Kim's logic, blacks shouldn't be offended by being called a nigger since the original meaning of the word simply meant someone from Niger. Close. Recast your sentence out of the passive voice. You imply that blacks are offended by the word, that they have no choice in the matter, it's like gagging on a spoon. Yet they call each other "nigger" all the time, so it's not the word, it's the context, and the conscious decision on whether or not to take offense in any given situation. Would the epithet have any power if blacks insisted on being referred to as "niggers?" Poof! Epithet gone. I'm old enough to remember when references to Africa in connection with blacks were considered grossly impolite. Now? Everything is African this, that, or the other. So it can be done. 
Obama is "half-cracker"... (Below threshold) Neo: Obama is "half-cracker" How come I never hear raceb... (Below threshold) 914: How come I never hear racebaiter $harpton or Rev hustler Jack$on using such flattering derogatory term's as 'cracker'? Or 'Honkie'? Because I never listen to them that's why. Jay I agree that it is not ... (Below threshold) Wayne: Jay I agree that it is not always what a word definition or origination is that makes it offensive but how it is deliver and\or additional implied meanings place upon it by the speaker. A person can state "he is from Harvard" and make it sound offensive. He can say it in such a way to make the person sound like a snob, elitist, etc. Many have done this to many words and phrases including Rednecks, being from the Country\big city, etc. I also agree that much of the power of words is given to it by the listeners. However that doesn't negate the fact of the intent by the user to use them in an offensive manner. The use of cracker, Redneck, hick, and nigger are often use in hateful and angry speech. Just because "some" do not allow themselves to be greatly offended when they are used in a hateful way, does not mean that those words being used in such matter is right and should not be denounce. Racist hate speech should be denounced regardless of it being done by whites, blacks, Hispanics, etc. I have never been, and prob... (Below threshold) Big Mo: I have never been, and probably never will be, offended if anyone maliciously called me "cracker" or "honkey." Those words have absolutely no power. I have been called a "white motherf----r" before by black teenagers, and that DID make me angry. But "cracker?" Pass the cheese. I usually laugh when "honkey" is used during one of my favorite James Bond films, Live and let Die, Roger Moore's first turn as 007. Mr. Big says of Bond, "Take this honkey out and waste him." Somehow, the British Roger Moore just doesn't strike me as a "honkey." The use of crac... 
(Below threshold) Jay Guevara: The use of cracker, Redneck, hick, and nigger are often use in hateful and angry speech. Just because "some" do not allow themselves to be greatly offended when they are used in a hateful way, does not mean that those words being used in such matter is right and should not be denounce. Racist hate speech should be denounced regardless of it being done by whites, blacks, Hispanics, etc. I understand your point, and semi-agree with it. But consider the point of denouncing such speech - to eradicate it, right? But it doesn't. To the contrary, it gives the speech power. My approach deflates the power of the words. That's the way to eradicate it: by making it pointless. Consider a less emotive issue, say, baldness. It's no fun to crack wise about baldness to a guy who's OK with it. Doing so is a waste of time, because the guy just laughs it off. Now if a guy's going (or has gone) bald, and is acutely sensitive about it... that's when it's worthwhile telling bald jokes, to get the reaction. That's my point. No response to something = no point in saying it. From now on I'm referring t... (Below threshold) 914: From now on I'm referring to every john I know as 'cracker jack'. JayI think we are so... (Below threshold) Wayne: Jay I think we are somewhere near being on the same page with this. Your bald example is good. However if someone continues making bald jokes every couple minutes, it gets tiring. I have seen this happen with different groups including black and white groups (softball teams, families, countries, etc) get together. People from both groups will tell jokes about both groups to signify that everyone is past that but sometime someone will go on and on with the jokes. Sometimes out of insecurity and sometimes out of malice. Regardless there is a point where it becomes inappropriate. Wayne, we are indeed on ess... (Below threshold) Jay Guevara: Wayne, we are indeed on essentially the same page. 
Someone running the jokes into the ground does indeed become tiresome, but the person who looks bad is the guy telling the jokes. Eventually someone takes him aside and tells him to put a lid on it. No drama, and problem solved. The one I had heard was tha... (Below threshold) Ryan: The one I had heard was that it had to do with cracking corn. You're absolutely right, Ki... (Below threshold) Remarkulus: You're absolutely right, Kim. That's why it is so funny when whites pretend to be offended by the term. Jay Guevara, you almost got... (Below threshold) Remarkulus: Jay Guevara, you almost got it right but the reason whites can only feign offense at derogatory words directed to them by other races, particularly blacks, is not because whites do not give the words power. It's because whites have historically never been in a subordinate position to blacks in this country. The n-word is not powerful and offensive because blacks won't let go of the term, it is because 150 years later they remain socially, economically, and politically subordinate to whites. In the backs of most Americans' minds- and the front of many- blacks will always be linked to slavery. In other words, the power of the words are directly tied to the power behind the speaker. "Uppity" is no longer overtly used to describe a black man who dares push his way into the white power structure. "Arrogant" has replaced it. They mean the same thing. And you can bet your bottom dollar that when a successful black man is called arrogant, whites and blacks know what is being said. "I think most people have a... (Below threshold) demvoter: "I think most people have already seen the video" Well you'd be wrong. Most people have no idea who this moron or the New Black Panthers are. They are a tiny racist fringe group who have no standing anywhere except FOX News. Members of the NBP have appeared on FOX almost 70 times but virtually nowhere else in the media. 
FOX could try to get any number of real leaders from the African American community to come on but instead promote these nobodies in order to gin up racial animosity. RE: post #7, I'm wit... (Below threshold) John: RE: post #7, I'm with you Michael I think us Crackers should take offense at the use of the term cracker by anyone other than whites. I would take it one step further, as soon as it becomes politically incorrect to use the term and another term is subsituted designed to be non offensive I think we should take offense at that term too. It's a never ending game of "you can say that because it offends me" If you think racism is dead... (Below threshold) Ron Cantrell: Kim -- Could've saved yours... (Below threshold) beejeez: Kim -- Could've saved yourself a lot of words by just saying: "Nyah, nyah, stupid coloreds! Can't even think of a slur that hurts!" Jay Guevara, yo... (Below threshold) Jay Guevara: Jay Guevara, you almost got it right but the reason whites can only feign offense at derogatory words directed to them by other races, particularly blacks, is not because whites do not give the words power. It's because whites have historically never been in a subordinate position to blacks in this country. Been out of the country for a while? You might want to peruse the news reports from Nov. 2008. No, the reason whites do not take offense at derogatory words is that they are not insecure, and thus do not give the words power. I've lived in Hawaii (even went to Punahou - small world, eh, Barry?) where whites are in a distinct minority and "haole" is a term used to disparage whites, much as "nigger" is to blacks on the mainland. Yet whites never get upset about it. We laughed it off. So spare us the Marxist oppressor/oppressed analysis. It doesn't wash. It's the very same situation with the bald jokes I mentioned upthread. Some guys are extremely sensitive about going bald, others are not. 
It's not that the latter have never been in a subordinate position to the hirsute. It's that they are comfortable in and of themselves, whereas the former are not. reason does not li... (Below threshold) reason does not lie with the words, but rather with our reaction to them. We do not give them any power. We laugh them off. Which is what blacks should do too, to defang the words. By reacting strongly to them they give others power over them. I think you're dodging the point. "Cracker" isn't as hurtful as "n***er" because "Cracker" doesn't carry with it the same history - whereas n***ger implies a group of people *should be* bought, sold, treated and killed like animals It's quite easy to say what others should and shouldn't be offended by. But I don't think that suits reality. People get offended by offensive things because they're intended to hurt them. So I can sit here and say that Jane's mother shouldn't be insulted when I call her a c**ksucking b**ch. But is she *wrong* to be offended? After all, it's just words right? Re # 51 Ron, so the ... (Below threshold) John: Re # 51 Ron, so the comments here are racist? Really how so, all I see is people commenting on words that are being tossed around. Do you think that the idiot from the NBPP thinks that Cracker is term of endearment or do you suppose he intends to offend? Before the Braves moved the... (Below threshold) Davis: Before the Braves moved there, Atlanta's minor league team was called the Crackers. It was originally a Georgia, north Florida term. Hm, gee, let's think about ... (Below threshold) Fred: The N-word is associated with: lynchings, attack dogs, fire hoses, Jim Crow laws, miscegnation laws, and institutionalized racism. There is a LOT to be offended about there. To suggest that the N-word and historical racism aimed at blacks in America is something people should "get over," or that it is a "victim mentality" that carries on the memories of such things is asinine and, frankly, stupid. 
Look, if your great-grandfather had been hanged from a tree while being called the N-word - by white people - don't you think you and your family would have a particular emotional connection to the word? If I came to your house and said, "I think it's funny your great-grandfather was murdered by a mob," would you have no less of an offended reaction? As to why "cracker" has no power over white people? Well, think for ten seconds - has anyone in your family been hanged from a tree by a black mob? Has anyone in your family been enslaved by a black person? Has anyone in your family been sicced by attack dogs owned by 100% black police departments? Have black firemen from blacks-only fire departments turned fire hoses on you? What's that? No? Never? Not your grandfather or great-grandfather? Didn't think so. Seriously, is history so hard for you? You people tell black people to "get over" slavery, "get over" the victim mentality - it was a hundred years ago! White people have NEVER in all of American history gone through the terror that black people, historically, have - bear in mind the first terrorist organization in America - perhaps the world - was the KKK. An all-white, anti-black terror organization. So, when you tell a young person, "YOU were never enslaved. Stop being a victim!" - you're being insensitive to the fact that their grandparents marched with MLK Jr. and were attacked by police and had fire hoses turned on them. Their great-grandparents were hanged from trees in the South in the thirties. Their great-great-grandfathers toiled as slaves in the cotton fields. Seriously - think for ten seconds how you would feel if I insulted your grandfather - or your great-grandfather - and then, GET OVER IT! It was a hundred years ago! Just because someone is young and didn't personally experience the intense racism of the past two hundred years, it doesn't mean that memory hasn't been passed down to them. Seriously, how clueless can you people be? 
How inhuman of you to lack basic human sympathy. tl;dr version:The ... (Below threshold) Fred: tl;dr version: The word "cracker" has no power over white people because it has no historical background to it; nothing whatsoever in white people's history compares to the racism and hatred we've poured onto black people for the past two hundred years. We have literally nothing to complain about in the race-chasing department. "Oh, my grandfather was Irish." STFU. He wasn't lynched. This isn't hard to grasp, people. Seriously, I cannot get ove... (Below threshold) Fred: People: L-Y-N-C-H-I-N-G. White people used to just hang black people from trees for fun. Do you not understand this? The N-word is associated with terror, mob violence and fire hoses - police officers who can arrest you for anything they feel like. The word cracker? It's associated with............................. What in white people's history even begins to compare with L-Y-N-C-H-I-N-G?!?!?!?!?!?!?!?! Seriously, just think for ten seconds - an angry mob comes into your house, ties you up and hangs you from a tree until dead - and you want young people, who inherit that memory, and inherit that terror, to just GET OVER it?!?!?!?!?!?!?!?! You are all sick, diseased, ignorant people. All of you. Bluenight: you grew up NEX... (Below threshold) The Fool: Bluenight: you grew up NEXT to a university? Well shucks and go-oolleeee! You're obviously too smart to be a cracker then! ## Credits Section Editor: Maggie Whitton Editors: Jay Tea, Lorie Byrd, Kim Priestap, DJ Drummond, Michael Laprarie, Baron Von Ottomatic, Shawn Mallow, Rick, Dan Karipides, Michael Avitablile, Charlie Quidnunc, Steve Schippert Emeritus: Paul, Mary Katherine Ham, Jim Addison, Alexander K. McClure, Cassy Fiano, Bill Jempty, John Stansbury, Rob Port In Memorium: HughS
# Fourth-order moveout

Series: Investigations in Geophysics
Author: Öz Yilmaz
DOI: http://dx.doi.org/10.1190/1.9781560801580
ISBN 978-1-56080-094-1

A review of how the moveout equation (3) can be extended to attain higher accuracy at far offsets is given in Section C.1. At first, it seems that including the terms up to fourth order in equation (3) should achieve this objective:

${\displaystyle t^{2}=C_{0}+C_{1}x^{2}+C_{2}x^{4}+C_{3}x^{6}+\cdots ,}$ (3)

${\displaystyle t^{2}=t_{0}^{2}+{\frac {x^{2}}{v_{rms}^{2}}}+C_{2}x^{4}.}$ (5a)

Nevertheless, computing a velocity spectrum with this equation requires scanning over two parameters, vrms and C2, which makes equation (5a) cumbersome to use for velocity analysis. A practical scheme to compute a velocity spectrum using equation (5a) is suggested below:

1. Drop the fourth-order term to get the small-spread hyperbolic equation (4b). Compute the conventional velocity spectrum (Velocity analysis) by varying vrms in equation (4b), and pick an initial velocity function vrms(t0).
2. Use this picked velocity function in equation (5a) to compute a velocity spectrum by varying the parameter C2, and pick a function C2(t0).
3. Use the picked function C2(t0) in equation (5a) to recompute the velocity spectrum by varying vrms. Finally, pick an updated velocity function vrms(t0) from this velocity spectrum.

${\displaystyle t^{2}=t_{0}^{2}+{\frac {x^{2}}{v_{rms}^{2}}}.}$ (4b)

Castle (1994)[1] shows that a time-shifted hyperbola of the form

${\displaystyle t=t_{0}\left(1-{\frac {1}{S}}\right)+{\sqrt {\left({\frac {t_{0}}{S}}\right)^{2}+{\frac {x^{2}}{Sv_{rms}^{2}}}}}}$ (5b)

is an exact equivalent of the fourth-order moveout equation (5a). Here, S is a constant (Section C.1). For S = 1, equation (5b) reduces to the conventional small-spread moveout equation (4b). As with the fourth-order moveout equation (5a), the time-shifted hyperbolic equation (5b) can, in principle, be used to conduct velocity analysis of CMP gathers.

1.
Set S = 1 in equation (5b) to get equation (4b). Compute the velocity spectrum by varying vrms in equation (4b), and pick an initial velocity function vrms(t0). 2. Use this picked velocity function in equation (5b) and compute a velocity spectrum by varying the parameter S. Pick a function S(t0), and 3. use it in equation (5b) to recompute the velocity spectrum by varying vrms. Finally, pick an updated velocity function vrms(t0) from this velocity spectrum. [2] offers an alternative moveout equation to achieve higher-order accuracy at far offsets: ${\displaystyle t=(t_{0}-t_{p})+{\sqrt {t_{p}^{2}+{\frac {x^{2}}{v_{s}^{2}}}}},}$ (5c) where t0 is the two-way zero-offset time, tp is related to the time at which the asymptotes of the hyperbolic traveltime trajectory converge (Section C.1), and vs is the reference velocity assigned to the layer below the recording surface (not the near-surface layer). When tp = t0, equation (5c) reduces to the small-spread hyperbolic equation (4b). [3] demonstrate the use of equation (5c) to obtain a stacked section with a higher stack power compared to the conventional stack derived from the small-spread moveout equation (4b). To use equation (5c) for velocity analysis, choose a fixed value of reference velocity vs. Then, for each output time t0 and for each offset x, apply time shift tp to traces in the CMP gather and compute the input time t for the offset under consideration. Compute a velocity spectrum for a range of tp values. Finally, pick a function tp(t0) from the velocity spectrum.
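The reduction used in step 1 above (equation (5b) collapsing to the small-spread hyperbola (4b) when S = 1) can be checked numerically. This is a minimal sketch added for illustration, not part of the original text; the function names and the sample values of t0, vrms, and the offsets are assumptions:

```python
import math

def t_hyperbolic(x, t0, v_rms):
    """Small-spread hyperbolic moveout, equation (4b): t^2 = t0^2 + x^2 / v^2."""
    return math.sqrt(t0**2 + x**2 / v_rms**2)

def t_shifted_hyperbola(x, t0, v_rms, S):
    """Castle's time-shifted hyperbola, equation (5b)."""
    return t0 * (1.0 - 1.0 / S) + math.sqrt((t0 / S)**2 + x**2 / (S * v_rms**2))

t0, v_rms = 1.0, 2000.0                  # zero-offset time (s), rms velocity (m/s)
for x in (0.0, 500.0, 1000.0, 2000.0):   # offsets (m)
    # S = 1 must reproduce the plain hyperbola at every offset.
    assert abs(t_shifted_hyperbola(x, t0, v_rms, 1.0) - t_hyperbolic(x, t0, v_rms)) < 1e-12
```

For S > 1 the two curves agree at zero offset but the time-shifted hyperbola predicts smaller traveltimes at far offsets (its asymptotic slope is 1/(sqrt(S) vrms) rather than 1/vrms), which is what gives it the extra degree of freedom scanned in steps 2 and 3.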
Suggested papers for Tue, Nov 13, 2018, Thu, Nov 15, 2018, and Fri, Nov 16, 2018 at 11 am

14 Nov 2018

### Constraints on Decaying Dark Matter from the Isotropic Gamma-Ray Background

If the dark matter is unstable, the decay of these particles throughout the universe and in the halo of the Milky Way could contribute significantly to the isotropic gamma-ray background (IGRB) as measured by Fermi. In this article, we calculate the high-latitude gamma-ray flux resulting from dark matter decay for a wide range of channels and masses, including all contributions from inverse Compton scattering and accounting for the production and full evolution of cosmological electromagnetic ...

14 Nov 2018

### Prestige Bias on Time Allocation Committees?

(No abstract for this journal; the article commences:) Fairness is a key issue in the careers of astronomers. I examine here the anecdotal suggestion that "you're more likely to get time if you're on the TAC", using public and published data for a large international telescope facility...

9 Nov 2018

### The hidden giant: discovery of an enormous Galactic dwarf satellite in Gaia DR2

We report the discovery of a Milky-Way satellite in the constellation of Antlia. The Antlia 2 dwarf galaxy is located behind the Galactic disc at a latitude of $b\sim 11^{\circ}$ and spans 1.26 degrees, which corresponds to $\sim2.9$ kpc at its distance of 130 kpc. While similar in extent to the Large Magellanic Cloud, Antlia~2 is orders of magnitude fainter with $M_V=-8.5$ mag, making it by far the lowest surface brightness system known (at $32.3$ mag/arcsec$^2$), $\sim100$ times more diffuse...
Volunteers: Abhimat

7 Nov 2018

### Thermophysical Modeling of Asteroid Surfaces using Ellipsoid Shape Models

Thermophysical Models (TPMs), which have proven to be a powerful tool in the interpretation of the infrared emission of asteroid surfaces, typically make use of a priori obtained shape models and spin axes for use as input boundary conditions. We test then employ a TPM approach - under an assumption of an ellipsoidal shape - that exploits the combination of thermal multi-wavelength observations obtained at pre- and post-opposition. Thermal infrared data, when available, at these observing circ...

1 Nov 2018

### Two new free-floating planet candidates from microlensing

Planet formation theories predict the existence of free-floating planets, ejected from their parent systems. Although they emit little or no light, they can be detected during gravitational microlensing events. Microlensing events caused by rogue planets are characterized by very short timescales $t_{\rm E}$ (typically below two days) and small angular Einstein radii $θ_{\rm E}$ (up to several uas). Here we present the discovery and characterization of two free-floating planet candidates iden...

We present a $\approx 11.5$ year adaptive optics (AO) study of stellar variability and search for eclipsing binaries in the central $\sim 0.4$ pc ($\sim 10''$) of the Milky Way nuclear star cluster. We measure the photometry of 563 stars using the Keck II NIRC2 imager ($K'$-band, $λ_0 = 2.124 \text{ } μ\text{m}$). We achieve a photometric uncertainty floor of $Δ m_{K'} \sim 0.03$ ($\approx 3\%$), comparable to the highest precision achieved in other AO studies. Approximately...
# Can acceleration be both the “rate of increase of velocity” and the “rate of increase of speed” in Physics?

A Dictionary of Physics (Oxford University Press) defines acceleration as:

The rate of increase of speed or velocity

However, from reading many other definitions it seems to me that acceleration in Physics is generally held to refer to the rate of increase of velocity (rather than speed), and that acceleration is generally held to be a vector quantity rather than a scalar quantity. How does this relate to the definition above, which allows acceleration to be an increase in speed? What are the contexts in which acceleration is treated as an increase in speed? Do these tend to be more simplistic, less real-world treatments of acceleration, or do they belong to another field such as Mathematics where the treatment may be more abstract and direction can be ignored? I note that the SI unit for acceleration is metre per second squared (m/s2). Is it significant that this unit does not specify direction? Does this allow for acceleration to be a scalar or vector quantity depending on whether or not a direction is specified?

• Closely related question here. You're right that the two notions are not consistent. It's all semantics. Laymen usually use the first definition and physicists usually use the second. – knzhou Jul 31 '18 at 10:51
• There's no fundamental difference, people are just using a word in different ways. It's like debating over whether a sunset is yellowish-red or reddish-yellow. – knzhou Jul 31 '18 at 10:51
• Also, you really can't say much by looking at the units alone -- what would "specifying a direction" by units look like? Would the units be "meters times direction per second squared"? Units just don't give that much information about the quantity that carries them. – knzhou Jul 31 '18 at 10:54
• Thanks for replying knzhou.
The issue I have is that the writers of dictionary definitions such as OUP's A Dictionary of Physics are generally very careful about the terminology they use. – PrettyHands Jul 31 '18 at 10:54 • @PrettyHands It's a bit ambiguous. I suppose you could say that scalar acceleration is "the rate of change in speed", and vector acceleration is "the rate of change in velocity". However, one could also sensibly define scalar acceleration to mean "the magnitude of the vector acceleration", which is different. The point is, there is no "official" dictionary out there that everyone must adhere to. People use words in different ways. In practice, this is never going to matter as long as you pay attention to context. – knzhou Jul 31 '18 at 12:01 The distinction is important because the Laws of Physics are usually written as equations in terms of vector quantities wherever appropriate : for example $\mathbf{F}=m\mathbf{a}$. The scalar equation $F=ma$ ignores changes of direction when speed is constant, as in uniform circular motion. Vector equations are often more compact and enable a consistent sign convention to be applied. Equations written in terms of scalar magnitudes and angular directions are more complex and cause confusion when the sign convention changes to avoid negative values.
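The uniform-circular-motion case mentioned in the answer (constant speed, nonzero vector acceleration) can be verified with a short numerical check. This sketch is not part of the original exchange; the radius and angular frequency are arbitrary assumptions:

```python
import numpy as np

# Uniform circular motion: position r(t) = R * (cos wt, sin wt)
R, w = 2.0, 3.0                        # radius (m) and angular frequency (rad/s), chosen arbitrarily
t = np.linspace(0.0, 2.0 * np.pi / w, 2001)

# Analytic velocity and acceleration components
vx, vy = -R * w * np.sin(w * t), R * w * np.cos(w * t)
ax, ay = -R * w**2 * np.cos(w * t), -R * w**2 * np.sin(w * t)

speed = np.hypot(vx, vy)               # |v| = R*w, constant
accel_mag = np.hypot(ax, ay)           # |a| = R*w**2, constant and nonzero
d_speed_dt = np.gradient(speed, t)     # rate of change of *speed*

# "Rate of increase of speed" is zero, yet the vector acceleration is not:
assert np.allclose(d_speed_dt, 0.0, atol=1e-9)
assert np.allclose(accel_mag, R * w**2)
```

So a body can have zero "rate of increase of speed" while accelerating the whole time, which is exactly why the vector definition is the one used in $\mathbf{F}=m\mathbf{a}$.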
## Discrete Multi-Valued Particle Swarm Optimization

Discrete optimization is a difficult task common to many different areas in modern research. This type of optimization refers to problems where solution elements can assume one of several discrete values. The most basic form of discrete optimization is binary optimization, where all solution elements can be either 0 or 1, while the more general form is problems that have solution elements which can assume $n$ different unordered values, where $n$ could be any integer greater than 1. While Genetic Algorithms (GA) are inherently able to handle these problems, there has been no adaptation of Particle Swarm Optimization able to solve the general case.

Published in: Proceedings of IEEE Swarm Intelligence Symposium, 103 - 110
Presented at: IEEE Swarm Intelligence Symposium, Indianapolis, Indiana, USA, May 12-14
Year: 2006
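For context, the binary special case the abstract mentions has a classic PSO treatment (Kennedy and Eberhart's binary PSO, 1997), in which velocities are passed through a sigmoid to give bit probabilities. The sketch below illustrates that binary baseline only; it is not the multi-valued method this paper proposes, and the swarm size, coefficients, and OneMax objective are assumptions:

```python
import numpy as np

def binary_pso(fitness, n_bits, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal binary PSO: velocities map to P(bit = 1) via a sigmoid."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_bits))      # random initial bit strings
    v = np.zeros((n_particles, n_bits))
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g, g_f = pbest[np.argmax(pbest_f)].copy(), pbest_f.max()
    history = [g_f]                                          # global best per iteration
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        prob = 1.0 / (1.0 + np.exp(-v))                      # sigmoid of velocity
        x = (rng.random(x.shape) < prob).astype(int)         # resample bits
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.max() > g_f:
            g_f = pbest_f.max()
            g = pbest[np.argmax(pbest_f)].copy()
        history.append(g_f)
    return g, g_f, history

# OneMax: maximize the number of 1-bits (a standard binary benchmark).
best, best_f, hist = binary_pso(lambda bits: int(bits.sum()), n_bits=16)
assert all(a <= b for a, b in zip(hist, hist[1:]))   # the global best never degrades
```

Extending this to $n$ unordered values per element is exactly the gap the abstract identifies: the sigmoid trick yields one probability per bit, and it is not obvious how to generalize it to a categorical distribution over $n$ symbols.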
Let $f: \mathbb{C} \rightarrow \mathbb{C}$ be an entire function such that $\lim _{z \rightarrow 0}\left|f\left(\frac{1}{z}\right)\right|=\infty$. Then which of the following statements is true?

1. $f$ is constant
2. $f$ can have infinitely many zeros
3. $f$ can have at most finitely many zeros
4. $f$ is necessarily nowhere vanishing
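A brief solution sketch, added here for clarity and not part of the original problem statement:

```latex
% Substituting w = 1/z, the hypothesis reads
%   \lim_{w \to \infty} |f(w)| = \infty,
% so f has a pole at infinity (Casorati--Weierstrass rules out an
% essential singularity there). An entire function with a pole at
% infinity is a nonconstant polynomial, and a nonconstant polynomial
% has at most finitely many zeros:
\[
  \lim_{w \to \infty} |f(w)| = \infty
  \;\Longrightarrow\; f \text{ is a nonconstant polynomial}
  \;\Longrightarrow\; f \text{ has at most finitely many zeros.}
\]
% Hence option 3 holds. Option 1 fails (f is nonconstant), option 2
% fails (zeros of a nonzero analytic function cannot accumulate, and a
% polynomial has only finitely many), and option 4 fails since
% f(w) = w satisfies the hypothesis yet vanishes at 0.
```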
Published on Saturday, 07 January 2017

kramdown 1.13.2 released

This release fixes some minor issues - updating is recommended.

Changes

• 3 bug fixes:
  • Fix footnote link spacing to use non-breaking space (pull request #399 by Martyn Chamberlin)
  • Show warning for unreferenced footnote definitions (fixes #400 reported by Kyle Barbour)
  • Fix test cases with respect to Ruby 2.4 (fixes #401 reported by Connor Shea)

Published on Friday, 25 November 2016

kramdown 1.13.1 released

This release fixes the GFM header ID generation for more cases; updating is very recommended.

Changes

• 1 bug fix:
  • Fix GFM header ID generation when code spans, math elements, entities, typographic symbols or smart quotes are used (fixes #391 reported by Nick Fagerlund)

Published on Sunday, 20 November 2016

kramdown 1.13.0 released

The biggest change in this release is the introduction of a converter for man pages. Although two solutions already exist (ronn and kramdown-man), neither is completely satisfactory:

• Ronn doesn't use standard Markdown syntax for all elements.
• kramdown-man only converts a subset of the available element types.

The new man page converter uses standard kramdown syntax and supports nearly all element types, including tables.

This release also brings some enhancements for the GFM parser. One thing to note is that the header ID generation is now more compatible with GFM, which also means that some IDs will be different - so check the documents on which you use the GFM parser, especially when you are using Jekyll or Github Pages.

Organizational-wise, issues and pull requests on Github that pertain to feature requests have been closed and are now tracked through a dedicated kramdown project on Github.
Changes
• 4 minor changes:
• Add new converter for man pages
• Header ID generation for the GFM parser is now more compatible with GFM (fixes #267, requested by chadpowers)
• Update the MathJax math engine to allow formatting the preview as code / pre > code (pull request #372 by Florian Klampfer)
• Allow tabs in table separator lines (pull request #370 by Shuanglei Tao)
• 2 bug fixes:
• Compactly nested lists are now handled correctly after fixing a bug in indentation detection (fixes #368, reported by Christopher Brown)
• GFM parser: allow indenting the delimiting lines of fenced code blocks for better GFM compatibility (pull request #369 by Shuanglei Tao)
• 2 other fixes and enhancements:
• Added information on how to run tests to README.md (fixes #377, reported by Aron Griffis)
• Added information about how to use KaTeX with the MathJax math engine (fixes #292, reported by Adrian Sieber; information by Dato Simó)

Published on Monday, 15 August 2016
kramdown 1.12.0 released

This release features two enhancements for definition lists:

1. IALs can now be applied to definition terms:

    {:.classy}
    term
    : and its definition

2. IDs for definition terms can now be created automatically (similar to header IDs) and optionally assigned a prefix:

    {:auto_ids}
    term1
    : definition

    term2
    : definition
    ^

    {:auto_ids-prefix}
    term1
    : definition

    term2
    : definition

Furthermore, compatibility of the GFM parser has been improved with regard to lists/blockquotes/code blocks that are used directly after a paragraph (i.e. without a blank line).
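The GFM compatibility case mentioned for the 1.12.0 release (a block element starting directly after a paragraph, with no blank line in between) looks like this in source form; the sample text is illustrative:

```
A paragraph of normal text
* a list that starts on the very next line,
* with no blank line in between
```

According to the release note, the GFM parser now handles this construct more like GFM itself, which starts a new list here rather than treating the lines as part of the paragraph.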
Changes
• 4 minor changes:
• Allow using an IAL for definition terms (<dt>) as is already possible with definitions themselves (<dd>)
• Added automatic generation of IDs (with optional prefix) for terms of definition lists (fixes #355, requested by Greg Wilson)
• Removed obfuscation for e-mail links (fixes #343, requested by Anton Tsyganenko)
• New option 'gfm_quirks' for enabling/disabling parsing differences of the GFM parser with respect to the kramdown parser
• 4 bug fixes:
• Added support for the HTML5 element <main> (fixes #334, reported by Jean-Michel Lacroix)
• Fixed math element output for the HTML converter when no math engine is set (fixes #342, reported by Adrian Sampson)
• Fixed a problem when using a custom HTML formatter for the syntax highlighter rouge (fixes #356, patch by Alexey Vasiliev)
• Better compatibility with GFM when lists/blockquotes/code blocks are used directly after a paragraph (fixes #336 (reported by Shuanglei Tao) and #359 (reported by Matti Schneider) via patch #358 by Shuanglei Tao)
• 3 other fixes and enhancements:
• Added some more examples for how list indentation works (fixes #353, requested by Robbert Brak)
• Using RbConfig instead of the deprecated Config for determining the data directory (fixes #345, patch by Cédric Boutillier)
• JRuby is now also tested via TravisCI (fixes #363, patch by Shuanglei Tao)

Published on Sunday, 01 May 2016
kramdown 1.11.1 released

This release fixes an emphasis parsing regression introduced in the last version.

Changes
• 1 bug fix:
• Fixed emphasis parsing regression (fixes #333, reported by Marcus Stollsteimer)

Published on Sunday, 01 May 2016
kramdown 1.11.0 released

This release fixes some bugs and includes one minor change with regard to HTML syntax highlighting.
Changes
• 1 minor change:
• The syntax highlighting language is now always included in the output as a class name even if a syntax highlighter is used (fixes #328, requested by SLaks)
• 3 bug fixes:
• Fixed the GFM fenced code block parser to correctly split a provided highlighter name into language name and options parts
• Fixed problem with underscores being processed even if inside a word (fixes #323, reported by Haruki Kirigaya)
• Fixed the HTML/XML parser to correctly (case-sensitively) parse XML (fixes #310, reported by cabo)
• 2 other fixes:
• Updated copyright year (fixes #331, reported by Oscar Björkman)
• Updated supported Ruby version on installation page (reported by cabo)

Published on Wednesday, 02 March 2016
kramdown 1.10.0 released

This release brings the usual bug fixes but also support for strikethrough syntax in the GFM parser, as well as some enhancements regarding the specification of language names for syntax highlighting purposes.

Changes
• 4 minor changes:
• Support for the math engine MathJax-Node was updated to use the new mathjax-node package (fixes #313, pull request by Tom Thorogood)
• URL query parameters can now be appended to language names specified in fenced code blocks if the syntax highlighting engine accepts them (fixes #234)
• Added strikethrough syntax to the GFM parser (fixes #184 and #307; initial pull request by Diego Galeota, updated by Parker Moore)
• Allow almost all characters in class names that are defined via the special syntax (fixes #318, requested by cabo)
• 4 bug fixes:
• Fixed a problem where Kramdown::Document.new would only accept the symbol :input but not the string 'input' as a valid key (fixes #312, pull request by Sun Yaozhu)
• Fixed inconsistent behavior: empty link text is now also allowed for normal links, not just images (fixes #305, reported by cabo)
• The HTML5 <mark> element is now recognized as a span-level element (fixes #298, reported by Niclas Darville)
• Fixed problem where e-mail autolinks containing an
underscore character were not correctly recognized (fixes #293, reported by erikse)
• 3 other fixes:
• Fixed missing package update statement for Travis (by Parker Moore)
• Added some more documentation regarding MathJax (fixes #296, pull request by Christopher Jefferson)
• Fixed bad link in API documentation (fixes #315, reported by Tom MacWright)

Published on Thursday, 01 October 2015
kramdown 1.9.0 released

This release contains some minor updates and bug fixes.

Changes
• 3 minor changes:
• The Rouge syntax highlighter can now be enabled/disabled for spans and/or blocks, and options can now be set for both spans and blocks as well as only for spans or only for blocks (fixes #286, requested by Raphael R.)
• Setting the 'footnote_backlink' option to an empty string now completely suppresses footnote backlinks (fixes #270, requested by Kyle Barbour)
• New converter HashAST for creating a hash from the internal tree structure (fixes #275, pull request by Hector Correa)
• 1 bug fix:
• When using the 'hard_wrap' option for the GFM parser, line numbers were lost (fixes #274, pull request by Marek Tuchowski)

Published on Saturday, 04 July 2015
kramdown 1.8.0 released

This release contains only some minor updates and bug fixes.
Changes
• 4 minor changes:
• The LaTeX converter now uses \texttt instead of \tt for code spans (fixes #257, reported by richard101696)
• New option footnote_backlink for changing the backlink of footnotes in the HTML converter (fixes #247, requested by Benjamin Esham)
• A quote directly followed by an ellipsis is now converted into an opening quotation mark (fixes #253, requested by Michael Franzl)
• Removed the warning for self-closing HTML elements that are not self-closed (fixes #262, requested by Gregory Pakosz)
• 3 bug fixes:
• Fixed #251: the special character sequence \ now works correctly when used in footnotes or headers that appear in the table of contents (reported by Peter Kehl)
• Fixed #254: kramdown crashed on encountering a table with multiple consecutive separator lines (reported by Christian Kruse)
• Fixed #256: certain footnote definitions and code blocks led to crashes or unneeded backtracking in the regular expression engine; fixed by using atomic grouping (reported by Ali Ok)

Published on Monday, 27 April 2015
kramdown 1.7.0 released

This release brings, among other things, support for the 'minted' syntax highlighter for LaTeX and a new math engine based on MathJax-Node that outputs MathML.

Changes
• 4 minor changes:
• The syntax highlighter 'minted' for the LaTeX converter is now available (fixes issue #93, initial patch #242 by l3kn)
• A new math engine based on MathJax-Node that outputs MathML is now available (patch #240 by Tom Thorogood)
• Fixed #244, #246: fenced code blocks now allow a dash in the code language name (requested and patched by Dennis Günnewig)
• The option list in the man page as well as in the output of kramdown --help is now sorted
• 2 bug fixes:
• Fixed #230: warning message for a method in lib/kramdown/utils/configurable.rb will not show anymore (reported by Robert A.
Heiler)
• Fixed #239: handling of single/double quotes in reference style links now follows the same rules as with inline links (reported by Josh Davis)

Published on Saturday, 28 February 2015
kramdown 1.6.0 released

This release contains many fixes and minor enhancements as well as one major goodie that comes with a small caveat: block IALs can now be applied to link and abbreviation definitions! It may not sound like much, but allowing block IALs to be applied to link definitions alleviates the problem that additional attributes could previously only be specified via span IALs. Now such attributes can be stored together with the URL and title at the link definition, for example:

    This is a ![resized image].

    [resized image]: some_image.jpg "with a title"
    {: height="36px" width="36px" style="border: 1px solid green"}

There is one small caveat, though. Consider the following construct:

    [linkdef]: http://example.com
    {:.block-ial}
    block element, e.g. a paragraph

The block IAL would have been applied to the paragraph in previous versions but is now applied to the link definition. However, such a construct is unlikely to be encountered in the real world.
Changes
• 7 minor changes:
• Block IALs can now be applied to link and abbreviation definitions (inspired by issue #194 from cabo)
• The syntax highlighting engine for Rouge now allows custom formatter classes to be used (issue #214, requested by BackOrder)
• The MathJax math engine now allows adding previews (issue #225, requested by jethrogb)
• The "toc_levels" option can now also take a Range object (pull request #210 by Jens Krämer)
• The generated table of contents of the HTML converter now contains ID attributes on the links so that back-references can be used (issue #195, requested by Ciro Santilli)
• A warning is now generated when duplicate HTML attributes are detected (issue #201, requested by winniehell)
• Updated used version of prawn to 2.0.0
• 8 bug fixes:
• Fixed #192: emphasis via underscore sometimes wrongly worked within a word (reported by Michael Franzl)
• Fixed #198: empty alt attributes on <img> tags are now correctly handled by the kramdown converter (reported by winniehell)
• Fixed #200: trailing whitespace is now really removed in paragraphs (reported by winniehell)
• Fixed #220: HTML blocks with attributes weren't correctly detected when directly after another block (reported by Bill Tozier)
• Fixed #199: empty title attributes are now ignored for images when using the kramdown converter (reported by and pull request #206 from winniehell)
• Leading and trailing whitespace is now stripped from math statements, as the whitespace sometimes led to LaTeX conversion errors
• Fixed #226: class names may now start with a dash in IALs/ALDs (reported by Adam Hardwick)
• Multiple consecutive block IALs before an element are now correctly processed

Published on Saturday, 25 October 2014
kramdown 1.5.0 released

This release brings the addition of Rouge as a supported syntax highlighting engine besides Coderay, as well as support for MathML output in the HTML converter through the libraries Ritex or itex2MML as alternatives to MathJax.
By restructuring the code it will now be very easy to add other syntax highlighters or math engines in the future.

Please also note that the old 'coderay_*' options are still supported but they are deprecated as of now. It is recommended to use the new 'syntax_highlighter' and 'syntax_highlighter_opts' options instead. The latter also take precedence over the former 'coderay_*' options.

Changes
• 6 minor changes:
• Syntax highlighters are now configurable via the new 'syntax_highlighter' option.
• Rouge has been added as an alternative to Coderay for syntax highlighting (requested originally as Pygments support in #24 by Jonathan Martin and then in #68 by Eric Mill and #141 by Jeanine Adkisson).
• The <div> tag surrounding syntax highlighted code now gets a class highlighter-NAME attached, where NAME is the syntax highlighter used (requested in #76 by Marvin Gülker)
• Math engines are now configurable via the new 'math_engine' option.
• A math engine based on Ritex for MathML output has been added (requested by Tom Thorogood, who provided the initial pull request #169).
• A math engine based on itex2MML for MathML output has been added (requested by Tom Thorogood)
• 2 bug fixes:
• Fixed #171: hard line wrapping in the GFM parser didn't work correctly when an inline element started a new line (reported by Zach Ahn)
• Fixed #173: the HTML <button> element is now recognized as a span-level element (pull request by Morandat)

Published on Tuesday, 16 September 2014
kramdown 1.4.2 released

This release fixes some bugs and brings location information to more element types. A performance regression introduced in 1.4.0 has also been fixed - see the graphs of the benchmarks.
Changes
• 1 minor change:
• Closes #166: location information is now available in nearly all elements (requested by Mark Harrison)
• 6 bug fixes:
• Option 'footnote_nr' is now correctly supported by the LaTeX converter
• Fixes #161: footnotes inside footnotes are now recognized (reported by Nate Cook)
• Fixes #164: escaped hash signs at the end of atx headers now work (reported by Alexander Köplinger)
• Fixes #158: sometimes line numbers were incorrectly reported due to the usage of a wrong method (reported by Mark Harrison)
• Fixes #152: line breaks are now recognized in the GFM parser when hard_wrap is off (reported by mathematicalcoffee)
• Fixes #155: HTML <details> and <summary> tags are now interpreted as elements with block content model (reported by cheloizaguirre)

Published on Saturday, 02 August 2014
kramdown 1.4.1 released

This release brings better line number reporting in warning messages and fixes some bugs.

Changes
• 1 minor change:
• Improved line number reporting in warning messages
• 3 bug fixes:
• Fixed #147: HTML <textarea> tags were not parsed as block-level elements (reported by Nguyen Anh Quynh)
• Fixed #144: HTML <u> tags were not recognized as span-level elements (reported by Yong-Yeol Ahn)
• Fixed #148: GFM input and PDF output were missing in the CLI help text (pull request by Sebastian Boehm)

Published on Wednesday, 18 June 2014
kramdown 1.4.0 released

This release fixes all outstanding bugs, so it is recommended to update. The one new feature is that the location of the footnotes can now be defined by attaching the reference name "footnotes" to an ordered or unordered list (like with the table of contents).

One major problem was that unescaped pipe characters | often led to involuntary tables. This release introduces some changes that should prevent this in more cases than before. Additionally, since this is the most common problem case, it is advised to use \vert instead of | in inline math statements.
Both do the same in LaTeX, but the latter may inadvertently start a table, so better use the former!

Changes
• 2 minor changes:
• Implemented #106: users can now define the location of the footnotes (feature request by Matt Neuburg)
• Merged #97: rake gemspec now generates a local kramdown.gemspec file (pull request by Nathanael Jones)
• 9 bug fixes:
• Fixed #128: <script> tags are now removed from math statements when converting to HTML (reported by Trevor Wennblom)
• Fixed #129: internal state of the custom string scanner class was corrupted due to backtracking, which led to problems with location tracking (reported by Mark Harrison)
• Fixed #112: the content of <kbd>, <samp> and <var> is now also treated as raw HTML (reported by Denis Defreyne)
• Fixed #114: added missing HTML entity names (reported by Tomer Cohen)
• Fixed #101: fixed exception on missing alignment information when parsing HTML tables to native kramdown elements (initial pull request by zonecheung)
• Fixed #117: the GFM parser now needs a space after a hash so that the line is identified as an atx header (reported by Trevor Wennblom)
• Fixed #131: location tracking in nested lists was incorrect (reported by Mark Harrison)
• Fixed/worked around #23, #46, #107, #134: parsing math blocks that contain pipe characters now works; adjusting inline math statements to use \vert instead of | resolves the other problems (reported by many)
• Fixed #135: escaped pipes in the alternative text of image links were not correctly escaped (reported by Philipp Rudloff)

Published on Monday, 17 March 2014
kramdown 1.3.3 released

This release just fixes a bug with the default HTML document template.

Changes
• 1 bug fix:
• The string charset= was missing from a <meta> tag in the HTML document template.

Published on Sunday, 16 February 2014
kramdown 1.3.2 released

This release brings some small performance optimizations and the ability to define custom rules for updating predefined link definitions.
The latter is used in webgen 1.2.1 to drastically reduce the time for converting kramdown documents that use a lot of predefined link definitions.

Changes
• 2 minor changes:
• Small (mostly string) performance optimizations
• New method Kramdown::Parser::Kramdown#update_link_definitions

Published on Sunday, 05 January 2014
kramdown 1.3.1 released

This release mitigates a performance problem introduced due to the storing of location information. On Rubies prior to 2.0 the performance impact was negligible, but on Ruby 2.0 and 2.1 performance was much worse. With the fix the performance is not back to prior levels, but much better. See the tests page, which has been updated with current performance graphs.

Also note that for PDF support you now need the newer Prawn versions (i.e. 0.13.x instead of 1.0.0.rc*)!

Changes
• 1 minor change:
• Now depending on the newer Prawn versions, i.e. 0.13.x
• 1 bug fix:
• Mitigated a performance regression on Ruby 2.0 and Ruby 2.1 (introduced due to the storing of location information)

Published on Sunday, 08 December 2013
kramdown 1.3.0 released

This release brings a pure Ruby PDF converter for kramdown based on the Prawn library. The PDF output can be customized by sub-classing the converter or by using a template file that adjusts the converter object.

Changes
• 1 major change:
• A pure Ruby PDF converter based on Prawn is now available
• 7 minor changes:
• New option 'auto_id_stripping' can be used to strip HTML and other formatting before converting a header text to an ID (fixes GH#90, requested by Tuckie)
• New option 'hard_wrap' for configuring the line break behaviour of the GFM parser (GH#83, patch by Brandur)
• Location information (only line numbers) is now available in the :location option on most kramdown elements (GH#96, patch by Jo Hund)
• Minitest 5.x is now used for testing.
• A converter class can now specify whether a template should be applied before, after or before and after the conversion.
• If a file specified with the "template" option is not found and the option starts with "string://", the rest is assumed to be the template.
• Unknown option keys are now passed through and not removed anymore
• 5 bug fixes:
• Fixed GH#77: line break inside an inline math statement now works correctly (reported by memeplex)
• Fixed problem with line breaks in the GFM parser
• Fixed GH#95: option coderay_bold_every now also accepts the value false (reported by Simon van der Veldt)
• Fixed GH#91: template extension is now the same as the converter name (initial patch by Andreas Josephson)
• Fixed output of consecutive em/strong elements in the kramdown converter
• 3 documentation fixes:
• The kramdown website is now hosted at http://kramdown.gettalong.org - please update your bookmarks!
• Fixed GH#80: typo in README.md (patch by Luca Barbato)
• Fixed GH#81: typo in options documentation (patch by Pete Michaud)
• Deprecation notes:
• Using .convertername instead of .converter_name is deprecated and will be removed in 2.0
• The option 'auto_id_stripping' will be removed in 2.0 because it will be the default.

Published on Saturday, 31 August 2013
kramdown 1.2.0 released

Some people have wanted to see GitHub Flavored Markdown features in kramdown for a long time, and now the waiting is over, thanks to the new GFM parser by Arne Brasseur. Aside from this new feature some bugs were also fixed. One that may have affected many people was the missing support for new stringex library versions.
Changes
• 2 minor changes:
• Added a parser for GitHub Flavored Markdown (resolves GH#68; Arne Brasseur provided the initial implementation)
• HTML attributes are now output for horizontal lines
• 5 bug fixes:
• The correct encoding on the result string is now set even when the template option is used
• Fixed GH#72, GH#74: all ways to set a header ID now follow the same scheme, which is compliant with HTML IDs (except that dots are not allowed) (reported and initial patch by Matti Schneider)
• Fixed GH#73: the default HTML template now has a DOCTYPE and sets the encoding correctly (initial patch by Simon Lydell)
• Fixed GH#67: URLs of link elements are now escaped in the LaTeX converter to avoid problems (patch by Henning Perl)
• Fixed GH#70: any version of the stringex library is now supported (reported by Simon Lydell)

Published on Tuesday, 02 July 2013
kramdown 1.1.0 released

This is just an incremental release bringing two new features and several bug fixes.

Changes
• 2 minor changes:
• Footnote markers can now be repeated (resolves GH#62 and GH#63; Theodore Pak provided the initial patch)
• The LaTeX acronym package is now used for abbreviations (resolves GH#55; Tim Besard provided the initial patch)
• 3 bug fixes:
• Fixed GH#60: numbers are now recognized in addition to word characters when converting underscores (patch by Trevor Wennblom)
• Fixed GH#66: HTML elements <i>, <b>, <em> and <strong> are now converted correctly by the LaTeX converter (patch by Henning Perl)
• Fixed GH#57: better smart quote handling when underscores are directly after or before quotation marks (reported by Bill Tozier)

Published on Thursday, 09 May 2013
kramdown 1.0.2 released

This release fixes some bugs; updating is recommended. Some notes:
• The tests page has been updated to include relative times in the benchmark so that it is possible to better gauge the performance of kramdown (requested by postmodern).
• The kramdown Wiki now contains a listing of libraries that extend kramdown (idea by postmodern).

Changes
• 4 bug fixes:
• Fixed GH#51: try requiring a parser/converter library based on the specified input/output name (requested by postmodern)
• Fixed GH#49: convert non-breaking space to ~ for the LaTeX converter (patch by Henning Perl)
• Fixed GH#42: no more warning for IALs/ALDs/extensions without attributes (reported by DHB)
• Fixed GH#44: removed trailing whitespace in link definitions for the kramdown converter (patch by Marcus Stollsteimer)

Published on Monday, 11 March 2013
kramdown 1.0.1 released

This release just fixes a bug where kramdown was modifying the input string, so updating is recommended.

Changes
• 1 bug fix:
• Fixed GH#40: input string was unintentionally modified

Published on Sunday, 10 March 2013
kramdown 1.0.0 released

Finally! After four years of development I proudly present you kramdown 1.0.0! Naturally, it is recommended to update to this version. Although the version number now starts with one, the changes from the last release are mostly bug fixes and some small changes.

The biggest change is the license change: until now kramdown was released under the GPL, but starting from 1.0.0 it is released under the MIT license! The MIT license allows the use of kramdown in a commercial setting. However, if you are using kramdown in a commercial setting, I ask you to contribute back any changes you make for the benefit of the community and/or to make a donation - thanks in advance!

Changes
• 4 minor changes:
• New option transliterated_header_ids for transliterating header text into ASCII before generating a header ID, which is useful for languages like Vietnamese (fixes GH#35, requested by Kỳ Anh)
• The quotation mark entity &quot; now gets converted to its character equivalent when entity_output=as_char.
• A warning is now output for IALs/ALDs that contain no attribute definition.
• HTML footnote output has been changed to use class instead of rel to achieve (X)HTML4/5 compatibility
• 3 bug fixes:
• Fixed GH#38: encoding problem on 1.9/2.0 due to incompatible encodings - the source string is now converted to UTF-8 before parsing and converted back after converting (reported by Simon Lydell)
• Fixed RF#29647: abbreviations with a non-word first character at the start of text led to an exception (reported by Stephan Dale)
• Fixed RF#29704: IDs specified on atx style headers were not always correctly detected (reported by Kyle Barbour)

Published on Sunday, 20 January 2013
kramdown 0.14.2 released

This release adds the possibility to pre-define link definitions via the new option link_defs. Apart from that, one bug was fixed. It is recommended to update to this version.

On a side note, the kramdown homepage has been updated to show a menu of the available documentation pages when viewing a documentation page. And a documentation page showing all available options has been added.

Changes
• 1 minor change:
• New option link_defs for pre-defining link definitions
• 1 bug fix:
• Fixed raised errors on atx headers without text

Published on Friday, 30 November 2012
kramdown 0.14.1 released

This is just a bug fix release and it is recommended to update to this version.

Changes
• 3 bug fixes:
• Only HTML elements that must not contain a body (like <br />) are output in this form; all other elements now use an explicit closing tag (resolves among other things issues with <i>)
• Specifying a block IAL before a definition list now works correctly
• Fixed bug GH#30: an empty body for a definition in a definition list led to an exception (reported by Mark Johnson)

Published on Sunday, 16 September 2012
kramdown 0.14.0 released

First of all, please note that this release contains a backwards-incompatible change: the syntax for specifying a code language for a code block or code span has changed. Instead of using lang='CODELANG' one has to use .language-CODELANG now.
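A sketch of the new class-based form, attached via kramdown's IAL syntax (the sample Ruby snippet is illustrative, not from the release notes); the same release also allows giving the language on the starting line of a fenced code block:

```
A code span with a language: `puts "hi"`{:.language-ruby}

    puts "hi"
{: .language-ruby}

~~~ ruby
puts "hi"
~~~
```

The previous form would have used an IAL like {: lang='ruby'} in the same positions.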
This change has been introduced to avoid problems because the lang="..." attribute is used by HTML for purposes other than setting the code language. Furthermore, using .language-CODELANG is also proposed by HTML5 and it seems to be a good way to achieve the needed functionality.

Other changes in this release include the possibility of setting the code language on the starting line of a fenced code block and a way of excluding certain headers from the table of contents by assigning the .no_toc class to them.

Changes
• 2 major changes:
• Code language is now specified via .language-CODELANG instead of lang='CODELANG'
• Implemented support for setting the language on the starting line of a fenced code block (initial patch by Bran)
• 1 minor change:
• Headers with an ID can be prevented from showing up in the TOC by assigning the .no_toc class to them (patch by Tim Bates)
• 1 bug fix:
• Numeric instead of symbolic HTML entities are now the default fallback (patch by Gioele Barabucci)

Published on Friday, 31 August 2012
kramdown 0.13.8 released

This release brings two new options (one for adjusting header levels and the other for enabling/disabling coderay). And the usual bug fixes.
Changes
• 2 minor changes:
• New option header_offset for offsetting all header levels (initial patch by Michal Till)
• New option enable_coderay for enabling/disabling coderay (initial patch by Bran)
• 5 bug fixes:
• Reserved HTML characters in abbreviation titles are now correctly output (patch by Alex Tomlins)
• Similar abbreviations (like CSS and CSS3) are now correctly parsed
• Fixed bug RF#29626: text of mailto links was sometimes wrongly obfuscated (reported by B Wright)
• Fixed known Ruby 1.9.3 problem with the RakeTest task (patch by Gioele Barabucci)
• Fixed double output of the 'markdown' attribute on HTML elements in the kramdown converter
• 1 documentation change:
• The README file is now called README.md and uses kramdown syntax (patch by Bran)

Published on Sunday, 03 June 2012
kramdown 0.13.7 released

This release, aside from fixing bugs and some other minor changes, adds a new converter for removing HTML tags from an element tree. This means that one can now do

    kramdown -i html -o remove_html_tags,kramdown my_document.html

and get a nice kramdown document from a full HTML document!

Changes
• 1 major change:
• Added a new converter for removing HTML tags from an element tree
• 3 minor changes:
• Updated the kramdown binary to support multiple, chained output formats
• Added a new option for setting a default coderay highlighting language (requested by Lou Quillio)
• Feature request RF#29575: added support for the &shy; soft-hyphen entity (requested by Alexander Groß)
• 5 bug fixes:
• Fixed bug RF#29576: footnotes in headers resulted in duplicated id attributes in the TOC (reported by korthaerd)
• Multi-line titles in links are now correctly parsed
• The DOCTYPE declaration is now correctly parsed independent of case
• Setting nil options now works by using the String 'nil'
• Fixed table-of-contents test cases (the test went green although the meaning of the test was not satisfied due to copy-paste - d'oh!)
• 1 documentation fix:
• Fixed bug RF#29577: the sidebar link to the news page was broken for HTML pages in sub-directories (reported by korthaerd)

Published on Wednesday, 09 May 2012
kramdown 0.13.6 released

This is just a bug fix release and it is recommended to update to this version.

Changes
• 2 bug fixes:
• Fixed a problem with CDATA sections appearing in MathJax output (reported by Xi Wang, see the github commit)
• Fixed bug RF#29557: parsing failed for lists that contain an empty list item (reported by Juan Romero Abelleira)

Published on Sunday, 19 February 2012
kramdown 0.13.5 released

This is mostly a bug fix release and it is recommended to update to this version. The kramdown homepage has also been updated visually. This should provide a better reading experience for mobile and small-screen devices.

Changes
• 2 minor changes:
• HTML attributes without values are now supported (fixes bug RF#29490, reported by Nat Welch)
• HTML attribute names are now always converted to lower case for consistency
• 5 bug fixes:
• Fixed Document#method_missing to accept snake_cased class names (patch by tomykaira)
• Fixed problem with the missing REXML constant on the older Ruby 1.8.6 version (reported by Dave Everitt)
• Fixed bug RF#29520: a valid inline math statement does not trigger a math block anymore (reported by Gioele Barabucci)
• Fixed bug RF#29521: HTML math output is now always XHTML compatible (reported by Gioele Barabucci)
• Empty id attributes are now handled better by the HTML and kramdown converters (reported by Jörg Sommer)
• 1 documentation fix:
• Fixed an invalid options statement in an example on the quick reference page (reported by Jörg Sommer)

Published on Friday, 16 December 2011
kramdown 0.13.4 released

This is mostly a bug fix release and it is recommended to update to this version.

Changes
• 1 minor change:
• Added a converter that extracts the TOC of a document (requested by Brendan Hay). Note that this is only useful if you use kramdown as a library!
• 7 bug fixes:
• Fixed a typo: it should be --output and not --ouput (patch by postmodern)
• Fixed the HTML converter to correctly output empty span tags (patch by John Croisant)
• Fixed bug RF#29350: parsing of HTML tags with mismatched case now works
• Fixed bug RF#29426: the content of style tags is now treated as raw text
• The HTML converter now uses rel instead of rev to be HTML5 compatible (patch by Joe Fiorini)
• Fixed Ruby 1.9.3 related warnings
• Fixed the HTML parser to work around an implementation change of Array#delete_if in Ruby 1.9.3

Published on Friday, 06 May 2011
kramdown 0.13.3 released

This is just a bug fix release and it is recommended to update to this version.

Changes
• 1 minor change:
• Added support for correctly parsing more HTML5 elements (requested by Bernt Carstenschulz)
• 10 bug fixes:
• The table line |a|b was parsed as |ab (patch by Masahiro Kitajima)
• The table line |a led to an error condition (patch by Masahiro Kitajima)
• Added OrderedHash#dup to fix a problem when converting a document more than once (reported by Michael Papile)
• Fixed places where the document tree was modified during conversion
• Fixed a bug in the LaTeX image element converter that was introduced in a former release (reported by Michael Franzl)
• Fixed problem with a block HTML tag being treated as header text
• Fixed problem with footnotes in LaTeX tables - now using the longtable instead of the tabular environment (reported by Michael Franzl)
• The style attribute is now used for outputting table cell alignments in HTML instead of the deprecated col tags
• Fixed HTML-to-native conversion of unsupported HTML elements
• Fixed the kramdown converter to correctly output table cells with attributes
• 1 documentation fix:
• Some HTML tags were not properly escaped on the quick reference page (reported by Yasin Zähringer)

Published on Monday, 21 February 2011
kramdown 0.13.2 released

This release just fixes a problem when parsing long paragraphs/blockquotes under Ruby 1.8.
Changes • 1 bug fix: • Fixed bug RF#28917: Regexp buffer overflow when parsing long paragraphs or blockquotes under Ruby 1.8 (reported by Michael Fischer) Published on Saturday, 22 January 2011 kramdown 0.13.1 released The focus of this release was bringing kramdown one step closer to the 1.0 release. The API hasn’t changed so this is a drop-in replacement for the previous version of kramdown. If you think that • kramdown is still missing an important syntax found in another Markdown implementation, • the API doesn’t feel right, • or anything else is missing or should be changed for the 1.0 release, please tell us so by writing to [email protected]! Changes • 3 minor changes: • The LaTeX converter now inserts \hypertarget commands for all elements that have an ID set. The normal link syntax can be used to link to such targets (requested by David Doolin) • New option smart_quotes for specifying how smart quotes should be output (requested by Michael Franzl) • Any character except a closing bracket is now valid as link identifier (this makes this part of the kramdown syntax compatible with Markdown syntax) • 10 bug fixes: • Fixed error when parsing unknown named entities (reported by David Doolin) • Added entity definitions for entities &ensp;, &emsp; and &thinsp; (patch by Damien Pollet) • Block HTML line was incorrectly recognized as table line (reported by Piotr Szotkowski) • Fixed bug RF#28809: Empty <a> tags were output as self-closed tags (reported by Tim Cuthbertson) • Fixed bug RF#28785: Name of default template in documentation for template option was incorrect (reported by Matthew Bennink) • Fixed bug RF#28769: span extension in list item wrongly triggered list item IAL parser (reported by Yann Esposito) • The table row parser has been fixed so that it does not use pipes which appear in <code> tags as cell separators anymore (like it is done with the native code span syntax) • Fixed bug where converting <em> and <strong> tags to native elements was wrongly 
done • Fixed calculation of cell alignment values when converting HTML tables to native ones, <col/> tags are now correctly used • HTML tables are now only converted to native tables if all table rows have the same number of columns. • 1 deprecation note: • Removed deprecated option toc_depth – use the option toc_levels instead. Published on Monday, 01 November 2010 kramdown 0.12.0 released Some changes in the last release of kramdown led to a performance drop. Therefore some performance optimizations have been done, resulting in about 15% fewer created objects (which reduces the garbage collection pressure) and quite a performance gain (this version of kramdown is faster than any previous version when using Ruby 1.9.2) – see the tests page for detailed information. Aside from the performance optimizations, a Markdown-only parser based on the kramdown parser has been added. The “internal” API (which is currently everything except the Kramdown::Document class) has changed again and developers may therefore need to update their extensions! The API changes now allow parsers and converters to be used without a Kramdown::Document class since this class is just provided for convenience. All the needed information is now stored in the element tree itself. Information that has no direct representation as an element is stored in the options of the root element (e.g. abbreviation definitions). More information can be found in the API documentation. The API should now be relatively stable and once kramdown reaches 1.0.0, the final API will only be changed in backwards compatible ways. 
Changes • 1 major change: • Added Markdown-only parser • 6 minor changes: • Angle brackets can now also be escaped • Pipe characters in code spans that appear in tables do not need to be escaped anymore • New option toc_levels for specifying the header levels used for the table of contents (requested by Rick Frankel, RF#28672) • MathJax instead of jsMath is now used for math output in the HTML converter • New option latex_headers for customizing the header commands for the LaTeX converter • Removed parsing of HTML doctype in HTML parser • 6 bug fixes: • Fixed output of paragraphs starting with a right angle bracket in the kramdown converter • Invalid span IALs are now left alone and not removed anymore • Option entity_output is now respected when outputting a non-breaking space for empty table cells in the HTML converter (reported by Matt Neuburg) • Fixed bug where a block IAL before a block HTML element was ignored (reported by Matt Neuburg) • Fixed bug where block IALs were falsely applied to nested elements (reported by Matt Neuburg) • Fixed bug RF#28660: Converting <div><br /></div> from HTML to kramdown resulted in stack trace (reported by Garrett Heaver) • 1 deprecation note: • The option toc_depth is replaced by the new option toc_levels and will be removed in the next version. Published on Friday, 01 October 2010 kramdown 0.11.0 released The biggest change in this release is the implementation of the “lazy syntax” which allows one to not use the correct indent or block marker and still continue a paragraph, blockquote, … The original Markdown syntax allows this and it was requested that kramdown allows this, too. However, the main reason for adding this syntax to kramdown is not to encourage authors to be lazy but to allow kramdown texts to be hard-wrapped by other applications (think, for example, email programs). Therefore you shouldn’t make active use of this feature when creating a kramdown document! 
Another important, though minor, change is that invalid HTML tags and extensions are not removed anymore. This is done because of the general rule that unrecognized elements are treated as simple text. Note: The “internal” API (which is currently everything except the Kramdown::Document class) has changed and developers may therefore need to update their extensions! Changes • 3 major changes: • Line wrapping a.k.a. “lazy syntax” is now supported (requested by Shawn Van Ittersum) • Link URLs in inline links and link definitions may now also contain spaces, even if not enclosed in angle brackets (requested by Matt Neuburg) • The kramdown converter produces nicer output, using the new option line_width • 9 minor changes: • The HTML converter does not escape the quotation mark in code blocks anymore (requested by Matt Neuburg) • The order of HTML attributes and attributes defined via IALs and ALDs is now preserved (requested by Matt Neuburg) • Syntax highlighting is now supported in code spans when using the HTML converter (requested by Josh Cheek) • Updated nomarkdown extension and converters to support restricting the output to certain or all converters • Colons are now allowed in ID names for ALDs and IALs • Tables and math blocks now have to start and end on block boundaries • The table syntax was relaxed to allow table lines that don’t start with a pipe character (to be more compatible with PHP Markdown Extra tables) • HTML elements <b> and <i> are now converted to <strong> and <em> when using HTML-to-native conversion • The document.html template now uses the text of the first not-nested header as title text • 9 bug fixes: • The LaTeX converter now removes trailing whitespace in footnotes (reported by Michael Franzl) • Fixed bug RF#28429: HTML output of iframe HTML element was invalid (reported by Matthew Riley) • Fixed bug RF#28420: LaTeX converter shouldn’t escape the content of the nomarkdown extension (reported by Bj Wilson) • Fixed bug RF#28469: HTML 
“document” template did not work (reported by Vofa Ethe) • Fixed bug: HTML/kramdown output of textarea HTML element was invalid (reported by John Muhl) • Invalid or unknown extension tags are now left alone and not removed anymore • Invalid HTML tags are now left alone and not removed anymore • Fixed a minor problem in list parsing which arose due to compact nested list detection • Link/Abbreviation/Footnote definitions as well as extensions, ALDs and block IALs now work correctly as block separators • 1 deprecation note: • The option numeric_entities has been removed Published on Monday, 19 July 2010 kramdown 0.10.0 released This release contains many small changes and improvements as well as many bug fixes, thanks to all the people on the kramdown mailing list! Changes • Minor changes: • The LaTeX converter now also outputs the element attributes on the end tag (requested by Michael Franzl) • New option entity_output for specifying how entities should be output • The underscore in the option names is now replaced with a hyphen for nicer CLI option names • Paragraphs that contain only an image are converted to figures in the LaTeX converter (requested by Michael Franzl) • Added information to the LaTeX converter documentation on how to change the header types and quotation marks • Bug fixes: • LaTeX converter now outputs line breaks correctly (reported by Michael Franzl) • Always outputting the entities zcaron and Zcaron numerically since browser support seems to be non-existent (reported by Eric Sunshine) • Fixed warnings and problems when running under Ruby 1.9.2-rc1 • Fixed problem with smart quote directly after smart quote output in LaTeX converter (reported by Michael Franzl) • Fixed problem in the HTML parser that prevented <body markdown="1"> from being processed correctly (reported by Eric Sunshine) • Blockquotes with multiple child elements are now output with the quotation environment instead of the quote environment by the LaTeX converter (reported by 
Michael Franzl) • Fixed problem with parsing autolinks when using an encoding different from UTF-8 (reported by Eric Sunshine) • Fixed problem with parsing HTML <a> tag without href attribute (reported by Eric Sunshine) • Deprecation notes: • The option numeric_entities is replaced by the new option entity_output and will be removed in the next version • The method Kramdown::Converter::Html#options_for_element has been removed Published on Wednesday, 23 June 2010 kramdown 0.9.0 released The biggest change in this release is the addition of a kramdown converter. This converter together with the HTML parser enables one to convert an HTML document into a kramdown document. Apart from that there are many other small changes and bug fixes, a full list of which you can find below. Changes • Major changes: • New kramdown converter that converts an element tree into a kramdown document • Minor changes: • Added option numeric_entities that defines whether entities are output using their names or their numeric values • Added option toc_depth for specifying how many header levels to include in the table of contents (patch by Alex Marandon) • Ruby 1.9 only: The HTML converter now always tries to convert entities to their character equivalents if possible • Change in HTML parser: conversion of pre and code elements to their native counterpart is only done if they contain no entities (under Ruby 1.9 entities are converted to characters before this check if possible) • The comment extension now produces comment elements that are used by the converters • IALs can now also be assigned to definitions (i.e. 
dd elements) • Image links may now be specified without alternative text (requested by Rune Myrland, fixes RF#28292) • The HTML parser gained the ability to convert conforming span and div elements to math elements • The LaTeX converter now outputs the element attributes as LaTeX comment for some elements (blockquotes, lists and math environments; requested by Michael Franzl) • Bug fixes: • Fixed problem with list item IALs: the IAL was not recognized when first element was a code block • Fixed ri documentation error on gem installation (patch by Alex Marandon) • Math content is now correctly escaped when using the HTML converter • Fixed html-to-native conversion of tables to only convert conforming tables • Deprecation notes: • The filter_html option has been removed. • The method Kramdown::Converter::Html#options_for_element has been renamed to html_attributes – using the old name is deprecated and the alias will be removed in the next release Published on Tuesday, 08 June 2010 kramdown 0.8.0 released One of the bigger changes in this release is the support for converting HTML tags into native kramdown elements via the new html_to_native option. For example, the HTML tag p is converted to the native paragraph element instead of a generic HTML tag if this option is set to true. This is especially useful for converters that don’t handle generic HTML tags (e.g. the LaTeX converter). This conversion is a feature of the new standalone HTML parser which is used by the kramdown parser for parsing HTML tags. Also note that support for the old extension syntax and custom extensions has been dropped as of this release! And the filter_html option will be removed in the next release because there exist better facilities for performing this kind of task! 
Changes • Major changes: • New parser for parsing HTML documents • Added the option html_to_native (default: false) which tells the kramdown parser whether to convert HTML tags to native kramdown elements or to generic HTML elements. • Minor changes: • Table header cells are now represented by their own element type • The element type :html_text is not used anymore - it is replaced by the general :text element • HTML comments are now converted to LaTeX comments when using the LaTeX converter • The LaTeX converter can now output the contents of HTML <i> and <b> tags • Bug fixes: • Attributes that have been assigned to the to-be-replaced TOC list are now added correctly on the generated TOC list in the HTML converter • Fixed problem in typographic symbol processing where an entity string instead of an entity element was added • Fixed problem with HTML span parsing: some text was not added to the correct element when the end tag was not found • HTML code and pre tags are now parsed as raw HTML tags • HTML tags inside raw HTML span tags are now parsed correctly as raw HTML tags • The Rakefile can now be used even if the rdoc gem is missing (patch by Ben Armston) • Fixed generation of footnotes in the LaTeX converter (patch by Ben Armston) • Fixed LaTeX converter to support code spans/blocks in footnotes • HTML comments and XML processing instructions are now correctly parsed inside raw HTML tags • HTML script tags are now correctly parsed • Fixed the abbreviation conversion in the LaTeX converter • Empty image links are not output anymore by the LaTeX converter • Deprecation notes: • The old extension syntax and support for custom extensions has been removed. • The filter_html option will be removed in the next release. Published on Friday, 07 May 2010 kramdown 0.7.0 released This release adds syntax support for abbreviations. This means that kramdown is now syntax-wise on par with Maruku and PHP Markdown Extra! 
Another big change is the extension support: After some discussion on the mailing list (many thanks to Eric Sunshine and Shawn Van Ittersum), the syntax for the extensions has been changed and support for custom extensions will be dropped in a future release. Additionally, the option auto_ids has been moved from being interpreted by the parser to being interpreted by the converters. This means that it is not possible anymore to turn automatic header ID generation on or off for parts of a text. The HTML and LaTeX converters also gained the ability to generate a table of contents. Just add the reference name “toc” to an ordered or unordered list and it will be replaced by the ToC (this is “coincidentally” the same syntax that Maruku uses…). Changes • Major changes: • Added support for PHP Markdown Extra like abbreviations • Added support for span extensions • New syntax for block/span extensions • Added support for generating a table of contents in the HTML and LaTeX converters • Minor changes: • The option auto_ids has been moved from the parser to the converters. • Invalid span IALs are now removed from the output • IALs can now be applied to individual list items by placing the IAL directly after the list item marker • Added an option for prefixing automatically generated IDs with a string • Block IALs can now also be put before a block element • Bug fixes: • Fixed a problem with parsing smart quotes at the beginning of a line (reported by Michael Franzl) • Deprecation notes: • Removed deprecated CLI option -f • The old extension syntax and support for custom extensions will be removed in the next release. Published on Tuesday, 06 April 2010 kramdown 0.6.0 released This release adds syntax support for block and inline LaTeX math (for example: $e^{i\pi}=?$). Aside from that there are the usual small enhancements and bug fixes. 
Changes • Major changes: • Added syntax support for block and inline LaTeX math • Minor changes: • Added a man page for the kramdown binary • Added a CLI option for selecting the input format and changed the output format option to -o • Small syntax change for list items: the last list item text is now also wrapped in a paragraph tag if all other list items are. • Added documentation on available parsers and converters • Bug fixes: • Fixed problem where clearly invalid email autolinks were permitted (reported by Eric Sunshine) • Added documentation for autolinks (reported by Eric Sunshine) • Fixed performance problem related to emphasis parsing (reported by Chris Heald) • Fixed bug RF#27957: document templates were missing from distribution packages (reported by Alex Bargi) • Fixed problem with the LaTeX converter not handling HTML elements correctly • Deprecation notes: • The CLI option -f will be removed in the next release. Published on Monday, 15 February 2010 kramdown 0.5.0 released This release features syntax support for smart quotes in kramdown documents and a new converter for LaTeX output. The kramdown binary has also been enhanced to support setting any option. The additional support for the smart quotes makes this release of kramdown a little bit slower than the previous releases when run under Ruby 1.8. However, a small optimization in the span parser which is not noticeable under Ruby 1.8 gives quite a performance boost under Ruby 1.9 (see the graphs on the tests page). Also note that the internals have been restructured slightly. So if you do more than just using the basic Kramdown::Document.new(SOURCE, OPTIONS).to_html you may need to adapt your code. Since the option handling has been revamped, each coderay option must now be set separately! 
Changes • Major changes: • Enhanced the kramdown binary (it now supports setting the available options) • Added support for ERB templates to wrap the generated output • Added syntax support for smart quotes • Added a converter for LaTeX output • Minor changes: • Some code restructuring • The quotation mark " is not converted to &quot; in normal text anymore. • Bug fixes: • Fixed problem with multibyte encodings under Ruby 1.9 Published on Friday, 22 January 2010 kramdown 0.4.0 released This release features the addition of a simple table syntax and syntax highlighting of code blocks. I think that with these two additions kramdown now supports all the major features regarding parsing and HTML output that Maruku supports. Regarding speed: Simple benchmarks using the Markdown README file (can be found inside this zip file) show that kramdown is currently faster than, for example, the original Markdown.pl, PHP Markdown, PHP Markdown Extra, Python Markdown and Maruku. Changes • Major changes: • Added a simple table syntax • Added syntax highlighting of code blocks • Minor changes: • Changed CSS class name kramdown-footnotes to footnotes for better compatibility • Bug fixes: • Regular expression for matching escaped characters now works correctly Published on Sunday, 20 December 2009 kramdown 0.3.0 released The HTML block syntax was changed in this release so that using raw HTML blocks works more naturally and the rules are easier to remember. This also led to the creation of a completely new HTML block parser. Apart from that, there have also been some bug fixes. Another important change is that kramdown now also runs under Ruby 1.8.5. 
Changes • Major changes: • Added a compatibility fix so that kramdown works under Ruby 1.8.5 (requested by Steve Huff) • Complete overhaul of the used block HTML syntax and block HTML parser • Using the same semantics for invalid end tags and unclosed tags in the block and span HTML parser • Bug fixes: • Fixed warnings on Ruby 1.9 • Fixed bug in emphasis parser where emphasis started with an underscore at the beginning of a new line inside a paragraph was not recognized (reported by Eric Sunshine) • Deprecation notes: • The old extension names kdoptions and nokramdown have been removed, only the new names options and nomarkdown will work from now on. Published on Thursday, 03 December 2009 kramdown 0.2.0 released The most important changes in this release are the inclusion of a definition list syntax and the much improved HTML parser. For example, the HTML parser now recognizes the markdown attribute for enabling and disabling syntax processing in an HTML element and it works in many more scenarios. The kramdown syntax is still a bit in a state of flux but all of the major syntax elements (except a syntax for tables) are now available. The following releases will focus on stability and fixing bugs. kramdown now also passes 16 of the 23 original Markdown test cases and if one looks at the ones that fail one can easily see that this is because of the small changes in the syntax (e.g. converting --- to &mdash;). This means that almost all Markdown documents should be correctly parsed by kramdown! Last but not least I want to thank Eric Sunshine for his many helpful comments, suggestions and bug reports! 
Changes • Major changes: • Definition lists are now supported • Option auto_ids now defaults to true • kramdown syntax (except HTML block lines) is not processed anymore by default in HTML block tags • Added option for enabling/disabling parsing of HTML blocks/spans • Added recognition and usage of the “markdown” attribute for HTML block/span tags • Renamed extensions kdoptions to options and nokramdown to nomarkdown (suggested by Eric Sunshine) • Added support for setting header IDs via the syntax available in PHP Markdown Extra and Maruku • Bug fixes: • Fixed bug that occurred when using auto_ids=true and an IAL for assigning an ID to a header • Fixed bug with parsing of autolinks (reported by Eric Sunshine) • Fixed many bugs regarding HTML parsing – HTML parsing should work much better now (reported by Eric Sunshine) • Fixed bug with parsing of horizontal rules which contain tabs • Deprecation notes: • The old extension names kdoptions and nokramdown will be removed in one of the next versions, use the new names options and nomarkdown. Published on Saturday, 21 November 2009 kramdown 0.1.0 released This is the first release of kramdown, yet-another-Markdown converter for Ruby, with the following features: • Written in pure Ruby, no need to compile an extension (like BlueCloth or rdiscount) • Fast (current impl ~5x faster than Maruku, ~10x faster than BlueFeather, although ~30x slower than native code like rdiscount) • Strict syntax definition (special cases for which the original Markdown page does not account are explicitly listed and it is shown how kramdown parses them - see the Syntax page) • Supports common Markdown extensions (similar to Maruku)
Now we will go all the way back to Planck, who proposed that the emission of radiation be in quanta with $E=h\nu$ to solve the problem of Black Body Radiation. So far, in our treatment of atoms, we have not included the possibility to emit or absorb real photons, nor have we worried about the fact that Electric and Magnetic fields are made up of virtual photons. This is really the realm of Quantum Electrodynamics, but we do have the tools to understand what happens as we quantize the EM field. We now have the solution of the Harmonic Oscillator problem using operator methods. Notice that the emission of a quantum of radiation with energy of $\hbar\omega$ is like the raising of a Harmonic Oscillator state. Similarly the absorption of a quantum of radiation is like the lowering of a HO state. Planck was already integrating over an infinite number of photon (like HO) states, the same integral we would do if we had an infinite number of Harmonic Oscillator states. Planck was also correctly counting this infinite number of states to get the correct Black Body formula. He did it by considering a cavity with some volume, setting the boundary conditions, then letting the volume go to infinity. This material is covered in Gasiorowicz Chapter 22, in Cohen-Tannoudji et al. Chapter XIII, and briefly in Griffiths Chapter 9. Jim Branson 2013-04-22
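The raising/lowering analogy can be stated with the standard harmonic-oscillator ladder-operator relations (textbook results quoted here for reference, not derived in this section):

```latex
H = \hbar\omega\left(a^\dagger a + \tfrac{1}{2}\right), \qquad
a^\dagger\,|n\rangle = \sqrt{n+1}\,|n+1\rangle, \qquad
a\,|n\rangle = \sqrt{n}\,|n-1\rangle
```

Reading $n$ as the photon number of a single field mode of frequency $\omega$, emitting a quantum of energy $\hbar\omega$ corresponds to applying $a^\dagger$ (raising $n$ by one), and absorbing one corresponds to applying $a$.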
Generate lists in which every sublist has a unique element The problem is defined as follows: Create a function that takes an integer and returns a list of integers, with the following properties: • Given a positive integer input, n, it produces a list containing n integers ≥ 1. • Any sublist of the output must contain at least one unique element, which is different from all other elements from the same sublist. Sublist refers to a contiguous section of the original list; for example, [1,2,3] has sublists [1], [2], [3], [1,2], [2,3], and [1,2,3]. • The list returned must be the lexicographically smallest list possible. There is only one valid such list for every input. The first few are: f(2) = [1,2] 2 numbers used f(3) = [1,2,1] 2 numbers used f(4) = [1,2,1,3] 3 numbers used • Wouldn't [1,2,1] be incorrect because elements 1 are the same? – Timtech Jan 31 '14 at 16:52 • I'm sorry, you're going to have to define "lexicographically" better over the solution space. – McKay Jan 31 '14 at 16:52 • e.g. why isn't [0,1] better than [1,2] for f(2)? – McKay Jan 31 '14 at 16:54 • @Timtech: No, because the first 1 is in another sublist than the second 1. A sublist is a contiguous section of the original list, so there are three sublists: [1] [1,2] [1] – ProgramFOX Jan 31 '14 at 16:55 • @ProgramFOX and everyone who voted to close this, since this question is tagged as code-golf I think we do have an objective winning criterion? – ace Jan 31 '14 at 22:55 APL, 18 {+⌿~∨⍀⊖(⍵/2)⊤2×⍳⍵} 1 + number of trailing zeros in base 2 of each natural from 1 to N. Example {+⌿~∨⍀⊖(⍵/2)⊤2×⍳⍵} 32 1 2 1 3 1 2 1 4 1 2 1 3 1 2 1 5 1 2 1 3 1 2 1 4 1 2 1 3 1 2 1 6 GolfScript (20 18 chars) {,{.)^2base,}%}:f; This is a simple binary ruler function, A001511. Equivalently {,{).~)&2base,}%}:f; {,{~.~)&2base,}%}:f; {,{).(~&2base,}%}:f; {,{{1&}{2/}/,)}%}:f; Thanks to primo for saving 2 chars. • ).~)& -> .)^ for 2. 
– primo Jan 31 '14 at 18:37 Sclipting, 26 23 characters 감⓶上가增❷要❶감雙是가不감右⓶增⓶終終丟丟⓶丟終并 This piece of code generates a list of integers. However, if run as a program it will concatenate all the numbers together. As a stand-alone program, the following 25-character program outputs the numbers separated by commas: 감⓶上가增❷要감嚙是가不⓶增⓶終終丟丟⓶丟껀終合鎵 Example output: Input: 4 Output: 1,2,1,3 Input: 10 Output: 1,2,1,3,1,2,1,4,1,2 Python 2.7, 65 characters print([len(bin(2*k).split('1')[-1]) for k in range(1,input()+1)]) The number of trailing zeros in 2, 4, 6, ..., 2n. Haskell n&p=n:p++(n+1)&(p++n:p) f n=take n$1&[] Example runs: λ: f 2 [1,2] λ: f 3 [1,2,1] λ: f 4 [1,2,1,3] λ: f 10 [1,2,1,3,1,2,1,4,1,2] λ: f 38 [1,2,1,3,1,2,1,4,1,2,1,3,1,2,1,5,1,2,1,3,1,2,1,4,1,2,1,3,1,2,1,6,1,2,1,3,1,2] GolfScript - 1 character , I'm pretty sure this meets the criteria, but it does seem weirdly worded. • It seems to me that something like this would work, +1. – Timtech Jan 31 '14 at 16:47 • Excuse me? For input 3 this would generate 1,2,3, which is wrong because 1,2,1 is correct and lexicographically smaller. – Timwi Jan 31 '14 at 16:48 • Timwi's point aside, the convention for questions which ask specifically for a function is that GolfScript answers should define a named block and then clear it from the stack: i.e. the boilerplate is a prefix of { and a suffix of }:f; – Peter Taylor Jan 31 '14 at 17:17 • @PeterTaylor So, when they say "function", and most programming languages get to cut out crap like classes and using directives..., GolfScript has more boilerplate that has to be thrown on? – McKay Jan 31 '14 at 18:07 • And even if actually creating a named block is important, why does it have to be dropped from the stack? – McKay Jan 31 '14 at 18:12
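For readers who don't golf, here is an ungolfed Python 3 sketch of the same construction (the "ruler sequence", A001511, as the answers above note), together with a brute-force check of the sublist property; the function names are my own, not from any answer:

```python
def f(n):
    # a(k) = 1 + (number of trailing zeros of k in binary).
    # (k & -k) isolates the lowest set bit of k, so its bit_length()
    # is exactly 1 + the trailing-zero count.
    return [(k & -k).bit_length() for k in range(1, n + 1)]

def every_sublist_has_unique(xs):
    # Brute-force check of the problem's property: every contiguous
    # sublist must contain an element that occurs exactly once in it.
    return all(
        any(xs[i:j].count(v) == 1 for v in xs[i:j])
        for i in range(len(xs))
        for j in range(i + 1, len(xs) + 1)
    )

print(f(8))  # [1, 2, 1, 3, 1, 2, 1, 4]
```

That this construction is also the lexicographically smallest valid list is consistent with the examples in the question, and the answers above take it as given.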
# Adding private key into iOS Keychain I am trying to add a private key into the iOS keychain. The certificate (public key) works fine but the private key refuses... I am totally confused why the following code does not work. First I check whether the current key (key in the sense of the Keychain being a key/value store) is 'free' in the Keychain. Then I am going to add the private key. CFStringRef labelstring = CFStringCreateWithCString(NULL, [key cStringUsingEncoding:NSUTF8StringEncoding], kCFStringEncodingUTF8); NSArray* keys = [NSArray arrayWithObjects:(__bridge id)kSecClass,kSecAttrLabel,kSecReturnData,kSecAttrAccessible,nil]; NSArray* values = [NSArray arrayWithObjects:(__bridge id)kSecClassKey,labelstring,kCFBooleanTrue,kSecAttrAccessibleWhenUnlocked,nil]; NSMutableDictionary* searchdict = [NSMutableDictionary dictionaryWithObjects:values forKeys:keys]; CFRelease(labelstring); NSMutableDictionary *query = searchdict; CFTypeRef item = NULL; OSStatus error = SecItemCopyMatching((__bridge_retained CFDictionaryRef) query, &item); if (error) { NSLog(@"Error: %ld (statuscode)", error); } if(error != errSecItemNotFound) { SecItemDelete((__bridge_retained CFDictionaryRef) query); } [query setObject:(id)data forKey:(__bridge id)kSecValueData]; OSStatus status = SecItemAdd((__bridge_retained CFDictionaryRef) query, &item); if(status) { NSLog(@"Keychain error occured: %ld (statuscode)", status); return NO; } The debug output is the following: 2012-07-26 15:33:03.772 App[15529:1b03] Error: -25300 (statuscode) 2012-07-26 15:33:11.195 App[15529:1b03] Keychain error occured: -25299 (statuscode) The first error code -25300 represents errSecItemNotFound. So there is no value stored for this key. Then, when I try to add the private key into the Keychain I get -25299 which means errSecDuplicateItem. I do not understand this. Why is this happening? Does anyone have a clue or hint on this? Apple's error codes: errSecSuccess = 0, /* No error. 
*/ errSecUnimplemented = -4, /* Function or operation not implemented. */ errSecParam = -50, /* One or more parameters passed to a function were not valid. */ errSecAllocate = -108, /* Failed to allocate memory. */ errSecNotAvailable = -25291, /* No keychain is available. You may need to restart your computer. */ errSecDuplicateItem = -25299, /* The specified item already exists in the keychain. */ errSecItemNotFound = -25300, /* The specified item could not be found in the keychain. */ errSecInteractionNotAllowed = -25308, /* User interaction is not allowed. */ errSecDecode = -26275, /* Unable to decode the provided data. */ errSecAuthFailed = -25293, /* The user name or passphrase you entered is not correct. */ Update #1: I've figured out that it works only for the first time. Even when data and key are different, after the first one is stored into the keychain I cannot store further keys. - I'm facing the exact same issue. First key added using SecItemAdd without a problem, then any consecutive call to SecItemAdd fails with errSecDuplicateItem despite SecItemCopyMatching returning errSecItemNotFound. Have you found a solution to this yet? –  100grams Jan 8 '13 at 10:47 The following code worked for me: NSMutableDictionary *query = [[NSMutableDictionary alloc] init]; [query setObject:(id)kSecClassKey forKey:(id)kSecClass]; [query setObject:(id)kSecAttrAccessibleWhenUnlocked forKey:(id)kSecAttrAccessible]; [query setObject:[NSNumber numberWithBool:YES] forKey:(id)kSecReturnData]; [query setObject:(id)key forKey:(id)kSecAttrApplicationTag]; //removing item if it exists SecItemDelete((CFDictionaryRef)query); //setting data (private key) [query setObject:(id)data forKey:(id)kSecValueData]; CFTypeRef persistKey; OSStatus status = SecItemAdd((CFDictionaryRef)query, &persistKey); if(status) { NSLog(@"Keychain error occured: %ld (statuscode)", status); return NO; } - It is bad practice to delete a Keychain item only to add an item with the same info back. 
I don't recall the specific reason why, but I believe it may cause conflicts when attempting to do so. –  Joey Apr 12 '14 at 6:01 I talked to an Apple employee who works on the Keychain at last year's WWDC and he told me that in fact they don't offer another way to achieve this right now, but they have a private API for it which they'll release soon... –  Chris Apr 13 '14 at 7:55 I don't know what you mean, Chris. I had the same problem and was able to fix my code to correctly find the existing item. My problem was I was defining it to be syncable over iCloud when adding it, but didn't include that in the query when searching, so it couldn't find a match. I did not have to delete and add it again. –  Joey Apr 13 '14 at 8:14 can this be used for converting String to SecKeyRef? –  Turowicz Sep 25 '14 at 15:55 @Joey It is bad practice especially on OS X because the user may have moved the key to a different keychain (you can have hundreds of these if you like) and when you delete and recreate it, it is always recreated in the default keychain and then the user has to move it again. It is bad practice on iOS and OS X as any access control set by the system/user or any extra data added from other apps (if the item is shared, which is possible) is lost that way. –  Mecki Oct 9 '14 at 15:20 Sorry, but I'll never be able to debug your code. Apple provides some sample code (KeychainItemWrapper) which lets you save one string (I recall). It's a big help dealing with the keychain. There is a gist on the web that is a modified version of that class, but saves and restores a dictionary (archived as a data object, which is what the Apple code does with the string). This lets you save multiple items in one interface to the keychain. The gist is here: Keychain for NSDictionary/data - Thanks, but it needs to be stored as kSecClassKey (and the corresponding certificate as kSecClassCertificate).
I know Apple provides this sample code for storing user credentials (but only strings) into the keychain. Considering that one wants to verify a certificate or use the additional protection of kSecClassKey, it cannot be stored using the approach from Apple's sample code or your link. However, I think I have found a solution but have to verify this before I post it here. –  Chris Jul 27 '12 at 5:41 In my experience, the keychain wrapper does not allow multiple items to be saved to the same keychain group. This caused some major frustration, but a solution can be found here: stackoverflow.com/questions/11055731/… –  rob Sep 27 '12 at 16:02 That's funny - since I'm doing it with a dictionary in my app and saving email, password, and another bit of context related to the user. But I modified Apple's code a bit - not a lot though - you can see it in the link in my answer. This is working code that's in thousands of phones now (not millions :-( ) –  David H Sep 27 '12 at 16:53
• SUJIT SARKAR Articles written in Pramana – Journal of Physics • Emergence of quantum phases for the interacting helical liquid of topological quantum matter The emergence of different interesting and insightful phenomena at different length scales is at the heart of quantum many-body systems. We present the emergence of quantum phases for the interacting helical liquid of topological quantum matter. We also observe that the Luttinger liquid parameter plays a significant role in determining the different quantum phases. We use three sets of renormalisation group (RG) equations to solve for the emergent quantum phases of our model Hamiltonian system. Two of them are the quantum Berezinskii–Kosterlitz–Thouless (BKT) equations. We show explicitly from the study of length scale-dependent emergent physics that there is no evidence of a Majorana–Ising transition for the two sets of quantum BKT equations, i.e., the system is either in the topological superconducting phase or in the Ising phase. The whole set of RG equations shows evidence of a length scale-dependent Majorana–Ising transition. The emergence of length scale-dependent quantum phases can be observed in topological materials, which exhibit fundamentally new physical phenomena with potential applications for novel devices and quantum information technology. • A study of curvature theory for different symmetry classes of Hamiltonian We study and present the results of curvature for different symmetry classes (BDI, AIII and A) of model Hamiltonians and also present the transformation of a model Hamiltonian from one distinct symmetry class to another based on the curvature property. We observe mirror-symmetric curvature for the Hamiltonian of the BDI symmetry class, but there is no evidence of such behaviour for Hamiltonians of the AIII symmetry class. We show the origin of torsion and its consequences on the parameter space of the topological phase of the system. We find evidence of torsion for the Hamiltonian of the A symmetry class.
We present Serret–Frenet equations for all model Hamiltonians in R$^3$ space. To the best of our knowledge, this is the first application of curvature theory to model Hamiltonians of different symmetry classes belonging to the topological state of matter. • # Pramana – Journal of Physics, Volume 95, 2021
# dashed line in pgfplots legend results in incomplete marks When defining the pgfplots legend entries myself, I have a problem when using dashed lines in combination with marks. The mark is also dashed (or dotted, etc.), just as the accompanying line is. Is there a way of making sure that marks are drawn completely when using dashed/dotted/etc. lines? MWE: \documentclass{report} \usepackage{pgfplots} \begin{document} \begin{tikzpicture} \begin{axis}[ legend pos=south east, legend entries={Entry 1,Entry 2,Entry 3} ] \addplot+ [dashed, mark=oplus] {x^2}; \addplot+ [dotted, mark=oplus] {x^2 + 1}; \addplot+ [dashdotted, mark=oplus] {x^2 + 2}; \end{axis} \end{tikzpicture} \end{document} - use this: \addlegendimage{blue,dashed,mark=oplus,mark options=solid} –  oerpli Aug 2 '13 at 9:30 –  percusse Aug 2 '13 at 9:38 Thanks oerpli, that works like a charm! –  graafderk Aug 2 '13 at 9:50 \addlegendimage{blue,dashed,mark=oplus,mark options=solid}
# Decibel Sound is all around us and can be measured to protect and inform us, as some sounds are not safe. In fact, loud noise can actually damage our hearing. So, the intensity of a sound is measured in decibels. As the human ear is incredibly sensitive, you can hear everything from your fingertip brushing lightly over your skin to a loud thunderclap. As far as power is concerned, the sound of the thunderclap is approximately 1,000,000,000,000 times more powerful than the smallest audible sound. That makes a huge difference! Formulas to calculate decibels: 1. When power is given: the most basic form for decibel calculations is given below: $N_{dB}=10\log_{10}\left(\frac{P_2}{P_1}\right)$ 2. When voltage or current is given: $N_{dB}=20\log_{10}\left(\frac{V_2}{V_1}\right)$ $N_{dB}=20\log_{10}\left(\frac{I_2}{I_1}\right)$ Uses: 1. The decibel is mostly applied in acoustics as a unit of sound pressure level. Here, the reference pressure in air is set at the threshold of human perception. Many comparisons are used to describe different levels of pressure. 2. In electronics, the decibel is used to express ratios of power or amplitude. The total gain in decibels can be calculated by adding up all the individual decibel gains. Decibel Meter: Sound can be measured with a device called a decibel meter, which measures and samples sound. Decibel meters are also known as sound-level meters. They can even be accessed on a smartphone through apps. Measuring the sound of an environment with a common device like a smartphone that many people always carry may help protect their ears more often. Decibel Scale: On the decibel scale, 0 dB is the smallest audible sound. A sound 10 times more powerful is 10 dB. A sound 100 times more powerful than near total silence is 20 dB. A sound 1,000 times more powerful than near total silence is 30 dB. A noise level chart showing examples of sounds with dB levels ranging from 0 to 120 decibels.
| Decibel | Description | Sound Source |
|---|---|---|
| 10 | Almost inaudible | Normal breathing |
| 20 | Audible | Rustling leaves, mosquitoes |
| 30 | Very quiet | Whisper |
| 40 | Quiet | Stream, refrigerator humming |
| 50 | Limited sound | Quiet office |
| 55 | Normal sound | Filtering coffee-maker |
| 60 | Fairly quiet | Normal conversation |
| 70 | Irritating | Vacuum cleaner, hairdryer |
| 75 | Constant sound | Dishwasher |
| 80 | Unpleasant | City traffic noise |
| 85 | Loud | Lawnmower |
| 90 | Extremely unpleasant | Violin |
| 95 | Noisy | Farm tractor |
| 100 | Extremely unpleasant | Train |
| 105 | Even louder | Large drum |
| 110 | Extremely loud | Symphony orchestra |
| 120 | Extremely loud | Thunderclap, boombox |
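The logarithmic formulas above are easy to check numerically. A minimal sketch in Python (the helper names are mine, not from any standard acoustics library):

```python
import math

def db_from_power(p2, p1):
    # decibels from a power ratio: 10 * log10(P2 / P1)
    return 10 * math.log10(p2 / p1)

def db_from_voltage(v2, v1):
    # decibels from an amplitude (voltage or current) ratio: 20 * log10(V2 / V1)
    return 20 * math.log10(v2 / v1)

# A thunderclap is roughly 10^12 times more powerful than the smallest
# audible sound, which puts it near the top of the scale:
print(round(db_from_power(1e12, 1), 6))   # 120.0
# Doubling a voltage adds about 6 dB:
print(round(db_from_voltage(2, 1), 2))    # 6.02
```

Note how the 10^12 power ratio between the thunderclap and the threshold of hearing maps directly onto the 120 dB figure at the top of the chart above.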
# [tex-live] strange discrepancy in running time of etex between TL2015 and TL2017 jfbu jfbu at free.fr Fri Jul 28 09:15:32 CEST 2017 Le 28 juil. 2017 at 03:26, Reinhard Kotucha <reinhard.kotucha at web.de> : > On 2017-07-27 at 08:58:43 +0200, jfbu wrote: > >> Thanks a lot for trying out, the most likely cause is that the >> >> def\x{3.141592653589793238462.... >> >> line got hard-wrapped somehow so that it occupies multiple lines >> >> (it should be on only one line) >> >> and this means the \x has spaces in it and the \ifx test fails >> because the \Z computed by \fdef\Z {\Machin {1000}} contains >> the first 1000 digits of Pi with no spaces. > > The problem was that the value of \x contained three spurious > characters (numbers are code points): > > 0039: DIGIT NINE > 0032: DIGIT TWO > 0037: DIGIT SEVEN > 0038: DIGIT EIGHT > 0037: DIGIT SEVEN > 0021: EXCLAMATION MARK > 000A: <control> LINE FEED (LF) > 0020: SPACE > 0036: DIGIT SIX > 0036: DIGIT SIX > 0031: DIGIT ONE > 0031: DIGIT ONE > 0031: DIGIT ONE > > I do not know where these characters had been inserted. But if you > send code by mail which contains non-ASCII characters and/or where > linebreaks matter, I strongly recommend to send the file as an email > attachment. It's also advisable to gzip the file because it's marked > as "application/octet-stream" then by your mail client. Thanks for confirming the corruption of \x. Visiting http://tug.org/pipermail/tex-live/2017-July/040488.html with Firefox I don't see the extra ! CTRL-J SPACE. But certainly I should have sent the file as binary to avoid any such problem with a line of 1000 or so characters. On arXiv in the old years, and even on CTAN, one could find many LaTeX or dtx files containing "> From" at the start of lines. The original had only "From" and the mailer inserted the "> ". People collaborating on a paper send versions by mail, hence it might even be that a very large proportion of math papers in the nineties got corrupted in this way.
(I have forgotten now, perhaps it was in fact also related to ftp transfers) > > You can trust \pdfelapsedtime because it just uses system calls but I > don't know which system call is used by pdftex. It seems that it's > based on gettimeofday(2). I have done a lot of experimenting with \pdfelapsedtime on Mac OS and also a bit on Linux, and it has always struck me as fluctuating quite a bit even when used for durations of tens of seconds. One source of this is definitely deep in the CPU management, because on my laptop I observe a specific phenomenon, which I do not see on a Mac desktop, regarding computation times running into minutes: for very lengthy things (5 min+) my laptop *slows down* in comparison to the desktop or the Linux machine > > Luatex provides os.clock() and os.gettimeofday(). > > os.clock() counts CPU cycles of the current process with a resolution > of 10 ms. It disregards CPU cycles used by sub-processes or other > processes running at the same time. > > os.gettimeofday(), as its name implies, just returns the current time > and is less reliable when a cronjob is running or Emacs creates an > auto-backup. The resolution is system dependent (1µs on Linux and > 500µs on Windows). Don't be confused by the many decimal digits. > Most of them are just rounding errors which are almost always > introduced when binary numbers are converted to the decimal system. > > https://www.tug.org/TUGboat/tb28-3/tb90beebe.pdf Thanks for the link, I will check it out. > > If you are using \pdfelapsedtime, os.clock(), or os.gettimeofday(), it > doesn't matter whether any files are cached already, at least if your > input file resets the counter at the beginning and you avoid \input. > IMO it's best to use os.clock() on LuaTeX for benchmarks despite its > low resolution. The problem you reported is OS/X specific but in most > cases a resolution of 10 ms is sufficient on a Raspberry Pi.
> > If you are using time(1), which depends on gettimeofday(2), you have > to run the script several times because other processes might run in > the background. It's also advisable to install a system monitor like > xosview. When I need to be a bit serious and not only get a general impression I indeed try to run the test in a controlled environment: - turn off the wifi, hence all internet - kill all apps beyond Terminal - run on house electric power, not on batteries In my experience on Mac OSes, \pdfelapsedtime has some significant fluctuations when I use it multiple times in the same TeX job. Some years ago I was usually simply sending \the\pdfelapsedtime to PDF output, and I noticed I should do a \noindent before \pdfresettimer to get things a bit stabilized between multiple uses of \pdfelapsedtime in the same TeX run. Nowadays I work more likely with log or terminal output, and when I want to get serious I do not use Emacs AUCTeX, because I noticed that it significantly slows down compilation time, presumably from its on-the-fly parsing of log output, when this log output is voluminous (which in LaTeX it always is to some extent). With "time", I get relatively stable and coherent results, with the occasional extravagance when the system did something strange like sending to the NSA or to my ISP provider all my private data, or perhaps my computer is talking with Apple servers or whatever Google Analytics. But globally "time" has always given me more coherent results than "\pdfelapsedtime". Best, Jean-Francois
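The distinction Reinhard draws between os.clock() (CPU time of the current process) and os.gettimeofday() (wall-clock time) can be illustrated outside TeX as well; a small sketch in Python, purely for illustration:

```python
import time

wall0 = time.perf_counter()   # wall-clock timer, like gettimeofday()
cpu0 = time.process_time()    # CPU time of this process, like os.clock()

time.sleep(0.2)               # sleeping consumes wall time but almost no CPU

wall = time.perf_counter() - wall0
cpu = time.process_time() - cpu0

# The wall-clock delta includes the sleep; the CPU delta does not.
# That is why CPU-based timers are steadier for benchmarks, but blind
# to time spent waiting on I/O or in sub-processes.
print(f"wall: {wall:.3f}s  cpu: {cpu:.3f}s")
```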
Godfrey E. J. Miller Consultant | Princeton Consultants, Inc. Physicist | arXiv Ph.D. | University of Pennsylvania The whole point of science is to understand the mysteries you see around you. - Joan Feynman I think I can safely say that nobody understands quantum mechanics. - Richard Feynman $$\Huge \sigma \equiv 2 \pi = 6.283185 \ldots$$
# Derek wishes to retire in 15 years, at which time he wants to have accumulated enough money to... ## Question: Derek wishes to retire in 15 years, at which time he wants to have accumulated enough money to receive an annual annuity of $31,000 for 20 years after retirement. During the period before retirement, he can earn 12% annually, while after retirement he can earn 14% on his money. What annual contributions to the retirement fund will allow him to receive the $31,000 annuity? ## Future Value and Present Value of Annuity: The future value of an annuity is the lump sum at a future date that is equivalent to the series of payments. The present value, on the other hand, is the equivalent lump sum amount measured in current-day dollars. The annual contribution is $5,507.47. The annual contribution is such that the future value of the contributions after 15 years is equal to the value of the future withdrawals measured at the end of 15 years. We can use the following formula to compute the present value of an annuity with periodic payment {eq}M {/eq} for {eq}T{/eq} periods, given periodic return {eq}r{/eq}: • {eq}\displaystyle \frac{M(1 - (1 + r)^{-T})}{r} {/eq} Similarly, the future value of such an annuity is: • {eq}\displaystyle \frac{M((1 + r)^{T} - 1)}{r} {/eq} Applying these formulas, we can find the annual contribution (denoted by {eq}M{/eq}) by solving the following equation: • {eq}\dfrac{M*((1 + 12\%)^{15} - 1 )}{12\%} = \dfrac{31,000*(1 - (1 + 14\%)^{-20})}{14\%} {/eq} • {eq}37.27971466 * M = 205317.0471 {/eq} • {eq}M = 5,507.47 {/eq}
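The arithmetic in the solution can be replicated in a few lines; a sketch in Python (the variable names are mine):

```python
# Value at retirement (t = 15) of the 20 post-retirement withdrawals,
# discounted at the 14% post-retirement rate:
pv_needed = 31_000 * (1 - (1 + 0.14) ** -20) / 0.14

# Future-value annuity factor of 15 pre-retirement contributions at 12%:
fv_factor = ((1 + 0.12) ** 15 - 1) / 0.12

# The contribution equates the two sides:
contribution = pv_needed / fv_factor
print(round(pv_needed, 2), round(contribution, 2))
```

With these rates and horizons the contribution comes out at about $5,507 per year, matching the solution above.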
Detect if directed cycle is clockwise or counterclockwise in 3D [closed] I need to check whether the cycle given by $(x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3)$ is clockwise or counterclockwise. I have found this answer: Detecting whether directed cycle is clockwise or counterclockwise, but I don't have a clue how to make it work for 3-dimensional space. - Clockwise and anticlockwise don't mean anything in three dimensions. – Robin Chapman Dec 6 2010 at 10:22 To extend Robin's remark, you need a specified direction $u$ (perhaps the $z$-direction). Then you can determine if the cycle is cw or ccw with respect to $u$. – Joseph O'Rourke Dec 6 2010 at 11:18
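Following the comments: fix a reference direction $u$, take the plane normal via a cross product of two edge vectors, and read the orientation off the sign of its dot product with $u$. A sketch in Python (the sign convention — positive meaning counterclockwise as seen from the side $u$ points toward — is a choice, not a standard):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def orientation(p1, p2, p3, u):
    """+1 = counterclockwise, -1 = clockwise, 0 = degenerate,
    all as seen looking against the direction u."""
    n = cross(sub(p2, p1), sub(p3, p1))  # normal of the cycle's plane
    s = dot(n, u)
    return (s > 0) - (s < 0)

# Viewed from +z, this triangle runs counterclockwise:
print(orientation((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1
```

If $u$ lies in the plane of the cycle the dot product is zero and the question has no answer, which is exactly Robin Chapman's point.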
## Valuations in the language of Topos theory “Useful as it is under everyday circumstances to say that the world exists “out there” independent of us, that view can no longer be upheld. There is a strange sense in which this is a “participating universe”” Wheeler (1983). The above statement reveals the radical difference that exists between the view of the world given by Quantum Mechanics and the view given by Classical Physics. In fact, the existence of an “objective external world”, which is postulated by Classical Physics, seems to be rejected by Quantum Mechanics. The cause of these interpretative differences between the two theories can be traced back to the different algebras used to relate propositions0. Precisely: propositions in Classical Physics form a Boolean algebra, while propositions in Quantum Mechanics form a non-Boolean algebra. This feature of Quantum Mechanics entails that properties cannot be said to be possessed by a system, denying, in such a way, the existence of an independent outside world. An attempt to give Quantum Mechanics the status of a realist1 theory is given by the hidden variable theories, which postulate the following: • in any state $\vec{\psi}$, observables A possess an objectively existing value • values of observables A are determined by $\vec{\psi}$ and hidden variables. However, these theories were disproved by the Kochen-Specker theorem and the Bell inequalities, both of which show that properties are not possessed by a quantum system, therefore rejecting the first assumption of the hidden variable theories. In particular, the Kochen-Specker theorem asserts that it is impossible to evaluate propositions regarding values possessed by physical entities represented by projection operators, such that their truth values belong to the set $\{0,1\}$, therefore depriving of meaning any statement regarding a state of affairs of a system, since, generally speaking, a statement is said to be meaningful if its validity can be assessed.
The Bell inequalities go further and show the impossibility of a local2 realist interpretation of Quantum Mechanics. Do we then have to accept that Quantum theory is a non-realist theory, and therefore regard any statement about states of affairs of a system as meaningless? Or is there a way of reformulating the Kochen-Specker theorem and the Bell inequalities so as to give Quantum theory a realist flavor? To this question C. J. Isham of Imperial College London and J. Butterfield of All Souls College, Oxford, answer in the negative. In fact, in a series of papers Isham, Butterfield 1998, Isham, Butterfield 1999, Isham, Butterfield, Hamilton 1999, Isham, Butterfield 1999, Isham, Butterfield 1999 they analyse the possibility of retaining some realist flavor in Quantum Theory by changing the logical structure with which propositions about the values of physical quantities are handled. In particular, they introduce a new kind of valuation for quantum quantities which is defined on all operators, so that it becomes possible to assign truth values to quantum propositions. This new valuation is defined using Topos Theory, and it is such that truth values become multi-valued and contextual, in agreement with the mathematical formalism of Quantum Theory. The idea behind the definition of these new valuations is the realization that, although the Kochen-Specker theorem prohibits assigning truth values to propositions, it nevertheless allows the possibility of assigning truth values to generalized propositions. Therefore, by adopting generalized propositions as the domain of applicability of the valuation function, we obtain a situation in which it is meaningful to assign truth values. The advantage of this approach is that the logic of quantum propositions it proposes is distributive; therefore, it can be used as a deductive system of reasoning.
Moreover, unlike any other multi-valued type logic, it enables the logical connectives to be defined in an unambiguous way, such that the metalanguage3/object-language4 distinction is not violated. For anyone interested in the topic, apart from the references mentioned above, a non-technical summary of the work done by Isham and Butterfield can be found at Isham 2004. An easier introduction can instead be found at My Master Dissertation 2005. A brief account of the subject can be found at talk, which is a pdf version of a talk I gave at the Eleventh Marcel Grossmann Meeting on General Relativity in Berlin. For a more technical account see my more recent talk. Recently it has been shown in Doering, Isham 2011 that it is possible to regard probabilities as truth values in an appropriate topos. This definition of probabilities allows for a non-instrumentalist interpretation of the latter. Details can be found in the section A Topos Representation of Probabilities 0 Propositions are defined as statements regarding properties of a given system 1 A realist theory is a theory in which the following conditions are satisfied: 1) propositions are related through a Boolean algebra 2) propositions can always be assessed to be either true or false. 2 Locality means that, given a composite system, the value of a physical quantity of an individual constituent of the system is independent of what is measured on any other constituent. 3 Metalanguage is the language used to make statements about another language. 4 Object-language is the language being studied through the Metalanguage.
# Calculating the mechanical advantage of a hydraulic system Module by: Siyavula Uploaders. This module is included in the lens Siyavula: Technology (Gr. 7-9), as a part of the collection "Technology Grade 9". ## CALCULATE THE MECHANICAL ADVANTAGE OF A HYDRAULIC SYSTEM ACTIVITY 1: To calculate the mechanical advantage of a hydraulic system [LO 2.3] An example of a simple hydraulic system is a hydraulic lift which is used to lift motor-cars. The system has a mechanical advantage of both power input and distance output. The system consists of two pistons of different sizes, connected by a reservoir that is filled with a hydraulic liquid such as oil or water.
A smaller input force on the small piston results in a greater output force on the large piston, so that there is a mechanical advantage. The input force is called the effort and the output force is called the load. The advantage is made possible by two characteristics of liquids, namely that they cannot be compressed and that they distribute pressure equally. This principle is called Pascal's principle. The pressure at piston A is equal to the pressure at piston B. Pressure is calculated as force per area. Pressure cylinder A = Pressure cylinder B, so Force A / Area A = Force B / Area B (1) The following formula can be used to calculate the mechanical advantage: Mechanical force advantage = load (output force) / effort (input force) (2) In the syringes, the piston with the large diameter will have a smaller distance output, and the piston with the small diameter will have a larger distance output. The relationship of distance output is determined by the mechanical force advantage. Example: The motor-car in the above example weighs 5 000 N. The small piston, A, has an area of 1 cm² and the large piston, B, has an area of 100 cm². The small piston moves across a distance of 100 cm. (a) Determine the input force.
According to Pascal's principle: Pressure cylinder A = Pressure cylinder B Force A / Area A = Force B / Area B Force A / 1 = 5 000 N / 100 Force A = 50 N (3) The area at cylinder B is 100 times bigger. Therefore the force at cylinder A is 100 times smaller. (b) Determine the mechanical force advantage. MA = load / effort = 5 000 / 50 = 100 (4) (c) Determine the distance that the large piston will move. MA = 100. Therefore if the small piston moves 100 cm, the large piston will move 1 cm. 1. A little boy receives a Jack-in-the-Box toy from his grandmother (see illustration). 1.1 Calculate the amount of force, in Newton, that the little boy needs to make the Jack-in-the-Box weighing 100 g shoot out, when the area at cylinder A is 2 cm² and the area at cylinder B is 1 cm². 1.2 Calculate the mechanical advantage in question 1.1 1.3 Calculate the distance that piston A must move to make the Jack-in-the-Box shoot out 3 cm. 1.4 Would it be more advantageous to change around the two pistons, A and B? 2. You have to make a pair of hydraulic pliers, as indicated in the sketch. To enable you to do this, you are given two cylinders with pistons of 2 cm and 1 cm respectively. The maximum distance that the larger piston can move in the cylinder is 3 cm.
A force of 1 N is applied to move the moving jaws of the pliers over a distance of 3 cm and to clamp the jaws of the pliers. 2.1 Which of the two pistons are you going to place in position A for a minimum force input? Explain your answer. 2.2 How far will the piston at cylinder A move to clamp the jaws? ## Assessment LO 2 TECHNOLOGICAL KNOWLEDGE AND UNDERSTANDING: The learner will be able to understand and apply relevant technological knowledge ethically and responsibly. We know this when the learner: systems and control: 2.3 demonstrates knowledge and understanding of interacting mechanical systems and sub-systems by practical analysis and represents them using systems diagrams: gear systems; belt drive or pulley systems with more than one stage; mechanical control mechanisms (e.g. ratchet and pawl, cleats); pneumatic or hydraulic systems that use restrictors; one-way valves; systems where mechanical, electrical, or pneumatic or hydraulic systems are combined. ## Memorandum ACTIVITY 1 1.1 FORCE = 2 N 1.2 MA = 1/2 1.3 1,5 cm 1.4 No, the small piston/plunger will move the furthest and enable Jack to make the highest jump 2.1 The piston/plunger with a diameter of 1 cm 2.2 1,5 cm
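The Pascal's-principle arithmetic used in the worked example above can be sketched as a short calculation (Python, not part of the original module):

```python
def effort_force(load, area_effort, area_load):
    # Pascal's principle: pressure is equal in both cylinders,
    # so Force_A / Area_A = Force_B / Area_B.
    return load * area_effort / area_load

def mechanical_advantage(load, effort):
    return load / effort

# The motor-car example: 5 000 N load, pistons of 1 cm^2 and 100 cm^2.
effort = effort_force(5000, 1, 100)       # 50.0 N
ma = mechanical_advantage(5000, effort)   # 100.0
# The large piston travels 1/MA of the small piston's 100 cm:
distance_out = 100 / ma                   # 1.0 cm
print(effort, ma, distance_out)
```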
## AntiMatter 4 years ago How does external torque produce angular acceleration about the center-of-mass? 1. kirill $dL/dt=r \times F$ where $L$ is the angular momentum, $L=I \omega$, $I$ is the moment of inertia, $\omega$ is the angular velocity, and $\tau = r \times F$ is the torque. So, if you differentiate $\omega$ you get the angular acceleration. $I$ is a matrix in general. 2. panos $\tau = r \times F = I \alpha$ where $I$ is the moment of inertia and $\alpha$ is the angular acceleration
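For the special case of a diagonal inertia matrix, the relations above can be turned into numbers directly; a small Python sketch (illustrative only):

```python
def cross(r, f):
    # torque = r x F
    return (r[1] * f[2] - r[2] * f[1],
            r[2] * f[0] - r[0] * f[2],
            r[0] * f[1] - r[1] * f[0])

def angular_acceleration(r, force, inertia_diag):
    """alpha = I^-1 (r x F); with I diagonal the inverse is elementwise."""
    torque = cross(r, force)
    return tuple(t / i for t, i in zip(torque, inertia_diag))

# A 2 N force in +y applied 1 m out along +x, body with I = diag(2, 2, 2):
# torque = (0, 0, 2) N·m, hence alpha = (0, 0, 1) rad/s^2 about z.
print(angular_acceleration((1, 0, 0), (0, 2, 0), (2, 2, 2)))  # (0.0, 0.0, 1.0)
```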
# Plotting a smooth function [duplicate] I want to plot the graph of the function given by: f(x)=abs(x - x^3/6 - sin(x)) for -1<= x <=1 but a noisy curve (near the x-axis) is obtained: The code is the following: \documentclass[11pt,border=1mm]{standalone} \usepackage[portuguese, shorthands=off]{babel} \usepackage[utf8]{inputenc} \usepackage{tikz,pgfplots} \begin{document} \begin{tikzpicture} \begin{axis}[ xmin = -1, xmax = 1, width = 10cm] \addplot {abs(x-x^3/6-sin(deg(x)))}; \end{axis} \end{tikzpicture} \end{document} and the output produced is the following: How can I fix this? • Please extend your example to a minimal working example (MWE). Sep 24 at 12:59 • Done, as you suggested. Sep 24 at 13:02 • Since the Taylor series approximation is $\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!}-...$ you are bound to get some round-off error, and pgfmath in particular is prone to round-off errors. Sep 24 at 13:22 • Since you have some responses below that seem to answer your question, please consider marking one of them as 'Accepted' by clicking on the tickmark below their vote count (see How do you accept an answer?). This shows which answer helped you most, and it assigns reputation points to the author of the answer (and to you!). It's part of this site's idea to identify good questions and answers through upvotes and acceptance of answers. Sep 29 at 11:49 There was a similar question not so far in the past. It was already guessed that the root cause of the "noise" is TeX's calculation engine. So the solution is to use another calculation engine. Here I show the comparison of TeX and Lua. In my answer to the similar question you can also find the engines gnuplot and l3/xfp.
```latex
% used PGFPlots v1.18.1
\documentclass[border=5pt]{standalone}
\usepackage{pgfplots}
% use this compat level or higher to make use of the Lua calculation engine
\pgfplotsset{compat=1.12}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
    cycle multiindex* list={
        color\nextlist
        [1 of]mark list\nextlist
    },
    xmin=-1,
    xmax=1,
    domain=-1:1,
    samples=51,
    smooth,
%    no markers,
    mark size=1pt,
]
    % using TeX as calculation engine
    % (the \addplot lines were lost in this copy; a plausible reconstruction
    %  using the `lua backend=false` key to force the TeX engine)
    \addplot+ [lua backend=false] {abs(x - x^3/6 - sin(deg(x)))};
    % using Lua as calculation engine (the default under LuaLaTeX at this compat level)
    \addplot+ {abs(x - x^3/6 - sin(deg(x)))};
\end{axis}
\end{tikzpicture}
\end{document}
```

• Do I need to use LuaLaTeX to fully gain the advantage of the Lua calculation engine, or is the same result possible if I use pdfLaTeX to compile this document? Sep 25 at 7:38
• @Lukas, (of course) you need to compile with LuaLaTeX to make use of Lua stuff ... ;) Sep 25 at 18:19
• I was just a little puzzled because it compiled with pdflatex without an error :D Thanks for the information! Sep 25 at 18:28
• @Lukas, yes, it does. If you want to know why, see the comments in my quoted answer. Sep 26 at 5:01

As others have explained, you're relying on pgfmath to do the calculations, and it is prone to round-off errors. It's great for the sort of calculations that come up for most people, but you're working with a more delicate function. The answer, then, is to use a more appropriate tool, a computer algebra system, to do the calculation. This is possible with the sagetex package, found here on CTAN. This package lets you farm out the calculations to the open-source CAS Sage instead of using pgfmath. The result is accurate calculations which can be used in your plot.
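To make the round-off point concrete before the Sage listing: pgfmath works in TeX's fixed-point arithmetic with roughly five fractional digits, while Lua, Sage, and Asymptote use IEEE doubles. A small Python sketch, where the fixed-point engine is only *modeled* by rounding each term to five decimals (an assumption for illustration, not pgfmath's actual algorithm):

```python
import math

def f_double(x):
    """f(x) = |x - x^3/6 - sin(x)| in IEEE double precision."""
    return abs(x - x**3 / 6 - math.sin(x))

def f_fixed(x, digits=5):
    """Crude model of a fixed-point engine: round each term to `digits` decimals."""
    return abs(round(x, digits) - round(x**3 / 6, digits) - round(math.sin(x), digits))

x = 0.2
print(f_double(x))   # about 2.66e-06: the true value
print(f_fixed(x))    # essentially 0: the value drowns below the rounding grid
```

Near the origin the function itself is smaller than the rounding granularity, which is exactly where the plotted noise appears.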
```latex
\documentclass[11pt,border=1mm]{standalone}
\usepackage{sagetex}
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage{pgfplots}
\pgfplotsset{compat=1.16}
\begin{document}
\begin{sagesilent}
LowerX = -1
UpperX = 1
LowerY = -.001
UpperY = .009
step = .001
t = var('t')
g(x) = abs(x - x^3/6 - sin(x))
x_coords = [t for t in srange(LowerX, UpperX, step)]
y_coords = [g(t).n(digits=6) for t in srange(LowerX, UpperX, step)]
output = r""
output += r"\begin{tikzpicture}[scale=1.0]"
output += r"\begin{axis}[xmin=%f,xmax=%f,ymin=%f,ymax=%f,width=10cm]"%(LowerX, UpperX, LowerY, UpperY)
output += r"\addplot[thin, blue, unbounded coords=jump] coordinates {"
for i in range(0, len(x_coords) - 1):
    if y_coords[i] < LowerY or y_coords[i] > UpperY:
        output += r"(%f , inf) "%(x_coords[i])
    else:
        output += r"(%f , %f) "%(x_coords[i], y_coords[i])
output += r"};"
output += r"\end{axis}"
output += r"\end{tikzpicture}"
\end{sagesilent}
\sagestr{output}
\end{document}
```

The code, running in CoCalc, is shown below.

Sage is not part of your LaTeX distribution, so this will not work on your machine unless you either

1. download the program to your machine and get it to work with your LaTeX distribution (which can be troublesome), or
2. open a free CoCalc account, which gives you access to Sage over the internet.

Sage also gives you access to Python, which you can then use as well. See, for example, how the Cantor function is plotted using sagetex. Search this site for sagetex and you will see how it can be used for more complex mathematical problems, such as finding the transpose of a matrix.

Update: one of the difficult problems for beginners is getting any single piece of Asymptote code to compile at all, so besides http://asymptote.ualberta.ca/ I have shared the workflow I use; here it is.

While waiting for a TikZ/PGF answer, here is runnable code with Asymptote. One of the features of Asymptote:

> ... inspired by MetaPost, with a much cleaner, powerful C++-like programming syntax and IEEE floating-point numerics; ...
```asy
import graph;
size(350,300,false); // the boolean "false" is important

real f(real x){ return abs(x - x^3/6 - sin(x)); }
guide F=graph(f,-1,1,500); // domain [-1,1], 500 samples
draw(F,1bp+blue);
limits((-1,-1e-3),(1,9*1e-3)); // see page 113 in the documentation
xaxis("$x$",BottomTop,LeftTicks(Size=4,Ticks=uniform(-1,1,10)));
yaxis("$y$",LeftRight,RightTicks(ticklabel=new string(real x){
    return format("%.3f",1e+3*x);},
    Size=4,Ticks=1e-3*uniform(0,8,4)));
labelx("$.10^{-3}$",(-1,9*1e-3),N+0.7E); // see page 104 in the documentation
```

• Nice! It could be an example in the Asymptote gallery of 2D graphs. Sep 25 at 19:25

It looks like TikZ has some numerical limitations here. For example, increasing the number of samples gives even more errors, as shown below, while decreasing it is better, but not what you want. However, if you switch to PSTricks, the numerics seem to be better. Here is some starting code, with remarks below the next drawing.

```latex
\documentclass[11pt]{article}
%\documentclass[11pt,border=1mm]{standalone}
%\usepackage[portuguese, shorthands=off]{babel}
\usepackage[utf8]{inputenc}
%\usepackage{tikz,pgfplots}
\usepackage{pst-plot}
\begin{document}
\begin{pspicture}
\end{pspicture}
\end{document}
```

REMARKS:

1. For simplicity I left out the grid and axes. Not too difficult to add, but too difficult for me right now ;-)
2. f(x) looks a bit odd ... think of entering your equation term by term on an old HP calculator in so-called Polish notation (values, operation => result, repeat). For simplicity I multiplied by 1000 at the end.
3. During compilation, make sure you advise your LaTeX compiler to go via dvips ... otherwise it won't create a PDF.
4. The generated .ps and .pdf are vector graphics, i.e. with less trouble from the numerical side, also during display. If the line looks blurred, increase the line width during plotting: blurring is an artifact of your computer's display system.
5. If you need more samples, a simple-stupid way is to let x run from -10 to +10 AND divide it by 10 inside \psplot. There may be better ways to do it.
6. In your final document you could use \includegraphics to show any graphic which you created as a separate .pdf with TikZ or PSTricks.

This (too long for a comment) is an auxiliary to the Asymptote answer above. The question is interesting in the sense that the options [smooth] and [samples] of the plot command in TikZ do not work as expected, due to the computation limits of pgfmath. So Asymptote is one of the suitable choices. The following illustrates the sine function, the third-order Taylor polynomial, and the third-order Taylor remainder. Visually, the approximation T_3(x) of sin(x) is only good in a vicinity of the origin, where the remainder is almost horizontal.

```asy
// http://asymptote.ualberta.ca/
unitsize(1cm);
size(8cm);
import graph;
usepackage("amsmath");
pen p=gray+opacity(.5);
draw((-2pi-1,0)--(2pi+1,0),p);
draw((0,-4)--(0,5),p);

// graph of sin(.)
path F=graph(sin,-2pi,2pi);
draw(Label("$y=\sin x$",EndPoint,N),F,blue);

// graph of the 3rd-order Taylor polynomial
real T3(real x){ return x - x^3/6; }
guide G=graph(T3,-1.1pi,1.1pi);
draw(Label(scale(.8)*"$T_3(x)=x-\dfrac{x^3}{6}$",EndPoint,E),G,darkcyan);

// graph of the 3rd-order Taylor remainder
real r(real x){ return abs(x - x^3/6 - sin(x)); }
guide R=graph(r,-1.15pi,1.15pi,500);
draw(Label(scale(.8)*"$r_3(x)=\left|x-\dfrac{x^3}{6}-\sin x\right|$",BeginPoint),R,magenta);

shipout(bbox(5mm,invisible));
```
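The tiny magnitude of the remainder is no accident: since sin x = x - x^3/3! + x^5/5! - ... is an alternating series with decreasing terms on [-1, 1], the remainder after T_3 is bounded by the first omitted term, |r_3(x)| <= |x|^5/120, which is at most about 8.3e-3 on that interval. A quick Python check of that bound (pure illustration, not part of any answer above):

```python
import math

def r3(x):
    # |sin(x) - T_3(x)| with T_3(x) = x - x^3/6
    return abs(x - x**3 / 6 - math.sin(x))

xs = [i / 50 - 1 for i in range(101)]   # 101 points in [-1, 1]
# first-omitted-term bound for the alternating Taylor series of sin
assert all(r3(x) <= abs(x)**5 / 120 + 1e-15 for x in xs)
print(max(r3(x) for x in xs))   # about 0.00814, attained at x = +/-1
```

This also explains why the plot needs a y-range of only a few thousandths: the curve never exceeds 1/120.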