Probability that the range includes the mean in a sample of $n=4$ from a normal distribution? If we select one random sample of 4 elements from a normal distribution, denote the minimum value in the sample by $a$ and the maximum by $b$. What is the probability that the interval between $a$ and $b$ (i.e. the range) includes the true population mean? Could anyone help me solve this problem from the 2012 local-math-contest? The short solution of this question: $7/8$. Consider each value in turn: either it lies below the median or above it. Because the median and the mean coincide for a normal distribution, there is a 50% chance that the first value lies below the median and a 50% chance that it lies above. For the second value there are again two possibilities, above or below the median, and again the chances are 50/50. We are only interested in the paths where either all 4 values fall below or all 4 fall above the median. These paths are marked with green arrows in the graphic; the other paths are omitted for clarity. As we go down a path, we multiply the probabilities; at the bottom, we add the probabilities. Can you solve it from here? I would solve the inverse: what is the probability that the range does not include the mean, then subtract that from 1. That is the sum of 2 mutually exclusive cases: $a$ is greater than the mean, or $b$ is less than the mean. Calculate the probability of these 2 cases using the distribution of the min (max) from $n=4$, and subtract the 2 values (they should be the same) from 1. Instead of using the formula for order statistics, you can also use the fact that the median and the mean are the same for the normal distribution, and therefore each point has a 50% chance of being above (below) the mean; so what is the probability of all 4 values being greater (less) than the mean?
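A quick Monte Carlo check of the $7/8$ answer (a sketch in Python; the trial count and seed are arbitrary choices of mine):

```python
import random

def prob_range_covers_mean(n=4, trials=200_000, seed=1):
    # draw `trials` samples of size n from N(0, 1) and count how often
    # the sample range [min, max] contains the true mean 0
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if min(sample) <= 0.0 <= max(sample):
            hits += 1
    return hits / trials
```

The estimate should land very close to $1 - 2\,(1/2)^4 = 7/8 = 0.875$.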
CommonCrawl
We present direct numerical simulations of Taylor-Couette flow with grooved walls up to an inner cylinder Reynolds number of $Re_i=3.76\times10^4$, corresponding to a Taylor number of $Ta=2.15\times10^9$. The simulations are performed at a fixed radius ratio $\eta=r_i/r_o=0.714$. The grooves are axisymmetric V-shaped obstacles attached to the wall with a tip angle of $90^\circ$. Results are compared with the smooth wall case in order to investigate the effects of the grooved walls. In particular, we focus on the effective scaling laws for torque, boundary layers and flow structures. With increasing $Ta$, the boundary layer thickness finally becomes smaller than the groove height. When this happens, the plumes are ejected from the tips of the grooves and a secondary circulation between the grooves is formed. This is associated with a sharp increase of the torque, and thus the effective scaling law for the torque becomes much steeper. Further increasing $Ta$ does not result in an additional slope increase. Instead, the effective scaling law saturates to the same ``ultimate'' regime effective exponents seen for smooth walls.
CommonCrawl
Near-fill with 3x1 long triominoes: how to leave a void square other than the center square? It's rather easy to fill a $7 \times 7$ board with 16 long triominoes, leaving the center square void: see the picture below. But if I want to move the void square to another position, where else could I place it? Any triomino must cover one square of each color. There are 16 blue squares, 16 yellow squares, but 17 red squares, so a red square must be the uncovered one. This is true for the reverse coloring as well, which gives our final result: the only possible squares are the corners, the centers of the edges, and the center. The black one is the empty space; it could be one of the center squares of the first/last row/column.
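The coloring argument can be verified by brute force (a small sketch; the coordinate convention and names are my own):

```python
from collections import Counter

N = 7
# color each square by its diagonal (x+y) mod 3 and anti-diagonal (x-y) mod 3;
# every 3x1 triomino covers one square of each color in each coloring
diag = Counter((x + y) % 3 for x in range(N) for y in range(N))
anti = Counter((x - y) % 3 for x in range(N) for y in range(N))

# the void square must lie in the 17-square majority class of BOTH colorings
void_candidates = [(x, y) for x in range(N) for y in range(N)
                   if (x + y) % 3 == 0 and (x - y) % 3 == 0]
```

Both colorings have class sizes 17/16/16, and the nine candidate squares are exactly the four corners, the four edge centers, and the center.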
CommonCrawl
Monotone circuit complexity of matroids? Call a monotone boolean function $f$ a matroid function if its minterms are bases of some matroid. I am interested in the monotone circuit complexity of such functions, even when we "tie the hands" of these circuits as follows. Take a monotone boolean circuit and replace Or gates by Sum gates, and And gates by Product gates. The resulting monotone arithmetic circuit will (syntactically) produce some multivariate polynomial. Say that a circuit is a read-$r$ times circuit if every variable in this produced polynomial has degree at most $r$. So let $B_r(f)$ denote the minimum number of gates in a read-$r$ times monotone circuit computing $f$, and $B(f)$ be the minimum of $B_r(f)$ over all $r\geq 1$. Question: Are there explicit matroid functions $f$ known with $B_r(f)$ large? At least for $r=1$. Motivation: Every monotone boolean function $f$ defines a natural minimization problem: given an assignment of nonnegative weights to the variables, compute the minimum weight of a minterm of $f$; the weight of a minterm is the sum of the weights of its variables. One can show that $B_r(f)$ is a lower bound on the number of operations used by any "pure" dynamic programming algorithm approximating the minimization problem on $f$ within the factor $r$; a DP algorithm is "pure" if it can be implemented as a $(\min,+)$ circuit. Now, hold on to your hats: by the Edmonds-Rado theorem, for every matroid function $f$, the minimization problem on $f$ can be solved by the standard greedy algorithm, even exactly, for all input weightings! So, the measure $B_r(f)$ reflects the approximation weakness of pure DP algorithms when compared with greedy algorithms. My question is whether we know explicit examples, at least for $B_r(f)$ with bounded $r$. Of course, we have several lower bounds for monotone circuits, even for unbounded $r$. But does any of them work also for matroid functions?
Note that for $r=1$, the hands of circuits are extremely tied: boolean functions computed at the inputs of an And gate cannot then share a common variable. So, at least for $r=1$, we don't need the entire power of Razborov's method of approximations: lower bounds for read-once monotone circuits should come much easier. [ADDED 16.04.2016] I just realized that the case $r=1$ is indeed easy. If a boolean function $f$ is homogeneous (all minterms have the same cardinality), then $B_1(f)$ is at least the monotone arithmetic $(+,\times)$ circuit complexity of $f$. [Read-once circuits cannot use the idempotence $x^2=x$.] Let now $h$ be the spanning tree function: its minterms are the spanning trees of $K_n$. This is a matroid function (the graphic matroid). Jerrum and Snir have shown that the $(+,\times)$ complexity of $h$, and hence also $B_1(h)$, is exponential in $n$. On the other hand, as a boolean function, $h$ is just the graph connectivity function CONN, and the well-known pure DP algorithm of Floyd-Warshall yields $B(h)=O(n^3)$. So, my question actually asks what happens for $r>1$, the first interesting case being $r=2$.
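For concreteness, here is what the "pure" Floyd-Warshall dynamic program looks like when written using only min and + on the weights (a sketch of the recurrence, not tied to any particular formalization of $(\min,+)$ circuits):

```python
def min_plus_closure(w):
    # w[i][j]: weight of edge (i, j), float('inf') if absent, 0 on the diagonal
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # each update is one min-gate fed by one plus-gate
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```

Unrolling the three nested loops gives the $O(n^3)$ $(\min,+)$ gates behind the bound $B(h)=O(n^3)$ quoted above.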
CommonCrawl
We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD). We show that controlling the gradient of the critic is vital to having a sensible loss function, and devise a method to enforce exact, analytical gradient constraints at no additional cost compared to existing approximate techniques based on additive regularizers. The new loss function is provably continuous, and experiments show that it stabilizes and accelerates training, giving image generation models that outperform state-of-the-art methods on $160 \times 160$ CelebA and $64 \times 64$ unconditional ImageNet.
CommonCrawl
Let $G$ be a topological group such that its homology~$H(G)$ with coefficients in a principal ideal domain~$R$ is an exterior algebra, generated in odd degrees. We show that the singular cochain functor carries the duality between $G$-spaces and spaces over~$BG$ to the Koszul duality between modules up to homotopy over $H(G)$~and~$H^*(BG)$. This gives in particular a Cartan-type model for the equivariant cohomology of a $G$-space with coefficients in~$R$. As another corollary, we obtain a multiplicative quasi-isomorphism~$C^*(BG)\to H^*(BG)$. A key step in the proof is to show that a differential Hopf algebra is formal in the category of $A_\infty$~algebras provided that it is free over~$R$ and its homology is an exterior algebra.
CommonCrawl
Abstract: A closed expression of the Euclidean Wilson-loop functionals is derived for pure Yang-Mills continuum theories with gauge groups $SU(N)$ and $U(1)$ and space-time topologies $\mathbb{R}^1\times\mathbb{R}^1$ and $\mathbb{R}^1\times S^1$. (For the $U(1)$ theory, we also consider the $S^1\times S^1$ topology.) The treatment is rigorous, manifestly gauge invariant, manifestly invariant under area-preserving diffeomorphisms, and handles all (piecewise analytic) loops in one stroke. Equivalence between the resulting Euclidean theory and the Hamiltonian framework is then established. Finally, an extension of the Osterwalder-Schrader axioms for gauge theories is proposed. These axioms are satisfied for the present model.
CommonCrawl
It is known that the sum of every two narrow operators on $L_1$ is narrow; however, the same is false for $L_p$ with $1 < p < \infty$. The present paper continues numerous investigations of this kind. Firstly, we study narrowness of linear and orthogonally additive operators on Köthe function spaces and Riesz spaces at a fixed point. Theorem 1 asserts that, for every Köthe Banach space $E$ on a finite atomless measure space, there exist continuous linear operators $S,T: E \to E$ which are narrow at some fixed point but whose sum $S+T$ is not narrow at the same point. Secondly, we introduce and study uniformly narrow pairs of operators $S,T: E \to X$: for every $e \in E$ and every $\varepsilon > 0$ there exists a decomposition $e = e' + e''$ into disjoint elements such that $\|S(e') - S(e'')\| < \varepsilon$ and $\|T(e') - T(e'')\| < \varepsilon$. The standard tool in the literature to prove the narrowness of the sum of two narrow operators $S+T$ is to show that the pair $S,T$ is uniformly narrow. We study the question of whether every pair of narrow operators with narrow sum is uniformly narrow. Having no counterexample, we prove several theorems showing that the answer is affirmative in some partial cases.
CommonCrawl
Mirko and Slavko are playing a new game, "Trojke" (Triplets, in English). First they use a chalk to draw an $N \times N$ square grid on the road. Then they write letters into some of the squares. No letter is written more than once in the grid. The game consists of trying to find three letters on a line as fast as possible. Three letters are considered to be on the same line if there is a line going through the centre of each of the three squares. After a while it gets harder to find new triplets. Mirko and Slavko need a program that counts all the triplets, so that they know if the game is over or they need to search further. The first line contains an integer $N$ ($3 \le N \le 100$), the dimension of the grid. Each of the $N$ following lines contains $N$ characters describing the grid – uppercase letters and the character '.' which marks an empty square. Output the number of triplets on a single line.
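Since each letter appears at most once and letters are uppercase, there are at most 26 marked squares, so checking every triple with a cross-product collinearity test is more than fast enough (a sketch; function and variable names are my own):

```python
def count_triplets(grid):
    # collect (row, col) centres of lettered squares
    pts = [(r, c) for r, row in enumerate(grid)
                  for c, ch in enumerate(row) if ch != '.']
    n = len(pts)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                (r1, c1), (r2, c2), (r3, c3) = pts[i], pts[j], pts[k]
                # collinear iff the cross product of the two spanning vectors is 0
                if (r2 - r1) * (c3 - c1) == (r3 - r1) * (c2 - c1):
                    count += 1
    return count
```

Integer arithmetic keeps the collinearity test exact, so diagonal lines of any slope are handled without floating-point issues.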
CommonCrawl
New methods are proposed for the numerical evaluation of $f(A)$ or $f(A)b$, where $f(A)$ is a function such as $\sqrt{A}$ or $\log(A)$ with singularities in $(-\infty,0]$ and $A$ is a matrix with eigenvalues on or near $(0,\infty)$. The methods are based on combining contour integrals evaluated by the periodic trapezoid rule with conformal maps involving Jacobi elliptic functions. The convergence is geometric, so that the computation of $f(A)b$ is typically reduced to one or two dozen linear system solves.
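As a toy illustration of the contour-integral idea (plain trapezoid rule on a circle, not the elliptic-function conformal maps of the paper; the contour parameters below are ad hoc choices of mine):

```python
import numpy as np

def matfun_contour(A, f, center, radius, N=64):
    # f(A) = (1 / (2*pi*i)) * integral of f(z) (zI - A)^{-1} dz over a circle
    # enclosing the spectrum of A but avoiding the singularities of f
    n = A.shape[0]
    I = np.eye(n)
    F = np.zeros((n, n), dtype=complex)
    for k in range(N):
        z = center + radius * np.exp(2j * np.pi * k / N)
        dz = (z - center) * 2j * np.pi / N   # i * r * e^{i theta} * d theta
        F += f(z) * np.linalg.solve(z * I - A, I) * dz
    return F / (2j * np.pi)
```

With $f=\sqrt{\cdot}$ and a circle of radius 2 about 2.5, this recovers $\sqrt{A}$ for $A=\mathrm{diag}(1,4)$ to several digits; the periodic trapezoid rule on a periodic analytic integrand is what gives the geometric convergence in $N$.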
CommonCrawl
Dual subgradient method - no analytical relation between primal and dual variables. The dual update is $\lambda^{i+1} = [\lambda^i + \alpha_i g(x(\lambda^i))]_+$, where $\alpha_i$ is the step size and $[\cdot]_+=\max(\cdot,0)$. My question is: in some cases it is hard or impossible to find $x(\lambda)$, i.e., there is no analytical expression for $x(\lambda)$ (this happens a lot when you have multiple nonlinear constraints). In these cases, how does the subgradient method work to find the optimal dual $\lambda^*$? If the subgradient method fails, is there any other efficient method to find $\lambda^*$? Any suggestion or reference is appreciated.
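When $x(\lambda)$ has no closed form, one common workaround is to solve the inner Lagrangian minimization numerically at each dual iteration; the constraint values $g_i(x(\lambda))$ are still a subgradient of the dual function. A minimal one-dimensional sketch (the inner solver, step rule, and example are my own choices, not a recommendation):

```python
def argmin_1d(phi, lo=-10.0, hi=10.0, iters=80):
    # golden-section search: a numeric stand-in for a closed-form x(lambda)
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if phi(c) < phi(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

def dual_subgradient(f, g_list, lam0, steps=500):
    # min f(x) s.t. g_i(x) <= 0, by projected ascent on the dual
    lam = list(lam0)
    for t in range(1, steps + 1):
        # inner problem solved numerically: x(lam) = argmin_x f(x) + sum_i lam_i g_i(x)
        L = lambda x: f(x) + sum(l * g(x) for l, g in zip(lam, g_list))
        x = argmin_1d(L)
        # g_i(x(lam)) is a subgradient of the dual at lam; project onto lam >= 0
        lam = [max(l + (1.0 / t) * g(x), 0.0) for l, g in zip(lam, g_list)]
    return lam, x
```

On $\min x^2$ subject to $1-x\le 0$ this drifts toward the optimal dual $\lambda^*=2$ (and $x(\lambda)\to 1$), albeit at the slow rate typical of diminishing-step subgradient methods.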
CommonCrawl
Volume 34, Number 5 (2006), 2387-2412. In this paper we derive one- and two-sample multivariate empirical Bayes statistics (the MB-statistics) to rank genes in order of interest from longitudinal replicated developmental microarray time course experiments. We first use conjugate priors to develop our one-sample multivariate empirical Bayes framework for the null hypothesis that the expected temporal profile stays at 0. This leads to our one-sample MB-statistic and a one-sample $\tilde{T}^2$-statistic, a variant of the one-sample Hotelling $T^2$-statistic. Both the MB-statistic and $\tilde{T}^2$-statistic can be used to rank genes in order of evidence of nonzero mean, incorporating the correlation structure across time points, moderation and replication. We also derive the corresponding MB-statistics and $\tilde{T}^2$-statistics for the one-sample problem where the null hypothesis states that the expected temporal profile is constant, and for the two-sample problem where the null hypothesis is that two expected temporal profiles are the same.
CommonCrawl
Given a factorization sequence f, create the enumerated sequence containing the same pairs of primes and exponents. Any ideas how I can rewrite this line in Sage? Short answer: For an element f of a finite field F with 2^K elements, one can use for instance its integer representation, then take its binary digits, padded to K digits (at least). For instance, here are three random elements of F, written as sequences. Let us try to reproduce the above in a dialog with the sage interpreter. Magma is a foreign language for me, but somehow I suppose that F.1 is the class of $x$ if we model the field with $2^8$ elements as $\mathbb F_2[x]$ modulo the polynomial that can be extracted from the line where we print F.1^8. So let us first construct the field in sage.
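Outside Sage, the padding step is easy to mimic in plain Python: take the integer representation of a $\mathbb{F}_{2^K}$ element and read off $K$ binary digits (the function name and least-significant-first digit order are my own choices):

```python
def to_bits(n, K=8):
    # integer representation of a GF(2^K) element -> K binary digits,
    # least significant bit first, zero-padded up to K digits
    return [(n >> i) & 1 for i in range(K)]
```

In Sage itself, if I remember the API correctly, methods like `e.integer_representation()` and `Integer(n).digits(2, padto=K)` play the corresponding roles.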
CommonCrawl
I thoroughly enjoyed reading through Peter Norvig's extraordinarily clean and nice solutions to the Advent of Code challenge last year. Inspired by his clean, literate programming style and the convenience of jupyter notebook demonstrations, I will look at several of these challenges in my own jupyter notebooks. My background and intentions aren't the same as Peter Norvig's: his expertise dwarfs mine. And timezones are not kind to those of us in the UK, and thus I won't be competing for a position on the leaderboards. These are to be fun. And sometimes there are tidbits of math that want to come out of the challenges. Enough of that. Let's dive into the first day. Given a sequence of digits, find the sum of those digits which match the following digit. The sequence is presumed circular, so the first digit may match the last digit. This would probably be done the fastest by looping through the sequence. "Sum of digits which match following digit, and first digit if it matches last digit" They provide a few test cases which we use to test our method against. For fun, this is a oneline version. For more fun, this is a regex version. Regardless of which one we use, we find the answer. I wonder: is there any sort of time difference between these? I don't know why the oneliner appears to be much slower than the first version. Does this stay true for longer sequences? Yes, the difference in speed is real between the oneline version of the loop and the longform version of the loop. And the regex version and oneline versions appear to converge. I wonder why? If there are $n$ (random) digits, then we expect the sum of the digits which match the subsequent digit to be $0.45 n$. In this case, there are $10^7$ digits, and we should expect the sum to be $0.45 \cdot 10^7 = 4.5 \cdot 10^6$. How close are we? That's really, really close. How does this apply to the Advent of Code Day 1 problem? Today's problem is about 10 percent more than what we might expect to occur by chance.
That's still pretty close, though. For the second part of the problem, we are tasked with finding the sum of those digits which match the digits half-way around the string. This only makes sense for even-length strings. It's easy enough to modify the loop to do this. "Sum of digits which match the digit sep digits later" The one-liner can be similarly written. What about the regex? We want to identify a digit, skip sep - 1 digits, and then check to see if the subsequent digit matches. In principle, we need to worry about wrapping around the string. But we notice that not wrapping around misses exactly half of the matches, so we just double the non-wrapped answer. This leads to the following. I don't think I've ever defined a regex "function" in this way. I don't particularly like how it interacts with python's string formatting. It is interesting to note that the expected value is the same as in the consecutive digit case. This is because the probability that two randomly chosen digits agree has nothing to do with the location of the digits. One random digit is as good as another. I will instead note that a similar calculation as above shows that the expected value depends also on the base involved. We arrived at the value $n \times 9/20 = n \times (10 - 1)/(2 \cdot 10)$ for an $n$ digit number written in base $10$; in base $b$ this becomes $n(b-1)/(2b)$. This increases as the base increases, and tends towards $n/2$. This entry was posted in Expository, Programming, Python and tagged advent of code, jupyter, python. Bookmark the permalink. This is a good introduction to coding in a computer language. It's a little complex, so practice and patience are in order.
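For reference, the two parts of the puzzle can each be condensed into a one-liner over a rotated copy of the string (a sketch; function names are my own):

```python
def sum_matching_next(s):
    # part 1: digits that equal the next digit, treating the string as circular
    return sum(int(a) for a, b in zip(s, s[1:] + s[0]) if a == b)

def sum_matching_halfway(s):
    # part 2: digits that equal the digit half-way around (even length assumed)
    h = len(s) // 2
    return sum(int(a) for a, b in zip(s, s[h:] + s[:h]) if a == b)
```

Rotating the string by one (or by half its length) and zipping it against itself avoids both index arithmetic and the wrap-around special case.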
CommonCrawl
Phillip and Lara are saying hello from the Netherlands, where the topography is very flat. Most of the land is barely above sea level! Since they're having so much fun on their vacation, Phillip and Lara are already thinking about where to go for their next trip. Lara loves to climb mountains, but Phillip loves to scuba dive. Phillip and Lara might not agree on what to do, but they do agree on where to go. They both want to get as far away from sea level as possible. Wherever that place is, that's where they'll go. Lara suggests Mount Everest, the point with the highest altitude on Earth, with an elevation of 29,000 feet above sea level. Phillip proposes a trip to visit the Mariana Trench, the deepest place on Earth, around negative 36,000 feet, or 36,000 feet below sea level. Let's analyze the facts and help Phillip and Lara decide where to go on their next trip. We know negative numbers are less than positive numbers. So, negative 36,000 feet, the depth of the Mariana Trench, is less than the elevation of Mount Everest, which reaches a height of 29,000 feet. So does that mean Mount Everest is the place to go? Not necessarily. To solve this problem, we need to know the absolute value of the height and depth of each location. The absolute value of a number is the number's distance from zero, regardless of whether the number is positive or negative. A depth of negative 36,000 feet is 36,000 feet from sea level. So the absolute value of negative 36,000 is 36,000. A height of 29,000 feet is 29,000 feet from sea level. So the absolute value of 29,000 is 29,000. Absolute values are always positive numbers. Always! The absolute value of negative 36,000 is greater than the absolute value of 29,000 because 36,000 is greater than 29,000. So the lowest point of the Mariana Trench is farther from sea level than is Mount Everest's peak. We have a winner! The absolute value of a positive number is simply equal to the number.
The absolute value of a negative number is equal to the opposite of that number, which is always positive, and the absolute value of zero is zero. What's the negative of the absolute value of a positive number? It's a negative number. What's the negative of the absolute value of a negative number? A negative number also. What if there's an expression inside the absolute value bars? You must simplify the expression first, then determine the absolute value. If the expression is a big one, remember to use PEMDAS. In the Mariana Trench, Phillip is having the time of his life scuba diving in very deep water, but what about Lara? How about that? She found a mountain to climb, after all. Uh oh. I don't think that's a mountain. The absolute value of a number is its distance from zero. Another way of thinking of absolute value is that it's the magnitude of a number, without consideration of its sign. So for a negative number, just drop the sign, and that's the absolute value; for a positive number, the absolute value is the number itself. We can conclude the absolute value of a number is always a positive number or zero. What about zero? The absolute value of zero is zero. To help you understand absolute value, model problems using a vertical or horizontal number line. With help from the number line, the distance from zero and the absolute value of the difference between numbers will be quite obvious. The symbol for absolute value is a vertical line on either side of the quantity, whether a number or a variable. We can credit mathematician Karl Weierstrass with thinking up the symbol for absolute value. How can we use this concept in the real world? There are many applications, but just to name a few: distance, weights and measures, and temperature.
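The rules from the video reduce to a few one-line checks; Python's built-in `abs` behaves exactly this way (the trip elevations are reused as the examples):

```python
# absolute value, case by case
assert abs(29_000) == 29_000     # positive number: unchanged
assert abs(-36_000) == 36_000    # negative number: its opposite
assert abs(0) == 0               # zero: zero

# the negative of an absolute value is negative either way
assert -abs(7) == -7
assert -abs(-7) == -7

# with an expression inside the bars, simplify first, then take the value
assert abs(3 + (-5)) == abs(-2) == 2
```

Using the trip numbers keeps the tie to the story: $|-36{,}000| > |29{,}000|$, so the trench wins.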
If your friend tells you there is a great record store just 4 blocks away, it might be helpful to understand that your friend is telling you the absolute value of the number of blocks, so you'll know to ask which direction before you go off on a wild goose chase. I don't know if you'll ever find the record store, but I absolutely recommend that you watch this video. Represent and solve equations with absolute value. Informative, but unrealistic. Both of those situations are far more lethal than the video suggests. Great video, helps a lot. Would you like to apply what you've learned? With the exercises for the video Introduction to Absolute Value you can review and practice it. Negative numbers are smaller than positive numbers. Absolute values are always positive numbers. It could just as well be the deepest point of the Mariana Trench: it is about -36,000 feet down to the deepest point of the Mariana Trench. Note that an absolute value is always a positive number. Therefore, Lara and Philip will spend their next holiday in the Mariana Trench. Decide which is further from sea level. For example, $|-3|=3$ as well as $|3|=3$. The absolute value on a number line is the distance of any number to the zero point. Keep in mind that absolute values are always positive numbers. So that's the decision: they will go to the Mariana Trench.
$|3+(-5)|$: We have to calculate $|3+(-5)|=|3-5|=|-2|=2$. The number of steps is always positive, while the position on the number line can be negative. For example: on this number line you can see two steps to the left, the red arrow, and one to the right, the blue arrow. The position is $-1$ and the number of steps is $|-2|+|1|=2+1=3$. Susan takes three steps to the left and then another step to the left; the number of steps is $|-3|+|-1|=3+1=4$. Seven steps to the right give us the position $7$ and five steps to the left the position $2$. The number of steps can be determined by $|7|+|-5|=7+5=12$. Eight steps to the left give us the position $-8$ and twelve steps to the right the position $4$. The number of steps can be determined by $|-8|+|12|=8+12=20$. Determine the winner of the absolute value game. The absolute value of a positive number is the number itself: $|32|=32$. The absolute value of a negative number is the opposite of the number: $|-64|=64$. The absolute value of zero is zero: $|0|=0$. If we have to determine the absolute value of a more complicated expression, first simplify the expression. For each of the following examples, we will determine the absolute value. Afterwards we'll be able to decide who wins the game. $|-23|=23$ and $|46|=46$. So $|-23|<|46|$. Philip wins. $|-46|=46$ and $|23|=23$. So $|-46|>|23|$. Lara wins. $|-23|=23$ as well as $|23|=23$. So $|-23|=|23|$. Draw. $|46|=46$ as well as $|-46|=46$. So $|46|=|-46|$. Draw. $|23+46|=|69|=69$ and $|23-46|=|-23|=23$. So $|23+46|>|23-46|$. Lara wins. $|-23-46|=|-69|=69$ and $|46-23|=|23|=23$. So $|-23-46|>|46-23|$. Lara wins.
$|-23+46\times2|=|-23+92|=|69|=69$ and $|3\times (-23)|=|-69|=69$. So $|-23+46\times2|=|3\times (-23)|$. Draw. $|2\times(-23+46)|=|2\times 23|=|46|=46$ and $|2|\times(|-23|+|46|)=2\times(23+46)=2\times 69=138$. So $|2\times(-23+46)|<|2|\times(|-23|+|46|)$. Philip wins. Lara wins three times while Philip only wins twice, so Philip has to pay for lunch today. Decide which terms have the same absolute value. The absolute value of a negative number is the opposite of the number. First, determine the value of the expression between the absolute value bars.
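For the curious, the whole game can be replayed in a few lines of Python; pairing each round's first expression with Lara and the second with Philip follows the rounds above:

```python
rounds = [  # (Lara's value, Philip's value) for each round of the game
    (abs(-23),            abs(46)),
    (abs(-46),            abs(23)),
    (abs(-23),            abs(23)),
    (abs(46),             abs(-46)),
    (abs(23 + 46),        abs(23 - 46)),
    (abs(-23 - 46),       abs(46 - 23)),
    (abs(-23 + 46 * 2),   abs(3 * -23)),
    (abs(2 * (-23 + 46)), abs(2) * (abs(-23) + abs(46))),
]
score = {"Lara": 0, "Philip": 0}
for a, b in rounds:
    if a != b:                      # equal absolute values are a draw
        score["Lara" if a > b else "Philip"] += 1
print(score)  # {'Lara': 3, 'Philip': 2}
```

Lara wins three rounds, Philip two, and three rounds are draws, matching the tally above.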
CommonCrawl
The schema of category theory used without mentioning "category theory"? I have seen these types of schemas often in contexts where the word "category theory" is not mentioned at all. For example, where $X$ is $R^d$ and $Y$ is $M$ (a manifold). Now I am not at all knowledgeable about category theory, and I only stumbled upon this scheme by accident, realizing I've seen them many times before. Does this mean that I have been "working" with category theory all along without knowing it? Are all these types of schemes actual implicit applications of category theory, or are they also used in "non-category-theory ways"? Yes, in the sense that some of the basic ideas that category theory studies have filtered down into common mathematical parlance and practice. However, category theory is about the in-depth study of categories. Just using categories and related notions doesn't mean you're studying category theory any more than talking about sets of elements means you're studying set theory. The picture you write down is usually called a "commutative diagram" (or just a diagram, if its commutativity has yet to be proven). The diagram you write down says, somewhat tautologically, that $g \circ f = g \circ f$. If the diagonal had been labelled $h$, the diagram would say that $g \circ f = h$. This comes up in lots of contexts: $X,Y,Z$ could be groups, and $f,g,h$ could be group homomorphisms. Or $X,Y,Z$ could be topological spaces, and $f,g,h$ could be continuous mappings. Or $X,Y,Z$ could be nodes in a graph, and $f,g,h$ could be paths. Or $X,Y,Z$ could be elements of a poset, and an arrow $X \to Y$ could mean that $X \leq Y$. Category theory is all about recognising the similarities between these situations, and abstracting away from them: a category is (more or less) just some objects (denoted $X,Y,Z,\ldots$) together with some arrows between objects (denoted $f: X \to Y, g: Y \to Z,\ldots$) which can be composed if they match (e.g. $g \circ f: X \to Z$). 
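The triangle diagram's content can be made concrete in a tiny sketch (the particular functions are my own illustrative choices, not from the question), where objects are sets of integers and arrows are Python functions:

```python
def compose(g, f):
    """Composition of arrows: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

f = lambda n: n + 1        # f : X -> Y
g = lambda n: 2 * n        # g : Y -> Z
h = compose(g, f)          # the diagonal arrow h = g . f

# the diagram commutes: going around via Y equals taking the diagonal
print(all(g(f(x)) == h(x) for x in range(10)))  # True
```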
In that sense, a diagram is really the essence of category theory: categories are about equations between different compositions of arrows, and a diagram is a visual way of representing such an equation. Does this mean that I have been "working" with category theory all along without knowing it? One of the principles of category theory is to do "once and for all" what you are doing systematically in each field of mathematics. Not the answer you're looking for? Browse other questions tagged notation category-theory or ask your own question. How to understand the definition of sets in homotopy type theory and the role of univalence? Is it possible to formulate category theory without set theory? What should one know about abstract sets and structural foundations? What does the category of RDF models look like in Institution Theory?
Over the winter break 2013, while spending some days at the cape with the family, we visited the Provincetown puzzle shop on New Year's Eve. I bought there the Meffert's great challenge as well as Smooth Operator, a maze puzzle. More about the second later. The Meffert challenge consists of a cube in which the corners stay stationary but turn. Four of the vertices have gears with 8 teeth attached, while the other four vertices have 5 teeth attached. There are two modes. In the closed mode, everything is linked. In the open mode, two halves can move independently. Let $G$ denote the group of one side; then $G \times G$ is the group when the cube is open. It is easy to see that the group $G$ is Abelian, cyclic and equal to $C_{15}$. So, in both cases, it appears to be a very simple puzzle, because the group just consists of two loops. But wait. Being confident and arrogant, I started to pay more attention to the food channel competitions while playing with it a bit more. Surprise: if the closing and separating of the cube halves is allowed in a more general situation, the group becomes bigger and appears to have $(8^4 \cdot 5^4)/2 = 1280000$ elements. Still nothing comparable with the Rubik cube, but still challenging. The main idea for any Rubik cube type puzzle is to look at the stabilizer of the group. For example, fix one side, then look for moves which fix this side. Every kid naturally figures this out also without mathematical training. Mathematicians associate it with the name Schreier. In high school, it took me 3 weeks to figure out the Rubik cube myself, and I had had no group theory exposure yet. In a computer science course at ETH, we assigned the students the problem of programming an algorithm to find the solution of the Rubik cube. Not to feed in a given solution, but to have the computer figure out one. 
The topic came up again in a mathematical and quite theoretical software course given by Erwin Engeler, and a Nachdiplom lecture course by Gilbert Baumslag, which really pulled me back into group theory, even though I was already hooked on dynamical systems. The Meffert gear puzzle is for me solved in the sense that I can bring it back to the original position with enough time, but it is not yet finished: I do not have a deterministic path to arrange the last two cubes. Since there are only 40 possibilities at the end, I solve it at the moment in a random way: scramble it completely, then solve and see. This still takes an hour. Fortune cookie in Mews. Apropos randomness: while waiting for the food on New Year's Eve in the restaurant "Mews" in Provincetown, we had fortune cookies on the plates which included little toys. One was a classic metal puzzle in the form of a bent nail, a design which must be thousands of years old at least. Back to the simple maze with two simple loops: in the restaurant we started to wonder how long it would take to solve it Monte Carlo style. Just shake the connected puzzle long enough until it falls apart. To our surprise it was possible. It worked twice with a couple of dozen shakes. If you make the experiment, the puzzle must be small so that you can shake it in the hand. Also, the solution path should not be too tight. If you have to slide along, it is too tight. What happens when shaking is that we do a random walk in an open subset of the group $G \times G$ intersected with a large sphere. If two points are connected, the random walker will reach from one point to the other, but it might take a long time. This depends on the geometry and on the smallness of the connection path. Shaking to get the nails hooked together never worked. 
It is obvious that the probability of achieving this is much smaller, because the volume of the 12-dimensional region which needs to be traversed by the random walk is much smaller. A wild guess would be that one could put the two separate nails in a shaking box and wait a few years until it is solved. That's how the first improbable combinations of complicated organic molecules could have occurred on a time scale of billions of years. As a graduate student, I had participated in a contest in the Swiss town of Bern. A "Rubik cube" type puzzle, the "master ball", had to be solved competitively in front of a larger audience. The task was to race to perform a specific transposition in that group. Having been trained at ETH in algebra and theoretical computer science, and having been a course assistant in a course using computer algebra, I used the computer algebra system Cayley (now Magma) to find a solution and assigned a Sun workstation to tackle the problem. After a few hours, it came up with a solution which consisted of several dozen moves. I brought this solution to the competition: after all the contestants had been introduced, we went on to race who would solve the puzzle first. The fastest solver was a farmer and cheese-maker from Emmental. Without computers and without any knowledge of group theory, he had the best understanding of how to walk around in that non-Abelian group. This event happened before the movie "Good Will Hunting" appeared and is no fiction.
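The intuition that the shaking time blows up as the connection path narrows can be caricatured with a toy model (entirely my own construction, not from the original post): a random walk in the unit square that only escapes through a gap of a given width on the top edge.

```python
import random

def escape_time(gap, step=0.05, trials=100, seed=1):
    """Average number of random 'shakes' until a walk started at the
    center of the unit square escapes through a gap of width `gap`
    centered on the top edge."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, y, n = 0.5, 0.5, 0
        while True:
            n += 1
            x = min(1.0, max(0.0, x + rng.uniform(-step, step)))
            y = min(1.0, max(0.0, y + rng.uniform(-step, step)))
            if y >= 1.0 and abs(x - 0.5) < gap / 2:
                break                    # escaped through the gap
        total += n
    return total / trials

# a wide passage is escaped far sooner than a narrow one
print(escape_time(0.5) < escape_time(0.05))  # True
```

The average escape time grows quickly as the gap shrinks, which is the same geometric effect that makes the bent-nail puzzle essentially unsolvable by shaking in reverse.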
If zero is an eigenvalue, are dimensions lost? This is likely a silly question, so sorry in advance. However, I am wondering if I am right in thinking that if zero is an eigenvalue, then some dimension must be lost. My understanding is that eigenvectors are the vectors that are only scaled when some matrix $A$ is applied. Then, eigenvalues are the amounts by which those vectors are scaled. So, therefore, is it true that if an eigenvalue is zero, that vector has been sent to the origin and therefore the dimension of the image is smaller than the original dimension? Would that not also imply that any nonzero vector in the kernel of a transformation is an eigenvector? Browse other questions tagged linear-algebra eigenvalues-eigenvectors diagonalization or ask your own question. Are there eigenvectors, eigenvalues, and characteristic functions for non-linear vector fields? Any non-zero vector in V can be expressed as a linear combination of eigenvectors for the eigenvalues 1 and −1. What does a single eigenvector and eigenvalue for a $2 \times 2$ matrix represent geometrically? If multiple eigenvectors can correspond to the same eigenvalue, then how will we prove the linear independence of eigenvectors?
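Both intuitions in the question can be checked numerically; here is a small NumPy sketch (the matrix is my own example, chosen so that its third column is the sum of the first two, hence singular):

```python
import numpy as np

# a singular 3x3 matrix: the third column is the sum of the first two
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [2., 3., 5.]])

w, V = np.linalg.eig(A)
i = int(np.argmin(np.abs(w)))     # index of the (numerically) zero eigenvalue
v = V[:, i]

print(np.isclose(w[i], 0.0))      # True: zero is an eigenvalue
print(np.allclose(A @ v, 0.0))    # True: the eigenvector lies in the kernel
print(np.linalg.matrix_rank(A))   # 2: one dimension of the image is lost
```

So yes: a zero eigenvalue, a rank-deficient image, and a nontrivial kernel go together, and the nonzero kernel vectors are exactly the eigenvectors for eigenvalue zero.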
A natural number that is the result of multiplication of $n$ by some natural number; hence a number divisible by $n$ without remainder (cf. Division). A number divisible by each of the numbers $a,b,\ldots,k$ is called a common multiple of these numbers. Among all common multiples of two or more numbers, one (distinct from zero) is the smallest (the lowest or least common multiple) and the others are then multiples of the lowest common multiple. If the greatest common divisor $d$ of two numbers $a$ and $b$ is known, the lowest common multiple $m$ is found from the formula $m = ab/d$. See also Divisibility in rings.
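The formula $m = ab/d$ translates directly into code; a small Python sketch (the helper names are mine):

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    """Lowest common multiple via m = a*b / gcd(a, b)."""
    return a * b // gcd(a, b)

def lcm_many(*nums):
    """Common multiple of several numbers: fold the two-argument lcm."""
    return reduce(lcm, nums)

print(lcm(12, 18))         # 36
print(lcm_many(4, 6, 15))  # 60; every other common multiple of 4, 6, 15
                           # (120, 180, ...) is a multiple of this one
```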
I understand that the NNG formula relates $Q$, $I_3$, and $Y$ and can be derived in QCD; does this unambiguously predict the electric charge ratios without making assumptions about the definitions of isospin and hypercharge? If so, this is unintuitive to me! Why does a particle carrying $SU(3)$ color charge care what charge it has under the totally separate electroweak $U(1)\times SU(2)$ symmetries? If not, is there a name for the "problem" of explaining the charge ratios? There is a nontrivial relation between the electric charge and the strong business, namely that there are instantons which will cause proton decay. So it is not completely true that there are no relations: the requirement of anomaly cancellation requires that the proton decay process conserve charge, and so relates the total charge on the proton to the total charge on the electron. The U(1) numbers are completely crazy. The only sensible explanation is that they come from an SU(5) GUT (or SO(10) or E6 or some higher version of the SU(5) idea). The reduction of charges from SU(5) is explained in this answer: Is there a concise-but-thorough statement of the Standard Model? This gives the 1,2,3,6 ratios of the hypercharge assignments in nature, and completely explains the crazy quark charges. It is also an automatic way of ensuring anomaly cancellation. This, and approximate coupling constant unification, are the two strongest bits of evidence for a GUT at a scale of $10^{16}$ GeV or thereabouts. Not the answer you're looking for? Browse other questions tagged particle-physics standard-model quantum-chromodynamics physical-constants color-charge or ask your own question. Why do quarks have a fractional charge? Why is it that protons and electrons have exactly the same but opposite charge? Color confinement and integer electric charge? Difference between Quark-Gluon Plasma and Color-Glass Condensate? Why is there a linear relationship between charge and isospin? 
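One small arithmetic consequence of that anomaly-cancellation statement can be checked directly (a toy check, not the full anomaly computation): the electric charges of one Standard Model generation, counting each quark three times for color, sum to zero, which is why hydrogen is exactly neutral.

```python
from fractions import Fraction as F

# electric charges of one Standard Model generation,
# with a factor of 3 for the three colors of each quark
up, down, electron, neutrino = F(2, 3), F(-1, 3), F(-1), F(0)
total = 3 * up + 3 * down + electron + neutrino
print(total)  # 0: the fractional quark charges balance the electron exactly
```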
Why is quark flavor just a SU(N) group? What's the difference between Quark Colors and Quark Flavours? Why do the right-handed up quark and down quark not form an $SU(2)$ doublet? Do color changing interactions between up and down quark exist?
There is a street of length $x$ whose positions are numbered $0,1,\ldots,x$. Initially there are no traffic lights, but $n$ sets of traffic lights are added to the street one after another. Your task is to calculate the length of the longest passage without traffic lights after each addition. The first input line contains two integers $x$ and $n$: the length of the street and the number of sets of traffic lights. Then, the next line contains $n$ integers $p_1,p_2,\ldots,p_n$: the position of each set of traffic lights. Each position is distinct. Print the length of the longest passage without traffic lights after each addition.
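One way to solve this (a sketch, not necessarily the intended reference solution) is to process the additions offline in reverse: start from the configuration with all lights present, then remove them in reverse order, merging the two adjacent gaps at each removal. The maximum gap can only grow during removals, so no multiset of gap lengths is needed.

```python
def traffic_lights(x, lights):
    """Longest passage without lights after each addition, via the
    offline reverse trick; positions are assumed distinct and in 1..x-1."""
    pos = sorted(set(lights) | {0, x})          # endpoints act as sentinels
    idx = {p: i for i, p in enumerate(pos)}
    left = list(range(-1, len(pos) - 1))        # doubly linked neighbor lists
    right = list(range(1, len(pos) + 1))
    cur_max = max(pos[i + 1] - pos[i] for i in range(len(pos) - 1))
    ans = [0] * len(lights)
    for k in range(len(lights) - 1, -1, -1):
        ans[k] = cur_max                        # lights[0..k] are present here
        i = idx[lights[k]]
        l, r = left[i], right[i]                # unlink light k: its gaps merge
        right[l], left[r] = r, l
        cur_max = max(cur_max, pos[r] - pos[l])
    return ans

print(traffic_lights(8, [3, 6, 2]))  # [5, 3, 3]
```

Each removal is O(1) after the initial sort, so the whole computation runs in O(n log n).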
Of the seven millennium prize problems, the Poincaré conjecture is the only solved one. Conjectured by French mathematician Henri Poincaré (who, by the way, went through the same University as I did!!!), it was solved by Russian mathematician Grigori Perelman in 2003. This article explains what the conjecture says. I wrote an article on P versus NP, which is another of the seven millennium prize problems. The Poincaré conjecture is a problem of topology, which is a subfield of mathematics that studies properties of connectedness of spaces. In particular, topology aims at continuously deforming spaces into simpler, better-known spaces. For instance, in the animation from Wikipedia on the right, a mug is continuously deformed into a doughnut. This continuous deformation is called a homeomorphism. So a homeomorphism is a continuous mapping between two spaces? Not exactly. In order to have the topological properties preserved, topologists also want homeomorphisms to be invertible, and the inversion to be continuous too. This means that, not only do we need to be able to deform the mug continuously into a doughnut, but we also need to be able to invert the doughnut continuously back into a mug. In more precise terms, a homeomorphism is a continuous one-to-one mapping between two topological spaces whose inverse is continuous too. If you want to know more about the basics of topology, like what it means for a mapping to be continuous, you should read my article on basic topology. But you don't need the rigorous definition to follow the major ideas of this article. Visually, it's very simple. Basically, homeomorphisms involve stretchings and contractions, but forbid cuts and pastes. This means, for instance, that disconnected spaces cannot be connected by a homeomorphism. Below are several examples of homeomorphic spaces. Notice that the two spaces above are not homeomorphic to the two spaces below. 
This can be proved by remarking that the two spaces above have one connected boundary. On the other hand, the two spaces below have a disconnected boundary. Since boundaries cannot be cut nor pasted, there is no way of deforming one of the above spaces into one of the below spaces. This definitely enables us to tell non-homeomorphic spaces apart. But two spaces with an equal number of connected components, and whose boundaries are also made of the same number of connected components, are not necessarily homeomorphic. To better understand this, Poincaré focused on connected spaces with no boundary. Also, he only considered bounded spaces, as opposed to spaces which go to infinity like a plane. Technically, these spaces are defined as compact spaces with a dimension $n$ such that, if you zoom in enough around any point, the space looks like an $n$-dimensional vector space. They are called closed $n$-manifolds. A 1-manifold is a curve, while a 2-manifold is a surface. I'm not dwelling on the concept of compactness, although it is a fundamental concept in topology. For our purpose, because we only work in finite-dimensional spaces, compactness corresponds to boundedness. This means that the space fits in a limited volume. Sure. In fact, the examples I'm going to give will be essential to discuss the Poincaré conjecture. First is the simplest closed 2-manifold, namely, the sphere. If you zoom in on a sphere, the surface of the sphere looks like a plane, which is the 2-dimensional vector space. A sphere is thus a 2-manifold. Plus, it is compact since it is bounded. The other important closed 2-manifold is the torus, which is basically the surface of the doughnut I mentioned earlier. What's important to note is that, even though the torus and the sphere are both closed 2-manifolds, they are not homeomorphic! This is an awesome question that leads me to Poincaré's key insight. Poincaré's key insight was to consider loops drawn on a figure. This study of loops is called homotopy. 
Listen to myself explaining this idea in this extract from a Trek through 20th Century Mathematics. So a loop is a path on the surface which goes back to the starting point? Yes! Exactly. But one thing to notice is that some loops are essentially equivalent. This is the case whenever one can be deformed continuously into the other. We say that such loops are homotopic. For instance, in the following figure, the purple loop displays the continuous deformation of the blue loop into the green loop. Thus, the blue and green loops are homotopic. In fact, you can even notice that these loops can be deformed into a point. They are thus homotopic to points. We say that these loops are contractible. They sort of represent the zero of homotopy. Well, it depends on the structure of the space we study. In the holed disk on the right, the blue and green loops cannot be deformed into one another. They are not homotopic. In particular, the blue loop is contractible, while the green loop isn't. So what does homeomorphism have to do with homotopy? Could you translate that into understandable words? Sure! Consider a disk and a polygon, and a homeomorphism between them. Consider a green and a blue loop in the disk. Each point of each loop is thus uniquely mapped to a point of the polygon. Overall, each loop of the disk is uniquely mapped to a loop of the polygon. Thus, the green and blue loops of the disk are mapped to a green and a blue loop of the polygon. When I say that homeomorphism preserves homotopy, I mean that the green and blue loops of the disk are homotopic if and only if their images in the polygon are homotopic. Let's prove it! Assume that, in the disk, the green and blue loops are homotopic. Then there are purple loops which are intermediate loops from the blue to the green loop. Each purple loop of the disk is mapped to a purple loop of the polygon. The obtained purple loops of the polygon now form a transition from the blue to the green loop in the polygon. 
Thus, the images of homotopic loops under the homeomorphism are homotopic too. A similar reasoning shows that homotopic loops of the polygon are associated with homotopic loops of the disk by the inverse homeomorphism. We only need to prove that there are loops on a torus which do not correspond to any loop on a sphere. The key argument is that all loops on a sphere are contractible, while the torus has the two non-contractible blue and green loops drawn below. If there were a homeomorphism, then the images of the blue and green loops in the sphere would be non-contractible, which contradicts the fact that loops on a sphere are all contractible. Thus, the sphere and the torus are not homeomorphic. Surfaces in 3 dimensions are hard to visualize indeed. Fortunately, there is a much simpler way to describe two-dimensional surfaces using planar representations and… glue. Yes! You have probably learned at school that you can make a cube using a net like the one on the right, gluing the sides together in the right way. This right way is represented by colors on the edges. Same colors must be glued together. We can describe loops on the cube by describing the loops on the net. Isn't this planar representation much simpler to visualize? Yes! But what about the sphere and the torus? Are there nets for these? So are you going to show us that all loops are contractible on a sphere with the map above? No. Although common to us all, the map drawn above is a bit too complicated in terms of gluing, as there are four sides with different properties. Rather, let's work on the most simple representation of the sphere, which is a disk centered on the North pole. The edge of the disk corresponds to the South pole, and should thus be all glued together. By the way, this representation is great to visualize the fact that the shortest path from Toronto to Beijing does go through the North pole. Find out more about map making with Scott's article on non-Euclidean geometry. 
OK… How do we now deduce that all loops are contractible on a sphere? All the points of the edge of the disk are one single point. Thus, a path disappearing at one end and reappearing at another end can be deformed into a path going along the edge, as done below. The deformed loop can then be contracted to a point. I'm sort of convinced for this particular loop. But what about the general case of any loop? Consider any loop on the sphere. We can deform it such that it never crosses the South pole. This deformed loop doesn't cross the edge of the disk. It can thus be contracted to a point by a homothety centered on the North pole. You can learn about homothety with my article on symmetries. Hehe… The torus is actually a square with opposite sides glued together, exactly like in one of history's greatest video games, PAC-MAN. If this is missing from your culture, I feel sorry for you! But I'll explain it nevertheless. PAC-MAN is played on a screen. Whenever you go out of the screen on the right, you reappear on the left. Similarly, when you go out of the screen on the top, you reappear on the bottom. This shows that a torus equals $\mathbb R^2/\mathbb Z^2$, or, equivalently, $\mathbb S\times \mathbb S$, where $\mathbb S = \mathbb R/\mathbb Z$ is the one-dimensional circle. Amusingly, this also indicates that tori are much simpler than spheres. So what are non-contractible loops like in the square representation? Why would these loops be non-contractible? Let's consider the green loop. The reasoning for the blue one will be similar. OK… So why is the green loop non-contractible? That's because it crosses the top edge. In fact, for any continuous deformation of the green loop, we can track the intersection of the deformed loop with the top edge, which proves that it cannot disappear. Similarly, we can draw a horizontal line in the middle of the square, and any deformation of the loop would also have to cross this horizontal line. 
Thus, any deformation of the green loop must intersect both the top edge and the middle horizontal line. It can therefore not be contracted to a point. This proves that the green loop is not contractible. So the torus has non-contractible loops while the sphere doesn't, and this proves that they are not homeomorphic? Exactly! More generally, we now know that any space with a non-contractible loop is not homeomorphic to a sphere. Theorem: Any closed simply-connected surface is homeomorphic to a sphere. How did he prove it? Unfortunately, the only proofs I know, which involve the classification of 2-dimensional closed surfaces with techniques of cutting and gluing, would be too long to explain here. These are nice proofs with no really technical difficulty. You can check it out in this great lecture by Norman Wildberger. If you know a simpler proof which could be presented here, please tell me! But this is not the Poincaré conjecture, is it? Theorem: Any closed simply connected $n$-manifold is homeomorphic to an $n$-sphere. That's the one! An $n$-sphere is defined as the set of points at distance 1 to the origin in an $(n+1)$-dimensional normed vector space. So how do we represent these $n$-spheres? Hummm… Tough one. I give up for high dimensions, but I can provide a representation for dimension 3. A 3-sphere is the interior of the 2-sphere whose boundary is glued as a single point, in a similar way to how we have described the 2-sphere with the interior of a 1-sphere. The interior of an $n$-sphere is known as an $(n+1)$-ball. So, in other words, a 3-sphere is a 3-ball whose boundary is glued into a single point. It does! One construction consists in noting that a point on the 3-sphere lying in the 4-dimensional space is nearly determined by 3 of its coordinates. In fact, if the 3 coordinates form a vector which belongs to a 3-ball, there are exactly 2 possible values for the last coordinate, which sort of correspond to the North and the South hemi-3-spheres. 
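The crossing-counting argument for the torus is really a computation of winding numbers, and it can be made concrete. In this sketch (a toy model of my own, not from the article), a loop on the PAC-MAN square is a list of small displacement steps; for a closed loop on the torus, the total displacement is an integer vector, and that pair of integers cannot change under continuous deformation:

```python
def winding_numbers(steps):
    """Net number of horizontal and vertical wraps of a closed loop on the
    unit-square torus, given as a list of small (dx, dy) displacements."""
    dx = sum(s[0] for s in steps)
    dy = sum(s[1] for s in steps)
    return round(dx), round(dy)

green_loop = [(0, 0.1)] * 10                    # once around vertically
back_forth = [(0.1, 0)] * 5 + [(-0.1, 0)] * 5   # wiggles, then returns

print(winding_numbers(green_loop))  # (0, 1): crosses the top edge once
print(winding_numbers(back_forth))  # (0, 0): contractible
```

Since the pair $(0,1)$ can never be deformed into $(0,0)$, the green loop can never be contracted to a point.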
And if the 3 coordinates belong to the 2-sphere, then the 4th coordinate is uniquely defined. This corresponds to a sort of 2-dimensional equator which belongs to both hemi-3-spheres. As a result, the 3-sphere can be thought of as the union of two 3-balls which are to be glued along their boundaries. Check this nice animation by Patrick Massot where the two hemi-3-spheres of the 3-sphere are displayed separately. Patrick Massot wrote this great article on the Poincaré conjecture on images.math.cnrs.fr and poincare.fr. It's in French though… But if you can read French, you should definitely check it out (as well as other articles of these two websites!). I'd like to personally thank him for sending me these videos and allowing me to include them here. Note that the center of the second 3-ball is inverted into the boundary of the final 3-ball. The final 3-ball can then be resized such that its radius equals 1. We eventually obtain a 3-ball whose boundary should be all glued together as one single point. The homeomorphism from the 3-ball to the sphere can be written as a mapping of a point $(x_1, x_2, x_3)$ to $(x_1, x_2, x_3, \mathrm{sgn}(z) \sqrt{|z|})$, where $z = 1-2(x_1^2+x_2^2+x_3^2)$. The value of the 4th coordinate thus goes from 1 to -1 as we go from the center of the 3-ball to its edge. Yes! In fact, this construction is more general and works for any $n$-sphere. Any $n$-sphere is thus an $n$-ball whose boundary is a single point. By deforming loops such that they never reach this point, we can then prove that any loop is homotopic to a loop which belongs to the $n$-ball, and can thus be contracted using a mere homothety. This proves that any $n$-sphere is simply connected. This is good, but the Poincaré conjecture is about the converse, right? Yes, but this is a much more difficult problem. In fact, in 1904, Poincaré himself wrote: this question would take us too far (cette question nous entraînerait trop loin). He was definitely right! It took mathematicians a century to finally solve it entirely. 
Progress was made steadily, as the theorem was first proven in the 1960s for dimensions 7 and higher. Amusingly, the smaller the dimension, the harder the problem. In 1982, dimension 4 was solved, and there was only the problem in dimension 3 left. Yes. And this came as a surprise, as his methods were definitely not conventional in topology. Topologists mainly use techniques of cutting and pasting like those I've presented in this article. But Perelman's method rather consisted in using the dynamics of heated objects to prove the Poincaré conjecture. The dynamics of heated objects? Yes. The idea is that, as objects get heated, they become rounder and rounder. More curved portions get straightened quicker. Eventually, they become spheres. By using this idea to deform the geometry of spaces, Perelman proved that all 3-manifolds could be deformed into one of the fundamental classes of closed 3-manifolds. In particular, if the initial 3-manifold is simply connected, it gets deformed into the 3-sphere. Which proves the Poincaré conjecture. We're getting to topics which are way beyond my expertise. If you can, please write about this! Note though that you can find nice animations on this page, which, unfortunately, I haven't been able to include here. The first video represents the 3-sphere as the union of 2 3-balls glued together (the left hand side of the figure above). The third one illustrates the deformation of a manifold with Perelman's heating techniques. The Poincaré conjecture has been a cornerstone for the classification of closed manifolds. While closed surfaces have been classified mainly by Bernhard Riemann, the classification of closed 3-manifolds had a deep connection with the Poincaré conjecture. In 1982, William Thurston, probably the greatest topologist of the 20th century, conjectured that the geometry of any closed 3-manifold could be classified as one of 8 possible geometries. 
Perelman's proof, along with Poincaré conjecture, answered Thurston's conjecture positively. There is a lot more to be said in topology of closed manifolds. For one thing, it has strong connections with graph theory and polyhedra, as you can read it in my article on the utilities problem. Also, I plan on writing an article soon on the amazing non-orientable surfaces, which involve the Möbius band, the projective plane or the Klein bottle. Stay tuned! Who Cares About Poincaré? on Slate.
Mehta, Prashant P and Ramasarma, T and Kurup, Ramakrishna CK (1990) Investigations on the mechanism of the hypocholesterolemic action of 1-ethoxysilatrane. In: Molecular and Cellular Biochemistry, 97 (1). pp. 75-85. Intraperitoneal administration of the nontoxic silicon compound, 1-ethoxysilatrane, to the rat did not cause proliferation of hepatic mitochondria or of endoplasmic reticulum, nor did it affect mitochondrial oxidative phosphorylation. The activities of cholesterol 7$\alpha$-hydroxylase in hepatic microsomes and of cholesterol oxidase in mitochondria were unaffected by silatrane treatment. The rate of release of bile, whose composition remained unchanged, also was not increased in silatrane-treated animals. The results indicated that the compound did not affect the pathway of cholesterol degradation. A progressive decrease in the activity of hepatic microsomal 3-hydroxy-3-methylglutaryl coenzyme A (HMGCoA) reductase was observed on administration of the compound over a period of three weeks. Consistent with this, cholesterol biosynthesis in liver as measured by incorporation of radioactive precursors, acetate and water but not mevalonate, was significantly decreased in silatrane-treated animals. However, enzyme-linked immunosorbent assay revealed that the concentration of HMGCoA reductase protein was not decreased by the treatment, indicating that inactivated enzyme was also present in such microsomes. Addition of silatrane to microsomes in the assay system did not cause inhibition, indicating that the inactivation is by an indirect mechanism. It is concluded that the hypocholesterolemic action of the compound rested entirely on the inhibition of cholesterol biosynthesis in vivo by inactivation of the rate-limiting enzyme HMGCoA reductase.
Is there a method to reconstruct the signal as if it had been sampled at the Nyquist rate, without degradation of signal quality? After all, we have more samples than needed, so I do not expect any degradation, right? If there is some loss, how much is it, and what factors does it depend on? 4. Is the technique used in MATLAB lossy? For the moment, restrict the discussion to 1D signals. Sampling can be performed uniformly or non-uniformly. Uniform sampling is the most obvious, simplest and preferred form, unless otherwise stated. Nonuniform sampling can be performed in a number of ways, periodically or aperiodically, with sampling instants given by a formula or completely random. Depending on the nonuniform sampling form, the reconstruction criteria may change. For example, when you have formula-based nonuniform sampling instants $t_n$, you can (if possible) find a kernel $\phi_n(t)$ for the reconstruction integral that provides exact (perfect) recovery of a continuous-time signal $x(t)$ from its nonuniform samples $x_n$. In this context the $sinc()$ kernel of the WKS (Whittaker-Kotelnikov-Shannon) theorem is a special case of the generalized sampling theorem, with sampling instants given by $t = n T_s$. This was historically called Nyquist sampling, after Nyquist, who most successfully employed it in the earliest digital pulse communication systems. The primary requirement for the Nyquist (uniform) sampling theorem to provide exact reconstruction of a bandlimited baseband signal $x(t)$ from its uniform samples is that the sampling rate satisfy the Nyquist criterion $F_s \geq 2 F_c$, where $F_c$ is the signal's bandwidth. Now for the non-uniform sampling strategies, the most common restriction for perfect signal recovery is that, on average, the Nyquist rate associated with the bandlimited signal is maintained.
So when there is a deficiency of samples in some short interval $I_1$, there must be an excess of samples in another nearby short interval $I_2$, so that the average Nyquist rate is preserved. This condition can be stated as a local Nyquist rate criterion. Note that, unlike the uniform sampling case, for nonuniform sampling strategies (and especially for random sampling) the sampling times $t_n$, in addition to the samples $x_n$, must be known in order to reconstruct the signal perfectly. You can look at Whittaker's, Yen's and Shannon's papers for the classical view on nonuniform sampling of signals. If you assume the signal was strictly bandlimited to below some Nyquist frequency, then it can be decomposed into some number $N$ of DFT basis vectors over the sampling aperture (although that decomposition will include rectangular windowing artifacts if the signal wasn't integer-periodic in the aperture width). If you have enough sample points ($M \geq N$) of that signal, then this becomes a problem of fitting $M$ equations to $N$ unknowns. IIRC, the farther the sample points are from being equally spaced over the aperture, the more sensitive any computed solution might be to noise and numerical issues. Once decomposed into DFT basis vectors, any other sample points of that strictly bandlimited signal can be interpolated using a summation of the resultant complex exponential coefficients.
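The "M equations in N unknowns" idea can be sketched numerically: build a signal from a handful of DFT basis vectors, sample it at nonuniform instants, recover the coefficients by least squares, and interpolate anywhere. A minimal sketch with NumPy — the signal, band edge, and sampling instants are made up for illustration, and the signal is assumed periodic in the aperture to avoid the windowing artifacts mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0                      # sampling aperture; signal assumed T-periodic here
freqs = np.arange(-3, 4)     # harmonics -3..3 cycles/T, so N = 7 unknowns
N = len(freqs)

# A bandlimited "ground truth" built from those harmonics
c_true = rng.normal(size=N) + 1j * rng.normal(size=N)

def x(t):
    # Evaluate the signal at times t via its complex exponential expansion
    return np.exp(2j * np.pi * np.outer(t, freqs) / T) @ c_true

# M >= N nonuniform sampling instants in [0, T)
t_n = np.sort(rng.uniform(0, T, size=12))
x_n = x(t_n)

# Fit M equations to N unknowns: A c = x_n, solved by least squares
A = np.exp(2j * np.pi * np.outer(t_n, freqs) / T)
c_hat, *_ = np.linalg.lstsq(A, x_n, rcond=None)

# Interpolate at new points using the recovered coefficients
t_new = np.linspace(0, T, 50, endpoint=False)
x_rec = np.exp(2j * np.pi * np.outer(t_new, freqs) / T) @ c_hat
err = np.max(np.abs(x(t_new) - x_rec))
```

With noise-free samples and $M > N$ the system is consistent, so `err` is at machine-precision level; with clustered sample points the matrix becomes ill-conditioned and noise sensitivity grows, as the answer notes.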
In this paper, we prove some common fixed point theorems for a finite number of discontinuous, non-compatible mappings on non-complete intuitionistic fuzzy metric spaces and provide an example. Our results improve, extend and generalize several known results in intuitionistic fuzzy metric spaces. Jungck, G., Rhoades, B.E., 1998. Fixed points for set valued functions without continuity. Ind. J. Pure Appl. Math. 29(3), 227-238. Kramosil, J., Michalek, J., 1975. Fuzzy metric and statistical metric spaces. Kybernetica 11, 326-334. Park, J.H., Park, J.S., Kwun, Y.C., 2006. A common fixed point theorem in the intuitionistic fuzzy metric space. Advances in Natural Comput. Data Mining (Proc. 2nd ICNC and 3rd FSKD), 293-300. Park, J.H., Park, J.S., Kwun, Y.C., 2007. Fixed point theorems in intuitionistic fuzzy metric space (I). JP J. Fixed Point Theory & Appl. 2(1), 79-89. Park, J.S. On some results for five mappings using compatibility of type ($\alpha$) in a fuzzy metric space, in press. Park, J.S., Kim, S.Y., 1999. A fixed point theorem in a fuzzy metric space. F.J.M.S. 1(6), 927-934. Park, J.S., Kwun, Y.C., 2007. Some fixed point theorems in the intuitionistic fuzzy metric spaces. F.J.M.S. 24(2), 227-239. Park, J.S., Kwun, Y.C., Park, J.H., 2005. A fixed point theorem in the intuitionistic fuzzy metric spaces. F.J.M.S. 16(2), 137-149. Park, J.S., Kwun, Y.C., Park, J.H. Some results and example for compatible maps of type ($\beta$) on the intuitionistic fuzzy metric spaces, in press. Schweizer, B., Sklar, A., 1960. Statistical metric spaces. Pacific J. Math. 10, 314-334. Sessa, S., 1982. On a weak commutativity condition of mappings in fixed point considerations. Publ. Inst. Math. 32(46), 149-153. Sharma, S., Deshpande, B., Tiwari, R., 2008. Common fixed point theorems for finite number of mappings without continuity and compatibility in Menger spaces. J. Korea Soc. Math. Educ. Ser. B: Pure Appl. Math. 15(2), 135-151.
Let $M$ denote the open Möbius strip, and let $\pi:M\to\mathbb S^1$ be the corresponding $\mathbb R^1$-bundle. Prove that the Whitney sum $M\oplus M\to\mathbb S^1$ is the trivial $\mathbb R^2$-bundle. I have no idea. Any advice is helpful. Thank you.
Graphing calculators have become popular among high school students. They allow functions to be plotted on screen with minimal effort by the students. These calculators generally do not possess very fast processors. In this problem, you are asked to implement a method to speed up the plotting of a polynomial. Given a polynomial $p(x) = a_n x^n + \ldots + a_1 x + a_0$ of degree $n$, we wish to plot this polynomial at the $m$ integer points $x = 0, 1, \ldots, m-1$. A straightforward evaluation at these points requires $mn$ multiplications and $mn$ additions. One way to speed up the computation is to make use of results computed previously. For example, if $p(x) = a_1 x + a_0$ and $p(i)$ has already been computed, then $p(i+1) = p(i) + a_1$. Thus, each successive value of $p(x)$ can be computed with one addition. For example, if $p(x) = a_1 x + a_0$, we can initialize $C_0 = a_0$ and $C_1 = a_1$. Your task is to compute the constants $C_0, C_1, \ldots, C_n$ for the above pseudocode to give the correct values of $p(i)$ at $i = 0, \ldots, m-1$. The input consists of one case specified on a single line. The first integer is $n$, where $1 \leq n \leq 6$. This is followed by $n+1$ integer coefficients $a_n, \ldots, a_1, a_0$. You may assume that $|a_i| \leq 50$ for all $i$, and $a_n \neq 0$. Print the integers $C_0$, $C_1$, …, $C_n$, separated by spaces.
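Although the problem's pseudocode is not reproduced above, the constants it asks for are the forward differences of $p$ at zero, $C_k = \Delta^k p(0)$: once they are known, each new value of the polynomial costs only $n$ additions. A sketch in Python (the in-place update order inside `plot_values` is my assumption about how the pseudocode consumes the constants):

```python
def horner(coeffs, x):
    """Evaluate p at x; coeffs = [a_n, ..., a_1, a_0]."""
    r = 0
    for a in coeffs:
        r = r * x + a
    return r

def difference_constants(coeffs):
    """Return [C_0, ..., C_n] where C_k is the k-th forward difference of p at 0."""
    n = len(coeffs) - 1
    row = [horner(coeffs, i) for i in range(n + 1)]  # p(0), ..., p(n)
    C = []
    while row:
        C.append(row[0])
        row = [row[j + 1] - row[j] for j in range(len(row) - 1)]
    return C

def plot_values(C, m):
    """Produce p(0), ..., p(m-1) using only additions on the difference table."""
    t = list(C)
    out = []
    for _ in range(m):
        out.append(t[0])
        for k in range(len(t) - 1):
            t[k] += t[k + 1]
    return out
```

For the linear example in the statement, `difference_constants([a_1, a_0])` gives `[a_0, a_1]`, matching the initialization $C_0 = a_0$, $C_1 = a_1$.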
Acme has released a brand new safe, secured with an electronic 10-button keypad with the digits 0 through 9, and an $x$-length combination required to unlock. However, due to laziness, the keypad's programmer decides that, instead of requiring a new attempt each time, the safe will only consider the last $x$ button presses. So, with $x=2$, if I were to press $1234$, the safe would evaluate whether $12$, $23$, and $34$ were valid combinations, while a traditional keypad safe would only evaluate $12$ and $34$. For all values of $x$, the worst case would be to try all combinations serially, resulting in $10^x$ combinations of $x$ button presses, or $x \times 10^x$ presses. With $x=4$, we'd end up pressing this keypad up to $40,000$ times! What is the best-case number of button presses to attempt all possible combinations for a combination of length $x$, and what is the list of button presses for $x=2$? The answer is a De Bruijn sequence. Basically, it's a sequence $B(k,n)$ that contains all sequences of length $n$ made of $k$ different characters. The best case is $10^x + (x - 1)$ presses. A De Bruijn sequence is cyclic (the end connects back to the start) with length $k^n$, so we just need to append the starting $x - 1$ keypresses to the end. For $x=2$ the press list is 00102030405060708091121314151617181922324252627282933435363738394454647484955657585966768697787988990, as generated by the algorithm on Wikipedia. With various appropriately chosen seeds, this will create a set of loops that cover the space, which unfortunately must then be spliced together for optimal coverage. The Frank Ruskey algorithm is much nicer in many ways.
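The splicing step is avoided entirely by the FKM (Fredricksen-Kimoto-Maiorana) construction associated with Frank Ruskey's work, which concatenates Lyndon words to produce the lexicographically smallest de Bruijn sequence directly. A sketch, with digits emitted as a string:

```python
def de_bruijn(k, n):
    """Lexicographically smallest de Bruijn sequence B(k, n),
    built by concatenating Lyndon words (FKM algorithm)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:          # a[1..p] is a Lyndon word whose length divides n
                seq.extend(a[1 : p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(str(d) for d in seq)
```

For the safe, pressing `de_bruijn(10, x)` followed by its first $x-1$ digits tries every combination in $10^x + (x-1)$ presses; `de_bruijn(10, 2)` reproduces the 100-digit cycle quoted above.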
How do I configure PyCharm so I can edit and run Sage scripts? When I tried selecting Sage's python in C:\Program Files\SageMath 8.0\runtime\opt\sagemath-8.0\local\bin, PyCharm said 'cannot setup SDK'. I am using SageMath 8 on Windows. [Edit] Rephrased the question because it is partly solved (and is pretty difficult); I am asking a new question for the remainder so it is clearer. python.exe is just a symbolic link (in Cygwin, which is something native Windows software doesn't know about). For starters you'd have to link to the real executable (the one in the same path) named python2.7.exe. Beyond that I don't know what other problems you might encounter, though. I'm downloading and installing PyCharm for myself to see how far I can get, but I suspect it's a wild goose chase. Alright, I was able to get at least something vaguely working. I'll write up my findings in an answer below. setx Path "%Path%;C:\Program Files\SageMath 8.1\runtime\opt\sagemath-8.1\local\lib;C:\Program Files\SageMath 8.1\runtime\opt\sagemath-8.1\local\bin;C:\Program Files\SageMath 8.1\runtime\bin;C:\Program Files\SageMath 8.1\runtime\lib\lapack" There are likely some others that should be set as well, but that was the bare minimum I found necessary to get Sage's Python interpreter working happily in PyCharm. In the settings page select "Project Interpreter"--this is where you can add additional interpreters for use across projects. You might have something different shown here for an already existing interpreter. If you didn't set the Path environment variable correctly, then you'll get an error here (it might show "Permission error" or something like that). This is because Windows uses the Path environment variable to search for DLLs, and the correct paths need to be added in order to find the Cygwin DLL, among others. Otherwise the interpreter executable can't even start.
If this does happen you can still proceed with adding that interpreter, but it won't work until you set the necessary environment variables. Finally, PyCharm also allows you to set environment variables to run the interpreter with on a per-project basis in a couple of different places. You can configure environment variables in the "Run" configurations, and then again (separately, unfortunately) in the settings for the Console. This is actually what I did initially--I set Path, as well as SAGE_ROOT etc. just in PyCharm and that worked as well. But I found it simpler to just set the environment variables permanently in my environment. Then things "just worked". Be aware that this is still just the normal Python interpreter, not the Sage interpreter, so it doesn't know how to run .sage files, nor is it aware of Sage syntax. But you should be able to run from sage.all import * and use Sage objects in plain Python. Getting that all working might be a little bit tricky, and I think it should be tackled as a separate question. PyCharm now recognises the Sage python, thanks! Indeed, maybe this question should be about getting PyCharm to recognise the Sage python, and the next question about the problems I encounter when actually trying to from sage.all import *. I will ask it soon! This is a Linux machine answering. The "answer" is possibly not the solution, but I need space to insert information; a comment is not enough. Such information is hidden in the dark when PyCharm or another IDE hits the issue. * Do not do anything with other copies of Sage on your system. My blind answer is to make the above first run from the command line in Windows, then start PyCharm against the right Windows environment variables. In PyCharm, or another IDE, the python interpreter is still python27, not sage. To run sage inside, one has to do the above, possibly having to declare the libraries somewhere. In PyCharm, this may be done using Settings > Project Structure.
But my PyCharm works without such a declaration. print("...reducible polynomial mod %s :: X^4-2 = %s -> next prime...") — this is from a loop written to get the first prime $p$ for which $X^4-2$ is irreducible in $\mathbb F_p[X]$. I could run the above code in my PyCharm project, but in PyCharm I get no module named sage.all. (I selected my system-wide python2.7 as interpreter.) That also happens when trying to import sage from python2 in the normal command prompt. Is there any corresponding entry in the Windows site-packages installation? in the Windows command line window (alias cmd). Oh thanks, that is very interesting, because that means that you indeed managed to get the module installed in Python. The module is not in C:\Python27\Lib\site-packages for me (obviously) but only regular python packages. So I guess I need to figure out how to get the module there, and then PyCharm will be able to find it as a module. Any clue how you got it there? Does, by any chance, your python2 site-packages/sage contain packages from algebras to typeset? I have those, and for every package an all.pyc as well, in the directory C:\Program Files\SageMath 8.0\runtime\opt\sagemath-8.0\local\lib\python2.7\site-packages\sage. I think this is the directory of Sage's python2? After sage -sh then printenv SAGE_ROOT is correct, so that should be okay, right? I also tried setx PYTHONPATH "C:/Program Files/SageMath 8.0/runtime/opt/sagemath-8.0/local/lib/python2.7/site-packages/sage" /M (sets the system environment variable) in the hope that it would make python find the packages, but no luck. A few years ago I managed to add a package (which included C libraries) to python, and I think I did that by just pointing my PYTHONPATH to it, but I can't remember exactly. I know Sage is made for Linux. But it's not for nothing that Sage was made available for Windows this summer. I think to make Sage more generally accepted it would help much if it were available for Windows and had a proper IDE to work with.
At least in my university it would, and I would very much like to spread the word of Sage, also to Windows! Found out that in my C:/Program Files/SageMath 8.0/runtime/opt/sagemath-8.0/local/lib/python2.7/site-packages/sage/env.py, on line 88, it makes use of os.uname, which is only available on Unix systems. There must be many more of these things, so I do not think this will ever work on Windows?
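The computation hinted at by that print statement — finding the first prime $p$ for which $X^4-2$ is irreducible in $\mathbb F_p[X]$ — can be checked even without Sage. A plain-Python sketch (for a quartic, irreducibility over $\mathbb F_p$ just means no linear and no quadratic factor; the helper names are mine):

```python
def x4_minus_2_irreducible(p):
    """Check whether X^4 - 2 is irreducible over F_p (p prime)."""
    # Linear factor: X^4 - 2 has a root mod p?
    if any(pow(x, 4, p) == 2 % p for x in range(p)):
        return False
    # Quadratic factors: (x^2+ax+b)(x^2+cx+d) = X^4 - 2 forces
    # c = -a, b + d = a^2 (x^2 coefficient), a(d - b) = 0 (x coefficient), bd = -2.
    for a in range(p):
        for b in range(p):
            d = (a * a - b) % p
            if (a * (d - b)) % p == 0 and (b * d) % p == (-2) % p:
                return False
    return True

def first_irreducible_prime():
    """Smallest prime p with X^4 - 2 irreducible in F_p[X]."""
    def is_prime(n):
        return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))
    p = 2
    while True:
        if is_prime(p) and x4_minus_2_irreducible(p):
            return p
        p += 1
```

With this sketch the answer comes out to $p=5$: the polynomial has the root $0$ mod 2 and splits into two quadratics mod 3.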
Simulating a w.s.s. Random Sequence (Chap.
- Causal iff $G_Z(z)$ is analytic on and outside the unit circle.
- If $S_X(f)$ is real, then zeros/poles occur in conjugate pairs; if $S_X(f)$ is nonnegative, then the zeros/poles of $S_X(z)$ have even order; if $R_X(0) < \infty$, then there are no real poles.
- Causal iff $H(f)$ is analytic on and below the real line.
- $X(u,t)$ being real implies the poles/zeros of $S_X(f)$ are symmetric about the origin.
After fiddling with other converters I ended up writing my own to do what I needed. I currently use this in my backend toolchain on xyne.archlinux.ca to insert formulae automatically. I know about texvc but I found it to be poorly documented and very annoying to use. tex2png is a relatively simple bash script with a few convenience options that should be easy to tweak if necessary. Heh, this is funky; I recently needed to create a hell of a lot of PNGs of single equations and more or less used the approach you took. @xyne: Can you provide an example of how to use the command? I can't seem to get it to do anything. I'm not xyne but 'tex2png -c "$\alpha$"' works for me. OK. Your example works for me, too. But why doesn't 'tex2png -c "a + b"'? OK, so why do I get no output with tex2png -c '"$a+b$"'? I don't know what you are doing, but I do get an output. Use single quotes if you pass the tex string on the command line. If you use double quotes, bash will try to interpolate "$a" as a variable, e.g. You can also use STDIN to pass it tex. Just invoke it as "tex2png", then type in what you want and press ctrl+d to end the input. Thanks, Xyne. I think I'll find a lot of use for this utility. Thanks again Xyne! Haven't tried it yet, but looks promising like always. Good job! I didn't fully register that bit when I read your post. It only hit me after I hit my own stumbling block while trying to embed plain text. I don't understand .sty files at all, so there might be a much better way to do it, but for now I've implemented a workaround using "\mbox". It seems to handle everything as expected. I've also added a new option (-i) to specify inline tex. In combination with the aforementioned workaround and the preview package, dvipng's depth and height output should be correct for everything.
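The quoting problem discussed above is ordinary bash behavior, independent of tex2png: inside double quotes, $a is expanded as a shell variable before the program ever sees the string. A quick illustration, with plain echo standing in for tex2png:

```shell
a=""              # $a is empty, as it would be in a fresh shell
echo "$a+b"       # double quotes: bash expands $a first, printing just "+b"
echo '$a+b'       # single quotes: the text reaches the program untouched
```

This is why passing '$\alpha$' in single quotes works while "$a+b$" in double quotes silently loses the $a.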
Authors who wish to participate in the conference will create documents consisting of a complete description of their ideas and applicable research results. The maximum page count for the manuscript is 5, including all figures, references, and acknowledgements. Submit the paper and copyright form electronically via [TO BE ADDED]. The paper must be submitted in final, publishable form before the submission deadline listed below. Check the PCS 2018 website for the status of your paper. Paper submissions will be reviewed by experts selected by the conference committee for their demonstrated knowledge of particular topics. The progress and results of the review process will be posted on this website, and authors will also be notified of the review results by email. The review process is conducted entirely online. To make the review process easy for the reviewers, and to ensure that the paper submissions will be readable through the online review system, we ask that authors submit paper documents that are formatted according to the Paper Kit instructions included here. Papers may be no longer than 5 pages, including all text, figures, references, and acknowledgements. Papers must be submitted by the deadline date. There will be no exceptions. Accepted papers MUST be presented at the conference by one of the authors. One of the authors MUST register for the conference at one of the non-student rates offered, and MUST register before the deadline given for author registration. Failure to register before the deadline will result in automatic withdrawal of your paper from the conference proceedings and program. A single registration may cover up to four (4) papers. PCS 2018 requires that each accepted paper be presented by one of the authors in person at the conference site according to the published schedule. Any paper accepted into the technical program but not presented on-site will be withdrawn from the official proceedings archived on IEEE Xplore.
Please make sure to put the conference name (PCS 2018) and the paper number that is assigned to you on all correspondence. 1) The LaTeX Template, Microsoft Word Template, and PDF Sample files do not have the exact margins or measurements as those described in the paper kit. What are the correct measurements? 1-Ans) The Paper Kit description should be considered the final word. Because of software version differences, installed font differences, and other system-specific issues, the final PDF or Postscript file that you create from the given templates may not exactly match the sample manuscript found in the paper kit. The measurements given in these templates and in the official Paper Kit description are not intended to be followed with extreme precision. However, the general structure and layout of the document should be substantially the same as the templates and sample manuscript. This means: the title should appear at the top of the first page, the author list should appear beneath the title, the first paragraph of the document should be the abstract section, the document should be in two-column format with reasonable margins and column spacing, and the font size should be no smaller than 9pt. 2) I need more time to complete my manuscript; I cannot complete it by the published deadline. Can I have an extension? 2-Ans) The published manuscript submission deadline was selected so that submitted manuscripts may receive sufficient and thorough reviews and so that presenting authors of accepted papers will have sufficient time to arrange for travel to the event site. By granting an extension, the rest of the development of the technical program would be delayed. The deadline for submission of manuscripts is known well in advance, thus, no extension will be granted for any reason. 3) My manuscript has authors from more than 2 affiliations; My manuscript has several authors. But the LaTeX template supports only 2 authors. 
How should I list multiple authors in the heading of my manuscript? 3-Ans) There are several formats commonly used for author lists with 3 or more authors or 3 or more different affiliations. The preferred method is to list the author names with identifying marks (superscript numbers, for example) and then a legend below the name list with the respective affiliation descriptions. There is an example of this in the LaTeX template file. Be sure that the author list does not exceed the margins of the page. An example is provided here. 4) How will I know if my submission is valid for review? 4-Ans) All submitted manuscripts will be inspected for general adherence to the paper kit guidelines (i.e. page count limits, page margins, font problems) and submission procedure (i.e. the title on the uploaded file matches the title typed into the web submission form, the author list on the uploaded file matches the author list typed into the web submission form, etc.). Authors designated as "contact author" will be notified by email only if any problems are found. The status of your submission can be checked online at any time using the assigned paper number and an access code. 5) Why did my submission fail document inspection? Can I try again? 5-Ans) Common reasons for failing document inspection include the following. The author list shown on the uploaded document file does not match the author list typed into the online form. These two lists MUST MATCH EXACTLY in author names and the order in which these names appear on the uploaded document. Page numbers appear on the uploaded file. Do not include page numbers in the submitted manuscript. The author list on the uploaded manuscript is blank. Unless explicitly specified otherwise, the review process is not "double-blind". The submitted manuscript should be in publish-ready format. 6) How can I withdraw or cancel my submission? 6-Ans) Send an email to the general support email address requesting the withdrawal of the manuscript.
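The multi-author heading format described in 3-Ans can be sketched as follows (the names and affiliations are placeholders, and the exact macros depend on the template actually used — the IEEEtran template file contains the authoritative version):

```latex
\author{Johan Smith\textsuperscript{1}, Mary Jones\textsuperscript{2}, Wei Chen\textsuperscript{1}\\
  \textsuperscript{1}Department of Electrical Engineering, Example University\\
  \textsuperscript{2}Imaging Research Laboratory, Example Corp}
```

The superscript marks tie each author to an affiliation in the legend below the name list, keeping the block within the page margins.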
This email MUST include the assigned paper ID and should include in the cc: line all of the authors currently listed on the manuscript. If the latter is not done, then a note will be sent to all authors requesting confirmation of the request as withdrawal of a manuscript can only be done on the agreement of all authors. 7) I recently discovered that I am required to acknowledge the sponsor of my research in order to receive funding, but the deadline for submitting the final manuscript has passed. What should I do? 7-Ans) The deadlines for final manuscript submission are firm and are chosen to allow sufficient time for the preparation and production of the conference proceedings in time for distribution at the event. Be sure to check with financial sponsors before the final manuscript submission deadline concerning this potential requirement. LENGTH: Papers may be no longer than 5 pages, including all text, figures, and references. This is the maximum number of pages that will be accepted, including all figures, tables, and references. Any documents that exceed the page limit will be rejected. LANGUAGE: All proposals must be in English. All text and figures must be contained in a 178 mm x 229 mm (7 inch x 9 inch) image area. The left margin must be 19 mm (0.75 inch). The top margin must be between 19mm (0.75 inch) and 25 mm (1.0 inch). Text should appear in two columns, each 86 mm (3.39 inch) wide with 6 mm (0.24 inch) space between columns. On the first page, the top 64 mm (2.5 inch) of both columns is reserved for the title, author(s), and affiliation(s). These items should be centered across both columns, between the paper title and the columnar content. The paper abstract should appear at the top of the left-hand column of text, and should be no more than 80 mm (3.125") in length. Leave at least one empty line between the end of the abstract and the beginning of the main text. 
Face: To achieve the best viewing experience for the review process and conference proceedings, we strongly encourage authors to use Times-Roman or Computer Modern fonts. If a font face is used that is not recognized by the submission system, your proposal will not be reproduced correctly. Size: Please use 10pt font size (this is the default value set in the templates) throughout the paper, including figure captions. Very exceptionally, 9pt may be used, but this is not encouraged. In 9-point type font, capital letters are 2 mm high. For 9-point type font, there should be no more than 3.2 lines/cm (8 lines/inch) vertically. This is a minimum spacing; 2.75 lines/cm (7 lines/inch) will make the proposal much more readable. Larger type sizes require correspondingly larger vertical spacing. TITLE: The paper title must appear in boldface letters and should be in ALL CAPITALS. Do not use LaTeX math notation ($x_y$) in the title; the title must be representable in the Unicode character set. Also try to avoid uncommon acronyms in the title. AUTHOR LIST: The authors' name(s) and affiliation(s) appear below the title in capital and lower case letters. PCS does not perform blind reviews, so be sure to include the author list in your submitted paper. Proposals with multiple authors and affiliations may require two or more lines for this information. The order of the authors on the document should exactly match in number and order the authors typed into the online submission form. An example can be found here. ABSTRACT: Each paper should contain an abstract of 100 to 150 words that appears at the beginning of the document. Use the same text that is submitted electronically along with the author contact information. BODY: Major headings appear in small-caps text, centered in the column, with uppercase roman-numeral numbering. Subheadings appear in capital and lower case italics. They start at the left margin of the column on a separate line.
Sub-subheadings are discouraged, but if they must be used, they should appear in capital and lower case, and start at the left margin on a separate line. They may be underlined or in italics. REFERENCES: List and number all bibliographical references at the end of the paper. The references can be numbered in alphabetic order or in order of appearance in the document. When referring to them in the text, type the corresponding reference number in square brackets, as shown at the end of this sentence. ILLUSTRATIONS & COLOR: Illustrations must appear within the designated margins. They may span the two columns. If possible, position illustrations at the top of columns, rather than in the middle or at the bottom. Caption and number every illustration. While you may use color in your illustrations, be sure that your images appear clearly when printed in black and white (the electronic, conference-distributed proceedings and the IEEE Xplore proceedings will retain the colors in your document). PAGE NUMBERS: Do not put page numbers on your document. Appropriate page numbers will be added to accepted papers when the conference proceedings are assembled. IEEEtran LaTeX style file with margin, page layout, font, etc. definitions. LaTeX template file, an example of using the "IEEEtran.cls" file above. PDF generated from the LaTeX template file. Sample Fig1.pdf file, as referenced in the LaTeX template. Word 97/2000 Sample, a template of correct formatting and font use. PDF generated from the MS Word template file. We recommend that you use the Word file or LaTeX files to produce your document, since they have been set up to meet the formatting guidelines listed above. When using these files, double-check the paper size in your page setup to make sure you are using the letter-size paper layout (8.5" X 11") or A4 paper layout (210mm X 297mm). The LaTeX environment files specify suitable margins, page layout, text, and a bibliography style.
In particular, with LaTeX, there are cases where the top margin of the resulting PostScript or PDF file does not meet the specified parameters. In this case, you may need to adjust the top margin in your .tex file. The spacing of the top margin is not critical, as the page contents will be adjusted in the proceedings. The critical dimensions are the actual width and height of the page content. The 'IEEE Requirements for PDF Documents' MUST be followed EXACTLY. The conference is required to ensure that documents follow this specification. Documents must have ALL FONTS embedded and subset in the PDF or PostScript file. There is no guarantee that the viewers of the paper (reviewers and those who view the electronic proceedings after publication) have the same fonts used in the document. If fonts are not embedded in the submission, you will be contacted by CMS and asked to submit a file that has all fonts embedded. Please refer to your PDF or PS file generation utility's user guide to find out how to embed all fonts. Generating a PostScript file is straightforward for all LaTeX packages we are aware of. When preparing the proposal under LaTeX, it is preferable to use scalable fonts such as Type 1 Computer Modern. However, quite good results can be obtained with the fonts defined in the style file recommended above (spconf.sty). PDF files with PostScript Type 3 fonts are highly discouraged. PDF and PostScript files utilizing Type 3 fonts are typically produced by the LaTeX system and are lower-resolution bitmapped versions of the letters and figures. It is possible to perform a few simple changes to the configuration or command line to produce files that use PostScript Type 1 fonts, which are a vector representation of the letters and figures. An excellent set of instructions is found at: Creating quality Adobe PDF files from TeX with DVIPS.
For most installations of LaTeX, you can cause dvips to output Type 1 fonts instead of Type 3 fonts by including the -Ppdf option to dvips. The resulting PostScript file will reference the Type 1 Computer Modern fonts, rather than embedding the bitmapped Type 3 versions, which cause problems with printers. You may also need to tell dvips to force letter-sized paper with the option -t letter. Some LaTeX installations also include pdflatex, which produces acceptable PDF files as well. Authors will be permitted to submit a document file up to 5 MB (megabytes) in size. To request an exception, contact the paper submission technical support at: [email protected]. The filename of the document file should be the first author's last name, followed by the appropriate extension (.pdf). For example, if the first author's name is Johan Smith, you would submit your file as "smith.pdf". The paper submission process will append the filename with a unique identifier when it is stored on our system, so multiple submissions with the same name will not overwrite each other and will be distinguishable. To submit your document and author information, go to the 'Paper Submission' link on the PCS 2018 homepage. The submission system will present an entry form to allow you to enter the paper title, abstract text, review category, and author contact information. ALL authors must be entered in the online form, and must appear in the online form in the same order in which the authors appear on the PDF. After you submit this information, the system will display a page with the data that you entered so that you may verify its accuracy. If you need to change the data to fix a mistake, you may use the back button on your browser to return to the information entry form. Once you approve of the data that you have entered, you may choose your document file for upload at the bottom of the verification page.
When you click on the button labeled 'Continue' at the bottom of this page, the page will check the filename extension to make sure it matches the submission criteria, then your browser will upload your file to our server. Depending on the size of your file and your internet connection speed, this upload may take a few minutes. At the end of a successful upload, you will see a confirmation page displaying the paper number that is assigned to you, and an email message will be sent to the corresponding authors' email addresses to confirm that the file has been uploaded. If you do not see the confirmation page after uploading your file, we may not have successfully received your file upload. If you encounter trouble, contact the paper submission support at: [email protected]. All submissions must have an IEEE Electronic Copyright form submitted before the manuscript is allowed to be delivered to reviewers. If the submission is not accepted, the copyright transfer form becomes null and void. The electronic copyright form is digitally linked to your submission; if you revise/update your paper's title or author list, the copyright form will still apply. There is no need to submit a new copyright form. The confirmation page that is displayed after uploading your final, camera-ready document file will also have a link to the IEEE Electronic Copyright Form (eCF) system. That system will guide you through a series of questions to determine the type of copyright form required for your manuscript and will electronically record your signature. You will have the opportunity to download a PDF version of your electronically-signed copyright form, and both the IEEE eCF system and the PCS 2018 system will send you a confirmation of the receipt of the properly signed form.
Your submitted paper will be visually inspected by our submission system staff to assure that the document is readable and meets all formatting requirements to be included in a visually pleasing and consistent proceedings publication for PCS 2018. If our submission inspectors encounter errors with your submitted file, they will contact you to resolve the issue. If your paper passes inspection, it will be entered into the review process. A committee of reviewers selected by the conference committee will review the documents and rate them according to quality, relevance, and correctness. The conference technical committee will use these reviews to determine which papers will be accepted for presentation in the conference. The result of the technical committee's decision will be communicated to the submitting authors by email, along with reviewer comments, if any. Authors will be notified of paper acceptance or non-acceptance by email as close as possible to the published author notification date. The notification email will include comments from the reviewers. The conference cannot guarantee that all of the reviewers will provide the level of comment you desire. However, reviewers are encouraged to submit comments that are as detailed as possible. Because of the short amount of time between paper acceptance decisions and the beginning of the publication process, PCS 2018 is not able to allow for a two-way discourse between the authors and the reviewers of a paper. If there appears to be a logistical error in the reviewer comments, such as the reviewer commenting on the wrong paper, please contact PCS 2018 at [email protected]. Limited revisions to accepted papers will be allowed. In general, changes should be limited to areas in which improvement was recommended by the reviewers. Changes to the title and author list are not allowed except in extraordinary circumstances.
The time period allowed for revision to accepted papers is very short and the schedule will be held strictly, so if you decide to make revisions to your paper, be sure it is finished during the paper revision time period. Be sure that at least one author registers to attend the conference using the online registration system available through the conference website. Each paper must have at least one author registered, with the payment received by the author registration deadline (see above) to avoid being withdrawn from the conference. "Copyright 2018 IEEE. Published in the 2018 33rd Picture Coding Workshop (PCS 2018), scheduled for 24-27 June 2018 in San Francisco, United States. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions / IEEE Service Center / 445 Hoes Lane / P.O. Box 1331 / Piscataway, NJ 08855-1331, USA. Telephone: + Intl. 908-562-3966." If you post an electronic version of an accepted paper, you must provide the IEEE with the electronic address (URL, FTP address, etc.) of the posting. Presentation time is critical: each paper is allocated 15 minutes for oral sessions. We recommend that presentation of your slides should take about 13-14 minutes, leaving 1-2 minutes for introduction, summary, and questions from the audience. To achieve appropriate timing, organize your slides or viewgraphs around the points you intend to make, using no more than one slide per minute. A reasonable strategy is to allocate about 2 minutes per slide when there are equations or important key points to make, and one minute per slide when the content is less complex. 
Slides attract and hold attention, and reinforce what you say - provided you keep them simple and easy to read. Plan on covering at most 6 points per slide, covered by 6 to 12 spoken sentences and no more than about two spoken minutes. Make sure each of your key points is easy to explain with aid of the material on your slides. Do not read directly from the slide during your presentation. You shouldn't need to prepare a written speech, although it is often a good idea to prepare the opening and closing sentences in advance. It is very important that you rehearse your presentation in front of an audience before you give your presentation at PCS. Presenters must be sufficiently familiar with the material being presented to answer detailed questions from the audience. In addition, the presenter must contact the Session Chair in advance of the presenter's session. A computer-driven slideshow for use with a data projector is recommended for your talk at PCS. All presentation rooms will be equipped with a computer, a data projector, a microphone (for large rooms), a lectern, and a pointing device. Poster sessions are a good medium for authors to present papers and meet with interested attendees for in-depth technical discussions. In addition, attendees find the poster sessions a good way to sample many papers in parallel sessions. Thus it is important that you display your message clearly and noticeably to attract people who might have an interest in your paper. Your poster should cover the key points of your work. It need not, and should not, attempt to include all the details; you can describe them in person to people who are interested. The ideal poster is designed to attract attention, provide a brief overview of your work, and initiate discussion. Carefully and completely prepare your poster well in advance of the conference. 
Try tacking up the poster before you leave for the conference to see what it will look like and to make sure that you have all of the necessary pieces. Poster board dimensions will be posted here when confirmed. The title of your poster should appear at the top in CAPITAL letters about 25mm high. Below the title put the author(s)' name(s) and affiliation(s). The flow of your poster should be from the top left to the bottom right. Use arrows to lead your viewer through the poster. Use color for highlighting and to make your poster more attractive. Use pictures, diagrams, cartoons, figures, etc., rather than text wherever possible. Try to state your main result in 6 lines or less, in lettering about 15mm high so that people can read the poster from a distance. The smallest text on your poster should be at least 9mm high, and the important points should be in a larger size. Use a sans-serif font (such as "cmss" in the Computer Modern family or the "Helvetica" PostScript font) to make the print easier to read from a distance. Make your poster as self-explanatory as possible. This will save your effort for the technical discussions. There will not be any summaries given at the beginning of the poster sessions at PCS 2018, so authors need not prepare any slides for their poster presentations. You may bring additional battery-operated audio or visual aids to enhance your presentation. Prepare a short presentation of about 5 or 10 minutes that you can periodically give to those assembled around your poster throughout the 2-hour poster session. If possible, more than one author should attend the session to aid in presentations and discussions, and to provide the presenters with the chance to rest or briefly view other posters. PLEASE NOTE THAT THE POSTER SIZE SHOULD BE A0!
what is the largest real orthogonal design in $n$ variables? A real orthogonal design in $n$ variables is an $m \times n$ matrix with entries from the set $\pm x_1,\pm x_2,\cdots,\pm x_n$ that satisfies: $$ A A^T = (x_1^2 + x_2^2 + \cdots + x_n^2) I_m $$ Normally the matrix is taken to be square so that $m=n$; in this case it is known that such a design exists iff $n=2,4$, or $8$. My question is: what is the largest possible $m$ for a given $n$? By experimentation, if $n$ is a power of two and $n>8$ (which is the case I'm most interested in), I can always find matrices with $m=(n/2)+1$ rows; is this the best possible?
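The defining identity is easy to check numerically. As a sketch (not part of the question), the classical square design for $n = m = 4$ built from the quaternion multiplication table satisfies $AA^T = (x_1^2 + x_2^2 + x_3^2 + x_4^2)I_4$ for any real values of the variables:

```python
import numpy as np

# Substitute random real values for the design variables x1..x4.
rng = np.random.default_rng(0)
x1, x2, x3, x4 = rng.normal(size=4)

# Quaternion-based real orthogonal design: every entry is +/- one variable.
A = np.array([
    [ x1,  x2,  x3,  x4],
    [-x2,  x1, -x4,  x3],
    [-x3,  x4,  x1, -x2],
    [-x4, -x3,  x2,  x1],
])

# Verify A A^T = (x1^2 + x2^2 + x3^2 + x4^2) I_4.
scale = x1**2 + x2**2 + x3**2 + x4**2
assert np.allclose(A @ A.T, scale * np.eye(4))
```

Because the identity holds for all substituted values, the rows stay pairwise orthogonal symbolically; an analogous $8 \times 8$ design comes from the octonions, while Hurwitz's theorem rules out larger square cases.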
· Paper Proposals: Authors who wish to participate in the conference need to register with the EDAS on-line submission system and then use it to submit their paper proposals. o Technical Paper (up to 6 pages). The technical paper submission must be completed on or before the "Submission" deadline listed below. Review Process: Technical paper proposals will be reviewed, using the online review system, by experts selected by the conference Technical Program Committee for their demonstrated knowledge of relevant topics. The results of the review process will be posted on this website, and authors will also be notified by email. Review results will be ready by the "Authors Notifications" date shown below. Final Papers: Authors of accepted technical papers will prepare a final version of their paper and will submit it using the EDAS online submission system. The version of the final technical paper will be substantially the same as the technical paper proposal but will take into account reviewers' comments. The final paper must be completed and submitted on or before the "Final Submissions and Registrations" deadline listed below. · Registration: The final version of an accepted paper will appear in the conference proceedings provided that at least one of the authors registers. This must be carried out on or before the "Final Submissions and Registrations" deadline shown below. To make the review process easy for reviewers, and to assure that paper proposals and final papers are readable through the online review system, authors are asked to submit proposals that are formatted according to the instructions given below. Paper proposal document formatting is identical to final paper formatting. Paper proposals not conforming to the required format will be rejected without review. Similarly, final papers not conforming to the required format will not appear in the proceedings.
Authors are required to complete the procedures in the following list before the specified deadlines. Detailed guidelines for each of these procedures are provided below. This is the maximum number of pages that will be accepted, including all figures, tables, and references. Although not encouraged, authors can have papers longer than 6 pages, but not exceeding 8 pages. Each of the extra two pages will incur a fee. Any paper that exceeds the 8-page limit will be rejected without review. LANGUAGE: All papers must be in English. · Papers should be formatted for standard A4 size (210 x 297 mm) paper. · All printed material including text, illustrations, and charts, must be kept within the print area. · The top, bottom, left, and right margins and the space between the two columns must be as set in the templates and not changed. TYPEFACE: To achieve the best viewing experience for the review process and the conference proceedings, the Times-Roman font must be used. If a font face is used that is not recognized by the submission system, your paper will not be reproduced correctly. Use font sizes as used in the template. TITLE: The title should be centered and in 24-point size. Do not use LaTeX math notation (e.g., $x_y$) in the title; the title must be representable in the Unicode character set. Also try to avoid uncommon acronyms in the title. AUTHOR LIST: The authors' name(s) should appear below the title with capital and small letters. The authors' affiliation(s) should appear below the names with capital and small letters. The order of the authors on the document should exactly match in number and order the authors typed into the online submission form. ABSTRACT: Each paper should contain an abstract of 75 to 150 words that appears at the beginning of the paper. Use the same text that is submitted electronically during the on-line submission process. · Major headings appear centered in the column. Subheadings appear in italic capital and small letters.
They start at the left margin of the column. · All text must be fully justified with single-line spacing. All paragraphs within a section should be indented. · Illustrations must appear within the designated margins. They may span the two columns. If possible, position illustrations at the top or bottom of columns, rather than in the middle. · Caption and number every illustration. Figures and tables should be numbered consecutively and separately from each other. The illustration number should be an Arabic number for figures and a Roman number for tables followed by a period, e.g. Figure 1. or TABLE I. The caption itself should not be in bold and should be centered below the figure or above the table. · All halftone illustrations must be clear in black and white. Color illustrations will appear in the electronic version of the proceedings, but the printed version will be produced in black and white. Therefore, make sure that your illustrations are acceptable when printed in black and white. EQUATIONS: Number equations consecutively with Arabic numbers in parentheses placed at the right-hand margin of each column. REFERENCES: List all references at the end of the paper. The references should be numbered in order of appearance in the document. Refer to the IEEE conference proceedings template for more details. FOOTNOTES: Use footnotes sparingly (or not at all) and place them at the bottom of the column on the page on which they are referenced. Use 8-point type, single-spaced. To help your readers, avoid using footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence). PAGINATION: Please do not paginate your proposal. We will add appropriate page numbers to accepted papers when the conference proceedings are assembled. · The PDF file must have ALL FONTS embedded and subset. In particular, it is extremely important that ALL FONTS ARE EMBEDDED in the PDF file.
There is no guarantee that the viewers of the paper (reviewers and those who view the proceedings electronically after publication) have the same fonts used in the document. If fonts are not embedded in the submission, you will be contacted and asked to submit a file that has all fonts embedded. Please refer to your PDF file generation utility's user guide to find out how to embed all fonts. Filename: The filename of the document file is not important since the submission system will rename the file, but please make sure that you use the .pdf extension for the file. After you submit the paper, please make a note of your paper number and use it in all your correspondence. File Size Limit: Authors will be permitted to submit a document file up to 4 MB (megabytes) in size. To request an exception, please contact the conference contact persons listed above. Gathering the required information: When you have your document file ready, gather the following information before entering the submission system: document file; name, affiliation, address, and e-mail address of each author; paper title; text file containing the paper abstract in ASCII text format (for copying and pasting into the web page form). · If you have used the EDAS system previously, for this or any other conference, then you already have an account on the system. Thus, there is no need to create a new account. Simply use your existing account. Your username will be your email. If you cannot remember your password then ask EDAS to email it to you by following the link EDAS PASSWORD REMINDER. · If you are a new user of the EDAS system then create a new account by following the link NEW EDAS ACCOUNT. Please make a note of your username and password as you will need them to make more submissions, edit existing submissions, update personal information, and submit final versions of papers.
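Pulling the formatting and font requirements above together, a minimal LaTeX preamble might look like the following sketch; the class options and packages are assumptions on my part, and the conference's official template, if provided, should take precedence:

```latex
\documentclass[conference,a4paper]{IEEEtran}
\usepackage{times}     % Times-Roman typeface, as required
\usepackage{graphicx}  % illustrations, kept within the column margins
\usepackage{amsmath}   % consecutively numbered display equations

\begin{document}
\title{Paper Title}    % IEEEtran sets the title centered at 24 pt
\author{\IEEEauthorblockN{First Author}
        \IEEEauthorblockA{Affiliation, City, Country}}
\maketitle
\begin{abstract}
A 75--150 word abstract, matching the text entered online.
\end{abstract}
\section{Introduction} % major headings are centered by the class
Body text, fully justified with single-line spacing.
\end{document}
```

Compiling with pdflatex and standard Type 1 fonts normally yields a PDF with all fonts embedded and subset, which can then be verified with PDF eXpress.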
When you submit your proposal, you will be asked to enter the paper title, keywords, abstract text, subject category, and authors' contact information. You will also be asked to upload the file containing your proposal. Depending on the size of your file and your internet connection speed, the file upload may take a few minutes. If all necessary information has been entered, the system will display a short message giving you the ID number of your paper. You will also receive an e-mail notification with the details of your submission. If you do not see the confirmation page after uploading your file, you may not have successfully completed your file upload. If you encounter trouble, please contact the conference contact persons listed above. Review Process: Your submitted paper proposal will be checked for errors and you will be notified if you need to re-submit your proposal. If your submission passes inspection, it will be entered into the review process. Depending on the subject of your proposal, the Technical Program Chairs will assign your technical paper to a committee of reviewers (not fewer than three reviewers) for their demonstrated knowledge in the subject of your proposal. The reviewers will review your proposal and will rate it according to quality, relevance, originality, and clarity of presentation. The conference Technical Program Committee will use these reviews to determine which papers will be accepted for presentation during the conference. Review Results: The Technical Program Committee's decision will be posted on the website by the "Authors Notifications" deadline shown above. Authors can login using their online username and password and check the status of their paper and the reviewer comments. The review result, along with reviewer comments if any, will also be communicated to the submitting authors by email. 
Revising Accepted Papers: If your paper is accepted by the review process for presentation and publication at AEECT 2013, you should prepare your final paper for submission. This will be substantially the same as the technical paper proposal but must take into account reviewers' comments. The AEECT 2013 Technical Program Committee reserves the right to reject a final paper if the reviewers' comments are not adequately addressed. Final Paper Formatting: When preparing your final document, use the same formatting specifications described above. Final papers not conforming with the required format will not appear in the proceedings. AEECT 2013 papers will be included in IEEE Xplore®. Therefore, all final paper files must adhere to the IEEE Xplore PDF specifications for compatibility. If your final paper file is not IEEE Xplore-compliant then it will be automatically withdrawn from presentation and publication. More details on creating IEEE Xplore-compliant PDF files can be found under "File Format" discussed in "Step 1: Prepare a properly formatted proposal" above. In order to facilitate the process of creating and verifying PDF files, AEECT 2013 has registered for use of the IEEE tool IEEE PDF eXpress Plus™. This is a free service to IEEE conferences, allowing their authors to make IEEE Xplore-compliant PDFs (Conversion function) or to check PDFs that authors have made themselves for IEEE Xplore-compliance (PDF Check function). b. Enter 31497XP for the Conference ID, your email address, and choose a new password. Continue to enter information as prompted. c. Fill in the needed information to Create Account and submit. Even if you have used that information previously for an account, you can use it again. d. A message will appear declaring that an IEEE PDF eXpress Plus account has been created. Click on the Continue button, which will open your account home page showing the title status. 2. Click on "Create New Title". 3.
Enter the title information and click on "Submit file for Checking or Converting". 4. You can then upload a source file to convert it to PDF, or upload a PDF file for checking. 5. It is recommended to create a new account if you are returning later to use the IEEE PDF eXpress Plus site. c. Click "Request Technical Help" through your account. c. Click "Request a Manual Conversion" through your account. a. Log back into your PDF eXpress Plus account and approve your PDF for collection. Note: Uploading a paper to IEEE PDF eXpress Plus is NOT the same as submitting the final paper for publication. You will still need to submit the checked PDF of your final paper through EDAS. Please refer to the AEECT 2013 online final paper submission guidelines below. Final Paper Submission: You are required to submit the IEEE Xplore-compliant PDF file of your final paper by the "Final Submissions and Registrations" deadline shown above. NO EXTENSIONS WILL BE GRANTED BEYOND THE DEADLINE. Kindly note that one of the authors must register (see Step 8) to enable submission of the final paper. Failure to meet the deadline will result in an automatic withdrawal of your paper for presentation and publication. 1. Go to the on-line submission system EDAS and log in using your EDAS account (this should be the same account you have used to submit the initial paper proposal). Your username will be your email. If you cannot remember your password then ask EDAS to email it to you by following the link EDAS PASSWORD REMINDER. 2. Click on the "My Papers" menu item at the top of the page. This will produce a list of all your papers on the EDAS system. 3. Click on the title of the paper that you want to submit. This will take you to the individual paper page that contains all details of the paper. o Make sure that ALL paper authors are included on the author list on EDAS. If you need to add/delete an author then click on the "Add Author" icon within the "Authors" field.
o Make sure that the order of authors on EDAS is correct. If you need to change the order of authors then click on the "Move Author Up" and "Move Author Down" icons within the "Authors" field. o Make sure that all authors update their profile (affiliation, email, country, etc.) on the EDAS system. To achieve this, the author needs to log in to his/her EDAS account and click on the "My Profile" menu at the top of the page. 5. Make sure that all paper information is stored correctly on EDAS and matches the information on the PDF file. Paper information (title, abstract, keywords) that will appear in conference publications (e.g. book of abstracts, proceedings, CD-ROM, etc.) will be taken from the EDAS system and not from the submitted PDF file. If you need to modify paper information then click on the "Edit" icon next to the "Title" / "Abstract" / "Keywords" field. 6. Click on the "Upload Manuscript" icon within the "Final Manuscript" field. This will take you to a new page. Click on the "Browse" button and browse to your IEEE Xplore-compliant final PDF file. Click on the "Upload Manuscript" button to upload the selected file to the system. Depending on the size of your file and your internet connection speed, the file upload may take a few minutes. If the file is uploaded successfully then a confirmation message will be displayed. You will also receive an e-mail confirmation with the details of your submission. If you do not see the confirmation page after uploading your file, you may not have successfully completed your file upload. If you encounter trouble, please contact the conference. Final Paper Inspection: Similar to the proposal submission, your final document will be checked to ensure that it meets all formatting and compatibility requirements to be included in a visually pleasing and IEEE Xplore-compliant proceedings. If we encounter errors in the appearance or compatibility of your document file, you will be contacted by email.
Copyright Form: Every AEECT 2013 paper accepted for presentation and publication MUST have attached to it an IEEE Copyright transfer form. You are required to submit the IEEE Copyright transfer form by the "Final Submissions and Registrations" deadline shown above. NO EXTENSIONS WILL BE GRANTED BEYOND THE DEADLINE. Failure to submit the IEEE Copyright transfer form by the deadline will result in an automatic withdrawal of your paper for presentation and publication. 1. Go to the on-line submission system EDAS and log in using your EDAS account (this should be the same account you have used to submit the initial and final papers). Your username will be your email. If you cannot remember your password then ask EDAS to email it to you by following the link EDAS PASSWORD REMINDER. 3. Click on the title of the paper that you want to submit the copyright form for. This will take you to the individual paper page that contains all details of the paper. 4. Click on the "Record Copyright Form" icon within the "Copyright form" field. This will take you to a new page. Click on the "Copyright submission" button. This will take you to an IEEE Electronic Copyright Form page generated specifically for your paper. Follow the instructions there to submit an electronically signed copyright transfer form to IEEE. Note that once you perform this step successfully you will not need to fax or mail a paper copyright form. Author Registration: The final version of your accepted paper will appear in the conference proceedings provided that at least one of the authors registers. This must be carried out on or before the "Final Submissions and Registrations" deadline shown above. Registration fees and instructions for registration are available under the Registration section. Prepare a 15-minute presentation and bring it on a flash memory drive. The presentation computer has PowerPoint and a PDF reader. 3. Click on the title of the paper that you want to submit the presentation for.
This will take you to the individual paper page that contains all details of the paper. 4. Click on the "Upload Presentation" icon within the "Presentation" field. This will take you to a new page. 5. Browse for your presentation file and click the "Upload Presentation" button.
Let $n \in \mathbb{N}$ and $f\colon [0, \infty) \to \mathbb{R}$ be defined by $f(x) = x^n$ for all $x \ge 0$. Prove that $f$ is an increasing function. My attempt ends with "therefore the function is increasing"; any help with the later part of the problem would be helpful. We want to show that $0 \le a < b$ implies $a^n < b^n$, i.e. that $f$ is strictly increasing. One answer: the proof can be done much more briefly. Let $\epsilon > 0$. For every $x \ge 0$ and every $n \in \mathbb{N}$ we have $x^n < (x+\epsilon)^n$, and since $\epsilon > 0$ is arbitrary, this holds for all $x$ and $n$, so $f$ is strictly increasing.
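The missing step in the direct approach (showing $a^n < b^n$ for $0 \le a < b$) can be supplied with the standard factorization of a difference of $n$-th powers; this is a sketch of the argument, not the original poster's proof:

```latex
% For 0 <= a < b, factor the difference of n-th powers:
%   b^n - a^n = (b - a)(b^{n-1} + b^{n-2}a + ... + a^{n-1}).
\[
  b^{n} - a^{n} = (b-a)\sum_{k=0}^{n-1} b^{\,n-1-k}\,a^{k}
  \;\ge\; (b-a)\,b^{\,n-1} \;>\; 0,
\]
% since b - a > 0, every summand is nonnegative, and the k = 0 term
% equals b^{n-1} > 0 (because b > a >= 0 forces b > 0).
% Hence a^n < b^n, so f is strictly increasing on [0, infinity).
```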
There is an ongoing thread on MMP-9 at the ALSTDI forum: http://www.als.net/forum/yaf_postst53505_MMP9-why-motor-neurons.aspx (requires registration on the said forum). Z Cavdar, S Ozbal, A Celik, Bu Ergur, E Guneli, C Ural, T Camsari and Ga Guner. The effects of alpha-lipoic acid on MMP-2 and MMP-9 activities in a rat renal ischemia and re-perfusion model. Biotechnic & Histochemistry 89(4):304–14, May 2014. Abstract: Matrix metalloproteinases (MMPs) are enzymes that are responsible for degradation of extracellular matrix (ECM); they are involved in the pathogenesis of ischemia-re-perfusion (I-R) injury. We investigated the possible preventive effect of alpha-lipoic acid (LA) in a renal I-R injury model in rats by assessing its reducing effect on the expression and activation of MMP-2 and MMP-9 induced by I-R. Rats were assigned to four groups: control, sham-operated, I-R (saline, i.p.) and I-R + LA (100 mg/kg, i.p.). After a right nephrectomy, I-R was induced by clamping the left renal pedicle for 1 h, followed by 6 h re-perfusion. In the sham group, a right nephrectomy was performed and left renal pedicles were dissected without clamping and the entire left kidney was excised after 6 h. LA pretreatment was started 30 min prior to induction of ischemia. Injury to tubules was evaluated using light and electron microscopy. The expressions of MMP-2 and MMP-9 were determined by immunohistochemistry and their activities were analyzed by gelatin zymography. Serum creatinine was measured using a quantitative kit based on the Jaffe colorimetric technique. Malondialdehyde (MDA) and glutathione (GSH) were analyzed using high performance liquid chromatography. Tissue inhibitor of metalloproteinase (TIMP)-2 and TIMP-1 were assessed using enzyme-linked immunosorbent assay (ELISA). I-R caused tubular dilatation and brush border loss.
LA decreased both renal dysfunction and abnormal levels of MDA and GSH during I-R. Moreover, LA significantly decreased both MMP-2 and MMP-9 expression and activation during I-R. TIMP-1 and TIMP-2 levels were increased significantly by LA administration. LA modulated increased MMP-2 and MMP-9 activities and decreased TIMP-1 and TIMP-2 levels during renal I-R. M T Kato, A Bolanho, B L Zarella, T Salo, L Tjäderhane and M A R Buzalaf. Sodium fluoride inhibits MMP-2 and MMP-9. Journal of Dental Research 93(1):74–7, January 2014. Abstract: The importance of fluoride (F) in preventing dental caries by favorably interfering in the demineralization-remineralization processes is well-established, but its ability to inhibit matrix metalloproteinases (MMPs), which could also help to prevent dentin caries, has not been investigated. This study assessed the ability of F to inhibit salivary and purified human gelatinases MMPs-2 and -9. Saliva was collected from 10 healthy individuals. Pooled saliva was centrifuged, and supernatants were incubated for 1 hr at 37°C and subjected to zymography. Sodium fluoride (50-275 ppm F) was added to the incubation buffer. The reversibility of the inhibition of MMPs-2 and -9 by NaF was tested by the addition of NaF (250-5,000 ppm F) to the incubation buffer, after which an additional incubation was performed in the absence of F. F decreased the activities of pro- and active forms of salivary and purified human MMPs in a dose-response manner. Purified gelatinases were completely inhibited by 200 ppm F (IC50 = 100 and 75 ppm F for MMPs-2 and -9, respectively), and salivary MMP-9 by 275 ppm F (IC50 = 200 ppm F). Inhibition was partially reversible at 250-1,500 ppm F, but was irreversible at 5,000 ppm F. This is the first study to describe the ability of NaF to inhibit MMPs completely. Artem Kaplan, Krista J Spiller, Christopher Towne, Kevin C Kanning, Ginn T Choe, Adam Geber, Turgay Akay, Patrick Aebischer and Christopher E Henderson.
Neuronal matrix metalloproteinase-9 is a determinant of selective neurodegeneration. Neuron 81(2):333–48, January 2014.
Abstract: Selective neuronal loss is the hallmark of neurodegenerative diseases. In patients with amyotrophic lateral sclerosis (ALS), most motor neurons die but those innervating extraocular, pelvic sphincter, and slow limb muscles exhibit selective resistance. We identified 18 genes that show >10-fold differential expression between resistant and vulnerable motor neurons. One of these, matrix metalloproteinase-9 (MMP-9), is expressed only by fast motor neurons, which are selectively vulnerable. In ALS model mice expressing mutant superoxide dismutase (SOD1), reduction of MMP-9 function using gene ablation, viral gene therapy, or pharmacological inhibition significantly delayed muscle denervation. In the presence of mutant SOD1, MMP-9 expressed by fast motor neurons themselves enhances activation of ER stress and is sufficient to trigger axonal die-back. These findings define MMP-9 as a candidate therapeutic target for ALS. The molecular basis of neuronal diversity thus provides significant insights into mechanisms of selective vulnerability to neurodegeneration.

Mayank Chaturvedi and Leszek Kaczmarek. Mmp-9 inhibition: a therapeutic strategy in ischemic stroke. Molecular neurobiology 49(1):563–73, 2014.
Abstract: Ischemic stroke is a leading cause of disability worldwide. In cerebral ischemia there is an enhanced expression of matrix metalloproteinase-9 (MMP-9), which has been associated with various complications including excitotoxicity, neuronal damage, apoptosis, blood-brain barrier (BBB) opening leading to cerebral edema, and hemorrhagic transformation.
Moreover, the tissue plasminogen activator (tPA), which is the only US-FDA approved treatment of ischemic stroke, has a brief 3 to 4 h time window, and it has been proposed that the detrimental effects of tPA beyond 3 h since the onset of stroke are derived from its ability to activate MMP-9, which in turn contributes to the breakdown of the BBB. Therefore, the available literature suggests that MMP-9 inhibition can be of therapeutic importance in ischemic stroke. Hence, combination therapies of an MMP-9 inhibitor along with tPA can be beneficial in ischemic stroke. In this review we discuss the current status of various strategies which have shown neuroprotection and extension of the thrombolytic window by directly or indirectly inhibiting MMP-9 activity. In the introductory part of the review, we briefly provide an overview of ischemic stroke, commonly used models of ischemic stroke and the role of MMP-9 in ischemia. In the next part, the literature is organized as various approaches which have shown neuroprotective effects through a direct or indirect decrease in MMP-9 activity, namely: biotherapeutics involving MMP-9 gene inhibition using viral vectors; endogenous inhibitors of MMP-9; repurposing of old drugs such as minocycline; new chemical entities like DP-b99; and finally other approaches like therapeutic hypothermia.

Deep Sankar Rudra, Uttam Pal, Nakul Chandra Maiti, Russel J Reiter and Snehasikta Swarnakar. Melatonin inhibits matrix metalloproteinase-9 activity by binding to its active site. Journal of pineal research 54(4):398–405, May 2013.
Abstract: The zinc-dependent matrix metalloproteinases (MMPs) are key enzymes associated with extracellular matrix (ECM) remodeling; they play critical roles under both physiological and pathological conditions. MMP-9 activity is linked to many pathological processes, including rheumatoid arthritis, atherosclerosis, gastric ulcer, tumor growth, and cancer metastasis.
Specific inhibition of MMP-9 activity may be a promising target for therapy for diseases characterized by dysregulated ECM turnover. Potent MMP-9 inhibitors including an indole scaffold were recently reported in an X-ray crystallographic study. Herein, we addressed whether melatonin, a secretory product of the pineal gland, has an inhibitory effect on MMP-9 function. Gelatin zymographic analysis showed a significant reduction in pro- and active MMP-9 activity in vitro in a dose- and time-dependent manner. In addition, a human gastric adenocarcinoma cell line (AGS) exhibited a reduced (~50%) MMP-9 expression when incubated with melatonin, supporting an inhibitory effect of melatonin on MMP-9. Atomic-level interaction between melatonin and MMP-9 was probed with computational chemistry tools. Melatonin docked into the active site cleft of MMP-9 and interacted with key catalytic site residues, including the three histidines that form the coordination complex with the catalytic zinc as well as proline 421 and alanine 191. We hypothesize that under physiological conditions, tight binding of melatonin in the active site might be involved in reducing the catalytic activity of MMP-9. This finding could provide a novel approach to physical docking of biomolecules to the catalytic site of MMPs, which inhibits this protease, to arrest MMP-9-mediated inflammatory signals.

Delphine Stephan, Oualid Sbai, Jing Wen, Pierre-Olivier Couraud, Chaim Putterman, Michel Khrestchatisky and Sophie Desplat-Jégo. TWEAK/Fn14 pathway modulates properties of a human microvascular endothelial cell model of blood brain barrier. Journal of neuroinflammation 10:9, January 2013.
Abstract: BACKGROUND: The TNF ligand family member TWEAK exists as membrane and soluble forms and is involved in the regulation of various human inflammatory pathologies, through binding to its main receptor, Fn14.
We have shown that the soluble form of TWEAK has a pro-neuroinflammatory effect in an animal model of multiple sclerosis, and we further demonstrated that blocking TWEAK activity during the recruitment phase of immune cells across the blood brain barrier (BBB) was protective in this model. It is now well established that endothelial cells in the periphery and astrocytes in the central nervous system (CNS) are targets of TWEAK. Moreover, it has been shown by others that, when injected into mice brains, TWEAK disrupts the architecture of the BBB and induces expression of matrix metalloproteinase-9 (MMP-9) in the brain. Nevertheless, the mechanisms involved in such conditions are complex and remain to be explored, especially because there is a lack of data concerning the TWEAK/Fn14 pathway in microvascular cerebral endothelial cells. METHODS: In this study, we used human cerebral microvascular endothelial cell (HCMEC) cultures as an in vitro model of the BBB to study the effects of soluble TWEAK on the properties and the integrity of the BBB model. RESULTS: We showed that soluble TWEAK induces an inflammatory profile on HCMECs, especially by promoting secretion of cytokines, by modulating production and activation of MMP-9, and by expression of cell adhesion molecules. We also demonstrated that these effects of TWEAK are associated with increased permeability of the HCMEC monolayer in the in vitro BBB model. CONCLUSIONS: Taken together, the data suggest a role for soluble TWEAK in BBB inflammation and in the promotion of BBB interactions with immune cells. These results support the contention that the TWEAK/Fn14 pathway could contribute at least to the endothelial steps of neuroinflammation.

Xianghua He, Lifang Zhang, Xiaoli Yao, Jing Hu, Lihua Yu, Hua Jia, Ran An, Zhuolin Liu and Yanming Xu. Association studies of MMP-9 in Parkinson's disease and amyotrophic lateral sclerosis. PloS one 8(9):e73777, 2013.
M Hemshekhar, M Sebastin Santhosh, K Sunitha, R M Thushara, K Kemparaju, K S Rangappa and K S Girish. A dietary colorant crocin mitigates arthritis and associated secondary complications by modulating cartilage deteriorating enzymes, inflammatory mediators and antioxidant status. Biochimie 94(12):2723–33, December 2012.
Abstract: Articular cartilage degeneration and inflammation are the hallmark of progressive arthritis and the leading cause of disability in 10-15% of middle-aged individuals across the world. Cartilage and synovium are mainly degraded in either enzymatic or non-enzymatic ways. Matrix metalloproteinases (MMPs), hyaluronidases (HAases) and aggrecanases are the enzymatic mediators, with inflammatory cytokines and reactive oxygen species being the non-enzymatic mediators. In addition, MMP- and HAase-generated end-products act as inflammation inducers via the CD44 and TLR-4 receptor-mediated NF-κB pathway. Although several drugs have been used to treat arthritis, numerous reports describe the side effects of these drugs, which may turn fatal. On this account several medicinal plants and their isolated molecules have been involved in modern medicine strategies to fight against arthritis. In view of this, the present study investigated the antiarthritic potential of Crocin, a dietary colorant carotenoid isolated from the stigma of Crocus sativus. Crocin effectively neutralized the augmented serum levels of enzymatic (MMP-13, MMP-3, MMP-9 and HAases) and non-enzymatic (TNF-α, IL-1β, NF-κB, IL-6, COX-2, PGE(2) and ROS) inflammatory mediators. Further, Crocin re-established the arthritis-altered antioxidant status of the system (GSH, SOD, CAT and GST). It also protected against bone resorption by inhibiting the elevated levels of bone joint exoglycosidases, cathepsin-D and tartrate resistant acid phosphatases.
Taken together, Crocin revitalized the arthritis-induced cartilage and bone deterioration along with inflammation and oxidative damage, which could be accredited to its antioxidant nature. Thus, Crocin could be an effective antiarthritic agent which can equally nullify the arthritis-associated secondary complications.

Hongli Li, Hao Xu and Baogui Sun. Lipopolysaccharide regulates MMP-9 expression through TLR4/NF-κB signaling in human arterial smooth muscle cells. Molecular medicine reports 6(4):774–8, October 2012.
Abstract: Matrix metalloproteinases (MMPs) are critical to vascular smooth muscle cell migration in vivo. The dysregulation of MMPs is involved in the pathogenesis of abnormal arterial remodeling, aneurysm formation and atherosclerotic plaque instability. It has been confirmed that lipopolysaccharides (LPS) constitute a strong risk factor for the development of atherosclerosis. In this study, we aimed to determine a potential mechanism of LPS action on MMP-9 expression in human arterial smooth muscle cells (HASMCs). RT-PCR analysis was used to detect MMP-9 mRNA expression and western blot analysis was performed to examine MMP-9 protein expression. An electrophoretic mobility shift assay was also employed to determine NF-κB binding activity. Results showed that LPS induced MMP-9 mRNA and protein expression in HASMCs in a TLR4-dependent manner. Notably, upon blocking NF-κB binding with pyrrolidine dithiocarbamate, it was demonstrated that the induction of MMP-9 by LPS occurs through TLR4/NF-κB pathways. It was concluded that LPS induced MMP-9 expression through the TLR4/NF-κB pathway. Thus, the TLR4/NF-κB pathway may be involved in the pathogenesis of atherosclerosis.

Rich Everson and Jason S Hauptman.
From the bench to the bedside: Brain-machine interfaces in spinal cord injury, the blood-brain barrier, and neurodegeneration, using the hippocampus to improve cognition, metabolism, and epilepsy, and understanding axonal death. Surgical neurology international 3(1):108, January 2012.

Ross Vlahos, Peter A B Wark, Gary P Anderson and Steven Bozinovski. Glucocorticosteroids differentially regulate MMP-9 and neutrophil elastase in COPD. PloS one 7(3):e33277, January 2012.
Abstract: BACKGROUND: Chronic Obstructive Pulmonary Disease (COPD) is currently the fifth leading cause of death worldwide. Neutrophilic inflammation is prominent, worsened during infective exacerbations and is refractory to glucocorticosteroids (GCs). Deregulated neutrophilic inflammation can cause excessive matrix degradation through proteinase release. Gelatinase and azurophilic granules within neutrophils are a major source of matrix metalloproteinase (MMP)-9 and neutrophil elastase (NE), respectively, which are elevated in COPD. METHODS: Secreted MMP-9 and NE activity in BALF were stratified according to GOLD severity stages. The regulation of secreted NE and MMP-9 in isolated blood neutrophils was investigated using a pharmacological approach. In vivo release of MMP-9 and NE in mice exposed to cigarette smoke (CS) and/or the TLR agonist lipopolysaccharide (LPS) in the presence of dexamethasone (Dex) was investigated. RESULTS: Neutrophil activation as assessed by NE release was increased in severe COPD (36-fold, GOLD II vs. IV). MMP-9 levels (8-fold) and activity (21-fold) were also elevated in severe COPD, and this activity was strongly associated with BALF neutrophils (r = 0.92, p<0.001), but not macrophages (r = 0.48, p = 0.13). In vitro, release of NE and MMP-9 from fMLP-stimulated blood neutrophils was insensitive to Dex and attenuated by the PI3K inhibitor wortmannin. In vivo, GC-resistant neutrophil activation (NE release) was only seen in mice exposed to CS and LPS.
In addition, GC-refractory MMP-9 expression was only associated with neutrophil activation. CONCLUSIONS: As neutrophils become activated with increasing COPD severity, they become an important source of NE and MMP-9 activity, secreted independently of TIMPs. Furthermore, as NE and MMP-9 release was resistant to GC, targeting of the PI3K pathway may offer an alternative pathway to combating this proteinase imbalance in severe COPD.

Hui-Hsin Wang, Hsi-Lung Hsieh and Chuen-Mao Yang. Nitric oxide production by endothelin-1 enhances astrocytic migration via the tyrosine nitration of matrix metalloproteinase-9. Journal of cellular physiology 226(9):2244–56, September 2011.
Abstract: The deleterious effects of endothelin-1 (ET-1) in the central nervous system (CNS) include disturbance of water homeostasis and blood-brain barrier (BBB) integrity. In the CNS, ischemic injury elicits ET-1 release from astrocytes, acting through G-protein coupled ET receptors. These considerations raise the question of whether ET-1 influences the cellular functions of astrocytes, the major cell type that provides structural and functional support for neurons. Uncontrolled nitric oxide (NO) production has been implicated in sterile brain insults, neuroinflammation, and neurodegenerative diseases, which involve astrocyte activation and neuronal death. However, the detailed mechanisms of ET-1 action related to NO release in rat brain astrocytes (RBA-1) remain unknown. In this study, we demonstrate that exposure of astrocytes to ET-1 results in inducible nitric oxide synthase (iNOS) up-regulation, NO production, and matrix metalloproteinase-9 (MMP-9) activation in astrocytes. The data obtained with Western blot, reverse transcription-PCR (RT-PCR), and immunofluorescent staining analyses showed that ET-1-induced iNOS expression and NO production were mediated through an ET(B)-dependent transcriptional activation.
Engagement of G(i/o)- and G(q)-coupled ET(B) receptors by ET-1 led to activation of c-Src-dependent phosphoinositide 3-kinase (PI3K)/Akt and p42/p44 mitogen-activated protein kinase (MAPK), and then activated the transcription factor nuclear factor-κB (NF-κB). The activated NF-κB was translocated into the nucleus and thereby promoted iNOS gene transcription. Ultimately, NO production stimulated by ET-1 enhanced the migration of astrocytes through the tyrosine nitration of MMP-9. Taken together, these results suggest that in astrocytes, activation of NF-κB by ET(B)-dependent c-Src, PI3K/Akt, and p42/p44 MAPK signaling is necessary for ET-1-induced iNOS gene up-regulation.

Yuki Morizane, Aristomenis Thanos, Kimio Takeuchi, Yusuke Murakami, Maki Kayama, George Trichonas, Joan Miller, Marc Foretz, Benoit Viollet and Demetrios G Vavvas. AMP-activated protein kinase suppresses matrix metalloproteinase-9 expression in mouse embryonic fibroblasts. The Journal of biological chemistry 286(18):16030–8, May 2011.
Abstract: Matrix metalloproteinase-9 (MMP-9) plays a critical role in tissue remodeling under both physiological and pathological conditions. Although MMP-9 expression is low in most cells and is tightly controlled, the mechanism of its regulation is poorly understood. We utilized mouse embryonic fibroblasts (MEFs) that were nullizygous for the catalytic α subunit of AMP-activated protein kinase (AMPK), which is a key regulator of energy homeostasis, to identify AMPK as a suppressor of MMP-9 expression. Total AMPKα deletion significantly elevated MMP-9 expression compared with wild-type (WT) MEFs, whereas single knock-out of the isoforms AMPKα1 and AMPKα2 caused minimal change in the level of MMP-9 expression. The suppressive role of AMPK on MMP-9 expression was mediated through both its activity and its presence.
The AMPK activators 5-amino-4-imidazole carboxamide riboside and A769662 suppressed MMP-9 expression in WT MEFs, and AMPK inhibition by the overexpression of dominant negative (DN) AMPKα elevated MMP-9 expression. However, in AMPKα(-/-) MEFs transduced with DN AMPKα, MMP-9 expression was suppressed. AMPKα(-/-) MEFs showed increased phosphorylation of IκBα, expression of IκBα mRNA, nuclear localization of nuclear factor-κB (NF-κB), and DNA-binding activity of NF-κB compared with WT. Consistently, the selective NF-κB inhibitors BMS345541 and SM7368 decreased MMP-9 expression in AMPKα(-/-) MEFs. Overall, our results suggest that both AMPKα isoforms suppress MMP-9 expression and that both the activity and presence of AMPKα contribute to its function as a regulator of MMP-9 expression by inhibiting the NF-κB pathway.

Kazunori Miyazaki, Yasuyuki Ohta, Makiko Nagai, Nobutoshi Morimoto, Tomoko Kurata, Yasushi Takehisa, Yoshio Ikeda, Tohru Matsuura and Koji Abe. Disruption of neurovascular unit prior to motor neuron degeneration in amyotrophic lateral sclerosis. Journal of neuroscience research 89(5):718–28, May 2011.
Abstract: Recent reports suggest that functional or structural defects of vascular components are implicated in amyotrophic lateral sclerosis (ALS) pathology. In the present study, we examined a possible change of the neurovascular unit consisting of endothelium (PCAM-1), tight junction (occludin), and basement membrane (collagen IV) in relation to a possible activation of MMP-9 in ALS patients and ALS model mice. We found that the damage in the neurovascular unit was more prominent on the outer side and preferentially in the anterior horn of ALS model mice. This damage occurred prior to motor neuron degeneration and was accompanied by MMP-9 up-regulation.
We also found dissociation between the PCAM-1-positive endothelium and GFAP-positive astrocyte foot processes in both humans and the animal model of ALS. The present results indicate that perivascular damage precedes the sequential changes of the disease, which are held in common between humans and the animal model of ALS, suggesting that the neurovascular unit is a potential target for therapeutic intervention in ALS.

Hongli Li, Hao Xu and Shaowen Liu. Toll-like receptor 4 induces expression of matrix metalloproteinase-9 in human aortic smooth muscle cells. Molecular biology reports 38(2):1419–23, February 2011.
Abstract: Recent evidence supports a role of Toll-like receptor (TLR) signaling in the development of atherosclerotic lesions. It was confirmed that the presence of functional TLR4 promotes a proinflammatory phenotype and proliferation of vascular smooth muscle cells (VSMCs). Here we tested whether designed TLR4 small interfering RNAs (TLR4 siRNAs) are capable of inducing TLR4 deficiency and simultaneously regulating the expression of matrix metalloproteinase-9 (MMP-9) in human aortic smooth muscle cells (HASMCs). Human aortic smooth muscle cells were obtained from Cascade Biologics (Portland, USA). The siRNAs used in this study were chemically synthesized by Ambion and diluted in RNase-free water at a concentration of 2 μg/ml. The TLR4 siRNAs were complexed with Lipofectamine(TM) 2000 in transfection buffer. After 30 min incubation at room temperature, the complexes were added to the cells. Subsequent to 5 h incubation, cells were treated with 10 ng/ml LPS for 24 h. RT-PCR analysis was used to detect mRNA expression of GAPDH, TLR4 and MMP-9; Western blot analysis was used to examine GAPDH, TLR4 and MMP-9 protein expression. It was shown that all three designed TLR4 siRNAs inhibited the expression of TLR4 in HASMCs as compared to a nontargeting siRNA. Notably, TLR4 siRNA-1 exhibited the strongest inhibitory effect.
Transfection of HASMCs with TLR4 siRNA-1 resulted in down-regulation of LPS-induced expression of MMP-9. It was concluded that TLR4 siRNA-transfected HASMCs were capable of regulating the expression of MMP-9, providing support for the rational design of siRNAs as atherosclerotic therapy.

Denise E Lackey and Kathleen A Hoag. Vitamin A upregulates matrix metalloproteinase-9 activity by murine myeloid dendritic cells through a nonclassical transcriptional mechanism. The Journal of nutrition 140(8):1502–8, August 2010.
Abstract: Myeloid dendritic cells (DC) are specialized antigen-presenting immune cells. Upon activation in peripheral tissues, DC migrate to lymph nodes to activate T lymphocytes. Matrix metalloproteinase (MMP)-9 is a gelatinase essential for DC migration. We have previously shown that all-trans retinoic acid (atRA), a bioactive metabolite of vitamin A, significantly augmented DC MMP-9 mRNA and protein production. We investigated the mechanisms by which atRA increased MMP-9 activity in vitro. Mouse myeloid DC cultured with atRA demonstrated increased gelatinase activity compared with cells cultured with a retinoic acid receptor (RAR)-alpha antagonist. Adding an MMP-9 inhibitor significantly blocked DC gelatinase activity and increased adherence of DC in a dose-dependent manner. AtRA-induced Mmp-9 gene expression in DC was blocked by transcriptional inhibition. Because the Mmp-9 promoter contains no canonical retinoic acid response element (RARE), we performed additional studies to determine how atRA regulated DC Mmp-9 transcription. Electrophoretic mobility shift assays for the consensus Sp1, activating protein-1, and nuclear factor-kappaB binding sites located in the Mmp-9 promoter did not indicate greater nuclear protein binding in response to atRA. Chromatin immunoprecipitation assays indicated that RARalpha and histone acetyltransferase p300 recruitment to, and acetylation of, histone H3 at the Mmp-9 promoter was greater after atRA treatment.
These data suggest that atRA regulated DC adhesion in vitro partly through MMP-9 gelatinase activity. Mmp-9 expression was enhanced through a transcriptional mechanism involving greater RARalpha promoter binding, recruitment of p300, and subsequent histone H3 acetylation, despite the absence of a consensus RARE.

Lubin Fang, Marko Teuchert, Friederike Huber-Abel, Dagmar Schattauer, Corinna Hendrich, Johannes Dorst, Heinz Zettlmeissel, Meinhard Wlaschek, Karin Scharffetter-Kochanek, Tamara Kapfer, Hayrettin Tumani, Albert C Ludolph and Johannes Brettschneider. MMP-2 and MMP-9 are elevated in spinal cord and skin in a mouse model of ALS. Journal of the neurological sciences 294(1-2):51–6, July 2010.

K Bahar-Shany, A Ravid and R Koren. Upregulation of MMP-9 production by TNFalpha in keratinocytes and its attenuation by vitamin D. Journal of cellular physiology 222(3):729–37, March 2010.
Abstract: MMP-9, a member of the matrix metalloproteinase family that degrades collagen IV and processes chemokines and cytokines, participates in epidermal remodeling in response to stress and injury. Limited activity of MMP-9 is essential, while excessive activity is deleterious to the healing process. Tumor necrosis factor (TNFalpha), a key mediator of cutaneous inflammation, is a powerful inducer of MMP-9. Calcitriol, the hormonally active vitamin D metabolite, and its analogs are known to attenuate epidermal inflammation. We aimed to examine the modulation of MMP-9 by calcitriol in TNFalpha-treated keratinocytes. The immortalized HaCaT keratinocytes were treated with TNFalpha in the absence of exogenous growth factors or active ingredients. MMP-9 production was quantified by gelatin zymography and real-time RT-PCR. Activation of signaling cascades was assessed by western blot analysis and the DNA-binding activity of transcription factors was determined by EMSA.
Exposure to TNFalpha markedly increased the protein and mRNA levels of MMP-9, while pretreatment with calcitriol dose-dependently reduced this effect. Employing specific inhibitors, we established that the induction of MMP-9 by TNFalpha was dependent on the activity of the epidermal growth factor receptor, c-Jun N-terminal kinase (JNK), NFkappaB and extracellular signal-regulated kinase-1/2. The effect of calcitriol was associated with inhibition of JNK activation and reduction of the DNA-binding activities of the transcription factors activator protein-1 (AP-1) and NFkappaB following treatment with TNFalpha. By down-regulating MMP-9 levels, active vitamin D derivatives may attenuate deleterious effects due to excessive TNFalpha-induced proteolytic activity associated with cutaneous inflammation.

Oualid Sbai, Adlane Ould-Yahoui, Lotfi Ferhat, Yatma Gueye, Anne Bernard, Eliane Charrat, Ali Mehanna, Jean-Jacques Risso, Jean-Paul Chauvin, Emmanuel Fenouillet, Santiago Rivera and Michel Khrestchatisky. Differential vesicular distribution and trafficking of MMP-2, MMP-9, and their inhibitors in astrocytes. Glia 58(3):344–66, February 2010.
Abstract: Astrocytes play an active role in the central nervous system and are critically involved in astrogliosis, a homotypic response of these cells to disease, injury, and associated neuroinflammation. Among the numerous molecules involved in these processes are the matrix metalloproteinases (MMPs), a family of zinc-dependent endopeptidases, secreted or membrane-bound, that regulate by proteolytic cleavage the extracellular matrix, cytokines, chemokines, cell adhesion molecules, and plasma membrane receptors. MMP activity is tightly regulated by the tissue inhibitors of MMPs (TIMPs), a family of secreted multifunctional proteins. Astrogliosis in vivo and astrocyte reactivity induced in vitro by proinflammatory cues are associated with modulation of the expression and/or activity of members of the MMP/TIMP system.
However, nothing is known concerning the intracellular distribution and secretory pathways of MMPs and TIMPs in astrocytes. Using a combination of cell biology, biochemistry, and fluorescence and electron microscopy approaches, we investigated in cultured reactive astrocytes the intracellular distribution, transport, and secretion of MMP-2, MMP-9, TIMP-1, and TIMP-2. MMP-2 and MMP-9 demonstrate nuclear localization and differential intracellular vesicular distribution relative to the myosin V and kinesin molecular motors and the LAMP-2-labeled lysosomal compartment, and we show vesicular secretion for MMP-2, MMP-9, and their inhibitors. Our results suggest that these proteinases and their inhibitors use different pathways for trafficking and secretion for distinct astrocytic functions.

Cynthia P W Soon, Peter J Crouch, Bradley J Turner, Catriona A McLean, Katrina M Laughton, Julie D Atkin, Colin L Masters, Anthony R White and Qiao-Xin Li. Serum matrix metalloproteinase-9 activity is dysregulated with disease progression in the mutant SOD1 transgenic mice. Neuromuscular disorders : NMD 20(4):260–6, 2010.
Abstract: Amyotrophic lateral sclerosis (ALS) is an adult-onset fatal neurodegenerative disorder characterized by progressive deterioration of motor neurons in the spinal cord, brainstem, and cerebral cortex. Matrix metalloproteinase-9 (MMP-9) has been proposed as a biomarker for ALS due to a potential pathological role in the disease. However, despite numerous studies, it is still unclear whether there is a direct correlation between MMP-9 expression in serum and progression of the disease. Therefore, we used a TgSOD1(G93A) mouse with a low transgene copy number. This model shows slow disease progression analogous to human ALS and provides a useful model to study biomarker expression at different stages of disease.
Using zymography, we found that serum MMP-9 activity was significantly elevated in animals showing early signs of disease when compared to the younger, pre-symptomatic animals. This was followed by a decrease in MMP-9 activity in TgSOD1(G93A) mice with end-stage disease. These results were confirmed in serum of a high copy number strain of TgSOD1(G93A) mice with rapid progression. MMP-9 expression was changed accordingly in spinal motor neurons, glia and neuropil, suggesting a spinal cord contribution to blood MMP-9 activity. Serum MMP-2 activity followed a similar profile to MMP-9 in these two models. These data indicate that circulating MMP-9 is altered throughout the course of disease progression in mice. Further studies in human ALS may validate the suitability of serum MMP-9 activity as a biomarker for early stage disease.

Michel Steenport, Faisal K M Khan, Baoheng Du, Sarah E Barnhard, Andrew J Dannenberg and Domenick J Falcone. Matrix metalloproteinase (MMP)-1 and MMP-3 induce macrophage MMP-9: evidence for the role of TNF-alpha and cyclooxygenase-2. Journal of immunology (Baltimore, Md. : 1950) 183(12):8119–27, December 2009.
Abstract: Matrix metalloproteinase (MMP)-9 (gelatinase B) participates in a variety of diverse physiologic and pathologic processes. We recently characterized a cyclooxygenase-2 (COX-2) → PGE(2) → EP4 receptor axis that regulates macrophage MMP-9 expression. In the present studies, we determined whether MMPs, commonly found in inflamed and neoplastic tissues, regulate this prostanoid-EP receptor axis leading to enhanced MMP-9 expression. Results demonstrate that exposure of murine peritoneal macrophages and RAW264.7 macrophages to MMP-1 (collagenase-1) or MMP-3 (stromelysin-1) leads to a marked increase in COX-2 expression, PGE(2) secretion, and subsequent induction of MMP-9 expression.
Proteinase-induced MMP-9 expression was blocked in macrophages preincubated with the selective COX-2 inhibitor celecoxib or transfected with COX-2 small interfering RNA (siRNA). Likewise, proteinase-induced MMP-9 was blocked in macrophages preincubated with the EP4 antagonist ONO-AE3-208 or transfected with EP4 siRNA. Exposure of macrophages to MMP-1 and MMP-3 triggered the rapid release of TNF-alpha, which was blocked by MMP inhibitors. Furthermore, both COX-2 and MMP-9 expression were inhibited in macrophages preincubated with anti-TNF-alpha IgG or transfected with TNF-alpha siRNA. Thus, proteinase-induced MMP-9 expression by macrophages is dependent on the release of TNF-alpha, induction of COX-2 expression, and PGE(2) engagement of EP4. The ability of MMP-1 and MMP-3 to regulate macrophage secretion of PGE(2) and expression of MMP-9 defines a nexus between MMPs and prostanoids that is likely to play a role in the pathogenesis of chronic inflammatory diseases and cancer. These data also suggest that this nexus is targetable utilizing anti-TNF-alpha therapies and/or selective EP4 antagonists.

Jan H N Lindeman, Hazem Abdul-Hussien, Hajo J Bockel, Ron Wolterbeek and Robert Kleemann. Clinical trial of doxycycline for matrix metalloproteinase-9 inhibition in patients with an abdominal aneurysm: doxycycline selectively depletes aortic wall neutrophils and cytotoxic T cells. Circulation 119(16):2209–16, April 2009.
Abstract: BACKGROUND: Doxycycline has been shown to effectively inhibit aneurysm formation in animal models of abdominal aortic aneurysm. Although this effect is ascribed to matrix metalloproteinase-9 inhibition, such an effect is unclear in human studies. We reevaluated the effect of doxycycline on aortic wall protease content in a clinical trial and found that doxycycline selectively reduces neutrophil-derived proteases. We thus hypothesized that doxycycline acts through an effect on vascular inflammation.
METHODS AND RESULTS: Sixty patients scheduled for elective open aneurysmal repair were randomly assigned to 2 weeks of low-, medium-, or high-dose doxycycline (50, 100, or 300 mg/d, respectively) or no medication (control group). Aortic wall samples were collected at the time of operation, and the effect of doxycycline treatment on vascular inflammation was evaluated. Independently of its dose, doxycycline treatment resulted in a profound but selective suppression of aortic wall inflammation as reflected by a selective 72% reduction of the aortic wall neutrophils and a 95% reduction of the aortic wall cytotoxic T-cell content (median values; P<0.00003). Evaluation of major inflammatory pathways suggested that doxycycline treatment specifically quenched AP-1 and C/EBP proinflammatory transcription pathways (P<0.0158, NS) and reduced vascular interleukin-6 (P<0.00115), interleukin-8 (P<0.00246, NS), interleukin-13 (P<0.0184, NS), and granulocyte colony-stimulating factor (P<0.031, NS) protein levels. Doxycycline was well tolerated; there were no adverse effects. CONCLUSIONS: A brief period of doxycycline treatment has a profound but selective effect on vascular inflammation and reduces aortic wall neutrophil and cytotoxic T-cell content. Results of this study are relevant for pharmaceutical stabilization of the abdominal aneurysm and possibly for other inflammatory conditions that involve neutrophils and/or cytotoxic T cells. Qin Hu, Chunhua Chen, Junhao Yan, Xiaomei Yang, Xianzhong Shi, Jing Zhao, Jiliang Lei, Lei Yang, Ke Wang, Lin Chen, Hongyun Huang, Jingyan Han, John H Zhang and Changman Zhou. Therapeutic application of gene silencing MMP-9 in a middle cerebral artery occlusion-induced focal ischemia rat model.. Experimental neurology 216(1):35–46, 2009. 
Abstract RNA interference appears to have a great potential not only as an in vitro target validation, but also as a novel therapeutic strategy based on the highly specific and efficient silencing of a target gene. We hypothesize that MMP-9 siRNA can be effective as an MMP-9 protein inhibitor in a rat focal ischemia model. Male Sprague-Dawley rats (156) were subjected to 2 h of middle cerebral artery occlusion (by using the suture insertion method) followed by 24 h of reperfusion. In the treatment group, 5 microl MMP-9 siRNA was administered by intracerebroventricular injection within 60 min after 2 h of focal ischemia. The siRNA transfection was demonstrated by fluorescence-conjugated siRNA. Treatment with MMP-9 siRNA produced a significant reduction in the cerebral infarction volume, brain water content, mortality rate and accompanying neurological deficits. The following were recorded: Evans blue and IgG extravasation were reduced; the expression of MMP-9 mRNA and protein were significantly silenced; and immunohistochemistry and Western blot analysis revealed that the expression of MMP-9 and VEGF were reduced while occludin and collagen-IV were up-regulated in brain tissues. Our findings provide evidence that a liposomal formulation of siRNA might be used in vivo to silence the MMP-9 gene and could potentially serve as an important therapeutic alternative in patients with cerebral ischemia. Lubin Fang, Friederike Huber-Abel, Marko Teuchert, Corinna Hendrich, Johannes Dorst, Dagmar Schattauer, Heinz Zettlmeissel, Meinhard Wlaschek, Karin Scharffetter-Kochanek, Hayrettin Tumani, Albert C Ludolph and Johannes Brettschneider. Linking neuron and skin: matrix metalloproteinases in amyotrophic lateral sclerosis (ALS).. Journal of the neurological sciences 285(1-2):62–6, 2009. Neetu Tyagi, William Gillespie, Jonathan C Vacek, Utpal Sen, Suresh C Tyagi and David Lominadze. Activation of GABA-A receptor ameliorates homocysteine-induced MMP-9 activation by ERK pathway..
Journal of cellular physiology 220(1):257–66, 2009. Abstract Hyperhomocysteinemia (HHcy) is a risk factor for neuroinflammatory and neurodegenerative diseases. Homocysteine (Hcy) induces redox stress, in part, by activating matrix metalloproteinase-9 (MMP-9), which degrades the matrix and leads to blood-brain barrier dysfunction. Hcy competitively binds to gamma-aminobutyric acid (GABA) receptors, which are excitatory neurotransmitter receptors. However, the role of GABA-A receptor in Hcy-induced cerebrovascular remodeling is not clear. We hypothesized that Hcy causes cerebrovascular remodeling by increasing redox stress and MMP-9 activity via the extracellular signal-regulated kinase (ERK) signaling pathway and by inhibition of GABA-A receptors, thus behaving as an inhibitory neurotransmitter. Hcy-induced reactive oxygen species production was detected using the fluorescent probe, 2'-7'-dichlorodihydrofluorescein diacetate. Hcy increased nicotinamide adenine dinucleotide phosphate-oxidase-4 concomitantly suppressing thioredoxin. Hcy caused activation of MMP-9, measured by gelatin zymography. The GABA-A receptor agonist, muscimol, ameliorated the Hcy-mediated MMP-9 activation. In parallel, Hcy caused phosphorylation of ERK and selectively decreased levels of tissue inhibitors of metalloproteinase-4 (TIMP-4). Treatment of the endothelial cells with muscimol restored the levels of TIMP-4 to those of the control group. Hcy induced expression of iNOS and decreased eNOS expression, which led to decreased NO bioavailability. Furthermore, muscimol attenuated Hcy-induced MMP-9 via the ERK signaling pathway. These results suggest that Hcy competes with GABA-A receptors, inducing the oxidative stress transduction pathway and leading to ERK activation. Karim Hnia, Jérôme Gayraud, Gérald Hugon, Michèle Ramonatxo, Sabine De La Porte, Stefan Matecki and Dominique Mornet.
L-arginine decreases inflammation and modulates the nuclear factor-kappaB/matrix metalloproteinase cascade in mdx muscle fibers.. The American journal of pathology 172(6):1509–19, 2008. Abstract Duchenne muscular dystrophy (DMD) is a lethal, X-linked disorder associated with dystrophin deficiency that results in chronic inflammation, sarcolemma damage, and severe skeletal muscle degeneration. Recently, the use of L-arginine, the substrate of nitric oxide synthase (nNOS), has been proposed as a pharmacological treatment to attenuate the dystrophic pattern of DMD. However, little is known about signaling events that occur in dystrophic muscle with L-arginine treatment. Considering the implication of inflammation in dystrophic processes, we asked whether L-arginine inhibits inflammatory signaling cascades. We demonstrate that L-arginine decreases inflammation and enhances muscle regeneration in the mdx mouse model. Classic stimulatory signals, such as proinflammatory cytokines interleukin-1beta, interleukin-6, and tumor necrosis factor-alpha, are significantly decreased in mdx mouse muscle, resulting in lower nuclear factor (NF)-kappaB levels and activity. NF-kappaB serves as a pivotal transcription factor with multiple levels of regulation; previous studies have shown perturbation of NF-kappaB signaling in both mdx and DMD muscle. Moreover, L-arginine decreases the activity of metalloproteinase (MMP)-2 and MMP-9, which are transcriptionally activated by NF-kappaB. We show that the inhibitory effect of L-arginine on the NF-kappaB/MMP cascade reduces beta-dystroglycan cleavage and translocates utrophin and nNOS throughout the sarcolemma. Collectively, our results clarify the molecular events by which L-arginine promotes muscle membrane integrity in dystrophic muscle and suggest that NF-kappaB-related signaling cascades could be potential therapeutic targets for DMD management.
Margrit Hollborn, Christina Stathopoulos, Anja Steffen, Peter Wiedemann, Leon Kohen and Andreas Bringmann. Positive feedback regulation between MMP-9 and VEGF in human RPE cells.. Investigative ophthalmology & visual science 48(9):4360–7, September 2007. Abstract PURPOSE: The proteolytic activity of matrix metalloproteinases (MMPs) is involved in pathologic angiogenesis in the eye. However, it is unknown whether MMPs may stimulate the production of the major angiogenic factor, vascular endothelial growth factor (VEGF). The authors investigated whether MMP-2 and MMP-9 alter the expression of VEGF by retinal pigment epithelial (RPE) cells. They also sought to determine the effects of MMPs on cellular proliferation and migration and the effect of triamcinolone acetonide on MMP-9-evoked cellular responses. METHODS: Human RPE cell cultures were stimulated with MMP-2 or MMP-9. The gene expression and secretion of MMP-9 and VEGF were determined by real-time RT-PCR and ELISA, respectively. Cellular proliferation was investigated with a bromodeoxyuridine immunoassay, and chemotaxis was examined with a Boyden chamber assay. RESULTS: Under control conditions, RPE cells in vitro expressed a significantly higher amount of mRNA for MMP-2 than for MMP-9. Chemical hypoxia caused upregulation of the gene expression of both MMPs, whereas VEGF increased the gene expression and secretion of MMP-9. The hypoxic expression of MMP-9 was mediated by autocrine VEGF signaling. Exogenous MMP-9 increased the gene expression and secretion of VEGF, whereas MMP-2 reduced the secretion of VEGF. MMP-2 and MMP-9 did not alter the proliferation but stimulated the migration of RPE cells. Triamcinolone fully inhibited the stimulatory effect of MMP-9 on the expression of VEGF and the VEGF-evoked increase in the expression of MMP-9. However, triamcinolone had no effect on the motogenic effect of MMP-9. CONCLUSIONS: There is a positive feedback regulation between MMP-9 and VEGF in RPE cells. 
The hypoxic expression of MMP-9 may stimulate the production and secretion of VEGF under pathologic conditions. Triamcinolone inhibits the positive feedback regulation between MMP-9 and VEGF under hypoxic conditions through inhibition of the gene expression of MMP-9 and the secretion of VEGF. Mahmoud Kiaei, Khatuna Kipiani, Noel Y Calingasan, Elizabeth Wille, Junyu Chen, Beate Heissig, Shahin Rafii, Stefan Lorenzl and Flint M Beal. Matrix metalloproteinase-9 regulates TNF-alpha and FasL expression in neuronal, glial cells and its absence extends life in a transgenic mouse model of amyotrophic lateral sclerosis.. Experimental neurology 205(1):74–81, 2007. Abstract Whether increased levels of matrix metalloproteinases (MMPs) correspond to a role in the pathogenesis of amyotrophic lateral sclerosis (ALS) needs to be determined and it is actively being pursued. Here we present evidence suggesting that MMP-9 contributes to the motor neuron cell death in ALS. We examined the role of MMP-9 in a mouse model of familial ALS and found that lack of MMP-9 increased survival (31%) in G93A SOD1 mice. Also, MMP-9 deficiency in G93A mice significantly attenuated neuronal loss, and reduced neuronal TNF-alpha and FasL immunoreactivities in the lumbar spinal cord. These findings suggest that MMP-9 is an important player in the pathogenesis of ALS. Our data suggest that the mechanism for MMP-9 neurotoxicity in ALS may be by upregulating neuronal TNF-alpha and FasL expression and activation. This study provides a new mechanism and suggests that MMP inhibitors may offer a new therapeutic strategy for ALS. Susanne Petri, Mahmoud Kiaei, Elizabeth Wille, Noel Y Calingasan and M Flint Beal. Loss of Fas ligand-function improves survival in G93A-transgenic ALS mice.. Journal of the neurological sciences 251(1-2):44–9, December 2006. Abstract ALS is a devastating neurodegenerative disorder for which no effective treatment exists.
The precise molecular mechanisms underlying the selective degeneration of motor neurons are still unknown. A motor neuron specific apoptotic pathway involving Fas and NO has been discovered. Motor neurons from ALS-mice have an increased sensitivity to Fas-induced cell death via this pathway. In this study we therefore crossed G93A-SOD1 overexpressing ALS mice with Fas ligand (FasL) mutant (gld) mice to investigate whether the reduced Fas signaling could have beneficial effects on motor neuron death. G93A-SOD1 mutant mice with a homozygous FasL mutant showed a modest but statistically significant extension of survival, and reduced loss of motor neurons. These results indicate that motor neuron apoptosis triggered by Fas is relevant in ALS pathogenesis. Véronique Masson, Laura Rodriguez Ballina, Carine Munaut, Ben Wielockx, Maud Jost, Catherine Maillard, Silvia Blacher, Khalid Bajou, Takeshi Itoh, Shige Itohara, Zena Werb, Claude Libert, Jean-Michel Foidart and Agnès Noël. Contribution of host MMP-2 and MMP-9 to promote tumor vascularization and invasion of malignant keratinocytes.. FASEB journal : official publication of the Federation of American Societies for Experimental Biology 19(2):234–6, February 2005. Abstract The matrix metalloproteinases (MMPs) play a key role in normal and pathological angiogenesis by mediating extracellular matrix degradation and/or controlling the biological activity of growth factors, chemokines, and/or cytokines. Specific functions of individual MMPs as anti- or proangiogenic mediators remain to be elucidated. In the present study, we assessed the impact of single or combined MMP deficiencies in in vivo and in vitro models of angiogenesis (malignant keratinocyte transplantation and the aortic ring assay, respectively). MMP-9 was predominantly expressed by neutrophils in tumor transplants, whereas MMP-2 and MMP-3 were stromal. 
Neither the single deficiency of MMP-2, MMP-3, or MMP-9, nor the combined absence of MMP-9 and MMP-3, impaired tumor invasion and vascularization in vivo. However, there was a striking cooperative effect in double MMP-2:MMP-9-deficient mice as demonstrated by the absence of tumor vascularization and invasion. In contrast, the combined lack of MMP-2 and MMP-9 did not impair the in vitro capillary outgrowth from aortic rings. These results point to the importance of a cross talk between several host cells for the in vivo tumor promoting and angiogenic effects of MMP-2 and MMP-9. Our data demonstrate for the first time in an experimental model that MMP-2 and MMP-9 cooperate in promoting the in vivo invasive and angiogenic phenotype of malignant keratinocytes. Enrico Giraudo, Masahiro Inoue and Douglas Hanahan. An amino-bisphosphonate targets MMP-9-expressing macrophages and angiogenesis to impair cervical carcinogenesis.. The Journal of clinical investigation 114(5):623–33, 2004. Abstract A mouse model involving the human papillomavirus type-16 oncogenes develops cervical cancers by lesional stages analogous to those in humans. In this study the angiogenic phenotype was characterized, revealing intense angiogenesis in high-grade cervical intraepithelial neoplasias (CIN-3) and carcinomas. MMP-9, a proangiogenic protease implicated in mobilization of VEGF, appeared in the stroma concomitant with the angiogenic switch, expressed by infiltrating macrophages, similar to what has been observed in humans. Preclinical trials sought to target MMP-9 and angiogenesis with a prototypical MMP inhibitor and with a bisphosphonate, zoledronic acid (ZA), revealing both to be antiangiogenic, producing effects comparable to a Mmp9 gene KO in impairing angiogenic switching, progression of premalignant lesions, and tumor growth. ZA therapy increased neoplastic epithelial and endothelial cell apoptosis without affecting hyperproliferation, indicating that ZA was not antimitotic.
The analyses implicated cellular and molecular targets of ZA's actions: ZA suppressed MMP-9 expression by infiltrating macrophages and inhibited metalloprotease activity, reducing association of VEGF with its receptor on angiogenic endothelial cells. Given its track record in clinical use with limited toxicity, ZA holds promise as an "unconventional" MMP-9 inhibitor for antiangiogenic therapy of cervical cancer and potentially for additional cancers and other diseases where MMP-9 expression by infiltrating macrophages is evident. Marta Toth, Irina Chvyrkova, M.Margarida Bernardo, Sonia Hernandez-Barrantes and Rafael Fridman. Pro-MMP-9 activation by the MT1-MMP/MMP-2 axis and MMP-3: role of TIMP-2 and plasma membranes. Biochemical and Biophysical Research Communications 308(2):386–395, August 2003. Abstract MMP-9 (gelatinase B) is produced in a latent form (pro-MMP-9) that requires activation to achieve catalytic activity. Previously, we showed that MMP-2 (gelatinase A) is an activator of pro-MMP-9 in solution. However, in cultured cells pro-MMP-9 remains in a latent form even in the presence of MMP-2. Since pro-MMP-2 is activated on the cell surface by MT1-MMP in a process that requires TIMP-2, we investigated the role of the MT1-MMP/MMP-2 axis and TIMPs in mediating pro-MMP-9 activation. Full pro-MMP-9 activation was accomplished via a cascade of zymogen activation initiated by MT1-MMP and mediated by MMP-2 in a process that is tightly regulated by TIMPs. We show that TIMP-2 by regulating pro-MMP-2 activation can also act as a positive regulator of pro-MMP-9 activation. Also, activation of pro-MMP-9 by MMP-2 or MMP-3 was more efficient in the presence of purified plasma membrane fractions than activation in a soluble phase or in live cells, suggesting that concentration of pro-MMP-9 in the pericellular space may favor activation and catalytic competence. Increased plasma levels of matrix metalloproteinase-9 in patients with Alzheimer's disease. 
Neurochemistry International 43(3):191–196, 2003. Abstract Matrix metalloproteinases (MMPs) may play a role in the pathophysiology of Alzheimer's disease (AD). MMP-9 and tissue inhibitors of metalloproteinases (TIMPs) are elevated in postmortem brain tissue of AD patients. MMPs and TIMPs are found in neurons, microglia, vascular endothelial cells and leukocytes. The aim of this study was to determine whether circulating levels of MMP-2, MMP-9, TIMP-1 and TIMP-2 are elevated in the plasma of AD patients. We compared AD patients to age- and gender-matched controls as well as to Parkinson's disease (PD) and amyotrophic lateral sclerosis (ALS) patients. There was constitutive expression of gelatinase A (MMP-2), and gelatinase B (MMP-9), in all the samples as shown by zymographic analysis. Levels of MMP-9 were significantly (P=0.003) elevated in the plasma of AD patients as compared to controls. Plasma levels of MMP-2, TIMP-1 and TIMP-2 were unchanged. There were no significant changes of MMP-2, MMP-9, TIMP-1 and TIMP-2 levels in PD and ALS samples. TIMP-1 and TIMP-2 were significantly correlated with MMP-9 in the AD patients. ApoE genotyping of plasma samples showed that levels of MMP-2, TIMP-1 and TIMP-2 and MMP-9 were not significantly different between the ApoE subgroups. These findings indicate that circulating levels of MMP-9 are increased in AD and may contribute to disease pathology. Vincent Lambert, Ben Wielockx, Carine Munaut, Catherine Galopin, Maud Jost, Takeshi Itoh, Zena Werb, Andrew Baker, Claude Libert, Hans-Willi Krell, Jean-Michel Foidart, Agnès Noël and Jean-Marie Rakic. MMP-2 and MMP-9 synergize in promoting choroidal neovascularization.. FASEB journal : official publication of the Federation of American Societies for Experimental Biology 17(15):2290–2, 2003. 
Abstract Matrix metalloproteinase 2 (MMP-2) and MMP-9 are increased in human choroidal neovascularization (CNV) occurring during the exudative, most aggressive form of age-related macular degeneration (AMD), but their precise role and potential interactions remain unclear. To address the question of MMP-2 and MMP-9 functions, mice deficient in the expression of MMP-2 (MMP-2 KO), MMP-9 (MMP-9 KO), and both MMP-2 and MMP-9 (MMP-2,9 KO) with their corresponding wild-type mice (WT) underwent CNV induction by laser-induced rupture of the Bruch's membrane. Both the incidence and the severity of CNV were strongly attenuated in double deficient compared with single gene deficient mice or corresponding WT controls. The reduced neovascularization was accompanied by fibrinogen/fibrin accumulation. Furthermore, overexpression of the endogenous MMP inhibitors TIMP-1 or TIMP-2 (delivered by adenoviral vectors) in WT mice or daily injection of a synthetic and gelatinase selective MMP inhibitor (Ro 26-2853) significantly decreased the pathological reaction. These findings suggest that MMP-2 and MMP-9 may cooperate in the development of AMD and that their selective inhibition represents an alternative strategy for the treatment of choroidal neovascularization. Mitochondrial involvement in amyotrophic lateral sclerosis. Neurochemistry International 40(6):543–551, May 2002. Abstract The causes of motor neuron death in amyotrophic lateral sclerosis (ALS) are so far unknown. The involvement of mitochondria in the disease was initially suggested by ultrastructural studies. More recently these observations have been supported by studies of mitochondrial function in ALS. Alterations in the activity of complexes which make up the mitochondrial electron transport chain have been recorded as well as mutations in the mitochondrial genome. The calcium buffering function of the mitochondria may also be affected in the disease.
This review will discuss how mitochondrial dysfunction could be of relevance in ALS and the evidence that an alteration of mitochondrial function is a feature of the disease. The way in which the involvement of mitochondria fits with other aetiological hypotheses for ALS will also be discussed. S M Plummer, K A Holloway, M M Manson, R J Munks, A Kaptein, S Farrow and L Howells. Inhibition of cyclo-oxygenase 2 expression in colon cells by the chemopreventive agent curcumin involves inhibition of NF-kappaB activation via the NIK/IKK signalling complex.. Oncogene 18(44):6013–20, October 1999. Abstract Colorectal cancer is a major cause of cancer deaths in Western countries, but epidemiological data suggest that dietary modification might reduce these by as much as 90%. Cyclo-oxygenase 2 (COX2), an inducible isoform of prostaglandin H synthase, which mediates prostaglandin synthesis during inflammation, and which is selectively overexpressed in colon tumours, is thought to play an important role in colon carcinogenesis. Curcumin, a constituent of turmeric, possesses potent anti-inflammatory activity and prevents colon cancer in animal models. However, its mechanism of action is not fully understood. We found that in human colon epithelial cells, curcumin inhibits COX2 induction by the colon tumour promoters, tumour necrosis factor alpha or fecapentaene-12. Induction of COX2 by inflammatory cytokines or hypoxia-induced oxidative stress can be mediated by nuclear factor kappa B (NF-kappaB). Since curcumin inhibits NF-kappaB activation, we examined whether its chemopreventive activity is related to modulation of the signalling pathway which regulates the stability of the NF-kappaB-sequestering protein, IkappaB. Recently components of this pathway, NF-kappaB-inducing kinase and IkappaB kinases, IKKalpha and beta, which phosphorylate IkappaB to release NF-kappaB, have been characterised. Curcumin prevents phosphorylation of IkappaB by inhibiting the activity of the IKKs. 
This property, together with a long history of consumption without adverse health effects, makes curcumin an important candidate for consideration in colon cancer prevention. S Bellosta, D Via, M Canavesi, P Pfister, R Fumagalli, R Paoletti and F Bernini. HMG-CoA Reductase Inhibitors Reduce MMP-9 Secretion by Macrophages. Arteriosclerosis, Thrombosis, and Vascular Biology 18(11):1671–1678, 1998. Abstract Macrophages secrete matrix metalloproteinases (MMPs) that may weaken the fibrous cap of atherosclerotic plaque, predisposing it to fissuration. The 92-kDa gelatinase B (MMP-9) has been identified in abdominal aortic aneurysms and in atherosclerotic tissues. Fluvastatin, through the inhibition of the isoprenoid pathway, inhibits major processes of atherogenesis in experimental models (smooth muscle cell migration and proliferation and cholesterol accumulation in macrophages). We studied the effect of fluvastatin on the activity of MMP-9 in mouse and human macrophages in culture. Conditioned media of cells treated for 24 hours with fluvastatin were analyzed by gelatin zymography. In mouse macrophages, fluvastatin (5 to 100 micromol/L) significantly inhibited MMP-9 activity in a dose-dependent manner by 20% to 40% versus control. The drug, at a concentration as low as 5 micromol/L, inhibited MMP-9 activity (approx. 30%) in human monocyte-derived macrophages as well. Phorbol esters (TPA, 50 ng/mL) stimulated MMP-9 activity by 50%, and fluvastatin inhibited this enhanced activity by up to 50% in both mouse and human macrophages. The above results on the secretion of MMP-9 were confirmed by Western blotting and ELISA. The inhibitory effect of fluvastatin was overcome by the simultaneous addition of exogenous mevalonate (100 micromol/L), a precursor of isoprenoids. Fluvastatin's effect was fully reversible, and the drug did not cause any cellular toxicity. The statin did not directly block the in vitro activation of the secreted protease.
Similar data were obtained with simvastatin. Altogether, our data indicate an inhibition of MMP-9 secretion by the drug. This effect is mediated by the inhibition of synthesis of mevalonate, a precursor of numerous derivatives essential for several cellular functions. Int J Dev Biol - Distinct patterns of MMP-9 and MMP-2 activity in slow and fast twitch skeletal muscle regeneration in vivo. J Iłzecka, Z Stelmasiak and B Dobosz. [Matrix metalloproteinase-9 (MMP-9) activity in cerebrospinal fluid of amyotrophic lateral sclerosis patients].. Neurologia i neurochirurgia polska 35(6):1035–43. Abstract Matrix metalloproteinase-9 (MMP-9) is a member of the family of zinc-dependent endopeptidases that degrade extracellular matrix proteins. It can be activated by serine proteinases or by superoxide radicals. The motor neurons in amyotrophic lateral sclerosis patients express significantly higher levels of MMP-9, suggesting a role in neurodegeneration. The aim of the study was to investigate MMP-9 in cerebrospinal fluid from amyotrophic lateral sclerosis patients. MMP-9 was measured by enzyme-linked immunosorbent assay (ELISA) in cerebrospinal fluid from 24 amyotrophic lateral sclerosis patients and 15 controls. The mean amyotrophic lateral sclerosis duration was 18 months. According to Munsat ALS Health State Scale, the patients were divided into four groups: mild, moderate, severe, terminal. The patients were also divided into groups with shorter (below 12 months) and longer (above 12 months) duration of the disease. MMP-9 level was insignificantly lower in the cerebrospinal fluid from amyotrophic lateral sclerosis patients compared with controls. MMP-9 level showed a tendency to decrease with clinical status worsening; however, this correlation was not statistically significant. The difference between MMP-9 level in the cerebrospinal fluid between the groups of patients with shorter and longer duration of amyotrophic lateral sclerosis was not significant.
NEW YORK, NY (January 22, 2014) — Columbia University Medical Center (CUMC) researchers have identified a gene, called matrix metalloproteinase-9 (MMP-9), that appears to play a major role in motor neuron degeneration in amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease. The findings, made in mice, explain why most but not all motor neurons are affected by the disease and identify a potential therapeutic target for this still-incurable neurodegenerative disease. The study was published today in the online edition of the journal Neuron. "One of the most striking aspects of ALS is that some motor neurons—specifically, those that control eye movement and eliminative and sexual functions—remain relatively unimpaired in the disease," said study leader Christopher E. Henderson, PhD, the Gurewitsch and Vidda Foundation Professor of Rehabilitation and Regenerative Medicine, professor of pathology & cell biology and neuroscience (in neurology), and co-director of Columbia's Motor Neuron Center. "We thought that if we could find out why these neurons have a natural resistance to ALS, we might be able to exploit this property and develop new therapeutic options." In a follow-up experiment, the researchers confirmed that the product of MMP-9, MMP-9 protein, is present in ALS-vulnerable motor neurons, but not in ALS-resistant ones. Further, the researchers found that MMP-9 can be detected not just in lumbar 5 neurons, but also in other types of motor neurons affected by ALS. "It was a perfect correlation," said Dr. Henderson. "In other words, having MMP-9 is an absolute predictor that a motor neuron will die if the disease strikes, at least in mice." Parkinson's disease (PD) and amyotrophic lateral sclerosis (ALS) share several clinical and neuropathologic features, and studies suggest that several gene mutations and polymorphisms are involved in both conditions.
Matrix metalloproteinase-9 (MMP-9) is implicated in the pathogenesis of PD and ALS, and the C(−1562)T polymorphism in the MMP-9 gene leads to higher promoter activity. We therefore investigated whether this polymorphism predisposes to both PD and sporadic ALS (sALS). Samples from 351 subjects with PD and 351 healthy controls from two major cities in China were compared, while samples from 226 subjects with sALS were compared to the same number of controls from three centers in China. A possible association between the C(−1562)T polymorphism in the MMP-9 gene and PD or sALS was assessed by restriction fragment length polymorphism (RFLP) analysis. Our results show a significant association between the C(−1562)T polymorphism in the MMP-9 gene and risk of PD (odds ratio = 2.268, 95% CI 1.506–3.416, p<0.001) as well as risk of sALS (odds ratio = 2.163, 95% CI 1.233–3.796, p = 0.006), supporting a role for MMP-9 polymorphism in the risk for PD and sALS. The matrix metalloproteinases (MMPs) belong to a large subgroup of proteinases which includes the collagenases, gelatinases and the stromelysins, all of which contain tightly bound zinc (Birkedal-Hansen, 1995; Ries and Petrides, 1995). Several MMPs have been identified but the exact role of their involvement in various physiological and pathological processes is only incompletely understood. There is emerging evidence that MMPs might be involved in the pathogenesis of inflammatory demyelinating disorders such as multiple sclerosis and the Guillain–Barré syndrome (Opdenakker and van Damme, 1994). In a first study, Cuzner et al. (1978) demonstrated increased neutral proteinase activity in the CSF during exacerbations of multiple sclerosis. More recently, certain MMPs, i.e. 72-kDa gelatinase, stromelysin-1, interstitial collagenase, matrilysin (MMP-7) and 92-kDa gelatinase (MMP-9), were shown to degrade myelin basic protein (MBP) in vitro (Chandler et al., 1995).
Furthermore, evidence is emerging that MMPs might be involved in blood–brain barrier (BBB) breakdown. Thus, non-specific inhibition of MMPs was shown to protect the BBB and could suppress, and even reverse, ongoing experimental autoimmune encephalomyelitis (EAE), an animal model for multiple sclerosis (Gijbels et al., 1994; Hewson et al., 1995). Among all known MMPs, the 92-kDa gelatinase might be particularly involved in BBB breakdown, as increased CSF levels of 92-kDa gelatinase in multiple sclerosis patients are associated with a leaky BBB on magnetic resonance imaging (Rosenberg et al., 1996). Moreover, in multiple sclerosis lesions increased levels of 92-kDa gelatinase were described (Cuzner et al., 1996), where endothelial cells, astrocytes and microglia were all thought to be potential sources of production (Maeda and Sobel, 1996). However, besides a recent report on MMP expression and its regulation by a hydroxamate-based inhibitor in actively induced EAE (Clements et al., 1997), there are no data about the temporospatial regulation of 92-kDa gelatinase and other MMPs during the course of neuroinflammatory diseases.
CommonCrawl
In Theorem 5.2 of Lynche (1988), "A data reduction strategy for splines with applications to the approximation of functions and data", a bound for the difference between the $(l_2,t)$ and $L^2$ norms is given (Equation 5.8), where $(l_2,t)$ is the discrete weighted norm defined in Equation 5.4. Could this result be extended to the general $(l_p,t)$ and $L^p$ norms for $1\leq p\leq\infty$? Or at least for $p=1$?
Welcome all! I am using a solution based on https://gist.github.com/clifgriffin/728 ... 7b81fa2d8a in which a text block is drawn on an image and the font size is automatically selected depending on the length of the text; download https://yadi.sk/d/KExniPdnz1N1cw . I would really appreciate help making this solution work on PHP 5.4. You need to compute `$x_pos` so that the text is centered: measure the text width and the image width, and offset by half the difference.
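The centering arithmetic the answer describes is just the following (a minimal sketch in Python for illustration; in the PHP/GD version you would get the text width from imagettfbbox() and the image width from imagesx()):

```python
def centered_x(image_width: int, text_width: int) -> int:
    """X offset that horizontally centers text of a given width in an image."""
    return (image_width - text_width) // 2
```

For example, an 800 px wide image and 200 px wide text give an offset of 300 px.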
Considering that cloud instances are usually expensive, and that prices of cryptocurrencies (especially the ones still minable with CPUs and GPUs) have been collapsing lately, most of you must think I'm going mad. And, indeed, the goal of this walkthrough is absolutely NOT profitability. Let me describe my situation. When you work as a consultant, you often grab AWS credits directly from AWS or its partners, for instance to review or try a service or to watch a specific webinar, but once activated those free credits often expire within a few months. As my AWS consumption is usually around $10 per month, when I get a $100 coupon I usually barely use half of it before it expires, even while testing tons of new services. As a result, the best way for me to quickly consume all this free credit before expiration, while getting a small amount back, is to set up a Linux AMI with NVIDIA drivers and run a small mining fleet in an Auto Scaling group. As I wrote above, doing so is fundamentally not profitable; however, before detailing the setup, I will spend a bit of time on the theory of how to slightly increase your profitability by choosing the right region, the right instance type and the right coin. As you clearly won't reserve your instance(s) for a year (the minimum reservation is usually 12 months), odds are you will use on-demand instances. Spot instances could be an excellent choice to increase your profit, as you can easily get them 50% cheaper than the regular price; however, since they can be automatically shut down if someone bids a higher price, they make the calculation more tedious, and prices vary widely by Availability Zone, so be careful. Instance prices are usually proportional within a region, and often, the longer an AWS region has been available, the cheaper it is. This is logical, as AWS has had more time to make the infrastructure investment profitable.
Another aspect you should take into account is the location of the mining pool you're going to use: the better the latency, the more profit you get. Mining pools are mostly located in eastern America, Western Europe or China. For the rest of this tutorial, I will use the us-east-1 region as a reference. Now some history: initially, Bitcoin was meant to be mined by everyone on a CPU (Central Processing Unit); later, people discovered they could get more computing power from their GPU (Graphics Processing Unit). As Bitcoin became more expensive, some people began to develop dedicated hardware called ASICs (Application-Specific Integrated Circuits), optimized to solve the problem posed by the Bitcoin-specific algorithm, SHA-256. It is generally accepted that designing an ASIC is harder when the algorithm is memory-intensive, although some developer teams consider that no algorithm can ever be ASIC-resistant, while other teams decided to hard-fork their blockchain client every six months to prevent any ASIC from being released. Simply put, once an algorithm is mined by ASICs, GPU profitability falls. And although there has been a lot of speculation lately about FPGAs (Field-Programmable Gate Arrays), which could be configured to combine the adaptability of a GPU with the computing power of an ASIC, I'm not even sure AWS F-type instances could be set up for this purpose, so they won't be the subject of this article. As CPU mining is the least profitable and ASIC instances are obviously not provided by AWS, let's quickly review the available GPU instance types. For NVIDIA GPU mining, what matters most is the number of CUDA cores and the memory clock frequency; furthermore, for ASIC-resistant algorithms it's recommended to have at least 4 GB of RAM per GPU, although 6 GB is increasingly recommended. Based on the previous table, I wouldn't recommend the p3 type for mining, as it has a very low memory clock.
The g2 instance is the cheapest but is on its way to being decommissioned, and because of its limited RAM per GPU you won't be able to mine every available algorithm. My recommendation is to favor the g2 type while it's available and, if it doesn't work out, to compare the performance of the g3 and p2 instance types. To choose the coin with the best ROI, I recommend having a look at whattomine.com while keeping in mind the following table, which shows retail NVIDIA card performance. As you can see, all retail NVIDIA GPUs have similar stock memory clocks, which is not surprising, as the most important clock for gaming is the GPU core clock, not the memory clock. Although in online forums several GPU owners claim they can easily overclock their memory by +1000 MHz, on my home computer I use only AMD GPUs and usually overclock the memory by +200 MHz, so I couldn't tell whether overclocking an NVIDIA GPU by +1000 MHz is a good idea. Comparing solely by the number of CUDA cores, the p2 instance type is roughly equivalent to a GTX 1080, the g3 instance to a GTX 1070 and the g2 instance to a GTX 1060. You can use these references to get a first idea of your mining income. Let's say we launch three g3.8xlarge instances, which can be considered as \(3 \times 2 = 6\) GTX 1070 GPUs. If you go to whattomine.com and select 6 GTX 1070 GPUs while setting the electricity price to 0, the most profitable cryptocurrency is Bitcoin Gold, the third fork of Bitcoin, which happened on 12 November 2017. This fork has the particularity of being ASIC-resistant and, as such, can bring me $3.85 a day at the time of writing this article. As a result, if we mine for a full day at an hourly price of $1.14 per GPU, we earn $3.85 while paying \( \$1.14 \times 24 \times 6 = \$164.16 \) with on-demand instances. I told you it's really not profitable.
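The back-of-the-envelope comparison above can be reproduced in a few lines (the function and its parameter names are my own illustrative choices; the hourly prices and the $3.85/day revenue are the figures quoted in the text):

```python
def daily_mining_net(hourly_price_per_gpu, gpus, daily_revenue):
    """Return (net profit, cost) for one day of mining; a negative net is a loss."""
    cost = hourly_price_per_gpu * 24 * gpus
    return daily_revenue - cost, cost

net_od, cost_od = daily_mining_net(1.14, 6, 3.85)      # on-demand pricing
net_spot, cost_spot = daily_mining_net(0.24, 6, 3.85)  # spot-instance bid
```

Either way the net is negative; the spot bid merely shrinks the daily loss from roughly $160 to roughly $31.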
Alternatively, with a bid of $0.24 on spot instances you could end up with a bill of \( \$0.24 \times 24 \times 6 = \$34.56 \), but nothing guarantees your instances will run 24 hours in a row, so don't launch them 24 hours before the expiration of your credit, and monitor them regularly, for example by sending yourself an SMS or email message when your instances are started and shut down, using a combination of AWS CloudWatch and SNS. I'm now going to give you the list of commands to be executed to configure your mining AMI. I will use an Ubuntu instance, but you can use any distribution you prefer with a supported package manager (although I wouldn't recommend Red Hat, as you would have to pay for the license; the same goes for any Windows version). I will assume you have already created your AWS account and set up your credit card information. Then select the EC2 service and request an increase of your EC2 limit: by default AWS doesn't let you run any GPU instances, so you'll have to ask for some, which can take up to 48 hours. To create the AMI, launch any small GPU instance; for the AMI I will select Ubuntu Server, and for the type, g2.2xlarge. In the instance details, you can choose whether to use a spot instance and the VPC/subnet location. If you don't know what a VPC and a subnet are, just keep the defaults. In the add-storage section, change the volume size to 20 GB. You can skip the tag section, and in the security-group section create a new group named mine-sg and add a description you like; for security reasons I recommend restricting the Source to My IP. Finally, review and launch your instance. AWS will ask you to select an SSH key pair: select "Create a new key pair", call it ec2-mining, download the .pem file and launch your instance. I will assume you know how to access an SSH server from an SSH client with an access key. If you're on Windows and use PuTTY, you will have to convert your .pem key file to PuTTY's .ppk format; here is a tutorial. You can find the public IP of your instance when you select it in the Instances tab.
First, we will update our instance, install the required dependencies and reboot. Now go to the NVIDIA website and get the link to the latest driver version. Once done, you should be able to see your driver by entering lspci -v | grep -i nvidia; if you want to make sure the driver is installed for every GPU of your instance, you can run lshw -c video and check that the Configuration line shows something like driver=nvidia. If you don't see it, you may have to reboot your instance. You now have to choose miner software; there are hundreds of different programs available, closed and open source. If you want an open-source one, you often have to compile it yourself; here is a tutorial for ccminer on Windows. That said, closed-source miners often achieve a higher hash rate and provide more profit than open ones; as a result, I've decided to trust and download the EWBF CUDA Equihash Miner binary from Mega. Now package your AMI from the running instance, give it a name and terminate the instance once done (the AMI creation can take up to 10 minutes). You can monitor your miners' performance on the pool dashboard; once you're done with mining, delete all the corresponding instances and attached volumes to prevent any additional charges.
Why do we take the compressibility factor equal to one when the graph is plotted? I saw the graph of the compressibility factor against pressure and was confused to see that all gases start at a compressibility factor of 1 when the pressure is zero. So why do we take the compressibility factor equal to one when the graph is being plotted? I think that at $p = 0$ the compressibility factor ($Z = pV_m/RT$) itself is not particularly well defined, since $p = 0$ and $V_m = \infty$ make the product $pV_m$ an indeterminate $0 \times \infty$ form. Zero pressure is not a state that we can reach. This happens with all gases because the individual gases are identified by their virial coefficients $B, C, \ldots$, and the terms involving these constants vanish as $V_m \to \infty$. A simple qualitative explanation is that in the limit $p \to 0$ all gases behave ideally; for an ideal gas, $Z$ is of course equal to $1$ because of the ideal gas law $pV_m = RT$.
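The virial-coefficient argument can be made concrete with a truncated virial expansion, $Z = 1 + B/V_m + C/V_m^2$. A small sketch (the coefficient values below are made-up numbers, purely for illustration):

```python
def z_virial(v_m, b, c=0.0):
    """Compressibility factor from the virial expansion truncated after C."""
    return 1.0 + b / v_m + c / v_m ** 2

# Gases with different B diverge at small molar volume...
z_low = z_virial(2.4e-2, b=-4.0e-3)   # net attraction: Z < 1
z_high = z_virial(2.4e-2, b=+4.0e-3)  # net repulsion: Z > 1
# ...but every gas tends to Z = 1 as V_m grows, i.e. as p -> 0
z_limit = z_virial(1e9, b=-4.0e-3, c=1.5e-6)
```

Whatever $B$ and $C$ are, the correction terms vanish as $V_m \to \infty$, which is why every curve on the plot starts at 1.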
In recent years, the study of evolution equations featuring a fractional Laplacian has received much attention, due to the fact that they have been successfully applied to the modelling of a wide variety of phenomena, ranging from biology and physics to finance. The stochastic process behind fractional operators is linked, in the whole space, to $\alpha$-stable processes, as opposed to the Laplacian operator, which is linked to Brownian motion. In addition, evolution equations involving fractional Laplacians offer new, interesting and very challenging mathematical problems. There are several equivalent definitions of the fractional Laplacian in the whole space; in a bounded domain, however, there are several options depending on the stochastic process considered. In this talk we shall present results on the rigorous passage from a velocity-jump stochastic process in a bounded domain to a macroscopic evolution equation featuring a fractional Laplace operator. More precisely, we shall consider the long-time/small mean-free-path asymptotic behaviour of the solutions of a re-scaled linear kinetic transport equation in a smooth bounded domain.
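For orientation, the most common whole-space definition alluded to here is the singular-integral one (a standard formula, not taken from the talk itself):

```latex
(-\Delta)^{s} u(x) \;=\; c_{n,s}\,\mathrm{P.V.}\int_{\mathbb{R}^{n}}
\frac{u(x)-u(y)}{|x-y|^{\,n+2s}}\,dy , \qquad s\in(0,1),
```

which coincides on $\mathbb{R}^n$ with the Fourier-multiplier definition $\widehat{(-\Delta)^{s}u}(\xi)=|\xi|^{2s}\hat{u}(\xi)$. On a bounded domain the restricted and spectral versions of the operator no longer agree, which is the ambiguity the abstract refers to.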
Linear Feedback Shift Registers (LFSRs) can be excellent (efficient, fast, and with good statistical properties) pseudo-random generators. Many stream ciphers are based on LFSRs, and one possible design of such stream ciphers is combining the outputs of $m$ LFSRs as input to a boolean function $f:GF(2)^m\rightarrow GF(2)$. This last function has to be carefully selected. My question is a rather elementary one. I understand that using one LFSR to produce the keystream is not appropriate, as one can recreate the whole keystream by knowing a tiny fraction of it: if the tap positions of a length-$n$ LFSR are known, one needs $n$ bits to determine the entire keystream sequence, and if they are not known, one needs $2n$ bits (by using the Berlekamp-Massey algorithm to find out the tap positions). However, why do we need a non-linear combination of LFSRs (among all sorts of other requirements)? What would be the problem with taking a number of LFSRs with appropriate lengths and tap positions and XORing their outputs together to produce the keystream? If there were no non-linearity, then every bit of keystream output would be a (known) linear function of the unknown key bits. Consequently, in a known-plaintext attack scenario, each bit of known keystream output would allow us to write a linear equation in the unknown key bits. If we have a 128-bit key, there are 128 boolean unknowns (variables), so once we have 128 bits of known keystream, we have 128 linear equations in 128 unknowns. At that point it becomes easy to solve for the original key bits using standard methods for solving a system of linear equations (e.g., Gaussian elimination). Thus, an attacker could recover the key from 128 bits of known output from the stream cipher, which is a total break of the stream cipher. The only way to prevent this kind of attack is to make sure that the cipher contains non-linear elements.
To prevent other related but fancier attacks (e.g., linear cryptanalysis), one also needs sufficient non-linearity in the stream cipher. Clarification: To keep it simple, my answer above assumes that the feedback polynomial for the LFSRs is known. The attack does generalize to the case where the feedback polynomials are not known (you need twice as much known keystream output); in that case, the attack gets a bit more complicated, but the basic idea still applies. I tried to keep it simple to help you understand the intuition without getting bogged down in mathematics, but if you want to see more details about the case where the feedback polynomials are not known, Dilip Sarwate has an excellent answer that explains that case more thoroughly. The Berlekamp-Massey algorithm is an iterative method for finding the shortest LFSR that can generate a given sequence of bits. The given sequence might or might not be generated by an LFSR: the Berlekamp-Massey algorithm does not care. It just finds the shortest LFSR that can generate the given sequence, and if the sequence has been generated by an LFSR of length $n$, then the Berlekamp-Massey algorithm is guaranteed to find this LFSR after examining no more than $2n$ bits of the sequence. A simplistic description of what happens is as follows. After the algorithm has found the shortest LFSR that generates the first $k$ bits of the sequence, it examines the $(k+1)$-th bit of the sequence. If this $(k+1)$-th bit of the sequence matches the $(k+1)$-th bit of the output of the current LFSR, the LFSR is accepted as the one that generates the first $k+1$ bits. If not, the LFSR is updated so that the new, typically longer, LFSR generates the first $k+1$ bits. As stated earlier, if the sequence in question was in fact generated by an LFSR of length $n$, then the Berlekamp-Massey algorithm is guaranteed to find this LFSR by the time it has examined $2n$ bits of the sequence. How does the algorithm know that it is done? 
Well, it doesn't, but after the correct LFSR has been found, the $(2n+1)$-th, the $(2n+2)$-th, the $(2n+3)$-th, $\ldots$ bits of the given sequence match the corresponding outputs of the LFSR and so the Berlekamp-Massey algorithm does not update the $n$-bit LFSR it has found. What does all this have to do with the question asked? Well, the (bit-by-bit XOR) sum of the outputs of the various LFSRs is a sequence that is generated by a longer LFSR (typically, the length of the longer LFSR is the sum of the lengths of the LFSRs whose outputs were summed). So, the cryptographic security is not significantly larger. What is needed is some way of combining the constituent LFSR outputs so that the resulting sequence has linear complexity much larger than the sum of the LFSR lengths. The linear complexity of a sequence is defined as the length of the shortest LFSR that can generate the sequence. What we want is a sequence that has high linear complexity but which can be generated easily as a nonlinear function of the outputs of short LFSRs. The legitimate users of the system can encipher and decipher easily, but a cryptanalyst attempting to break the system via a known-plaintext attack has to either figure out the nonlinear function (and the constituent LFSRs), which is not easy to do, or attempt a Berlekamp-Massey algorithm attack, which may fail because not enough bits of the sequence can be determined via a known-plaintext attack to find the shortest LFSR that generates the sequence.
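The point about the XOR of LFSR outputs still having small linear complexity is easy to check experimentally. The sketch below is my own illustration, not from either answer: the tap choices correspond to the primitive polynomials $x^3+x+1$ and $x^4+x+1$, and the Berlekamp-Massey routine is a textbook binary version.

```python
def lfsr_stream(seed, taps, nbits):
    """Fibonacci LFSR: the next bit is the XOR of the bits k places back, k in taps."""
    s = list(seed)
    while len(s) < nbits:
        bit = 0
        for k in taps:
            bit ^= s[-k]
        s.append(bit)
    return s[:nbits]

def berlekamp_massey(s):
    """Linear complexity of bit sequence s: length of its shortest LFSR over GF(2)."""
    n = len(s)
    c, b = [0] * n, [0] * n
    c[0] = b[0] = 1
    comp, m = 0, -1              # current complexity L and last update position
    for i in range(n):
        d = s[i]                 # discrepancy between s[i] and the LFSR's prediction
        for j in range(1, comp + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * comp <= i:
                comp, m, b = i + 1 - comp, i, t
    return comp

# m-sequences from the primitive polynomials x^3 + x + 1 and x^4 + x + 1
sa = lfsr_stream([1, 0, 0], (2, 3), 60)
sb = lfsr_stream([1, 0, 0, 0], (3, 4), 60)
xored = [x ^ y for x, y in zip(sa, sb)]
```

The XORed stream has linear complexity $3+4=7$, exactly the sum of the constituent lengths, so an attacker needs only $2\times 7=14$ known keystream bits, nothing close to infeasible.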
The Relationship of Hemolymph Sugar, Juvenile Hormone and Biogenic Amines to the Three Behavioral States of Mature Honey Bees. Differences in sugar titers, JH synthesis rate and biogenic amine metabolism correlated with three distinct activity groups of mature bees: resting bees, followers and dancers. Dancers had lower trehalose titers than followers or resting bees. In absconding bees trehalose titers were even lower, due to greatly elevated locomotor activity. The collecting and transferring of food by dancers and followers resulted in a higher glucose titer than in resting bees. Followers had a higher JH synthesis rate than dancers, suggesting that JH could be an internal motivational stimulus to recruit followers to forage. Dancers and followers collect and transfer food and therefore had higher $\alpha$-glucosidase activity in the hypopharyngeal gland than resting bees. Higher $\alpha$-glucosidase activity was associated with higher JH synthesis rates, suggesting a JH-regulated reprogramming of the hypopharyngeal gland from royal jelly production to $\alpha$-glucosidase production. Dopamine (DA) metabolism was higher in the brain of dancers than in followers, suggesting that brain DA levels may be involved in the regulation of recruitment. The correlation of DA metabolism in the brain with that in the hemolymph suggested that brain DA was metabolized into N-$\beta$-alanyldopamine (NBAD), which was then exported into the hemolymph. DA, NBAD, octopamine and serotonin levels were higher in the upper part of the brain than in the lower. A higher rate of DA metabolism was observed in the upper brain (containing the calyx of the mushroom body) of all activity groups, and DA metabolism was higher in dancers than in followers or resting bees. It is suggested that DA modulation of neural activity in the calyx might be involved with recruitment behavior.
In addition, differences in sugar titers, the JH synthesis rate, and biogenic amine levels were found to be correlated with the availability of food, the season and the time of day. Bozic, Janko, "The Relationship of Hemolymph Sugar, Juvenile Hormone and Biogenic Amines to the Three Behavioral States of Mature Honey Bees." (1996). LSU Historical Dissertations and Theses. 6232.
Welcome to our first fully online issue of Parabola Incorporating Function. It's not every day that a mathematics puzzle makes it into mainstream media. But that's what happened recently with "Cheryl's Birthday problem". This problem was posted by Kenneth Kong, the host of a Singaporean TV show, on his Facebook page on 10 April, and it went viral. Finding two ways to enumerate the same collection of objects can often give rise to useful formulae. For instance, the sum \[ 1 + 2 + \cdots + n \] can be interpreted as the maximum number of different handshakes between $n+1$ people. Polygonal numbers enumerate the number of points in a regular geometrical arrangement of the points in the shape of a regular polygon. An example is the triangular number $T_n$ which enumerates the number of points in a regular triangular lattice of points whose overall shape is a triangle. Parabola incorporating Function would like to thank Sin Keong Tong for contributing problem 1472. Q1461 As in problems 1442 and 1452, a particle is projected from one corner of a $2014\times1729$ rectangle. This time, however, the particle is projected at an angle of $30^\circ$ above the horizontal.
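The handshake interpretation mentioned above gives the closed form directly: each of the $n+1$ people shakes hands with $n$ others, and every handshake is counted twice, so $1+2+\cdots+n=\binom{n+1}{2}=\frac{n(n+1)}{2}$. A quick check of the double counting (an illustrative snippet, not from the issue itself):

```python
from math import comb

def triangular(n):
    """n-th triangular number T_n = 1 + 2 + ... + n."""
    return n * (n + 1) // 2

# The two enumerations of the same handshakes agree for every n.
assert all(triangular(n) == sum(range(1, n + 1)) == comb(n + 1, 2)
           for n in range(1, 100))
```

The same "count one collection two ways" trick underlies many of the polygonal-number identities discussed in the issue.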
Meet McWoof, sheep herding dog extraordinaire. Although he's an expert now, he wasn't always able to divide herds of sheep into smaller groups. To stop overgrazing, Shepherd McWilliams' sheep have to graze on different fields. So, we need these 18 sheep to be divided into equal groups of 6 sheep each. In this situation, 18 is the dividend, or the number you are going to divide. 6 is the divisor, or the number of sheep in one group. From 18, we can make three equal groups of 6 sheep, so the quotient is 3. If there were any sheep left over after dividing, that would be the remainder. We could write this equation like this, this, or this. No matter how you write it, the answer, or the quotient, is the same. When he's finished, McWoof gives the shepherd a signal that the job is done and he can check to see if the groups are of equivalent sizes. He was taught that it's important to make sure all the groups are the same size, with no remainders. It looks like McWoof tried to divide the 18 sheep into equal groups with 5 sheep in each group. But after making 3 groups, each with 5 sheep, there are 3 sheep remaining; so the remainder is 3. In addition to splitting the herd into equal groups, McWoof was also taught to keep track of the ratio of white sheep to black sheep in the herd. In this case, we have a ratio of 15 white sheep to 3 black sheep. Typically, we reduce the ratios to the simplest form, just like with fractions! This means that for every five white sheep, there is one black sheep. Ratios can be written in a couple of different ways. McWilliams also taught McWoof to help determine the average amount of wool that comes from his different kinds of sheep. To find the average of something, you take a total sum and divide that by the number of elements you added together. 
For example, if the black sheep's wool production looks like this, then the total sum of the pounds of wool, 9, is your dividend, and the number of things you summed, in this case the number of black sheep, 3, is your divisor. Now we just divide 9 by 3 to find the average. So each time one of the black sheep is sheared, Shepherd McWilliams can expect an average of 3 lbs. of wool. Shepherd McWilliams is so proud of his herding hound that he'd like to give him one of his favorite treats. Oh, I guess counting sheep has unintended consequences. Would you like to apply what you've just learned? With the exercises for the video Keywords for Division you can review and practice it. Describe different ways to represent division. Check the result by inverting the division. dividend $\div$ divisor $=$ quotient. You have different ways to write down a division, but it doesn't make sense to get different results. The dog has to divide the flock of $18$ sheep into groups of $6$. This is a division task: $18\div 6$. $18$ is the dividend, the number you are going to divide. $6$ is the divisor, the number of sheep in each group. The result, the number of groups of sheep, is the quotient. Here it is $3$. No matter which way of writing down division you use, the result is always the same; in this case, it is the quotient $3$. Represent dividend, divisor, and quotient. For addition, you have: (summand) $+$ (summand) $=$ (sum). For subtraction, you have: (minuend) $-$ (subtrahend) $=$ (difference). For multiplication, you have: (factor) $\times$ (factor) $=$ (product). The divisor is the number by which you divide the dividend. The quotient is the result of a division. The dog has to divide the herd into groups of $6$. $18$ sheep belong to the herd. So you get the following division task: $18\div 6$. $6$ is the divisor, the number of sheep in each group. The result, the number of groups of $6$ sheep made, is the quotient. Here it is $3$.
So you can remember the following: dividend $\div$ divisor $=$ quotient. Keep in mind that we are looking for the ratio of white sheep to black sheep, not the other way round; a ratio needs two arguments. You can also describe ratios using division notation: you get the ratio of "white sheep" to "black sheep" by dividing $15$ by $3$. So $15:3$ is the ratio of white sheep $:$ black sheep. You can simplify the expression to $5:1$. This means that for each group of $5$ white sheep, there is just $1$ black sheep in the group. Apply division, ratio, and average problems. Simplify ratios or fractions if possible. Divide the resulting sum by the number of values. For the desired ratio, determine the number of female students first. There are $30$ students in total, $18$ of them male. Thus $30-18=12$ are female students. This leads to the ratio $18$ to $12$, which can be simplified to $3$ to $2$. You can interpret this result as follows: for every $3$ male students there are $2$ female students. Add all given values: $120+80+70=270$. Divide this sum by the number of values: $270\div 3=90$. The desired average score is $90$. Describe how to determine the average. First we add the given pounds to get $4+6=10$. Next we divide this sum by the number of sheep, i.e. $2$. This gives us an average of $10\div 2=5$. The sum of all calculated averages above is $11$. Two of the averages are the same. Add all the given values. So the average number of pounds of wool given by John and Paul is $3$. The average wool production of these three sheep is $4$ pounds. All the sheep together produce $4$ pounds of wool on average.
So she has to do $4$ homework assignments each day. $17+21+34=72$. This is the dividend. Divide the dividend by $3$, the number of people; this is the divisor. The average age is the quotient $72\div 3=24$. In the last example we can examine a remainder: Monica wants to split the dividend, $100$ dollars, among $6$ people, the divisor. This leads to $100\div 6=16$ remainder $4$: each person gets $16$ dollars, with $4$ dollars left over.
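The worked examples in this lesson all reduce to a few standard operations, shown here in Python (divmod and gcd are standard-library functions; the numbers are the ones from the lesson):

```python
from math import gcd

groups, left = divmod(18, 6)      # 18 sheep in groups of 6 -> 3 groups, 0 left over
groups5, left5 = divmod(18, 5)    # groups of 5 -> 3 groups, remainder 3
g = gcd(15, 3)                    # simplify the ratio 15 : 3
ratio = (15 // g, 3 // g)         # -> 5 : 1
average = 9 // 3                  # 9 lb of wool from 3 sheep -> 3 lb each
dollars, change = divmod(100, 6)  # $100 among 6 people -> $16 each, $4 left over
```

divmod returns the quotient and remainder in one step, which matches the dividend/divisor/quotient/remainder vocabulary above exactly.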
Carrithers, J. et al., 2004. Concurrent Aerobic and Resistance Exercise Effects on Mixed, Myofibrillar, Mitochondrial, and Sarcoplasmic Protein Synthesis. Medicine & Science in Sports & Exercise, 36, p.S194. Coker, R.H. et al., 2002. Prevention of Overt Hypoglycemia During Exercise: Stimulation of Endogenous Glucose Production Independent of Hepatic Catecholamine Action and Changes in Pancreatic Hormone Concentration. Diabetes, 51, pp.1310–1318. Honeal, K.P. et al., 2002. Synergistic Reduction in Low-Density Lipoprotein Cholesterol With Combined HMG-CoA Reductase Inhibitor and Aerobic Exercise Therapy in Obese, Hypercholesterolemic Males. Medicine & Science in Sports & Exercise, 34, p.S50. Koyama, Y. et al., 2002. Prior exercise and the response to insulin-induced hypoglycemia in the dog. American Journal of Physiology-Endocrinology And Metabolism, 282, pp.E1128–E1138. Simonsen, L. et al., 2002. The effect of insulin and glucagon on splanchnic oxygen consumption. Liver, 22, pp.459–466. Coker, R.H. et al., 2001. Stimulation of splanchnic glucose production during exercise in humans contains a glucagon-independent component. American Journal of Physiology-Endocrinology And Metabolism, 280, pp.E918–E927. Crommett, A. et al., 2001. Excess post-exercise oxygen consumption following acute aerobic and resistance exercise in lean and obese women. Medicine & Science in Sports & Exercise, 33, p.S15. Koyama, Y. et al., 2001. Role of carotid bodies in control of the neuroendocrine response to exercise. American Journal of Physiology-Endocrinology And Metabolism, 281, pp.E742–E748. Coker, R.H. et al., 2000. Hepatic $\alpha$-and $\beta$-adrenergic receptors are not essential for the increase in Ra during exercise in diabetes. American Journal of Physiology-Endocrinology And Metabolism, 278, pp.E444–E451. Koyama, Y. et al., 2000. Evidence that carotid bodies play an important role in glucoregulation in vivo. Diabetes, 49, pp.1434–1442.
These are, without a doubt, the 21 worst gift ideas ever. They're so bad that you probably should stop reading right now. Just go away. We have to do this sort of thing. Because we're All Gifts Considered, we have to consider not only the best gift ideas, but also the worst. A worst-of option in the form of a call is a call on the worst-performing stock or asset in a basket or index. It has a lower payoff potential vis-à-vis a call on an identical basket/index. There are of course many variations of the above methods, depending, for instance, on the choice of copulas used in model 3 above. Gaussian is the usual go-to choice, but you could pick any other dependence structure; one which accounts for the fact that correlations explode when the market crashes seems like a better option. 1/28/2014 · This example prices an equity basket option, but it is also an implementation of a multi-factor option interface. In the future, we might want to create implementations for other types of multi-factor options, such as best-of or worst-of basket options. What are the best and worst meals for diabetes? Before one bite of burrito, you can get 98 grams of carbs and 810 calories in a basket of chips and salsa. If you're trying to slim down and eat less sodium, like many people with diabetes, the burrito adds 950 calories. What are some of the best and worst airline meals ever? 10 Best & Worst Sites to Compare Life Insurance Quotes. by Wendy Connick August 2, 2018 12:09 pm. Clicking on one of those buttons let me change the parameters and produced a new basket of quotes immediately. The page also offered me the option to speak with a licensed insurance agent. The implied volatility surface and the option Greeks: to what extent is the information contained in their daily movements the same? How to estimate the Greeks with a Monte Carlo simulation?
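To make the worst-of definition concrete, here is a minimal Monte Carlo sketch for a worst-of call on two correlated geometric-Brownian-motion assets. The code and parameter names are my own illustration, using a plain Gaussian dependence rather than any of the fancier copulas mentioned above:

```python
import math
import random

def worst_of_call_mc(s0, k, r, sigma, rho, t, n_paths, seed=0):
    """Monte Carlo price of a call on the worse of two correlated GBM assets."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        s1 = s0 * math.exp(drift + vol * z1)
        s2 = s0 * math.exp(drift + vol * z2)
        total += max(min(s1, s2) - k, 0.0)  # payoff on the worst performer
    return math.exp(-r * t) * total / n_paths
```

With $\rho = 1$ the two assets coincide and the price collapses to that of a vanilla call on one asset; lowering $\rho$ can only lower the price, since $\min(S_1,S_2)\le S_1$ path by path, which is exactly the "lower payoff potential" noted above.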
The best option for your home's drinking water is to filter at the point of use with an NSF certified water filter. This addresses all of the chemicals found in well water or an urban water supply, along with any lead that might leach into the water if you have old plumbing. With Aldi hitting a growth spurt in the US market, you're more likely than ever to find yourself heading to this European favorite for your shopping. Not everything there is a great buy, though, so let's talk about some of the best and worst things you can find at Aldi. The "best of the worst" is a tie between these two. Imagine putting five Jimmy Dean Fully Cooked Original Pork Sausage Links between a big ol' slab of greasy bread, and holding the whole shebang. Basket located, you probably start making your plan of attack as soon as you take inventory of your haul. Eat the worst offenders first, so you can focus on the good stuff. From worst to best, here's how that candy would get eaten. Like it or not, this is one instance where sharing is probably your best option.
Abstract: An aggregate signature scheme is a digital signature that supports aggregation: Given $n$ signatures on $n$ distinct messages from $n$ distinct users, it is possible to aggregate all these signatures into a single short signature. This single signature (and the $n$ original messages) will convince the verifier that the $n$ users did indeed sign the $n$ original messages (i.e., user $i$ signed message $M_i$ for $i=1,\ldots,n$). In this paper we introduce the concept of an aggregate signature scheme, present security models for such signatures, and give several applications for aggregate signatures. We construct an efficient aggregate signature from a recent short signature scheme based on bilinear maps due to Boneh, Lynn, and Shacham. Aggregate signatures are useful for reducing the size of certificate chains (by aggregating all signatures in the chain) and for reducing message size in secure routing protocols such as SBGP. We also show that aggregate signatures give rise to verifiably encrypted signatures. Such signatures enable the verifier to test that a given ciphertext $C$ is the encryption of a signature on a given message $M$. Verifiably encrypted signatures are used in contract-signing protocols. Finally, we show that similar ideas can be used to extend the short signature scheme to give simple ring signatures.
Abstract: In the present paper we consider the problem of Laplace deconvolution with noisy discrete non-equally spaced observations on a finite time interval. We propose a new method for Laplace deconvolution which is based on expansions of the convolution kernel, the unknown function and the observed signal over a Laguerre function basis (which acts as a surrogate eigenfunction basis of the Laplace convolution operator) using a regression setting. The expansion results in a small system of linear equations with the matrix of the system being triangular and Toeplitz. Due to this triangular structure, there is a common number $m$ of terms in the function expansions to control, which is realized via a complexity penalty. The advantage of this methodology is that it leads to very fast computations, produces no boundary effects due to extension at zero and cut-off at $T$, and provides an estimator with the risk within a logarithmic factor of the oracle risk. We emphasize that, in the present paper, we consider the true observational model with possibly nonequispaced observations which are available on a finite interval of length $T$, a setting which appears in many different contexts, and account for the bias associated with this model (which is not present when $T\rightarrow\infty$). The study is motivated by perfusion imaging using a short injection of contrast agent, a procedure which is applied for medical assessment of micro-circulation within tissues such as cancerous tumors. Presence of a tuning parameter $a$ allows one to choose the most advantageous time units, so that both the kernel and the unknown right-hand side of the equation are well represented for the deconvolution. The methodology is illustrated by an extensive simulation study and a real data example which confirms that the proposed technique is fast, efficient, accurate, usable from a practical point of view and very competitive.
How can I prove that the XOR problem in dimension $d$ is not linearly separable? How should I relate the even-$d$ and odd-$d$ cases? The equations at the unit vectors force $w_i > 0$ for each $i$. Now let's take the last equation (at the all-ones point, where $XOR_d = 0$ for even $d$): it forces the weights to sum to a non-positive value, which cannot hold when $w_i > 0$ for each $i$. In the odd-$d$ case, we have to consider all of the equations with one zero coordinate ($d$ equations), and we arrive at the same contradiction. But I'm not sure this is the good-practice way. Your argument for 2 dimensions works even for $d>2$: you can't have such $w_1,\ldots,w_d$. If $XOR_d(x_1,x_2,\ldots,x_d)$ is the (linear) XOR function on $d$ variables (defined as 1 iff an odd number of the inputs are 1), then define the two functions $$ f(x_1)=XOR_d(x_1,0,0,\ldots,0)=w_1x_1+a,\\ g(x_1)=XOR_d(x_1,0,0,\ldots,0,1)=w_1x_1+b. $$ But $f$ is (strictly) increasing and $g$ is (strictly) decreasing, which is impossible. Not the answer you're looking for? Browse other questions tagged systems-of-equations neural-networks theorem-provers or ask your own question. How to find the input for this optimization problem? How to solve simultaneous logical equations? Can neural networks figure out some unknown transform? How to translate this to algebraic language?
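One machine-checkable route (my own sketch, not from the thread): for every $d \ge 2$, the centroid of the points where $XOR_d = 1$ coincides with the centroid of the points where $XOR_d = 0$. A strictly separating hyperplane would have to place this common centroid strictly on both of its sides at once, which is impossible. The check below verifies the coinciding centroids by brute force:

```python
from itertools import product
from fractions import Fraction

def centroids(d):
    """Return (centroid of XOR=1 points, centroid of XOR=0 points) in {0,1}^d."""
    pos, neg = [], []
    for x in product((0, 1), repeat=d):
        # XOR_d(x) = 1 iff an odd number of coordinates are 1
        (pos if sum(x) % 2 == 1 else neg).append(x)
    def mean(points):
        n = len(points)
        return tuple(Fraction(sum(p[i] for p in points), n) for i in range(d))
    return mean(pos), mean(neg)

# both classes have centroid (1/2, ..., 1/2) for every d >= 2
for d in range(2, 7):
    c1, c0 = centroids(d)
    assert c1 == c0 == (Fraction(1, 2),) * d
```

If $w \cdot x + b > 0$ on all positive points and $< 0$ on all negative points, averaging each class gives $w \cdot c_1 + b > 0$ and $w \cdot c_0 + b < 0$ with $c_1 = c_0$ — a contradiction, for every $d \ge 2$ at once.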
You have a stick of length $x$ and you want to divide it into $n$ sticks, with given lengths, whose total length is $x$. On each turn you can take any stick and divide it into two sticks. The cost of such an operation is the length of the original stick. What is the minimum cost needed to create the sticks? The first input line has two integers $x$ and $n$: the length of the stick and the number of sticks in the division. The second line has $n$ integers $d_1,d_2,\ldots,d_n$: the length of each stick in the division. Print one integer: the minimum cost of the division. Explanation: You first divide the stick of length $8$ into sticks of length $3$ and $5$ (cost $8$). After this, you divide the stick of length $5$ into sticks of length $2$ and $3$ (cost $5$). The total cost is $8+5=13$.
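Read backwards, each division is a merge of the two pieces it produced, at the same cost; so the minimum division cost equals the minimum total cost of merging the $n$ target sticks back into one stick, which the classic greedy — always merge the two shortest, as in Huffman coding — achieves. A sketch of that reduction (my own, assuming the $d_i$ sum to $x$ as the problem guarantees):

```python
import heapq

def min_division_cost(lengths):
    """Minimum cost to split a stick of length sum(lengths) into the given
    pieces; equivalent to optimally merging the pieces back (Huffman-style)."""
    heap = list(lengths)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b          # merging two pieces costs their combined length
        heapq.heappush(heap, a + b)
    return total

print(min_division_cost([3, 2, 3]))  # the worked example above: 13
```

On the example, the greedy merges $2+3=5$ (cost $5$) and then $5+3=8$ (cost $8$), matching the stated optimum $13$.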
Can someone please clarify this? What do they mean by direction? If I have a one-tailed distribution with the mean close to 0 and no negative values, then what exactly would "less" do and "greater" do? Here's an example of my frequency distribution. The x-axis is the number of samples and the y-axis is the number of components of interest, and each sample is an image. Fisher Exact Tests consider all possible permutations of $2 \times 2$ contingency tables. Each table permutation can be ranked in terms of its degree of association by considering the odds ratio. An odds ratio which is greater than 1 shows a positive association between an exposure and an outcome, whereas an OR less than 1 shows a negative association. The null hypothesis of a two-sided Fisher's Exact Test is that the odds ratio of outcome and exposure is 1, whereas the alternative is that it is not 1. A one-sided test instead takes as its alternative that the odds ratio is less than 1 ("less") or greater than 1 ("greater"). Not the answer you're looking for? Browse other questions tagged distributions statistical-significance mathematical-statistics python or ask your own question. Is it justified to use a one-tailed t-test if my hypothesis is one-sided? How to make a two-tailed hypergeometric test? Two-tailed tests… I'm just not convinced. What's the point?
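To make "less" and "greater" concrete (a sketch of my own; `scipy.stats.fisher_exact` exposes the same choice through its `alternative` argument), each one-sided p-value is just a hypergeometric tail sum over the tables sharing the observed margins:

```python
from math import comb
from fractions import Fraction

def fisher_one_sided(table, alternative="greater"):
    """One-sided Fisher exact p-value for a 2x2 table [[a, b], [c, d]],
    computed exactly as a hypergeometric tail sum."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def pmf(k):  # P(top-left cell = k) with all margins fixed
        return Fraction(comb(col1, k) * comb(n - col1, row1 - k), comb(n, row1))
    kmin, kmax = max(0, row1 + col1 - n), min(row1, col1)
    # "greater": tables at least as positively associated as the observed one;
    # "less": at least as negatively associated.
    ks = range(a, kmax + 1) if alternative == "greater" else range(kmin, a + 1)
    return sum(pmf(k) for k in ks)

# positive association in [[3,1],[1,3]]: tail sum (16 + 1)/70
assert fisher_one_sided([[3, 1], [1, 3]], "greater") == Fraction(17, 70)
```

Note that the two one-sided p-values overlap at the observed table, so they sum to $1 + P(\text{observed})$ rather than to 1.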
Abstract : In this paper we define a new type of cohomology for multiplicative Hom-associative algebras, which generalizes Hom-type Hochschild cohomology and fits with deformations of Hom-associative algebras, including the deformation of the structure map alpha. Moreover, we provide various observations and similarly define a new type of cohomology for Hom-bialgebras, extending the Gerstenhaber-Schack cohomology for Hom-bialgebras and fitting with formal deformations, including deformations of the structure map. Keywords : Cohomology, Hom-associative algebra, deformation, Hom-bialgebra, $L_\infty$-structure.
Maintain contact with the person talking, either visually or with focused listening. Practice pure listening. Remove all distractions and minimise internal and external filters. Refrain from multi-tasking. Be sensitive to what is not being said. Listen for voice tone and observe body language for incongruent messages. Practice patience. Do not interrupt, finish the speaker's sentence, or change the subject. Listen empathetically and listen to understand. Try to see things from his or her perspective. Act as if there will be a quiz at the end. Clarify any uncertainties after he or she has spoken. Make sure you understood what was said by rephrasing what you heard. Don't jump to conclusions or make assumptions. Keep an open and accepting attitude. Turn off your mind and "be with" the speaker. How do you think that … will affect your business? That word has many definitions; which way do you mean it? 1. If gradient descent is working correctly, $J(\theta)$ should decrease after each iteration. 2. If $\alpha$ is too small, we will have slow convergence. 3. If $\alpha$ is too large, $J(\theta)$ may not decrease on every iteration and may not converge.
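These three behaviours of the learning rate are easy to see on a toy objective. The sketch below (mine, not from the original notes) runs gradient descent on $J(\theta)=\theta^2$ with a too-small, a reasonable, and an overly large $\alpha$:

```python
def gd_costs(alpha, theta0=1.0, steps=25):
    """Gradient descent on J(theta) = theta^2 (gradient 2*theta);
    returns the cost J after each iteration."""
    theta, costs = theta0, []
    for _ in range(steps):
        theta -= alpha * 2 * theta   # the update rule theta := theta - alpha * dJ/dtheta
        costs.append(theta ** 2)
    return costs

slow = gd_costs(0.001)      # 2. alpha too small: J decreases, but very slowly
good = gd_costs(0.1)        # 1. working correctly: J decreases each iteration
diverging = gd_costs(1.1)   # 3. alpha too large: J blows up instead of converging
```

With $\alpha = 0.1$ each step multiplies $\theta$ by $0.8$, so the cost shrinks monotonically; with $\alpha = 1.1$ the multiplier is $-1.2$, so the iterates overshoot and diverge.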
Abstract : We consider the structure of aperiodic points in $\mathbb Z^2$-subshifts, and in particular the positions at which they fail to be periodic. We prove that if a $\mathbb Z^2$-subshift contains points whose smallest period is arbitrarily large, then it contains an aperiodic point. This lets us characterise the computational difficulty of deciding if a $\mathbb Z^2$-subshift of finite type contains an aperiodic point. Another consequence is that $\mathbb Z^2$-subshifts with no aperiodic point have a very strong dynamical structure and are almost topologically conjugate to some $\mathbb Z$-subshift. Finally, we use this result to characterize sets of possible slopes of periodicity for $\mathbb Z^3$-subshifts of finite type.
It is known that the cylinder $X:=S^1\times \mathbb R$ is a complex manifold. Does $X$ admit non-constant analytic functions? I.e., if $f:X\to \mathbb C$ is analytic, is $f$ necessarily constant? Browse other questions tagged complex-analysis riemann-surfaces or ask your own question. All analytic functions are constant?
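One standard way to see that non-constant analytic functions do exist (sketched here under the usual complex structure on the cylinder, i.e. $X \cong \mathbb{C}/\mathbb{Z}$):

```latex
% The exponential map descends to a biholomorphism from the cylinder
% onto the punctured plane:
\[
  \Phi : \mathbb{C}/\mathbb{Z} \xrightarrow{\ \sim\ } \mathbb{C}^{*},
  \qquad
  \Phi(z + \mathbb{Z}) = e^{2\pi i z},
\]
% so Phi itself, viewed as a map X -> C, is a non-constant analytic
% function on X; more generally any non-constant holomorphic function
% on C^* pulls back to one on X.
\[
  f := \Phi : X \longrightarrow \mathbb{C}
  \quad\text{is analytic and non-constant.}
\]
% (Liouville-type rigidity fails here because C/Z, unlike the compact
% tori C/Lambda for a lattice Lambda, is non-compact.)
```

So the answer is no: $f$ need not be constant, in contrast with compact Riemann surfaces.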
We need to mix 12 gallons of the 15% salt solution. Let X represent the amount of the 15% salt solution that is to be mixed. The second solution is 8 gallons. That means that the final solution will be 8 + X gallons. We use the following guideline to solve this problem: Amount of salt in the first solution + Amount of salt in the second solution = Amount of salt in final solution. 15% $\times$ X + 20% $\times$ 8 = 17% $\times$ (8 + X) $.15 \times X + .2 \times 8 = .17 \times (8+X)$ 0.15X + 1.6 = 1.36 + 0.17X 0.24 = 0.02X Divide both sides by 0.02 X = 12 We need to mix 12 gallons of the 15% salt solution.
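The arithmetic can be double-checked mechanically; this little sketch (mine) re-derives $X$ directly from the salt-balance equation $0.15X + 0.2 \cdot 8 = 0.17(8+X)$:

```python
def solve_mixture(c1=0.15, c2=0.20, v2=8.0, c_final=0.17):
    """Solve c1*X + c2*v2 = c_final*(v2 + X) for X,
    the volume of the first (15%) solution."""
    return (c_final - c2) * v2 / (c1 - c_final)

x = solve_mixture()
concentration = (0.15 * x + 0.20 * 8) / (x + 8)
# x comes out to 12 gallons, giving a 17% final concentration
```

The check confirms the worked solution: $0.15 \cdot 12 + 1.6 = 3.4$ gallons of salt in $20$ gallons, i.e. exactly $17\%$.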
This will be an article about the mathematics of algorithms. An algorithm is a set of rules and instructions used to solve a real-life problem. Often this algorithm will then be run on a computer. One of the focal points of the recent Sydney 2000 Olympic Games was the medal tally. We all want to know which country 'won' the Olympics or which country has the best athletes. In the previous issue of Parabola we saw how to encode (and also how to break!) monoalphabetic ciphers (i.e. we replace each letter of the alphabet by some other letter every time it occurs in the message). We now look at some more complex codes. Q1082. There are $25$ people sitting around a table and they are playing a game with a deck of $50$ cards. Each of the numbers $1, 2,\ldots , 25$ is written on two of the cards. Q1072. Is it possible to fill the empty circles in the diagram below with the integers $0, 1, \ldots, 9$ so that the sum of the numbers at the vertices of each shaded triangle is the same?
What are preimage resistance and collision resistance, and how can the lack thereof be exploited? What is "preimage resistance", and how can the lack thereof be exploited? How is this different from collision resistance, and are there any known preimage attacks that would be considered feasible? Preimage resistance: for a given $h$ in the output space of the hash function, it is hard to find any message $x$ with $H(x) = h$. A function with this property is also called a one-way function (though this term is also used for non-hash functions with this property, like asymmetric encryption primitives). Second preimage resistance: for a given message $x_1$ it is hard to find a second message $x_2 \neq x_1$ with $H(x_1) = H(x_2)$. Collision resistance: it is hard to find a pair of messages $x_1 \neq x_2$ with $H(x_1) = H(x_2)$. Of course, from a (second) preimage attack we also get a collision attack. The other direction doesn't work as easily, though some collision attacks on broken hash functions seem to be extensible to be almost as useful as second preimage attacks (i.e. we find collisions where most parts of the message can be arbitrarily fixed by the attacker). Also, there is the generic birthday attack on hash functions (which even works for random oracles), which needs $O(\sqrt N)$ tries to have a good probability of hitting a duplicate, where $N$ is the size of the output space of the function, whereas a similar generic brute-force attack on the second-preimage and preimage resistance needs about $O(N)$ queries (which is not feasible for the output sizes of all hash functions used in practice). I know of no practical preimage attack for the usual hash functions (please add them in other answers, or as comments), while for example for MD5 the collision resistance is essentially totally broken, and SHA-1 is starting to show cracks (in 2017, a research team from Google produced two colliding documents). How can one exploit the lack? This strongly depends on the use of the hash function in a higher-level protocol (or algorithm built on top of it).
In some protocols only the other properties are used directly, but as said, missing preimage resistance always also leads to missing second-preimage and collision resistance. For example, in signature schemes we usually hash the message first, and a (second) preimage attack allows one to create a second message with the same hash as the first one, i.e. one which the same signature fits. A collision attack is the ability to find two inputs that produce the same result, but that result is not known ahead of time. In a typical case (e.g., the attack on MD5) only a relatively small number of specific inputs are known to produce collisions. Collision resistance obviously means that a collision attack is difficult (for some definition of "difficult" varying from "no attack better than brute force is known" to "the known attacks aren't really feasible"). A preimage attack gives the ability to create an input that produces a specified result. A feasible preimage attack basically means that (as a cryptographic hash) an algorithm is almost completely broken. Essentially the only attack that [edit: might] break it more completely is a second preimage attack. Either, however, basically means that what you have isn't a cryptographic hash function at all any more -- the whole point of a cryptographic hash is that it's a one-way function, but either sort of preimage attack means it's now a two-way function. A collision attack can be used in a relatively small number of specific scenarios (e.g., signed certificates) but isn't nearly as comprehensive as a preimage or second preimage attack. I don't know of a feasible preimage attack on any of the currently-popular hashes (e.g., SHA-1, SHA-256). Then again, if there was a preimage attack that was even remotely feasible, they would (or certainly should, anyway) lose popularity extremely quickly. For comparison, MD5 is generally considered obsolete due to a feasible collision attack, but not any preimage attack at all.
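The $O(\sqrt N)$ birthday bound is easy to demonstrate on a deliberately weakened hash. The sketch below (my own; it truncates SHA-256 to 2 bytes, so $N = 2^{16}$) finds a collision after on the order of $\sqrt N \approx 2^8$ attempts, while a brute-force preimage search on the same truncation would need on the order of $N$ attempts:

```python
import hashlib

def weak_hash(data: bytes, nbytes: int = 2) -> bytes:
    """SHA-256 truncated to nbytes -- weakened on purpose, for the demo."""
    return hashlib.sha256(data).digest()[:nbytes]

def birthday_collision(nbytes: int = 2):
    """Hash distinct messages until two of them collide."""
    seen = {}
    i = 0
    while True:
        msg = b"msg-%d" % i
        h = weak_hash(msg, nbytes)
        if h in seen:
            return seen[h], msg, i + 1  # two distinct colliding messages
        seen[h] = msg
        i += 1

m1, m2, tries = birthday_collision()
assert m1 != m2 and weak_hash(m1) == weak_hash(m2)
```

For a full-width 256-bit output, the same duplicate-hunting strategy would need around $2^{128}$ tries, which is why truncation is what makes this demo run in milliseconds.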
Even though it's not really feasible, the collision attack that's known on SHA-1 is still enough that most people would advise against continuing to use it (2017 edit: and with good reason -- the collision attack on SHA-1 is now entirely feasible). There are preimage attacks against a number of older hash functions such as SNEFRU (e.g., there's a second preimage attack on three-pass SNEFRU with a complexity of $2^{33}$ operations, which means that, for example, reading the original message in from disk probably takes longer than computing the second preimage). Attacks that good are fairly unusual though: once reasonably feasible attacks on a particular algorithm become known, most people (users and researchers alike) tend to move on to bigger and better things, so to speak. Often the hash (iterated and salted, mostly) of a password is saved in a database, instead of the password. If a user logs in, the hash is computed and compared against the stored hash value. This way a user that can see the database of hashes does not see the password directly, but this property depends crucially on the hash being resistant to a preimage attack, as described by other replies. With an ideal hash, no information about the password leaks, except that an attacker can then try passwords himself in an offline way or consult tables of precomputed hashes; to make attacks like that harder, the salting and iterating steps are done. Also, hashes can be used in coin-flipping protocols: suppose players 1 and 2 want to flip a coin without being in the same place, and assume they have some secure communication channel. One way to do it is that player 1 picks a large number $n$ (from some prearranged fixed but large range) and sends player 2 the hash of it. Player 2 then guesses whether $n$ is odd or even, and if he's right he wins the coin flip. To verify correctness, player 1 sends him the number $n$ and player 2 can verify that the hash of it matches the hash he received earlier.
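This coin-flipping protocol is a hash commitment. A minimal sketch (mine; note the random salt is an addition to the protocol as described — without it, player 2 could simply hash every $n$ in the prearranged range, since hiding a small input needs more than preimage resistance):

```python
import hashlib
import secrets

def commit(n: int):
    """Player 1 commits to n; the random salt keeps a small or guessable n
    from being brute-forced out of the commitment."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + str(n).encode()).hexdigest()
    return digest, salt  # send digest now, reveal (salt, n) later

def verify(digest: str, salt: bytes, n: int) -> bool:
    """Player 2 checks the reveal against the earlier commitment."""
    return hashlib.sha256(salt + str(n).encode()).hexdigest() == digest

digest, salt = commit(1234)
assert verify(digest, salt, 1234)       # honest reveal passes
assert not verify(digest, salt, 1235)   # changing n after the fact fails
```

Preimage resistance is what keeps player 2 from learning $n$ before guessing; collision resistance is what keeps player 1 from preparing two openings with different parities.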
Now, if the hash were not preimage resistant, player 2 could maybe compute $n$ from the hash and know instead of guess. And if the hash were not collision resistant, player 1 could (maybe) find an odd $n$ and an even $m$ in the right range with the same hash, so that he can produce either number at the reveal stage, depending on what the other chose; so player 1 could cheat. As mentioned by Paulo, second preimage resistance is important when you consider digital signatures, for example. So it will depend on the application of the hash which properties it needs to satisfy, but because hashes are so versatile, we want a standard one to have as many of them as possible. For example, because they are used in cryptographic randomness generators as well, the output should look as random as possible, with output ranges as close to uniformly distributed as possible. Not the answer you're looking for? Browse other questions tagged hash collision-resistance security-definition preimage-resistance or ask your own question. What makes a hash function good for password hashing? Would it be possible to generate the original data from a SHA-512 checksum? What gives SHA-256 its preimage resistance? Is there a feasible preimage attack for any hash function (no matter how deprecated) today? What is the probability of the first half of a hash output being identical to the first half of another hash output? Why doesn't preimage resistance imply second preimage resistance? Are there attacks that break collision resistance but not preimage resistance? Does collision resistance imply (or not) second-preimage resistance? The difference between being not strongly collision resistant, and not weakly collision resistant? What is the complexity to break second preimage resistance? Is there a way one can combine two correlated hash outputs to maximize the collision resistance?
Abstract: NP-complete problems abound in every aspect of our daily lives. One approach is to simply deploy heuristics, but for many of these we do not have any idea as to when the heuristic is effective and when it is not. Approximation algorithms have played a major role in the last three decades in developing a foundation for a better understanding of optimization techniques - greedy algorithms and algorithms based on Linear Programming (LP) relaxations have paved the way for the design of (in some cases) optimal heuristics. Are these the best ones to use in "typical" instances? Maybe, maybe not. In this talk we will focus on two specific areas - one is in the use of greedy algorithms for a basic graph problem called connected dominating set, and the other is in the development of LP based algorithms for a basic scheduling problem in the context of data center scheduling. Berwaldian spaces, and use it in the investigation of essentially conformally Berwaldian manifolds; conformal and metric invariants of Finsler manifolds. Partially joint with M. Troyanov (EPF Lausanne) and Yu. Nikolayevsky (Melbourne). Abstract: In Linear Algebra 101, we encounter two important features of the group of invertible matrices: the Gauss elimination method, or the LU decomposition of almost all matrices, which is an important special case of the Bruhat decomposition; and the Jordan normal form, which gives a classification of the conjugacy classes of invertible matrices. The study of the interaction between the Bruhat decomposition and the conjugation action is an important and very active area. In this talk, we focus on the affine Deligne-Lusztig variety, which describes the interaction between the Bruhat decomposition and the Frobenius-twisted conjugation action of loop groups. The affine Deligne-Lusztig variety was introduced by Rapoport around 20 years ago and it has found many applications in arithmetic geometry and number theory.
In this talk, we will discuss some recent progress on the study of affine Deligne-Lusztig varieties, and some applications to Shimura varieties. Abstract: One of the deepest problems in ecology is in understanding how so many species coexist, competing for a limited number of resources. This motivated much of Darwin's thinking, and has remained a theme explored by such key thinkers as Hutchinson ("The paradox of the plankton"), MacArthur, May and others. A key to coexistence, is in the development of spatial and spatio-temporal patterns, and in the coevolution of life-history patterns that both generate and exploit spatio-temporal heterogeneity. Here, general theories of pattern formation, which have been prevalent not only in ecology but also throughout science, play a fundamental role in generating understanding. The interaction between diffusive instabilities, multiple stable basins of attraction, critical transitions, stochasticity and far-from-equilibrium phenomena creates a broad panoply of mechanisms that can contribute to coexistence, as well as a rich set of mathematical questions and phenomena. This lecture will cover as much of this as time allows. Abstract: Our lecturers Hilaf Hasson, Kendall Williams and Allan Yashinski will be hosting the panel on the realities of teaching. The target audience first includes Math TAs but we are hoping to attract many in the department. Light refreshments to follow in room 3201. Ergodic properties of parabolic systems. Abstract: Parabolic dynamical systems are systems of intermediate (polynomial) orbit growth. Most important classes of parabolic systems are: unipotent flows on homogeneous spaces and their smooth time changes, smooth flows on compact surfaces, translation flows and IET's (interval exchange transformations). Since the entropy of parabolic systems is zero, other properties describing chaoticity are crucial: mixing, higher order mixing, decay of correlations. 
One of the most important tools in parabolic dynamics is the Ratner property (on parabolic divergence), introduced by M. Ratner in the class of horocycle flows. This property was crucial in proving Ratner's famous rigidity theorems in the above class. We will introduce generalisations of Ratner's property for other parabolic systems and discuss its consequences for chaotic properties. In particular this allows one to approach the Rokhlin problem in the class of smooth flows on surfaces and in the class of smooth time changes of Heisenberg nilflows. Abstract: Sarnak's Mobius disjointness conjecture speculates that the Mobius sequence is disjoint to all topological dynamical systems of zero topological entropy. We will survey the recent developments in this area, and discuss several special classes of dynamical systems of controlled complexity that satisfy this conjecture. Part of the talk is based on joint works with Wen Huang, Xiangdong Ye, and Guohua Zhang. No background knowledge in either dynamical systems or number theory will be assumed. Abstract: In this talk, I will discuss a long-standing open problem in the dimension theory of dynamical systems, namely whether every expanding repeller has an ergodic invariant measure of full Hausdorff dimension, as well as my recent result showing that the answer is negative. The counterexample is a self-affine sponge in $\mathbb R^3$ coming from an affine iterated function system whose coordinate subspace projections satisfy the strong separation condition. Its dynamical dimension, i.e. the supremum of the Hausdorff dimensions of its invariant measures, is strictly less than its Hausdorff dimension. More generally we compute the Hausdorff and dynamical dimensions of a large class of self-affine sponges, a problem that previous techniques could only solve in two dimensions.
The Hausdorff and dynamical dimensions depend continuously on the iterated function system defining the sponge, which implies that sponges with a dimension gap represent a nonempty open subset of the parameter space. This work is joint with Tushar Das (Wisconsin -- La Crosse). Abstract: In this presentation I will highlight the great potential offered by the interplay between data science and computational science to efficiently solve real-life large-scale problems. The leading application that I will address is the numerical simulation of the heart function. The motivation behind this interest is that cardiovascular diseases unfortunately represent one of the leading causes of death in Western countries. Mathematical models based on first principles allow the description of the blood motion in the human circulatory system, as well as the interaction between electrical, mechanical and fluid-dynamical processes occurring in the heart. This is a classical environment where multi-physics processes have to be addressed. Appropriate numerical strategies can be devised to allow for an effective description of the fluid in large and medium size arteries, the analysis of physiological and pathological conditions, and the simulation, control and shape optimization of assisted devices or surgical prostheses. This presentation will address some of these issues and a few representative applications of clinical interest. Abstract: We will present some recent mathematical contributions related to nonperiodic homogenization problems. The difficulty stems from the fact that the medium is not assumed periodic, but has a structure with a set of embedded localized defects, or more generally a structure that, although not periodic, enjoys nice geometrical features. The purpose is then to construct a theoretical setting providing an efficient and accurate approximation of the solution.
The questions raised ranged from the theory of elliptic PDEs, homogenization theory to harmonic analysis and singular operators. under investigation. One example deals with the spectral analysis of the Laplacian on the famous basilica Julia set, the Julia set of the polynomial z^2-1. This is a joint work with Luke Rogers and several students at UConn. The other example deals with spectral, stochastic, functional analysis for the canonical diffusion on the pattern spaces of an aperiodic Delone set. This is a joint work with Patricia Alonso-Ruiz, Michael Hinz and Rodrigo Trevino. Abstract: We will describe a certain stability for the centers of the group algebras of the symmetric groups S_n for varying n, and its geometric counterpart. (To experts: this is not about Schubert calculus). We shall then explain the generalization of this stability phenomenon for wreath products and for Hecke algebras. This talk should be accessible to graduate students. Alpha invariants and birational geometry. Kahler-Einstein metrics on Fano manifolds. invariants in (global and local) birational geometry. Abstract: Fiber-reinforced structures arise in many engineering and biological applications. Examples include space inflatable habitats, vascular stents supporting compliant vascular walls, and aortic valve leaflets. In all these examples a metallic mesh, or a collection of fibers, is used to support an elastic structure, and the resulting composite structure has novel mechanical characteristics preferred over the characteristics of each individual component. These structures interact with the surrounding deformable medium, e.g., blood flow or air flow, or another elastic structure, constituting a fluid-structure interaction (FSI) problem. Modeling and computer simulation of this class of FSI problems is important for manufacturing and design of novel materials, space habitats, and novel medical constructs. 
Mathematically, these problems give rise to a class of highly nonlinear, moving-boundary problems for systems of partial differential equations of mixed type. To date, there is no general existence theory for solutions of this class of problems, and numerical methodology relies mostly on monolithic/implicit schemes, which suffer from bad condition numbers associated with the fluid and structure sub-problems. In this talk we present a unified mathematical framework to study existence of weak solutions to FSI problems involving incompressible, viscous fluids and elastic structures. The mathematical framework provides a constructive existence proof, and a partitioned, loosely coupled scheme for the numerical solution of this class of FSI problems. The constructive existence proof is based on time-discretization via operator splitting, and on our recent extension of the Aubin-Lions-Simon compactness lemma to problems on moving domains. The resulting numerical scheme has been applied to problems in cardiovascular medicine, showing excellent performance, and providing medically beneficial information. Examples of applications in coronary angioplasty and micro-swimmer biorobot design will be shown. Speaker: Paul McNicholas. Abstract: The application of mixture models for clustering has burgeoned into an important subfield of multivariate statistics and, in particular, classification. The framework for mixture model-based clustering is established and some historical context is provided. Then, some previous work is reviewed before some recent advances are presented. Previous work is discussed with some focus on technical detail. However, recent advances are presented with more focus on illustration via real data problems. The recent work discussed will include an approach for clustering Airbnb reviews as well as applications of mixtures of matrix variate distributions. Thomson's problem and have been studied for about 100 years.
Abstract: Positive Mass Theorem Revisited - We will introduce the positive mass theorem which is a problem originating in general relativity, and which turns out to be connected to important mathematical questions including the study of metrics of constant scalar curvature and the stability of minimal hypersurface singularities. We will then give a general description of our recent work with S. T. Yau on resolving the theorem on high dimensional non-spin manifolds. Abstract: Structure theorems play an important role in dynamics with Veech's structure theorem as an outstanding example. We will describe a structure theorem in a measure-theoretic context: namely for "stationary" group actions. These are actions where a measure on a group space is invariant "on the average" relative to a probability measure on the group. One application is to "multiple recurrence" for non-amenable group actions, and associated Ramsey type theorems. Abstract: The basic objects of algebraic number theory are number fields, and the basic invariant of a number field is its discriminant, which in some sense measures its arithmetic complexity. A basic finiteness result, proved by Hermite at the end of the 19th century, is that there are only finitely many degree-d number fields of discriminant at most X. It thus makes sense to put all the number fields in order of their discriminant, and ask if we can say how many you've encountered by the time you get to discriminant X. It turns out there's a way to synthesize the Narkiewicz conjecture and the Batyrev-Manin conjecture into a unified heuristic which includes both of those conjectures as special cases, and which says much more in general. This involves defining "the height of a rational point on an algebraic stack" and I will say as much about what this means as there's time to! Abstract: This talk will survey ideas surrounding a conjecture in number theory about the structure of class groups of number fields. 
Each number field has associated to it a finite abelian group, the class group, and as long ago as Gauss, deep questions arose about the distribution of class groups as the field varies over a family. Many of these questions remain unanswered. We will introduce one particular conjecture about p-torsion in class groups, and indicate how it is closely related to several other deep conjectures in number theory. Then we will present several contrasting ways we have recently made progress toward the p-torsion conjecture. to a bias correction, which can be evaluated via a Monte-Carlo method. Joint work with Mahmoud Torabi of the University of Manitoba, Canada. We are glad to announce the 7th Metro Area Differential Geometry Seminar (MADGUYS), organized jointly by Howard University, Johns Hopkins University and the University of Maryland. All are invited; there are no registration fees. Young mathematicians and students are especially encouraged to attend. Abstract: The nonlinear Schrödinger equation is a prototype model to describe propagation of waves in dispersive media. It arises in several modelling contexts, and noise appears naturally. It may represent the noise due to amplifiers or random dispersion in the fiber. In this talk I will present some aspects of well-posedness and the influence of noise on blow-up phenomena for the stochastic nonlinear Schrödinger equation.
This post discusses the equivalence between separating automata and universal graphs beyond parity games. The goal of this post is to present again the result of this paper, which is a joint work with Thomas Colcombet, but this time in more generality. The paper talks about parity games, and shows the equivalence between universal graphs, universal trees, and separating automata, as presented in this post. The result is actually much more general, as it applies to any half-positional winning condition. In particular this will be useful for studying mean payoff games; see this post. An important remark is that for parity games, a third notion comes into play: universal trees. This notion is very specific to the parity condition, and does not extend to other winning conditions. The other two notions, separating automata and universal graphs, are naturally extended in the following way. Why are these two notions interesting? Separating automata yield a family of algorithms for solving games by reduction to safety games. Given the success story of solving parity games using separating automata, it is natural to ask whether one can use separating automata for constructing efficient algorithms for other types of games. Universal graphs yield another family of algorithms for solving games by value iteration. Assumption: In any finite game, if Eve has a strategy ensuring $W$, then she has a positional strategy ensuring $W$. We fix a parameter $n$. The automata we consider are deterministic safety automata over infinite words on the alphabet $C$, where safety means that all states are accepting: a word is rejected if there exists no run for it. The size of an automaton is its number of states. In the following we say that a path in a graph is accepted or rejected by an automaton; this is an abuse of language since what the automaton reads is only the labels of the corresponding path. We say that a graph satisfies $W$ if all paths in the graph satisfy $W$.
Definition: An automaton is $(n, W)$-separating if the following two properties hold: (1) it accepts all paths of all graphs of size $n$ satisfying $W$; (2) the language it recognises is included in $W$. The case of separating parity automata is discussed in more detail in this post. The use of separating automata is to solve games with condition $W$ by reduction to safety games (in exactly the same way as was done for parity games). Lemma: Let $L$ be the language recognised by an $(n, W)$-separating automaton. Then for all games $G$ of size $n$, we have that Eve has a strategy ensuring $W$ if and only if she has a strategy ensuring $L$. Proof: Let us first assume that Eve has a strategy $\sigma$ ensuring $W$. It can be chosen positional thanks to our assumption on $W$. Then $G[\sigma]$, the graph obtained by restricting the game $G$ to the moves prescribed by $\sigma$, is a graph of size $n$ satisfying $W$, so by the first property the automaton accepts all paths in $G[\sigma]$. In other words, all paths consistent with $\sigma$ are in $L$, or equivalently $\sigma$ ensures $L$. Conversely, assume that Eve has a strategy $\sigma$ ensuring $L$. Since $L \subseteq W$ thanks to the second property in the definition of separating automata, the strategy $\sigma$ also ensures $W$. It follows that solving the game $W$ is equivalent to solving a safety game with $m \times |A|$ edges, where $m$ is the number of edges of $G$ and $|A|$ the number of states of the separating automaton. Since solving a safety game can be done in linear time in the number of edges, this gives an algorithm whose running time is linear in $m$ and $|A|$. Definition: A graph is $(n, W)$-universal if it satisfies $W$ and any graph of size $n$ satisfying $W$ can be mapped homomorphically into it. The case of universal parity graphs is discussed in more detail in this post. All the ideas of the proof are in the parity games case, as presented in this post. For the sake of completeness the full proof is given in this paper for the case of mean payoff games.
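The claim that safety games are solvable in time linear in the number of edges can be made concrete with the standard attractor computation: Eve wins from exactly the vertices outside Adam's attractor to the unsafe set. The sketch below is mine, not from the post, and the representation of games is an assumption:

```python
from collections import defaultdict

def eve_wins_safety(vertices, edges, owner, unsafe):
    """Vertices from which Eve can keep the play out of `unsafe` forever.

    owner[v] is 'Eve' or 'Adam'; edges is a list of pairs (u, v).
    Runs in time linear in the number of edges (attractor computation).
    Dead ends are treated as safe here, a simplifying assumption."""
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
    # counter[v]: successors of v not yet known to be losing for Eve
    counter = {v: len(succ[v]) for v in vertices}
    losing = set(unsafe)
    queue = list(unsafe)
    while queue:
        v = queue.pop()
        for u in pred[v]:
            if u in losing:
                continue
            if owner[u] == 'Adam':
                # Adam can move into the losing region: u is losing
                losing.add(u)
                queue.append(u)
            else:
                counter[u] -= 1
                if counter[u] == 0:  # all of Eve's moves lead to losing vertices
                    losing.add(u)
                    queue.append(u)
    return set(vertices) - losing
```

For instance, if Eve at vertex 1 can move either to an unsafe vertex 2 or to a safe self-looping vertex 3, she wins from 1 and 3; if her only move led to the unsafe vertex, she would win nowhere.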
In both cases, the key observation is that the set of vertices (or the set of states, depending on the direction) can be totally ordered.
We generalize the construction of linear codes via skew polynomial rings by using Galois rings instead of finite fields as coefficients. The resulting noncommutative rings are no longer left and right Euclidean. Codes that are principal ideals in quotient rings of skew polynomial rings by two-sided ideals are studied. As an application, skew constacyclic self-dual codes over $GR(4, 2)$ are constructed. Euclidean self-dual codes give self-dual $\mathbb Z_4$-codes. Hermitian self-dual codes yield 3-modular lattices and quasi-cyclic self-dual $\mathbb Z_4$-codes. Keywords: cyclic codes, self-dual codes, skew polynomial rings, modular lattices, $\mathbb Z_4$-codes. Mathematics Subject Classification: 94B60, 12H1.
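For context, the defining relation of a skew polynomial ring (a standard definition, not specific to this paper) is the twisted multiplication

```latex
R[x;\theta]=\Bigl\{\,a_0+a_1x+\dots+a_nx^n : a_i\in R\,\Bigr\},
\qquad x\,a=\theta(a)\,x\quad\text{for all } a\in R,
```

where $\theta$ is an automorphism of the coefficient ring $R$ (here a Galois ring such as $GR(4,2)$); when $\theta$ is not the identity, the ring is noncommutative.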
Your boss has hired you to drive a big truck, transporting items between two locations in a city. You're given a description of the city, with locations of interest and the lengths of roads between them. Your boss requires that you take a shortest path between the starting and ending location, and she'll check your odometer when you're done to make sure you didn't take any unnecessary side trips. However, your friends know you have plenty of unused space in the truck, and they have asked you to stop by several locations in town, to pick up items for them. You're happy to do this for them. You may not be able to visit every location to pick up everything your friends want, but you'd like to pick up as many items as possible on your trip, as long as it doesn't make the path any longer than necessary. The two graphs above show examples of what the city may look like, with nodes representing locations, edges representing roads and dots inside the nodes representing items your friends have asked you to pick up. Driving through a location allows you to pick up all the items there; it's a big truck, with no limit on the items it can carry. In the graph on the left, for example, you have to drive the big truck from location $1$ to location $6$. If you follow the path $1 \rightarrow 2 \rightarrow 3 \rightarrow 6$, the length is $9$, and you'll get to pick up $4$ items. Of course, it would be better to drive $1 \rightarrow 4 \rightarrow 5 \rightarrow 6$; that's still a length of $9$, but going this way instead lets you pick up an additional item. Driving $1 \rightarrow 4 \rightarrow 3 \rightarrow 6$ would let you pick up even more items, but it would make your trip longer, so you can't go this way. The first line of input contains an integer, $n$ ($2 \leq n \leq 100$), giving the number of locations in the city. Locations are numbered from $1$ to $n$, with location $1$ being the starting location and $n$ being the destination. 
The next input line gives a sequence of $n$ integers, $t_1 \ldots t_ n$, where each $t_ i$ indicates the number of items your friends have asked you to pick up from location $i$. All the $t_ i$ values are between $0$ and $100$, inclusive. The next input line contains a non-negative integer, $m$, giving the number of roads in the city. Each of the following $m$ lines is a description of a road, given as three integers, $a \: b \: d$. This indicates that there is a road of length $d$ between location $a$ and location $b$. The values of $a$ and $b$ are in the range $1 \ldots n$, and the value of $d$ is between $1$ and $100$, inclusive. All roads can be traversed in either direction, there is at most one road between any two locations, and no road starts and ends at the same location. If it's not possible to travel from location $1$ to location $n$, just output the word "impossible". Otherwise, output the length of a shortest path from location $1$ to location $n$, followed by the maximum number of items you can pick up along the way.
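The statement reduces to Dijkstra's algorithm with a lexicographic key: minimize distance first, then maximize items collected among equal-length paths. A sketch in Python (function and variable names are my own, not from any official solution):

```python
import heapq

def big_truck(n, items, roads):
    """items[i] = pickups at location i+1; roads = (a, b, d) triples."""
    adj = [[] for _ in range(n + 1)]
    for a, b, d in roads:
        adj[a].append((b, d))
        adj[b].append((a, d))
    INF = float('inf')
    # best[v] = (shortest distance to v, max items along such a path)
    best = [(INF, -1)] * (n + 1)
    best[1] = (0, items[0])
    # heap keyed on (distance, -items): shorter first, then more items
    pq = [(0, -items[0], 1)]
    while pq:
        d, neg_it, v = heapq.heappop(pq)
        it = -neg_it
        if (d, it) != best[v]:
            continue  # stale entry
        for w, cost in adj[v]:
            nd, nit = d + cost, it + items[w - 1]
            if (nd, -nit) < (best[w][0], -best[w][1]):
                best[w] = (nd, nit)
                heapq.heappush(pq, (nd, -nit, w))
    if best[n][0] == INF:
        return "impossible"
    return f"{best[n][0]} {best[n][1]}"
```

For example, with three locations, one item at each, and roads (1,2,1), (2,3,1), (1,3,2), both routes to location 3 have length 2, but going through location 2 collects all three items, so the answer is "2 3".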
Abstract: Let $k$ be a totally real number field and let $k_\infty$ be its cyclotomic $\mathbb Z_p$-extension for a prime $p > 2$. We give (Theorem 3.4) a sufficient condition for the nullity of the Iwasawa invariants $\lambda$, $\mu$ when $p$ totally splits in $k$, and we obtain important tables giving quadratic fields and various $p$ for which we can conclude that $\lambda = \mu = 0$. We show that the number of ambiguous $p$-classes of $k_n$ (the $n$th stage in $k_\infty$) becomes equal to the order of the torsion group $T_k$ of the Galois group of the maximal abelian $p$-ramified pro-$p$ extension of $k$ (Theorem 4.7), for all $n \geq e$, where $p^e$ is the exponent of $U_k^*/E_k$ (local units of norm 1 modulo global units). Thus we recover some classical results of Fukuda, Greenberg, Inatomi, Komatsu, Sumida, Taya, ... Then we establish analogs of Chevalley's formula for a family $(\Lambda_i)_{0\le i\le m_n}$ of subgroups of $k^\times$, prime to $p$, containing $E_k$, in which any $x$ is the norm of an ideal of $k_n$ (Theorem 4.9, Corollary 4.12). This family is attached to the subgroups of the classical filtration of the $p$-class group of $k_n$, giving the theoretical algorithm computing its order in $m_n$ steps. We show that $m_n \geq (\lambda\cdot n + \mu\cdot p^n + \nu)/v_p(\#T_k)$ and that the condition $m_n = O(1)$ (i.e., $\lambda = \mu = 0$) depends essentially on the $p$-adic valuations, for $\mathfrak p \mid p$, of the Fermat quotients of $x_i \in \Lambda_i$, so that Greenberg's conjecture seems strongly related to the (tricky) properties of Fermat quotients of suitable elements of $k^\times$. A statistical analysis of these Fermat quotients (Section 7) shows that they follow natural probabilities, whatever the value of $n$, showing that, almost surely, $\lambda = \mu = 0$ (see the main Heuristic 7.3). This would imply that for a proof of Greenberg's conjecture, some deep $p$-adic results are necessary before referring to the purely algebraic Iwasawa theory.
PS: Added the link as suggested by quid; the bug is not always there, however. Right now it reappeared after I reloaded that page. The actual content of the garbled part should be "Proof that the homotopy category of a stable $\infty$-category is triangulated". Thus the whole thing is broken into columns in a wrong way somehow.
For questions about groups, rings, fields, vector spaces, modules and other algebraic objects. Associate with related tags like group-theory, ring-theory, modules, etc. to clarify which topic of abstract algebra is most related to your question and help other users when searching.

- "The Egg:" Bizarre behavior of the roots of a family of polynomials.
- Why are rings called rings? I've done some search in Internet and other sources about this question. Why the name ring for this particular object? Just curiosity. Thanks.
- Can we ascertain that there exists an epimorphism $G\rightarrow H$? Let $G,H$ be finite groups. Suppose we have an epimorphism $G\times G\rightarrow H\times H$. Can we find an epimorphism $G\rightarrow H$?
- Can you give me an example of an infinite field of characteristic $p\neq0$? Thanks.
- Why "characteristic zero" and not "infinite characteristic"?
- More than 99% of groups of order less than 2000 are of order 1024?
- Rings, groups, and fields all feel similar. What are the differences between them, both in definition and in how they are used?
- Are there real world applications of finite group theory?
- How to find the Galois group of a polynomial?
- Why are groups more important than semigroups?
- What kind of "symmetry" is the symmetric group about?
- Are all algebraic integers with absolute value 1 roots of unity?
- Why do books titled "Abstract Algebra" mostly deal with groups/rings/fields?
- How is a group made up of simple groups?
- Is there an intuitive reason for a certain operation to be associative?
- Is Lagrange's theorem the most basic result in finite group theory?
- What are some mathematical topics that involve adding and multiplying pictures?
- How do I prove that $x^p-x+a$ is irreducible in a field with $p$ elements when $a\neq 0$?
- How was the Monster's existence originally suspected?
- Examples of finite nonabelian groups. Can anybody provide some examples of finite nonabelian groups which are not symmetric groups or dihedral groups?
- In categorical terms, why is there no canonical isomorphism from a finite dimensional vector space to its dual?
- Does every Abelian group admit a ring structure?
- Why can't the Polynomial Ring be a Field?
- If I know the order of every element in a group, do I know the group?
- Generalizing the case $p=2$, we would like to know if the statement below is true. Let $p$ be the smallest prime dividing the order of $G$. If $H$ is a subgroup of $G$ with index $p$ then $H$ is normal.
Let $X$ be the set of $x$ in $[a,b]$ for which $f(x)$ is less than $0$, and let $x_0$ be the least upper bound of $X$. If we assume $f(x_0)$ is less than zero or greater than zero, then we get a contradiction; therefore $f(x_0)$ must equal zero. We know that $X$ has various upper bounds, which are values that no element of $X$ ever exceeds. Function $f$ is continuous, so we can make $f(x_1)$ as close as we like to $f(x)$ by taking $x_1$ close enough to $x$. If you assume $f(x_0)$ is greater than zero, you'll find a smaller number that is still an upper bound, contradicting that $x_0$ is the least upper bound. If you assume $f(x_0)$ is less than zero, you'll find an element of $X$ larger than $x_0$, contradicting that $x_0$ is an upper bound at all. Formally: any $x_1$ within $\delta$ (plus or minus) of $x$ takes $f(x_1)$ to within $\epsilon$ (plus or minus) of $f(x)$.
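The two contradiction steps can be made precise with the $\epsilon$-$\delta$ definition. A sketch, assuming (as the setup suggests) that $f$ is continuous on $[a,b]$ with $f(a)<0<f(b)$:

```latex
\text{Let } X=\{x\in[a,b] : f(x)<0\},\qquad x_0=\sup X.

\text{If } f(x_0)>0:\ \text{take } \epsilon=f(x_0);\ \text{continuity gives }
\delta>0 \text{ with } f>0 \text{ on } (x_0-\delta,\,x_0+\delta),
\text{ so } x_0-\tfrac{\delta}{2} \text{ is a smaller upper bound for } X.

\text{If } f(x_0)<0:\ \text{take } \epsilon=-f(x_0);\ \text{then } f<0
\text{ on } (x_0-\delta,\,x_0+\delta), \text{ so some } x_1>x_0
\text{ lies in } X, \text{ contradicting that } x_0 \text{ is an upper bound.}

\text{Hence } f(x_0)=0.
```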
I wonder which closed orientable 3-manifolds can be embedded in $\mathbb R^4$ and which in $\mathbb R^5$. Is there a way to determine whether a given closed 3-manifold, obtained, say, by Dehn surgery on a knot, can be embedded into $\mathbb R^4$? Is the answer known for spherical 3-manifolds (finite fundamental group)? I am mainly interested in topological properties of manifolds. The known answers for PL or smooth embedding are of value as well for me. I appreciate any kind of answer. I am not interested in such peculiarities as exotic $\mathbb R^4$. There is an analogy to surfaces in a sense. For 3-manifolds that fibre over surfaces there is a complete answer. For a variety of Seifert-fibred manifolds there are complete answers -- but not all. For example, Seifert-fibred homology spheres are still problematic. The preprint that Ian linked to in his comments has many more results of this kind in it. 1) We likely do not have a complete set of invariants that obstruct embedding into $\mathbb R^4$. 2) We appear to be far from knowing all the "natural" constructions of embeddings of 3-manifolds into $\mathbb R^4$ for the manifolds that are known to embed. It is quite possible there are elements of formal logic obstructing both 1 and 2. For example, if a compact boundaryless connected 3-manifold embeds in $S^4$ it separates it into two components. It is possible that one or even both of these components has a fundamental group with an unsolvable word problem. This would restrict the kinds of techniques one could use for creating obstructions in (1). edit: I see Agol and Freedman's paper on this topic as connected to this last concern. 2-manifolds in $S^3$ have the Fox re-embedding theorem. So you could hope for some nice re-embedding theorems for $3$-manifolds in $S^4$. You shouldn't expect too nice a re-embedding theorem in $S^4$, since the tool that makes Fox's theorem work is Dehn's lemma, and the analogies to Dehn's lemma in 4-manifold theory are generally not true. Trans. Amer. Math. Soc. 367 (2015), no. 1, 559–595.
Abstract: For space observatories, the glitches caused by high energy phonons created by the interaction of cosmic ray particles with the detector substrate lead to dead time during observation. Mitigating the impact of cosmic rays is therefore an important requirement for detectors to be used in future space missions. In order to investigate possible solutions, we carry out a systematic study by testing four large arrays of Microwave Kinetic Inductance Detectors (MKIDs), each consisting of $\sim$960 pixels and fabricated on monolithic 55 mm $\times$ 55 mm $\times$ 0.35 mm Si substrates. We compare the response to cosmic ray interactions in our laboratory for different detector arrays: A standard array with only the MKID array as reference; an array with a low $T_c$ superconducting film as phonon absorber on the opposite side of the substrate; and arrays with MKIDs on membranes. The idea is that the low $T_c$ layer down-converts the phonon energy to values below the pair breaking threshold of the MKIDs, and the membranes isolate the sensitive part of the MKIDs from phonons created in the substrate. We find that the dead time can be reduced up to a factor of 40 when compared to the reference array. Simulations show that the dead time can be reduced to below 1 % for the tested detector arrays when operated in a spacecraft in an L2 or a similar far-Earth orbit. The technique described here is also applicable and important for large superconducting qubit arrays for future quantum computers.
We investigate the spectrum of the two-dimensional model for a thin plate with a sharp edge. The model yields an elliptic $3\times3$ Agmon–Douglis–Nirenberg system on a planar domain with coefficients degenerating at the boundary. We prove that in the case of a degeneration rate $\alpha<2$, the spectrum is discrete, but, for $\alpha\geq2$, there appears a nontrivial essential spectrum. A first result for the degenerating scalar fourth order plate equation is due to Mikhlin. We also study the positive definiteness of the quadratic energy form and the necessity to impose stable boundary conditions. These results differ from the ones that Mikhlin published.
What happens when I include a squared variable in my regression? By doing this my D estimate no longer differs from zero, with a high p-value. How do I interpret the squared term in my equation (in general)? Well, first off, the dummy variable is interpreted as a change in intercept. That is, your coefficient $\beta_3$ gives you the difference in the intercept when $D=1$, i.e. when $D=1$, the intercept is $\beta_0 + \beta_3$. That interpretation doesn't change when adding the squared $x_1$. The turning point of $\beta_1 x_1 + \beta_2 x_1^2$ is at $x_1 = -\beta_1/(2\beta_2)$; that is the point at which the relationship changes direction. You can take a look at Wolfram-Alpha's output for the above function, for some visualization of your problem. That is, you cannot interpret $\beta_1$ in isolation once you have added the squared regressor $x_1^2$! Regarding your insignificant $D$ after including the squared $x_1$: it points towards misspecification bias. A good example of including the square of a variable comes from labor economics. If you take y as wage (or log of wage) and x as age, then including x^2 means that you are testing for a quadratic relationship between age and wage. Wage increases with age as people become more experienced, but at higher ages wage starts to increase at a decreasing rate (people become older and are no longer as healthy to work as before), and at some point wage stops growing (reaches the optimal wage level) and then starts to fall (they retire and their earnings start to decrease). So, the relationship between wage and age is inverted-U-shaped (a life-cycle effect). In general, for the example mentioned here, the coefficient on age is expected to be positive and that on age^2 to be negative. The point here is that there should be a theoretical basis or empirical justification for including the square of the variable. The dummy variable, here, can be thought of as representing the gender of the worker.
You can also include an interaction term of gender and age to examine whether the gender differential varies by age.
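The inverted-U example above is easy to reproduce numerically. The sketch below uses simulated data (all coefficient values are made up), fits OLS with an explicit squared regressor, and recovers the turning point $-\beta_1/(2\beta_2)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: log-wage as a quadratic in age plus a dummy D
n = 2000
age = rng.uniform(20, 65, n)
D = rng.integers(0, 2, n)
y = 1.0 + 0.08 * age - 0.0008 * age**2 + 0.15 * D + rng.normal(0, 0.1, n)

# OLS: y = b0 + b1*age + b2*age^2 + b3*D
X = np.column_stack([np.ones(n), age, age**2, D])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

# Marginal effect of age is b1 + 2*b2*age, so the peak of the
# inverted U sits at age = -b1 / (2*b2), here near 50
turning_point = -b1 / (2 * b2)
```

Here b1 should come out positive and b2 negative, matching the life-cycle story, while the dummy shifts the intercept by b3.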
Two of the common assumptions for proving hardness of approximation results are $P \neq NP$ and Unique Games Conjecture. Are there any hardness of approximation results assuming $NP \neq coNP$ ? I am looking for problem $A$ such that "it is hard to approximate $A$ within a factor $\alpha$ unless $NP = coNP$". It is known that "showing factor $n$ NP-hardness for shortest vector problem would imply that $NP = coNP$". Note that this is the "opposite" of what I am looking for. Clarification : It is possible that $NP=coNP$ and still the P vs NP question is open. I am looking for hardness of approximation result which will become false if $NP=coNP$ but is unaffected (i.e., still remains as a conjecture) by $P \neq NP$. Here's a straightforward observation. If you assume $NP \neq coNP$, then it is pretty easy to see there are $NP$ optimization problems which do not even have good nondeterministic approximation algorithms, in some sense. For example, the PCP theorem says that you can translate SAT into the problem of distinguishing whether $1-\varepsilon$ of the clauses are satisfied and all of the clauses are satisfied, for some $\varepsilon > 0$. Suppose there is a nondeterministic algorithm which can distinguish between these two cases, in the sense that the nondeterministic algorithm can report in each computation path either "all satisfied" or "at most $1-\varepsilon$", and it says "at most $1-\varepsilon$" in some path if at most $1-\varepsilon$ can be satisfied, otherwise it says "all satisfied" in every computation path if all equations can be satisfied. This is enough to decide SAT in $coNP$, so $NP=coNP$. It seems clear that the existence of such a nondeterministic algorithm has no bearing on whether $P = NP$. It's quite plausible that a more "natural" scenario exists: an optimization problem which is hard to approximate in deterministic polynomial time under $NP \neq coNP$ but not known to be hard under $P \neq NP$. 
(This is probably what you really wanted to ask.) Many hardness of approximation results are first proven under some stronger assumption (e.g. $NP$ not in subexponential time, or $NP$ not in $BPP$). In some cases, later improvements weaken the necessary assumption, sometimes down to $P \neq NP$. So there is hope that there's a slightly more satisfactory answer to your question than this one. It is hard to imagine how there could be a problem that cannot be proved hard to approximate in deterministic polytime under $P \neq NP$, but can be proved hard under $NP \neq coNP$. That would mean that $NP \neq coNP$ tells us something about deterministic computations that $P \neq NP$ doesn't already say; intuitively, this is hard to grasp. Disclaimer: this is not a direct answer. Actually there are many more hardness conditions other than P != NP and the UGC. David Johnson wrote a beautiful column for the Transactions on Algorithms back in 2006 on precisely this issue. He lists out the numerous different assumptions that are used to show hardness, and how they relate to each other. Unfortunately, these are all NP vs deterministic classes (with the exception of NP and co-AM). NP vs co-NP is not covered at all. $NP \ne coNP$ is a stronger hypothesis than $P \ne NP$ since $NP\ne coNP$ implies $P\ne NP$. So, any hardness of approximation result assuming $P\ne NP$ would also follow from the $NP\ne coNP$ assumption.
This week's Riddler asks us to simulate a game of baseball using rolls of a die. To solve this problem, we're going to treat the game of baseball like a Markov chain. To borrow an analogy from one of my favorite professors (Hi, Dr. Popescu!), a Markov chain is like a frog hopping among lily pads. The frog has what's called a "current state" - that is, the lily pad it is currently occupying. The frog also has a random probability of hopping to any of the lily pads around it, including some probability of staying where it is. Once the frog hops to a new lily pad, it might have different probabilities of hopping to other lily pads, and this process continues for as long as we like. However, a key property of this system is that we only need to know the current state of the frog in order to estimate where it will go next. It doesn't matter how the frog got to its current state. It may have taken hundreds of hops or just a single hop to arrive in its current state, but that doesn't affect the frog's likelihood of moving to other lily pads. This simplifies the modeling process because we don't need to keep track of the history of events over time - we just need to know where we are and the probabilities of moving to new states. Here's the problem we're given. Next, we'll outline the steps to solve it. Under the simplified dice framework, we can treat the game of baseball like a Markov chain that has a current state, a set of transition probabilities to subsequent states, and associated payoffs (runs scored) when certain states are reached. Using this paradigm, we can simulate innings probabilistically, count the runs scored by each team, and determine the winner. Let's walk through exactly how we turn the game of baseball into a Markov chain by describing the different components we'll need. Current state - describes the strike count for the current batter, the number of outs, and the position of any runners on base.
A small quirk of this game: we can only roll for strikes (not balls), so we only need to keep track of the current strikes for our batter. For example, each of these is a valid state of the game. Additionally, we define the end of the inning as "0 strikes, 3 outs, bases empty". Eventually, every inning will end up here because the probability of staying in this state once we arrive is 100%. In Markov chains this is called an "absorbing state" because we can never leave once we arrive. Dice roll (3,1): move from "0 strikes, 0 outs, bases empty" to "0 strikes, 0 outs, runner on first". Dice roll (1,3): move from "2 strikes, 2 outs, runner on first" to "0 strikes, 2 outs, runners on first and third", no increase in runs. Dice roll (1,3): move from "1 strike, 1 out, runner on second" to "0 strikes, 1 out, bases empty". Reward of +1 for the single runner that scored. Dice roll (6,6): move from "0 strikes, 2 outs, bases loaded" to "0 strikes, 2 outs, bases empty". Reward of +4 because this was a grand slam.
However, we do create a special state with 3 outs to signal the end of the inning. Next, we identify all the possibilities of bases that could be occupied: single runners on first, second, or third, runners on first and second, first and third, second and third, and bases loaded. We can express the runners as a tuple of three integers, where 1 means a base is occupied and 0 means it is empty. For example (0,0,0) is bases empty, (0,1,0) means a runner on second, and (1,1,1) means bases loaded. Once we generate each set of possibilities for strikes, outs, and runners, we can use Python to assemble all the unique combinations, which results in a list of length $3\times8\times3+1=73$. To represent the current state, we need a data structure that has an attribute for strikes, outs, and runners. We can use the namedtuple data structure as an easy way to store this data. The code snippet below creates this class and instantiates a single instance representing 0 strikes, 1 out, and runners on first and third. This next part involves some manual work and a bit of baseball assumption-making. The objective is to identify all the ways that we can move from one state to another. If each state could transition to each other state, we would need to calculate $72\times72=5184$ probabilities, which would be a tedious task. Instead, we can simplify this number substantially. For example, we know that it's impossible to go directly from 0 strikes to 2 strikes, or from bases empty to bases loaded in a single step. Therefore, many of the transition probabilities between states will be zero, which simplifies our task. Another benefit of our dice game is that we don't have to deal with complicated probabilities. There are 21 unique combinations of two dice, and each combination is equally likely. Furthermore, several of the outcomes from unique dice rolls are the same ("strike", "out at first", etc.) As a result, each state has at most 11 potential moves to another state.
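The snippet referenced here did not survive extraction; a minimal reconstruction (field names are my guesses based on the surrounding text) might look like:

```python
from collections import namedtuple

# Game state: strike count on the batter, outs, and base occupancy
Inning = namedtuple("Inning", ["strikes", "outs", "runners"])

# 0 strikes, 1 out, runners on first and third
state = Inning(strikes=0, outs=1, runners=(1, 0, 1))
```

Because namedtuples are immutable, each game action returns a fresh `Inning` rather than mutating the old one.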
Let's write them out with their effect on the game and the probability (out of 21). Note: there are some obvious simplifying assumptions in my "outcomes" above. It's possible to get different answers to this Riddler if you treat game events differently. For example, "double play" and "base on error" have the most embedded assumptions about who is tagged out, who scores, and how runners advance. Most of the other events are fairly straightforward. Please let me know if you think my algorithmic team management style should be adjusted! Now we need to encode how a given state will change as the result of a game action. We'll need 11 functions - one for each game action - that take a starting state as input and return the ending state and any runs that may have occurred as a result. Here's an example of one of these functions, which describes what happens when there's a "fly out". In this case, we assume a runner on third is able to score (called "tagging up"), but only if the pop fly is the first or second out. If it's the third out, the inning ends without the run counting. As I mentioned above, there is some subjectivity in the outcomes of each game event, and that's part of the fun of this problem! Also note that the function below takes a single input: the game state, but returns two results: the subsequent game state and the number of runs that were scored. In this way, we can track the game state as it changes, and keep a running total of runs as they occur during the inning. You can find the other 10 game action functions below in the code. Now we have all the tools we need to track the state of our game, the likelihood of moving to new states, and the rewards associated with certain actions. All innings start with 0 strikes, 0 outs, and bases empty: Inning(0, 0, (0,0,0)). From the starting state, we simulate dice rolls and play the game, updating the game state and tracking runs as we go.
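The "fly out" handler described above was lost in extraction; my reconstruction of it, following the tagging-up rule described in the text, is:

```python
from collections import namedtuple

Inning = namedtuple("Inning", ["strikes", "outs", "runners"])

def fly_out(state):
    """Batter flies out. A runner on third 'tags up' and scores,
    but only if this is not the third out of the inning."""
    first, second, third = state.runners
    outs = state.outs + 1
    if outs == 3:
        # Inning over: absorbing state, no run counts
        return Inning(0, 3, (0, 0, 0)), 0
    # Runner on third scores on the sacrifice; other runners hold
    return Inning(0, outs, (first, second, 0)), third
```

Note the pair it returns: the new state and the runs scored, which the inning loop accumulates.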
Here's a python function that plays a single inning and returns the number of runs scored; with verbose=True, it also prints a "play-by-play" summary of what happens. Note that the results are randomly generated according to the weighted probabilities described above, so the results will change every time it's run. The Riddler then asks: what's the average number of runs that would be scored in nine innings of this dice game? What's the distribution of the number of runs scored? Now that we can simulate a single inning, we can answer the question by repeating our simulation a large number of times to estimate the long-term distribution of runs scored. Fortunately, each inning is independent (at least in our dice game - no momentum or slumps to worry about). Therefore, if we accurately simulate a single inning, we can accurately simulate a single game by multiplying by nine. Perhaps the truly detail-oriented among you may note that this answer isn't entirely correct because it precludes the possibility of extra innings. There is a small probability that both teams score the same number of runs after nine innings. If that happened, they would continue playing one inning at a time until one team wins. I may save this small addendum for a separate update later. Ignoring the small possibility of ties and extra innings, we see that the expected number of runs per inning is roughly 1.61, which means the expected number of runs per nine-inning game is roughly 14.5. According to sportingcharts.com the actual total score of MLB games from 1990 to 2016 hovered between 8-10, which means each team scored between 4 and 5 runs. Our dice game certainly gives the advantage to the sluggers! What's also interesting is that these results allow us to track not only the average number of runs scored per inning, but the full distribution: how many goose eggs (roughly 40% of the time) vs.
quadruple-grand-slams (the highest inning recorded 22 runs). These results are best viewed as a histogram showing exactly how many simulated innings ended up with a given number of runs, which you can see below. The full code to replicate my results is below. Feel free to let me know if my baseball strategy should be adjusted. You can submit pull requests to the github gist, found here.
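The "repeat the simulation many times" step described above can be sketched as a small harness; any zero-argument `play_inning` function (such as the dice-game logic) can be plugged in:

```python
import random
import statistics
from collections import Counter

def estimate_runs(play_inning, n=10_000, seed=42):
    """Play many independent innings and summarize the results.

    Returns the mean runs per inning, the implied runs per
    nine-inning game (innings are independent, so we multiply
    by nine), and the full histogram of runs per inning.
    """
    random.seed(seed)
    runs = [play_inning() for _ in range(n)]
    per_inning = statistics.mean(runs)
    per_game = 9 * per_inning
    histogram = Counter(runs)
    return per_inning, per_game, histogram
```

The returned `Counter` gives the full distribution directly, so the share of scoreless innings is just `histogram[0] / n`.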
Which of the following correctly define the statement $\lim_{x \to x_0} f(x) = L$ for a function $f$ on $(0,1)$? a.) for any $\epsilon \gt 0$, there exists $\delta \gt 0$ such that for all $x \in (0,1)$ with $0 \lt |x-x_0| \lt \delta$, one has $|f(x) - L|\lt \epsilon$. b.) for any $\epsilon \gt 0$, there exists $\delta \gt 0$ such that for all $x \in (0,1)$ with $|x-x_0| \leq \delta$, one has $|f(x) - L|\leq \epsilon$. c.) for any $\epsilon \gt 0$, for any $\delta \gt 0$ such that for all $x \in (0,1)$ with $0 \lt |x-x_0| \lt \delta$, one has $|f(x) - L|\lt \epsilon$. This definition is false because the correct definition only asserts that some sufficiently small $\delta \gt 0$ exists; the condition cannot be required to hold for every $\delta$. d.) for any $\epsilon \gt 0$, there exists $\delta \gt 0$ such that for all $x \in (0,1)$ with $0 \lt |x-x_0| \lt \delta$, one has $0 \lt |f(x) - L|\leq \epsilon$. This definition is false: suppose $f(x) = 0$ and $L = 0$; then $|f(x) - L| = 0$ everywhere, and the condition $0 \lt |f(x) - L|$ is never satisfied, so by this definition the limit would not exist anywhere. These are my attempts at the identification. Could you please help me confirm these answers? Note that in b.) we would be allowed to take $x = x_0$, whereas the standard definition uses a deleted neighborhood of $x_0$, that is, $x \neq x_0$.
so consider $\tau_e$ as a function of $x_j$, and the result is obtained.
We study the configuration space $C(n,w)$ of $n$ non-overlapping unit-diameter disks in an infinite strip of width $w$. We present an asymptotic formula for the $k$th Betti number of $C(n,w)$, for fixed $k$ and $w$ as $n \to \infty$. We find that there is a striking phase transition: for $w > k$ the $k$th homology is stable and is isomorphic to the $k$th homology of the configuration space of points. But for $w \le k$, the $k$th homology is wildly complicated, growing exponentially fast with $n$. Joint work with Robert MacPherson (Institute for Advanced Study).
I know I can find Dover books on Amazon, but are they also available from normal book stores? My friend's going to Chicago in a couple of days, and I thought I'd ask him for a few books I've wanted. If I buy them online, I'm gonna have to pay a rather hefty shipping fee. Any decent bookshop will probably be more than happy to special order books that aren't in stock for a customer. But if your friend is only going to be there for a few days the bookstore might not be able to get the books in time. Some bookstores (Borders being my favourite) do stock some Dover titles. I've found Borders usually stocks more than any other large chain bookstore. I'd recommend having your friend go there first to look for the books you want. Or just go to your nearest friendly bookseller and hand them a list of the titles you're interested in and have them order them for you. Thanks for your help, imabug. www.doverbooks.com is the website. I tend to peruse it at least monthly to see what is new. I found that both of Michael Tinkham's books, "Group Theory and Quantum Mechanics" and "Introduction to Superconductivity", have been picked up, along with Messiah's "Quantum Mechanics". Tinkham's book taught me more group theory than I ever thought I'd need; matter of fact, I used quite a bit of it in my dissertation. It is better to buy these books at either Borders or Barnes and Noble because the shipping is so much less. If you buy even one book from them online you will get emails about sales and catalogs through your snail mail forever. I currently have eyes for their featured book on the Riemann conjecture; it covers the technical math on the subject from Riemann up to maybe 15 years ago. I don't know if I'm going to buy it or not. During this last year I've ordered over 45 books from Amazon. Most of them are Dover publications, the rest are Schaum's Outlines. I'm an Amazon, Dover, Schaum's addict! The Dover books are just so reasonably priced!
I'd typically buy about 4 of them on each topic that I'm interested in rather than one high-priced textbook. That way I get four different perspectives on each topic. I also include a Schaum's Outline or two on the same topic for examples of solved problems. I also find that some authors are more abstract while others tend to use more geometric or phenomenological approaches, so I get a wider view of the topics in that sense as well. Of course, I also spent a lot of time reading customer reviews and the online table of contents and excerpts that Amazon makes available on their web site so I can be sure to choose books that have different points of view. I've found the Dover book Ordinary Differential Equations by Tenenbaum & Pollard to be the best self-study book on ODEs that I've ever found. It takes you step-by-step from scratch with ODEs, with nice problem sets and complete answers to every problem right in the section you are working with. It's laid out in lessons instead of chapters. It's really nice for someone starting out with ODEs. I also highly recommend Mathematics of Classical and Quantum Physics by Byron & Fuller, but I don't recommend it as a beginner's book. It's definitely a first-year graduate book. Do Tenenbaum first. Well, I better quit now or I'll have listed every Dover book that I own! By the way, I never get any spam email from Amazon. Just be sure to uncheck the box requesting promotional emails when you place your order. Should you forget to uncheck that box you can always cancel the promotional emails when you receive the first one by just clicking on the link to cancel them. I love Amazon, but I prefer to go to their site at my convenience so I never request their promotional emails. But you do have to uncheck that box each time you order. I also never got a catalog from Amazon via snail mail. Hmmm? Actually I think that'd be cool.
I currently have eyes for their featured book on the Riemann conjecture; it covers the technical math on the subject from Riemann up to maybe 15 years ago. I don't know if I'm going to buy it or not. Which book are you referring to? I can only think of Ivic's, which Dover reissued recently. I was very pleased when they put it back in print at such a nice price, though I was a little disappointed that they just stuck in errata pages rather than fix the typos (and there were quite a few in the original). I also wish Ivic had updated it, at least the notes at the end of the chapters. Dover also has Edwards' (older) book very cheap. It follows the theory more along historical lines and is worth it simply for the translation of Riemann's original paper it contains. I recently visited one of my old profs, J.W. Lee, who is one of the authors of https://www.amazon.com/exec/obidos/tg/detail/-/0486688895/qid=1103531050/sr=8-1/ref=sr_8_xs_ap_i1_xgl14/103-3301678-5021428?v=glance&s=books&n=507846 book. He was pretty happy with the way Dover works. He, as the author, retains all rights to the book; Dover pays a one-time licensing fee, and is able to keep in publication, at reasonable prices, books like his that are not in large-volume demand. NeutronStar, another Dover fanatic here! Unfortunately, I don't qualify for free shipping because I don't live in the States. Hmmmmm… I'm so used to living in Amazon dot com space that I wasn't thinking in terms of any effects that spatial transformations could have on the phenomenon of free shipping. Evidently the probability of observing free shipping drops to zero discontinuously at the borders of the U.S. and remains zero everywhere outside those borders. Within the U.S. the probability of observing free shipping is apparently directly proportional to the degree of fanaticism of the purchaser. I wonder if this is a reflection of the underlying quantum nature of the Amazon dot com management?
You could write to Amazon dot com and ask if they will consider sending you the books free via a momentary fluctuation of their field policies, thus permitting them to tunnel the books to you through the discontinuous borders of their free-shipping policy. While there exists a high probability that their field policies will be unaffected by your letter, there is always some hope that, if worded ingeniously enough and read by the right person, your letter could have an observable effect in the real world. I've been reading too many Dover books! The book I'm considering is Riemann's Zeta Function by H. M. Edwards. Chapter titles: 1. Riemann's Paper. 2. The Product Formula for [tex]\zeta[/tex]. 3. Riemann's Main Formula. 4. The Prime Number Theorem. 5. De la Vallée Poussin's Theorem. 6. Numerical Analysis of the Roots by Euler-Maclaurin Summation. 7. The Riemann-Siegel Formula. 8. Large-Scale Computations. 9. The Growth of [tex]\zeta[/tex] as [tex]t \rightarrow \infty[/tex]. 10. Fourier Analysis. 11. Zeros on the Line. 12. Miscellany. I would like to second this book by Frederick W. Byron, Jr and Robert W. Fuller. It has an excellent introduction to "Differential Operations on Scalar and Vector Fields", which discusses the significance of the gradient, divergence and curl. There is also a brief introduction to "Cartesian Tensors". Probably a great book for getting the novice's feet wet. I am another Dover book collector. I also collect some of the Springer-Verlag (yellow/gold) series. Are they still called "the yellow peril", as when I was in grad school? In those days they were noticeably "purer" (= drier + harder) than other texts. Since then they have developed some really good writers. I collected books in the 'yellow peril' series during my grad school days sometime in the past. I have not yet browsed one of the modern books, but I suspect that they are still for the serious student or professor.
They are about as dry and as confusing as they come. I've found more Dover physics and math books at Barnes and Noble than at Borders. My new strategy for book buying is to browse through them in libraries and bookstores, make up my mind, and then order online. Requires patience and delayed gratification, I know, but saves $$ in the long run! Buy just one book from them online and you'll get their catalogs forever, not to mention their specials by email.
(1) Explain the features of the graph of the relation $|x|+|y|=1$. (3) Consider the family of relations $x^n+y^n=1$ in the first quadrant. Choose one particular value of $n$ and show that $y$ decreases as $x$ increases. (4) Sketch some graphs in all four quadrants of the family of relations $|x|^n+|y|^n=1$ for even values of $n$ and explain why the graphs get closer to a square shape as $n\to \infty$. (5) Plot the graph of $x^3+y^3=1$ in all four quadrants. Why do the graphs of the relations $x^n+y^n=1$ differ according to whether $n$ is odd or even? Mathematical reasoning & proof. Families of Graphs. Inequalities. Creating and manipulating expressions and formulae. Differentiation of parametric and implicit functions. Symmetry. Turning points. Limits of Sequences. Trigonometric functions and graphs. Graph sketching.
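A quick numerical check (a Python sketch, assuming nothing beyond the relation itself) illustrates both behaviours in the first quadrant: $y$ decreases as $x$ increases, and for large even $n$ the curve hugs the square with corners $(\pm 1, \pm 1)$:

```python
def y_on_curve(x, n):
    """Solve |x|^n + |y|^n = 1 for y >= 0, given |x| <= 1."""
    return (1 - abs(x) ** n) ** (1 / n)

# As n grows, y stays close to 1 even for x close to 1,
# so the curve flattens toward the top edge of the square.
for n in (2, 10, 100):
    print(n, y_on_curve(0.9, n))
```

For $n = 2$ this gives the familiar circle value $\sqrt{1 - 0.81} \approx 0.44$, while for $n = 100$ the value is within $10^{-3}$ of $1$.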
Welcome to the first part of our free online course to help you learn $\mathrm\LaTeX$. If you have never used LaTeX before, or if it has been a while and you would like a refresher, this is the place to start. This course will get you writing LaTeX right away with interactive exercises that can be completed online, so you don't have to download and install LaTeX on your own computer. In part two and part three, we'll build up to writing beautiful structured documents with figures, tables and automatic bibliographies, and then show you how to apply the same skills to make professional presentations with beamer and advanced drawings with TikZ. Let's get started! We hope you enjoy learning LaTeX with our course. Ready to start writing? Sign up for Overleaf—it's the easiest way to write and collaborate on your new LaTeX projects!
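To give a feel for what you'll be writing, here is a minimal complete document of the kind the first exercises start from (the exact exercise content in the course may differ):

```latex
\documentclass{article}
\begin{document}

Hello, \LaTeX! An inline formula looks like $E = mc^2$,
and a displayed one looks like
\[
  \int_0^1 x^2 \, dx = \frac{1}{3} .
\]

\end{document}
```

Everything between `\begin{document}` and `\end{document}` is the text of your document; the line above it, the preamble, is where packages and settings go.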
The transformation matrix connecting the atomic orbitals (columns of FULLAOSO) with the symmetry-adapted orbitals (rows of FULLAOSO). This matrix multiplied on the left of an AO basis object will transform it to the SO basis. This matrix is in general not unitary. This matrix differs from AO2SOINV in that it is always a full square - it includes in the SO basis functions such as spherical harmonic contaminants, which are deleted in the computational basis. The inverse transformation is given in the FULLSOAO record. Data type: floating point. Dimension: NAO$\times$NAO. Written by: xvmol2ja.
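The "multiplied on the left" rule is ordinary matrix algebra; a small NumPy sketch (with a made-up 3$\times$3 matrix standing in for the record, which in practice is read from the program's job archive) illustrates it:

```python
import numpy as np

# Stand-in for a FULLAOSO-style square transformation (not unitary).
full_ao_so = np.array([[1.0,  1.0, 0.0],
                       [1.0, -1.0, 0.0],
                       [0.0,  0.0, 1.0]])
ao_vector = np.array([0.2, 0.5, 0.3])   # some object in the AO basis

so_vector = full_ao_so @ ao_vector       # multiply on the left: AO -> SO

# The FULLSOAO record plays the role of the inverse transformation.
full_so_ao = np.linalg.inv(full_ao_so)
assert np.allclose(full_so_ao @ so_vector, ao_vector)
```

Since the matrix is square and invertible (unlike the trimmed computational-basis transform), the round trip AO → SO → AO recovers the original vector exactly.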
Abstract: We introduce the monic rank of a vector relative to an affine-hyperplane section of an irreducible Zariski-closed affine cone X. This notion is well-defined and greater than or equal to the usual X-rank. We describe an algorithmic technique based on classical invariant theory to determine, in concrete situations, the maximal monic rank. Using this technique, we establish three new instances of a conjecture due to Shapiro which states that a binary form of degree $d\times e$ is a sum of $d$ many $d$-th powers of forms of degree $e$. This is joint work with A. Bik, J. Draisma, and A. Oneto.
Abstract: Global mobile robot localization is the problem of determining a robot's pose in an environment, using sensor data, when the starting position is unknown. A family of probabilistic algorithms known as Monte Carlo Localization (MCL) is currently among the most popular methods for solving this problem. MCL algorithms represent a robot's belief by a set of weighted samples, which approximate the posterior probability of where the robot is located by using a Bayesian formulation of the localization problem. This article presents an extension to the MCL algorithm, which addresses its problems when localizing in highly symmetrical environments; a situation where MCL is often unable to correctly track equally probable poses for the robot. The problem arises from the fact that sample sets in MCL often become impoverished, when samples are generated according to their posterior likelihood. Our approach incorporates the idea of clusters of samples and modifies the proposal distribution considering the probability mass of those clusters. Experimental results are presented that show that this new extension to the MCL algorithm successfully localizes in symmetric environments where ordinary MCL often fails. Abstract: Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of single-user memoryless channels. Recently Polyanskiy et al. studied the benefit of variable-length feedback with termination (VLFT) codes in the non-asymptotic regime. In that work, achievability is based on an infinite length random code and decoding is attempted at every symbol. The coding rate backoff from capacity due to channel dispersion is greatly reduced with feedback, allowing capacity to be approached with surprisingly small expected latency. This paper is mainly concerned with VLFT codes based on finite-length codes and decoding attempts only at certain specified decoding times.
The penalties of using a finite block-length $N$ and a sequence of specified decoding times are studied. This paper shows that properly scaling $N$ with the expected latency can achieve the same performance up to constant terms as with $N = \infty$. The penalty introduced by periodic decoding times is a linear term of the interval between decoding times and hence the performance approaches capacity as the expected latency grows if the interval between decoding times grows sub-linearly with the expected latency. Abstract: The read channel in Flash memory systems degrades over time because the Fowler-Nordheim tunneling used to apply charge to the floating gate eventually compromises the integrity of the cell because of tunnel oxide degradation. While degradation is commonly measured in the number of program/erase cycles experienced by a cell, the degradation is proportional to the number of electrons forced into the floating gate and later released by the erasing process. By managing the amount of charge written to the floating gate to maintain a constant read-channel mutual information, Flash lifetime can be extended. This paper proposes an overall system approach based on information theory to extend the lifetime of a flash memory device. Using the instantaneous storage capacity of a noisy flash memory channel, our approach allocates the read voltage of flash cell dynamically as it wears out gradually over time. A practical estimation of the instantaneous capacity is also proposed based on soft information via multiple reads of the memory cells. Abstract: Recent work by Polyanskiy et al. and Chen et al. has excited new interest in using feedback to approach capacity with low latency. Polyanskiy showed that feedback identifying the first symbol at which decoding is successful allows capacity to be approached with surprisingly low latency. 
This paper uses Chen's rate-compatible sphere-packing (RCSP) analysis to study what happens when symbols must be transmitted in packets, as with a traditional hybrid ARQ system, and limited to relatively few (six or fewer) incremental transmissions. Numerical optimizations find the series of progressively growing cumulative block lengths that enable RCSP to approach capacity with the minimum possible latency. RCSP analysis shows that five incremental transmissions are sufficient to achieve 92% of capacity with an average block length of fewer than 101 symbols on the AWGN channel with SNR of 2.0 dB. The RCSP analysis provides a decoding error trajectory that specifies the decoding error rate for each cumulative block length. Though RCSP is an idealization, an example tail-biting convolutional code matches the RCSP decoding error trajectory and achieves 91% of capacity with an average block length of 102 symbols on the AWGN channel with SNR of 2.0 dB. We also show how RCSP analysis can be used in cases where packets have deadlines associated with them (leading to an outage probability). Abstract: This paper presents a reliability-based decoding scheme for variable-length coding with feedback and demonstrates via simulation that it can achieve higher rates than Polyanskiy et al.'s random coding lower bound for variable-length feedback (VLF) coding on both the BSC and AWGN channel. The proposed scheme uses the reliability output Viterbi algorithm (ROVA) to compute the word error probability after each decoding attempt, which is compared against a target error threshold and used as a stopping criterion to terminate transmission. The only feedback required is a single bit for each decoding attempt, informing the transmitter whether the ROVA-computed word-error probability is sufficiently low. Furthermore, the ROVA determines whether transmission/decoding may be terminated without the need for a rate-reducing CRC. 
Abstract: This paper presents a variable-length decision-feedback scheme that uses tail-biting convolutional codes and the tail-biting Reliability-Output Viterbi Algorithm (ROVA). Comparing with recent results in finite-blocklength information theory, simulation results for both the BSC and the AWGN channel show that the decision-feedback scheme using ROVA can surpass the random-coding lower bound on throughput for feedback codes at average blocklengths less than 100 symbols. This paper explores ROVA-based decision feedback both with decoding after every symbol and with decoding limited to a small number of increments. The performance of the reliability-based stopping rule with the ROVA is compared to retransmission decisions based on CRCs. For short blocklengths where the latency overhead of the CRC bits is severe, the ROVA-based approach delivers superior rates. Abstract: We present extensions to Raghavan and Baum's reliability-output Viterbi algorithm (ROVA) to accommodate tail-biting convolutional codes. These tail-biting reliability-output algorithms compute the exact word-error probability of the decoded codeword after first calculating the posterior probability of the decoded tail-biting codeword's starting state. One approach employs a state-estimation algorithm that selects the maximum a posteriori state based on the posterior distribution of the starting states. Another approach is an approximation to the exact tail-biting ROVA that estimates the word-error probability. A comparison of the computational complexity of each approach is discussed in detail. The presented reliability-output algorithms apply to both feedforward and feedback tail-biting convolutional encoders. These tail-biting reliability-output algorithms are suitable for use in reliability-based retransmission schemes with short blocklengths, in which terminated convolutional codes would introduce rate loss. Abstract: Multiple sclerosis (MS) is a leading cause of disability in young adults.
Susceptibility to MS is determined by environmental exposure on the background of genetic risk factors. A previous meta-analysis suggested that smoking was an important risk factor for MS but many other studies have been published since then. Abstract: Multiple sclerosis (MS) appears to develop in genetically susceptible individuals as a result of environmental exposures. Epstein-Barr virus (EBV) infection is an almost universal finding among individuals with MS. Symptomatic EBV infection as manifested by infectious mononucleosis (IM) has been shown in a previous meta-analysis to be associated with the risk of MS, however a number of much larger studies have since been published.
Suppose that we have two topological spaces $X$ and $Y$, and that $f : X \to Y$ is continuous. If $X$ is a path connected space, then we should expect that the range, $f(X)$, is also path connected. If we take any two points $x$ and $y$ in the range, then there exist two points $u$ and $v$ in the domain that are mapped to $x$ and $y$ respectively. Now consider a path from $u$ to $v$ in $X$. Since $f$ is continuous, the image of this path under $f$ is itself a path, from $x$ to $y$. We prove this result in the following theorem. Theorem 1: Let $X$ and $Y$ be topological spaces and let $f : X \to Y$ be continuous. If $X$ is path connected then $f(X)$ is path connected in $Y$. Proof: Let $x, y \in f(X)$. Then there exist $u, v \in X$ such that $f(u) = x$ and $f(v) = y$. Since $X$ is path connected, there exists a path $\alpha : [0, 1] \to X$ such that $\alpha(0) = u$ and $\alpha(1) = v$. Define $\beta = f \circ \alpha : [0, 1] \to Y$. Then $\beta$ is continuous, being a composition of continuous functions, and $\beta(0) = f(u) = x$ and $\beta(1) = f(v) = y$, so $\beta$ is a path in $f(X)$ from $x$ to $y$. Thus $f(X)$ is path connected. $\blacksquare$
I need to find a $3 \times 3$ matrix whose determinant is $0$, but with the property that deleting a randomly chosen row and column always leaves a $2 \times 2$ matrix with nonzero determinant. Is that even possible?
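It is possible: for instance the matrix with entries $1$ through $9$ has determinant $0$ (its rows are in arithmetic progression), yet every $2 \times 2$ minor obtained by deleting one row and one column is nonzero. A quick NumPy check confirms this:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)

# The full matrix is singular: row2 - row1 == row3 - row2.
assert abs(np.linalg.det(A)) < 1e-9

# Deleting any one row and any one column leaves a nonsingular 2x2 block.
for i in range(3):
    for j in range(3):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        assert abs(np.linalg.det(minor)) > 1e-9
```

For this particular matrix all nine minors come out as $\pm 3$, $\pm 6$, or $\pm 12$.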
If $R$ is a simple Artinian ring, then when is a finitely generated module free? Here's an exercise from my book, which only gives a brief solution that leaves me very confused. Let $R$ be a simple Artinian ring, say $R=K_r$. Show that there is only one simple right $R$-module up to isomorphism, $S$ say, and that every finitely generated right $R$-module $M$ is a direct sum of copies of $S$. If $M\cong S^k$ say, show that $k$ is uniquely determined by $M$. What is the condition on $k$ for $M$ to be free? $K_r$ denotes the $r\times r$ matrices over the field $K$ (I imagine). Each row is a simple right ideal and all are isomorphic. $S^k$ is free iff $r\mid k$. $R$ being a simple Artinian ring means it has no nonzero proper two-sided ideals, and every descending chain of right ideals eventually stabilizes. A right ideal is also an $R$-submodule, so if we take a minimal right ideal of the Artinian ring, it can be viewed as a simple $R$-module. A finitely generated module has a finite generating set (kind of like a basis, but without the condition of linear independence). Is my answer to the first question on the right track? How do I answer the other three questions? Why should $r\mid k$ imply that $S^k$ has a basis? My advice would be to go read a bit about semisimple rings in a book like Isaacs' Algebra: A Graduate Course, Jacobson's Basic Algebra II, or Lam's A First Course in Noncommutative Rings. All of these questions are special cases of the theory of semisimple rings. Every $R$-module (not just the finitely generated ones) has to be a direct sum of copies of $S$. But the finitely generated ones are the ones that look like $S^k$ for a natural number $k$. I can't think of a better way to prove this than to learn that $R$ is a semisimple ring, and that the modules of semisimple rings are direct sums of simple submodules.
If there is only one simple module up to isomorphism (as is the case for a simple semisimple ring) then naturally any module is a direct sum of copies of that module ($S$, in our case.) You'll be able to find this in any of the references I gave before. It's not hard, but it doesn't fit into a post either. Thinking in terms of bases is not nearly as big of a win as thinking of free modules as "sums of copies of $R$." Now, $R$ is certainly a finitely generated $R$ module, so $R\cong S^k$ for some $k$. Every finitely generated free module is a finite direct sum of copies of $R$. So... how many copies of $S$ does such a module have?
Let's imagine that we have two separate models, both used to forecast the return for the next period. Both models are estimated every day, and both models output a probability distribution. How can we evaluate whether one model has been better at forecasting the distribution of future returns than the other? My first intuition was simply to compute, for each date, the probability each model assigned to the ex-post event, and sum up these probabilities over the different periods for each model. The model with the highest sum would be better. But I feel that this approach is not really clean and lacks robustness. Any idea on how to improve my methodology? If you assume your returns are independent (yes, your models might loosen this assumption) then the two models, $Q_1$ and $Q_2$, assign probability distributions to the returns on any given day, $i$: $q_1^i(r^i)$ and $q_2^i(r^i)$. A natural criterion is then the average log-likelihood each model assigned to the realized returns, $\frac{1}{n}\sum_i \log q_m^i(r^i)$ for model $m$. In this case this is also equivalent (up to sign) to the cross entropy between $p$, the true probability distribution which is 1 for the observed state and 0 otherwise, and either model $q_1$ or $q_2$. If you do not assume independence of returns then you have a slightly more complicated problem; post more details if so. You can also consider the mixture model $q_\alpha = \alpha q_1 + (1-\alpha) q_2$ and choose $\alpha$ by the same criterion. This will be at least as good as the best model $Q_1$ or $Q_2$ for $\alpha=1$ or $\alpha=0$, but of course you need to cross-validate $\alpha$, otherwise you will just be overfitting this hyperparameter to your observed data. Your feeling that there is more to the problem than adding up probabilities is very justified. To give you the bad news first: your problem as stated has no solution. Since probability distributions have many degrees of freedom, there is no general way to compare them. Practically speaking, your two models may be good or bad in two different, non-comparable ways. E.g. your first model might get the mean right but have tails that are too light, so it misses the extreme market movements.
While the other is spot on in predicting market crashes but gets confused in the times between crashes. This is why any serious attempt at quality assessment must focus on the ultimate purpose of your models. For example, if you would like to use your prediction models to inform trading strategies to get rich quick, it would be easier to assess the performance of the trading strategies instead of the quality of your prediction distributions. Of course the ugly problem of non-comparability (aka the risk-return trade-off) will raise its head again, but at least you are clear about the target function you are interested in. The people seriously interested in this kind of problem are not the financial-market crowd but weather forecasters. Start with this for further reading. A textbook treatment of this topic can be found in "Statistical Methods in the Atmospheric Sciences", Chapter 8: Forecast Verification.
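The log-likelihood comparison from the first answer can be sketched in a few lines (the numbers below are made up purely for illustration; in practice they would be the densities each model assigned to the realized return on each day):

```python
import math

# Probability (density) each model assigned to the observed return, per day.
p_model_1 = [0.30, 0.25, 0.40, 0.10]
p_model_2 = [0.20, 0.35, 0.30, 0.25]

def avg_log_score(probs):
    """Average log-likelihood of the realized outcomes; higher is better."""
    return sum(math.log(p) for p in probs) / len(probs)

s1 = avg_log_score(p_model_1)
s2 = avg_log_score(p_model_2)
better = "model 1" if s1 > s2 else "model 2"
```

Unlike summing raw probabilities, the log score heavily penalizes days on which a model assigned near-zero probability to what actually happened, which is exactly the robustness the question was asking for.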
CommonCrawl
In this paper we study a nonparametric estimation problem for stochastic feedforward systems with nodes of type G/G/$\infty$. We assume that we have observations of the external arrival and external departure processes of customers of the system, but no information about the movements of the indistinguishable customers in the network. Our aim is the construction of estimators for the characteristic functions and the densities of the service time distributions at the nodes, as well as for the routing probabilities. Since the system is only partly observed, we first have to clarify whether the parameters are identifiable from the given data. The crucial point in our approach is the observation that in the stochastic networks under study the influence of the arrival processes on the departure processes can be described by a linear and time-invariant model. This makes it possible to apply cross-spectral techniques for multivariate point processes. The construction of the estimators is then based on smoothed periodograms. We prove asymptotic normality for the estimators. We present the statistical analysis for a tandem system of two nodes in full detail and show afterwards how the results can be generalized to feedforward systems of three or more nodes and to systems with positive feedback probabilities at the nodes. Electron. J. Statist., Volume 6 (2012), 1670–1714.
The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the natural world... The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course. Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it... The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds. Our boids will each have an x velocity and a y velocity, and an x position and a y position. We'll build this up in NumPy notation, and eventually, have an animated simulation of our flying boids. Let's start with simple flying in a straight line. Our positions, for each of our N boids, will be an array, shape $2 \times N$, with the x positions in the first row, and y positions in the second row. We'll want to be able to seed our Boids in a random position. We used broadcasting with np.newaxis to apply our upper limit to each boid. rand gives us a random number between 0 and 1. We multiply by our limits to get a number up to that limit. So we multiply a $2\times1$ array by a $2 \times 10$ array -- and get a $2\times 10$ array. We can reuse the new_flock function defined above, since we're again essentially just generating random numbers from given limits. This saves us some code, but keep in mind that using a function for something other than what its name indicates can become confusing! Here, we will let the initial x velocities range over $[0, 10]$ and the y velocities over $[-20, 20]$. And download the saved animation. You can even view the results directly in the notebook.
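The seeding and straight-line flight described above can be sketched in NumPy. The function name new_flock follows the text, while the specific limits and the flock size of 10 are illustrative assumptions:

```python
import numpy as np

def new_flock(count, lower_limits, upper_limits):
    """Return a 2 x count array whose rows are x and y values drawn
    uniformly between the given per-axis limits. Broadcasting with
    np.newaxis applies each 2-element limit vector to every boid."""
    width = upper_limits - lower_limits
    return lower_limits[:, np.newaxis] + np.random.rand(2, count) * width[:, np.newaxis]

# Reuse new_flock for both positions and velocities, as the text suggests.
positions = new_flock(10, np.array([100.0, 900.0]), np.array([200.0, 1100.0]))
velocities = new_flock(10, np.array([0.0, -20.0]), np.array([10.0, 20.0]))

def fly_straight(positions, velocities, dt=1.0):
    """Simple straight-line flight: move every boid by its velocity."""
    return positions + dt * velocities
```

Calling fly_straight repeatedly and redrawing the points gives the "flying in a straight line" stage; the flocking behaviors are then added as extra velocity updates.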
If you're a teacher read the notes here first. Have a go at becoming a detective! You can now explore this further. Click on the picture to get started. When you've explored what you can do with $3 \times 4 - 5$ then it's time to explore further. You could change just one part of the number plumber, for example the $-5$ bit. You might try $3 \times 4 - 6$ or $3 \times 4 + 5$ or $3 \times 3 - 5$ and compare the results. You'll have lots of your own ideas about things to explore too. Mathematicians like to ask themselves questions about what they notice. What possible questions could you ask? These questions may lead you to come to some decisions about what can happen.
Lemma 14.11.5 (tag 0179). Let $U$, $V$ be simplicial sets. Let $a, b \geq 0$ be integers. Assume every $n$-simplex of $U$ is degenerate if $n > a$. Assume every $n$-simplex of $V$ is degenerate if $n > b$. Then every $n$-simplex of $U \times V$ is degenerate if $n > a + b$.
We introduce a class of volume-contracting surface diffeomorphisms whose dynamics is intermediate between one-dimensional dynamics and general surface dynamics. For systems of this type one can associate to the dynamics a reduced one-dimensional model, and we prove a type of $C^\infty$-closing lemma on the support of every ergodic measure. We also show that this class contains Hénon maps with Jacobian in $(-1/4, 1/4)$.
January 2, 2019 Reading time: about 1 minute (322 words). I don't always use recursion; but when I do, I don't always use recursion. April 6, 2017 Reading time: less than a minute (162 words). April 1, 2017 Reading time: less than a minute (152 words). I started a GitHub repository called UniversalAlgebra/Conferences in an effort to keep up with the vast number of meetings in these areas and help make summer travel plans accordingly. The UniversalAlgebra/Conferences repository is simply a list of links to the conferences listed. December 6, 2016 Reading time: about 2 minutes (427 words). This post details the steps I use to update or install the latest version of the Java Development Kit (JDK) on my Linux systems. February 13, 2015 Reading time: about 1 minute (333 words). Suppose the lattice shown below is a congruence lattice of an algebra. Conjecture: If the three $\alpha_i$'s pairwise permute, then all pairs in the lattice permute. William DeMeo is a postdoc in mathematics working at the University of Colorado, Boulder, USA.
A functional identity is an identical relation in a ring $R$ that, besides ring elements, also involves functions from $R^n$ to $R$ which are considered as unknowns. The goal is to find their forms. While this can be accomplished in rather general rings, the problem becomes, apparently paradoxically, very difficult in some well understood classes of rings such as PI rings of low degree. An algebra $A$ over a field $F$ is said to be zero product determined if for every bilinear map $f:A\times A\to F$ with the property that $ab=0$ implies $f(a,b)=0$ there exists a linear functional $\varphi$ on $A$ such that $f(a,b)=\varphi(ab)$ for all $a,b\in A$. Here, $A$ may be any nonassociative algebra; most results, however, concern associative and Lie algebras. As a sample result, we mention that a finite-dimensional associative unital algebra is zero product determined if and only if it is generated by idempotents. Results on functional identities have turned out to be applicable to various problems in noncommutative algebra, nonassociative algebra, linear algebra, and operator theory. In particular, they were used as an essential tool for solving Herstein's Lie map conjectures. Results on zero product determined algebras also have a variety of applications, especially in the theory of Banach algebras. Although the two theories, that of functional identities and that of zero product determined algebras, are quite different, they both arose from similar questions and, moreover, have some related applications. This makes it possible to consider them concurrently in this mini course. Matej Brešar is Professor of Mathematics at University of Ljubljana and University of Maribor. He is the author or co-author of over 150 research papers, the co-author of the book Functional Identities (Birkhäuser, 2007), and the author of the book Introduction to Noncommutative Algebra (Springer, 2014). According to MathSciNet, his works have been cited over 3000 times. 
He is an Associate Member of the Slovenian Academy of Sciences and Arts. Everybody is invited! Partial support is available for students and postdoctoral fellows. Please ask your supervisor to send a recommendation letter to [email protected].
Results for "Luis Fernández Barquín"
Uniqueness Properties for Discrete equations and Carleman estimates (Sep 28 2015). Using Carleman estimates, we give a lower bound for solutions to the discrete Schr\"odinger equation in both dynamic and stationary settings that allows us to prove uniqueness results, under some assumptions on the decay of the solutions.
On non-formal simply connected manifolds (Dec 10 2002; Jan 24 2003). We construct examples of non-formal simply connected and compact oriented manifolds of any dimension bigger than or equal to 7.
A non local Monge-Ampere equation (Oct 01 2014; Jun 26 2015). We introduce a non local analog to the Monge-Ampere operator and show some of its properties. We prove that a global problem involving this operator has $C^{1,1}$ solutions in the full space.
$A_\infty$ structures and Massey products (May 19 2017; Jan 11 2018). We study the relationship between the higher Massey products on the cohomology $H$ of a differential graded algebra, and the $A_\infty$ structures induced on $H$ via homotopy transfer techniques.
The universal zeta function for curve singularities and its relation with global zeta functions (Mar 02 2017). The purpose of this note is to give a brief overview of zeta functions of curve singularities and to provide some evidence on how these and global zeta functions associated to singular algebraic curves over perfect fields relate to each other.
A variation on the homological nerve theorem (Sep 12 2016). An equivalent but useful version of the Homological Nerve Theorem is proved.
The Lagrangian cobordism group of $T^2$ (Oct 30 2013; Nov 25 2014). We compute the Lagrangian cobordism group of the standard symplectic 2-torus and prove that it is isomorphic to the Grothendieck group of its derived Fukaya category. The proofs use homological mirror symmetry for the 2-torus.
Solving diophantine equations $x^4 + y^4 = q z^p$ (Apr 27 2003). We give a method to solve generalized Fermat equations of type $x^4 + y^4 = q z^p$, for some prime values of $q$ and every prime $p$ bigger than 13. We illustrate the method by proving that there are no solutions for $q = 73, 89$ and 113.
The $d\delta$-lemma for weakly Lefschetz symplectic manifolds (Jan 17 2005). For a symplectic manifold $(M,\omega)$, not necessarily hard Lefschetz, we prove a version of the Merkulov $d\delta$-lemma. We also study the $d\delta$-lemma and related cohomologies for compact symplectic solvmanifolds.
The level 1 weight 2 case of Serre's conjecture (Dec 05 2004; Apr 02 2007). We give a proof of some small weight and level cases of Serre's conjecture.
Modularity of abelian surfaces with Quaternionic Multiplication (Apr 27 2003). We prove that any abelian surface defined over $\mathbf{Q}$ of $GL_2$-type having quaternionic multiplication and good reduction at 3 is modular. We generalize the result to higher dimensional abelian varieties with "sufficiently many endomorphisms".
Abstract: We obtain a construction of the irreducible unitary representations of the group of continuous transformations $X\to G$, where $X$ is a compact space with a measure $m$ and $G=PSL(2,\mathbf R)$, that commute with transformations in $X$ preserving $m$. This construction is the starting point for a non-commutative theory of generalized functions (distributions). On the other hand, this approach makes it possible to treat the representations of the group of currents investigated by Streater, Araki, Parthasarathy, and Schmidt from a single point of view.
What could be the difference between a closed set and a closed geometric figure? Can anyone explain with an example, please? Two different meanings of closed. A closed geometric figure is topologically a loop, like a circle or a polygon. If they don't intersect themselves they're called simple closed curves. A closed set is a set that contains its limit points. The closed unit interval on the x-axis in the x-y plane is a closed set but not a closed curve or a closed geometric figure. I'm trying to think of an example the other way but drawing a blank at the moment. edit -- I think a closed curve must be a closed set. A closed curve by definition is a continuous (or maybe smooth) function from the closed unit interval to $\mathbb R^n$ (or some topological space) such that $f(0) = f(1)$. The continuous image of a compact set is compact, so I think we have a proof. To sum up: the image of a closed curve must be a closed set. A closed set need not be a closed curve. edit2 -- My proof requires that the topological space be Hausdorff. Better to leave the target as Euclidean space; then my proof works. The continuous image of a compact interval is a closed set and is also by definition a curve. Are a compact set and a closed curve the same thing? Why do you take a Hausdorff space? I think all the subsets of the H. space are disjoint, which implies they are open?! I have seen that a topological space which is closed isn't a H. space; then how is it possible to show it is a closed set? I'm new to topology; I'm learning it for my M.Sc. entrance. Kindly correct me if I'm wrong in the concept.
Scaling in fracture mechanics by Bazant's law: from finite to linearized elasticity. We consider crack propagation in brittle non-linear elastic materials in the context of quasi-static evolutions of energetic type. Given a sequence of self-similar domains $n \Omega$ on which the imposed boundary conditions scale according to Bazant's law, we show, in agreement with several experimental data, that the corresponding sequence of evolutions converges (for $n \to \infty$) to the evolution of a crack in a brittle linear-elastic material.
The recent implementation of a swap Monte Carlo algorithm (SWAP) for polydisperse mixtures fully bypasses computational sluggishness and closes the gap between experimental and simulation timescales in physical dimensions $d=2$ and $3$. Here, we consider suitably optimized systems in $d=2, 3, \dots, 8$, to obtain insights into the performance and underlying physics of SWAP. We show that the speedup obtained decays rapidly with increasing dimension. SWAP nonetheless systematically delays the onset of activated dynamics by an amount that remains finite in the limit $d \to \infty$. This shows that glassy dynamics in high dimensions $d>3$ is now computationally accessible using SWAP, thus opening the door to the systematic consideration of finite-dimensional deviations from the mean-field description.
$L_a$: The length of the radiating element of the antenna. $D_a$: The diameter of the radiating element of the antenna. It is also my understanding that such an antenna is not actually resonant (meaning the impedance is purely resistive) when $L_a = 0.5\lambda$, but rather at something a tad smaller. A common number I've seen thrown around is ~$0.41\lambda$, but different sources seem to have slightly different values. My general question is: assuming the antenna is radiating in free space, what is the theoretical mathematical relationship between $L_a$, $D_a$, and $Z_a$? For a given $D_a$, what value of $L_a$ (when $L_a \le 0.5\lambda$) would give me a real (purely resistive) $Z_a$ in free space? For a given $L_a$ and $D_a$ (or when those variables are constrained so that $Z_a$ is always real), what is the exact value of $Z_a$ in free space? Bonus points for elaborating on how these values are related in practical, non-free-space environments. I have seen many confusing estimates for $Z_a$, ranging between 1800Ω and 5000Ω, which is a huge range. I want to better understand what factors are involved and how to mathematically calculate the value under ideal circumstances. It's very difficult to predict the impedance of an end-fed wire, other than to say it's high. Usually it's determined empirically. You are looking for a theoretical formulation. Consider: the feedpoint is a voltage source which makes a difference in electric potential between two things. The end of the dipole, and... what? Maybe you could imagine the feedpoint connected to a theoretical shell of infinite radius, similar to how self-capacitance is calculated? I'd guess the result depends strongly on the geometry of the wire, but generally the thicker the wire, the lower the impedance. I don't know of what practical value this model would be, since any real antenna has at least a feedline and a ground/aircraft/spacecraft nearby which would be more significant.
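As a rough sanity check on why the end impedance is so high, one can use the idealized thin-dipole sinusoidal current distribution $I(z) \propto \cos(\pi z/L)$: the feedpoint resistance then scales as $R_{\text{center}}/\cos^2(\pi x)$, where $x$ is the feed offset from center as a fraction of the total length. This is only a sketch under that textbook approximation; it ignores the feedline, ground, and end effects that dominate in practice, and the function name is made up.

```python
import math

R_CENTER = 73.0  # approximate free-space feedpoint resistance of a thin half-wave dipole

def offset_feed_resistance(offset_fraction):
    """Feedpoint resistance of a thin half-wave dipole fed at a fractional
    offset from center (0.0 = center feed; 0.5 would be the very end),
    under the idealized current distribution I(z) ~ cos(pi*z/L)."""
    if not 0.0 <= offset_fraction < 0.5:
        raise ValueError("offset must be in [0, 0.5); the model diverges at the end")
    return R_CENTER / math.cos(math.pi * offset_fraction) ** 2
```

At an offset of 0.45 this gives roughly 3 kΩ, consistent with the 1800Ω to 5000Ω estimates mentioned in the question, and the model diverges as the feedpoint approaches the end.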
You can also calculate the impedance of an off-center fed dipole, very close to the end. You'll notice the limit of that function as the feedpoint approaches the end is infinity. This is of course an approximation assuming the dipole is relatively thin, and that the capacitance to the other half of the dipole is the most relevant factor. So if there isn't another half of the dipole, what is the relevant factor? In practice it's going to be the ground, and the feedline. Neither is amenable to a simple expression. Simulation of your particular installation is your best bet. An end-fed dipole is resonant (or not) just like a more ordinary center-fed dipole. The feedpoint (in the middle, near the end, or somewhere between) is transparent to the resonance of the wire. An exactly half-wave dipole isn't resonant in the sense that its feedpoint impedance has a reactive component. Theoretically, (73 + j42.5)Ω. When the dipole is too short, its reactance will be capacitive. When it's too long, inductive. Since the exactly half-wave dipole has a slightly inductive reactance, shortening it can eliminate the reactance. The exact amount of shortening depends on the thickness of the wire. 0.41λ sounds like a reasonable estimate. The feedpoint impedances repeat with every wavelength of length. That is, in terms of feedpoint impedance, 0.5λ, 1.5λ, 2.5λ, ... dipoles all look the same. As the dipole becomes thicker, its bandwidth increases. That means for an equal change in length, the impedance of a thicker dipole will change less than a thinner dipole. Since the impedance must repeat with every wavelength, this also means that the highest impedance (for example, at the ends of a half-wave dipole) is lower with a thicker wire. Finally, a practical point: since the impedance at the end of a dipole is very high, the common-mode impedance looks relatively low. 
Successfully making an end-fed dipole then depends on very effective choking (a very high common-mode impedance), which is difficult to realize in practice. If the choking isn't very effective (meaning the common-mode impedance isn't much higher than the differential mode), then making the choking more effective will increase the impedance seen by the transmitter. The monopole always has another set of conductors to which the second wire from the source is connected. This second conductor may be earth, or a metal object commonly known as a "ground plane". The ground plane turns the monopole into a virtual dipole, with each limb of the virtual dipole equal in length to the monopole. Thus the half-wave monopole's virtual dipole is a full-wave dipole. The impedance of a monopole is half of its virtual dipole's impedance. For example, a quarter-wave monopole with a ground plane has an impedance of 36 ohms, which is half of the roughly 72 ohms of its virtual dipole, the half-wave dipole. The impedance of a dipole varies very sharply when its length is a full wave. The equivalent monopole is half of that length, so a monopole's impedance varies very sharply when its length is a half wave. The best way is to use simulation software to determine impedance (R and X). A swept plot of R and X vs. length can give the point where you get the desired values. Model the half-wave center-fed antenna using the wire size, altitude, and all location attributes to determine the center-fed impedance. Divide 600 by the 73 ohms (or whatever you calculate the center-fed impedance to be), then multiply the answer by 600, and you're home. Example: $600/73 \approx 8.22$, and $8.22 \times 600 \approx 4930$ ohms. This is the original and correct quarter-wave transform, using 600 ohms to represent the surge impedance of a single wire. Earlier I edited the 600-ohm value to 380 ohms in error, because some of the CAD antenna design tools apparently used approximately 380 ohms. Using repeatable measurements, the surge impedance of a single wire verifies that the impedance of a wire unmolested by external sources will be approximately 600 ohms and will vary less with wire diameter than previously thought. The measurement can be done at the feed point of an open wire and verified with the termination value.
Using repeatable measurements, the Surge impedance of a single wire verifies that the impedance of a wire unmolested by external sources will be approximately 600 ohms and will vary less with wire diameter than previously thought. The measurement can be done at the feed point of an open wire and verified with termination value. where $L$ is the length of the wire and $d$ is the diameter of the wire — same units; In space. The following url represents a circuit that can be used to measure the surge impedance and velocity factor of a single wire or transmission line. It is a variable voltage divider, adjustable from 50 to 1050 ohms. The following url represents the measurement of the surge impedance and velocity factor of a 54 foot length of #14 wire. Important to note: The two timing lines from the rise to 115ns represent an approximately 4 volt @ .007A wave entering the wire. The wave will have reached the end of the wire at mid point, then return during the second half where you see the voltage rise to 8 volts and current cease. The rise and fall times are a result of instrument limitations. Surge current behaves as purely resistive, much like the impedance of space. It is used to calculate the radiation resistance of an antenna and is largely unaffected by proximity to the ground. The above Image represents the measurement of the Surge impedance and velocity factor of a single wire. It can be used to measure the Surge impedance and velocity factor in coax, twin lead or any antenna wire. The surge impedance of a single conductor or transmission line is essentially composed of two primary notions, the impedance of space 377 ohms and the energy used to accelerate electrons. Be mindful that the electrons in a 1 KW 14MHz transmission line or antenna will remain within approximately 1/10,000 of an inch from where they started. This can be visualized using a Newton's cradle. 
With that information it becomes clear that the opinions regarding end-fed antennas are wanting: the impedance at the feed point is not infinite and is easily calculated. The reflected waves/current that flow within an antenna when reaching an end are wholly reflected. The next edit will consist of a cycle-by-cycle narrative of the measurements and events as the antenna is spooled up. Any "realistic" end-fed wire antenna has a counterpoise, which may merely be the ground, or actual wire conductors on the ground, or some variation. The best way to determine the R, X of the feed point (assuming that is what you want) is to compute them using NEC2. NEC2 is very easy to use, and computing the R, X input is also very easy to do, as it is one of the regular (always printed) outputs of the execution. There are lots of resources on the Internet, available via Google search, to learn how to set up the geometry input specification for your antenna and run a solution at a specified frequency. If you want runs at multiple frequencies you can do a frequency sweep execution (starting frequency plus a delta increment add-on frequency). NEC4 is the licensed version, which is newer and more accurate. I mention it in this particular case because it is more accurate for ground models and ground-level wire conductors for the counterpoise. Depending on your antenna geometry and the design of the counterpoise, the difference between NEC2 and NEC4 can be significant. P.S. I use end-fed antennas quite a bit for my portable ops, which are always QRP, which means that to make contacts the antenna needs to work, and they do work. Typically, I throw a wire up over a branch of a tree (with the help of a slingshot to throw fishing line over the branch first). This wire is hooked directly to the center conductor of the PL-259 I use to hook up to my rig (KX3). The ground-wire counterpoise is just a conductor laying on the ground, tied to the shield conductor of the PL-259.
I actually made a PL-259 with ready-to-connect wires that make this task easy. The wires are about 60 feet or so long, but I have operated with shorter wires too. The auto-tuner on the Elecraft KX3 has a wide mismatch tuning range. Below is a more elaborate analysis of an EFHW, with a length of vertical coaxial cable attached at its feedpoint. It includes the effects of radiation from the "unchoked" r-f current that could flow along the outer surface of the outer conductor of the coaxial cable. Other details are shown in the comment block at the lower-left section of the graphic. Below is a graphic showing the free-space radiation pattern envelope and input Z of an end-fed, half-wave (EFHW) antenna, for the other conditions shown there. The radiation pattern's maximum field is centered on the physical center of the antenna, but the pattern in this graphic was shifted to the left so as better to show the nulls off the ends of the antenna. NOTE: Antennas and transmitters both are two-terminal devices, and will supply/radiate almost no energy if only one of their terminals is connected. The short, so-called counterpoise conductor of an "end-fed" antenna in reality means that this configuration is an off-center-fed dipole, as stated in the graphic.
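The quarter-wave-transform estimate given in one of the answers above reduces to a one-line calculation. A minimal sketch, assuming the answer's own figures (600 Ω single-wire surge impedance, 73 Ω center-fed resistance); the function name is made up:

```python
def end_fed_estimate(z_center=73.0, z_surge=600.0):
    """Quarter-wave transformer relation Z_in = Z0**2 / Z_load, using the
    single-wire surge impedance (the answer's ~600 ohm figure, not a
    universal constant) as Z0 and the center-fed dipole resistance as
    the load."""
    return z_surge ** 2 / z_center

# 600**2 / 73 is about 4932 ohms, i.e. the "roughly 4930 ohms" in that
# answer, and inside the 1800-5000 ohm range quoted in the question.
```

Note that the result is sensitive to the assumed surge impedance: swapping in the 380 Ω value the answer says some CAD tools use would give roughly 2 kΩ instead.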