Abstract: We propose a new, constant-round protocol for multi-party computation of boolean circuits that is secure against an arbitrary number of malicious corruptions. At a high level, we extend and generalize recent work of Wang et al. in the two-party setting and design an efficient preprocessing phase that allows the parties to generate authenticated information; we then show how to use this information to distributively construct a single ``authenticated'' garbled circuit that is evaluated by one party. - Efficiency: For three-party computation over a LAN, our protocol requires only 95 ms to evaluate AES. This is roughly a 700$\times$ improvement over the best prior work, and only 2.5$\times$ slower than the best known result in the two-party setting. In general, for $n$ parties our protocol improves upon prior work (which was never implemented) by a factor of more than $230n$, e.g., an improvement of 3 orders of magnitude for 5-party computation.
CommonCrawl
Farmer John's $N$ cows ($3 \leq N \leq 50,000$) are all located at distinct positions in his two-dimensional field. FJ wants to enclose all of the cows with a rectangular fence whose sides are parallel to the x and y axes, and he wants this fence to be as small as possible so that it contains every cow (cows on the boundary are allowed). FJ is unfortunately on a tight budget due to low milk production last quarter. He would therefore like to enclose a smaller area to reduce maintenance costs, and the only way he can see to do this is by building two enclosures instead of one. Please help him compute how much less area he needs to enclose, in total, by using two enclosures instead of one. Like the original enclosure, the two enclosures must collectively contain all the cows (with cows on boundaries allowed), and they must have sides parallel to the x and y axes. The two enclosures are not allowed to overlap -- not even on their boundaries. Note that enclosures of zero area are legal, for example if an enclosure has zero width and/or zero height. The first line of input contains $N$. The next $N$ lines each contain two integers specifying the location of a cow. Cow locations are positive integers in the range $1 \ldots 1,000,000,000$. Write a single integer specifying the total amount of area FJ can save by using two enclosures instead of one.
CommonCrawl
Abstract: We aim to improve the surface of last scattering (SLS) optimal cross-correlation method in order to refine estimates of the Poincaré dodecahedral space (PDS) cosmological parameters. We analytically derive the formulae required to exclude points on the sky that cannot be members of close SLS-SLS cross-pairs. These enable more efficient pair selection without sacrificing uniformity of the underlying selection process. In certain cases this decreases the calculation time and increases the number of pairs per separation bin. (i) We recalculate Monte Carlo Markov Chains (MCMC) on the five-year WMAP data; and (ii) we seek PDS solutions in a small number of Gaussian random fluctuation (GRF) simulations. For 5 < alpha/deg < 60, a calculation speed-up of 3-10 is obtained. (i) The best estimates of the PDS parameters for the five-year WMAP data are similar to those for the three-year data. (ii) Comparison of the optimal solutions found by the MCMC chains in the observational map to those found in the simulated maps yields a slightly stronger rejection of the simply connected model using $\alpha$ than using the twist angle $\phi$. The best estimate of $\alpha$ implies that, given a large-scale auto-correlation as weak as that observed, the PDS-like cross-correlation signal in the WMAP data is expected with a probability of less than about 10%. The expected distribution of $\phi$ from the GRF simulations is approximately Gaussian around zero; it is not uniform on $[-\pi,\pi]$. We infer that for an infinite, flat, cosmic concordance model with Gaussian random fluctuations, the chance of finding both (a) a large-scale auto-correlation as weak as that observed, and (b) a PDS-like signal similar to that observed is less than about 0.015% to 1.25%.
CommonCrawl
I'm trying to work through a theorem in the Lehmann statistical inference book and I'm confused about a proof. They are proving that a set of tests are UMP unbiased level-alpha tests for a series of hypotheses in a multiparameter exponential family. My question is, how do you know now that the $T_i$, $i=2,3,\ldots,n$, form a sufficient statistic for the $\theta_i$, $i=2,3,\ldots,n$? I'm guessing the result must be obvious from the factorization theorem, but I'm not sure how to employ it with the $\theta_1T_1(x)$ term at the beginning of the exponential.
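One way to see it (a sketch of the standard factorization argument, with notation assumed to match Lehmann's; not a quote from the book): the hypotheses in question hold $\theta_1$ fixed, so the troublesome $\theta_1 T_1(x)$ term no longer involves an unknown parameter and can be absorbed into the base measure,

$$p_\theta(x) = C(\theta)\, e^{\theta_1 T_1(x) + \sum_{i=2}^{n} \theta_i T_i(x)}\, h(x) = \underbrace{C(\theta)\, e^{\sum_{i=2}^{n} \theta_i T_i(x)}}_{g_{\theta_2,\ldots,\theta_n}\left(T_2(x),\ldots,T_n(x)\right)} \cdot \underbrace{e^{\theta_1 T_1(x)}\, h(x)}_{\tilde h(x)}.$$

Since $\theta_1$ is fixed, $\tilde h$ does not depend on the unknown parameters, and the factorization theorem then gives that $(T_2,\ldots,T_n)$ is sufficient for $(\theta_2,\ldots,\theta_n)$ in the family obtained by holding $\theta_1$ fixed.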
CommonCrawl
Most of my articles on calculators are just a support to the video presentation of that calculator. The video for this calculator will be prepared in April 2016. Meanwhile you may enjoy the preliminary attempt here. This is the Diehl DS 18 calculator. The calculator is a close model to the Diehl KR 15 introduced in a separate article. There are only a few differences. The model DS does not have back transfer to the input register (the missing R in the name of the model) and has a register which allows summing up the results - a version of mechanical memory (the letter S in the name of the model). With this device some of the computations can be performed more easily and faster than on other calculators. Here we describe some applications of the summation register.

Suppose that the user has to fill in a table of partial prices and their total. The user may either note the price in each row and add these partial results at the end, or multiply the pairs of numbers without clearing the output register. With the first approach the user has to note the results (the price in each row) and reuse them in the final computation when he sums the partial prices together. This could be a source of errors, if he or she rewrites a number with a typo. With the second method the danger of an error is smaller, since we need not make notes and reuse them to get the final result. On the other hand, using the second method the machine operator does not see the particular prices in each row, but just the total price.

With the summation register the user evaluates the product (the price in the first row) and then transfers it to the sum register. The result register is cleared after this operation. Then he or she proceeds in the same way with all the rows. Each particular product appears in the output register and is then added to the summation register. When the last product is added to the summation register, this register shows the total price.

The lever used to transfer the output register to the summation register is marked with a plus sign in a circle. The following actions are performed when the lever is pushed. The number from the output register is added to the memory. Both the output register and the counter are cleared. If the lever on the right of the summation register is in the minus position, then the output register is subtracted and the lever is returned to the plus position. The keyboard is cleared or not cleared depending on the position of the associated lever just beside the button for clearing the keyboard. The carriage is moved to the initial (leftmost) position.

The button on the left of the register has been designed to reset the memory register to zero. There is also a lever behind it, but I do not understand the purpose of that lever. The scheme of the memory (see the Literature below) also mentions a button in the front which can be used to transfer the number from the memory back to the output register (for division etc.). However, something like this seems not to be present on my model.

In the showcase you can enjoy the beauty under the cover. The calculator came very nice and clean inside; no thorough cleaning was necessary. But this does not mean that it worked. Enjoy the beauty of the machine first and then have a look at the restoration story. You may enjoy the long input where you can enter numbers up to 9 digits in length.

Definition: A zeroless pandigital number without redundant digits (ZPN) is a number which arises as a permutation of the digits 1, 2, 3, 4, 5, 6, 7, 8 and 9. There is no ZPN such that its nine-fold multiple is again a ZPN.
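A quick brute-force check of this claim (and of the multiplier facts stated next); a minimal sketch in Python, not part of the original article:

```python
from itertools import permutations

def is_zpn(n):
    """True if n is a permutation of the digits 1..9 (zeroless pandigital)."""
    return sorted(str(n)) == list("123456789")

# No ZPN times 9 is again a ZPN (in fact 9 * 123456789 already has 10 digits,
# so the product can never be a 9-digit number):
assert not any(is_zpn(9 * int("".join(p)))
               for p in permutations("123456789"))

# 123456789 stays a ZPN under exactly these single-digit multipliers:
print([k for k in range(2, 10) if is_zpn(k * 123456789)])   # [2, 4, 5, 7, 8]
```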
The number $123456789$ is a ZPN which can be multiplied by 2, 4, 5, 7 or 8 and we get again a ZPN. There are two consecutive multiples of $123456789$ which are both pandigital without redundant digits (i.e. a permutation of the digits from $0$ to $9$). One such pair is $$61\times 123456789=7530864129$$ and $$62\times 123456789=7654320918.$$ Other numbers $n$ with the property that both $n\times 123456789$ and $(n+1)\times 123456789$ are a permutation of all ten digits are $16$, $22$, $25$, $31$, $34$, $43$ and $52$.

The calculator came blocked. Like with its brother, the Diehl KR 15, I succeeded in unblocking it by some (partly random, partly systematic) pushing of the levers with a screwdriver and turning the motor by hand. The motor worked, but the calculator got blocked again and again. The suspicion was that the motor ran too fast, perhaps due to problems with speed regulation. This proved to be the case, and after fixing the problems with the electricity, all the basic functionality was restored.

The electric circuit is in the picture. The calculator is protected by some sort of LRC filter attached to the main socket with 220V AC. The solenoid in this part burned out and I removed this part without replacement. I hoped it was not vital for basic function and could be replaced later by a suitable universal mains filter. Further, you can see that the main switch is in series with the motor and a speed regulator (another switch mounted directly on the motor, with the on/off state regulated by the rotation of the motor). The main switch is protected by a component (probably an RC filter) which burned out in my calculator after a short while. I proceeded in the same way as with the input filter: I removed this component as not vital, and if the calculator works, I will use some modern RC filter (or an equivalent and much cheaper option: just a capacitor and a resistor). Note that if the capacitor in this part has a short circuit inside, the circuit is never disconnected: the motor is always on and never stops. On the other hand, if the capacitor is interrupted, the switch is not protected and may produce sparks.

The switch at the speed regulator is also protected by an RC filter. As before, if the capacitor has a short circuit inside, the switch is never off and the speed regulation does not work. This was my case: the high speed of the motor blocked the calculator even after very simple computations. Moreover, the gears got shocks from the fast movement, and the ruthless bumps caused deformation of the fine mechanics inside. In parallel to the RC filter at the speed regulator you can find a resistor of high resistance, which does not allow the current to jump to zero, but keeps it at a certain positive value and protects the motor from too steep jumps in the power supply.

The resistors used in the circuit can hardly go wrong, I hope. But all the other parts burned with a lot of smoke and stink. Unfortunately, no two parts went to the component heaven simultaneously; I had to disassemble the calculator and rebuild the electrics after each single component was destroyed. Now I am sure that if I repair a similar Diehl calculator in the future, the first thing will be to rebuild the whole electric circuit and replace the old components by their modern versions. Had I done this when I got the calculator from the current article, I would have avoided a lot of smoke and odor from burned components, and also the severe shocks to the fine mechanics from the high speed when the speed regulator was out of order.
These shocks had consequences for the mechanical parts, and these problems could have been prevented. Literature: Patent DE 1124742 for the mechanical memory. Scheme of the mechanical memory.
CommonCrawl
Silicon-on-insulator (SOI) wafer technology can be used to achieve a monolithic pixel detector, in which both a semiconductor pixel sensor and readout electronics are integrated in the same wafer. We are developing an SOI pixel sensor SOFIST, SOI sensor for Fine measurement of Space and Time, optimized for the vertex detector system of the International Linear Collider (ILC) experiment. This sensor has a pixel size of 20$\times$20 $\mu$m$^2$ with fine position resolution for identifying the decay vertices of short-lifetime particles. The pixel circuit stores both the signal charge and timing information of the incident particles. The sensor can separate hit events by recording timing information during bunch-train collisions of the ILC beam. Each pixel has multiple stages of analog memories and time-stamp circuits for accumulating multiple hit events. SOFIST Ver.1, the first prototype sensor chip, was fabricated using the 0.2 $\mu$m SOI process of LAPIS Semiconductor. The prototype chip consists of 50$\times$50 pixels and column-ADC circuits in a chip size of 3$\times$3 mm$^2$. We designed the pixel circuit for the charge signal readout with a pre-amplifier circuit and 2 analog memories. We measured the sensor position resolution with a 120 GeV proton beam at the Fermilab Test Beam Facility in January 2017. We observed a position resolution of 3 $\mu$m, which meets the requirement for a pixel sensor of the ILC vertex detector. In 2016, we submitted SOFIST Ver.2, which measures the hit timing information. We are designing SOFIST Ver.3, storing both the signal charge and timing information within a pixel area of 20$\times$20 $\mu$m$^2$. We adopt 3D stacking technology which implements an additional circuit layer on the SOI sensor chip. The additional layers are connected electrically by an advanced micro-bump technology, which can place bumps with a pitch of 5 $\mu$m. In this presentation, we report the status of the development and the evaluation of the SOFIST prototype sensor.
CommonCrawl
The next argument should be a character. In the case of AutoLISP there's no character type (yet), so in this case the first character of an input string will be used.

A new line is output in this position. There's an optional parameter where ~N% outputs $N \times NewLines$. So a code of ~3% would result in 3 new lines being placed in this position.

Inserts a new line at this position if and only if it's not already at the start of a line. There's an optional parameter where ~N& outputs $N \times NewLines$ if the position is not at the start of a line, else it outputs $(N - 1) \times NewLines$. So a code of ~3& would result in 3 new lines being placed in this position if the position is not at the start of a line, else it would place 2 new lines.

Outputs a page separator character - Form Feed (Unicode "\U+000C", or ASCII 12). In AutoLISP it would be the ASCII version using the octal code "\014". There's an optional parameter where ~N| outputs $N \times FormFeeds$. So a code of ~3| would result in 3 form feed characters being placed in this position.

Can be seen as the "escape" code. It simply results in a tilde character being output at the position. ~N~ would result in $N \times Tildes$, so ~3~ becomes "~~~".

"~:R" results in an ordinal English number, e.g. "fourth". "~@R" results in Roman numerals, e.g. "VI". "~:@R" results in old-style Roman numerals, e.g. "IIII".

mincol: The minimum length of the output, also known as field-length. Default is 0. E.g. "~10,5R" with a value of 123 would result in "00123". padchar: Has no meaning if mincol is omitted, or less than / equal to the length of the output. The character used in padding; defaults to 0. E.g. "~10,5,` R" with a value of 123 would result in " 123". commachar: Has no meaning without an accompanying comma-interval. Defaults to a comma [,], so "~10, , , ,3R" with a value of 12345 would result in "12,345". comma-interval: A base-10 integer giving the interval between comma separators. Default is 0. E.g. "~10, , ,` ,3R" with a value of 12345 would result in "12 345".

This is generally the same as using the Radix code with a radix of 10: "~10R" is equivalent to "~D". It only has the 4 optional parameters mincol, padchar, commachar and comma-interval. These work the same as for the Radix code. If the value of the argument is not of an integer type, the result is the same as would be obtained from "~A" with a decimal base.

This is generally the same as using the Radix code with a radix of 2: "~2R" is equivalent to "~B". It only has the 4 optional parameters mincol, padchar, commachar and comma-interval. These work the same as for the Radix code. If the value of the argument is not of an integer type, the result is the same as would be obtained from "~A" with a decimal base.

This is generally the same as using the Radix code with a radix of 8: "~8R" is equivalent to "~O". It only has the 4 optional parameters mincol, padchar, commachar and comma-interval. These work the same as for the Radix code. If the value of the argument is not of an integer type, the result is the same as would be obtained from "~A" with a decimal base.

This is generally the same as using the Radix code with a radix of 16: "~16R" is equivalent to "~X". It only has the 4 optional parameters mincol, padchar, commachar and comma-interval. These work the same as for the Radix code. If the value of the argument is not of an integer type, the result is the same as would be obtained from "~A" with a decimal base.
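To make the Radix parameters concrete, here is a small Python emulation of the mincol/padchar/commachar/comma-interval behaviour described above (a sketch of my reading of the spec, not the AutoLISP implementation itself):

```python
def format_radix(value, radix=10, mincol=0, padchar='0',
                 commachar=',', comma_interval=0):
    """Emulate the ~R directive's optional parameters as described above."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    n, out = abs(value), ""
    while True:
        n, r = divmod(n, radix)
        out = digits[r] + out
        if n == 0:
            break
    if comma_interval > 0:                      # group digits from the right
        groups = []
        while out:
            groups.insert(0, out[-comma_interval:])
            out = out[:-comma_interval]
        out = commachar.join(groups)
    if value < 0:
        out = "-" + out
    return out.rjust(mincol, padchar)           # pad to the field length

print(format_radix(123, mincol=5))                            # "00123", like ~10,5R
print(format_radix(123, mincol=5, padchar=' '))               # "  123", like ~10,5,` R
print(format_radix(12345, commachar=' ', comma_interval=3))   # "12 345"
print(format_radix(6, radix=2))                               # "110", like ~B
```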
CommonCrawl
Bessie likes downloading games to play on her cell phone, even though she does find the small touch screen rather cumbersome to use with her large hooves. She is particularly intrigued by the current game she is playing. The game starts with a sequence of $N$ positive integers ($2 \leq N \leq 248$), each in the range $1 \ldots 40$. In one move, Bessie can take two adjacent numbers with equal values and replace them with a single number of value one greater (e.g., she might replace two adjacent 7s with an 8). The goal is to maximize the value of the largest number present in the sequence at the end of the game. Please help Bessie score as highly as possible! The first line of input contains $N$, and the next $N$ lines give the sequence of $N$ numbers at the start of the game. Please output the largest integer Bessie can generate. In the sample case given with the problem, note that it is not optimal to join the first two 1s.
CommonCrawl
A morphism of Artin stacks $f:X\to Y$ over $\mathbb Q$ is representable by algebraic spaces if and only if its geometric fibres are algebraic spaces. I would like to know if one can use this to prove the following statement. Let $f:X\to Y$ be a morphism of finite type separated DM stacks over $\mathbb Q$. Suppose that, for any geometric point $x$ of $X$ with $y= f(x)$, the induced morphism on stabilizers $Stab(x)\to Stab(y)$ is injective. Then $f:X\to Y$ is representable by algebraic spaces. See also http://stacks.math.columbia.edu/tag/04YY for a fancy reformulation.
CommonCrawl
When $X$ is a smooth manifold, Lipyanskiy defines a chain complex whose homology is isomorphic to singular homology - let's say $GC_*(X)$ - generated by maps $\sigma: M \to X$ from compact $k$-manifolds with corners $M$. Working over $\Bbb F_2$ to avoid orientation discussions, $GC_k(X)$ is the free vector space generated by isomorphism classes of smooth maps $M \to X$ with connected domain modulo the subspace of "degenerate chains" (a chain has small image if its image can be covered by the image of a smooth manifold of strictly smaller dimension; a chain is degenerate if $\sigma$ and $\partial \sigma$ have small image). This is useful so that one can easily define intersection and fiber product maps on the chain level. There is of course a natural product map $GC_*(X) \otimes GC_*(Y) \to GC_*(X \times Y)$ which is a homology isomorphism given by multiplying the chains on the nose. I would like for there to be an associative natural transformation $AW: GC_*(X \times Y) \to GC_*(X) \otimes GC_*(Y)$ giving a homotopy inverse to the product map. (Ideally, it would be a right inverse to the product map on the nose, but I'm not convinced this is possible.) Is there such a transformation? This is certainly true in other non-simplicial settings (in cubical homology, for instance), and it seems plausible.
CommonCrawl
Reddy, SNS and Leonard, DN and Wiggins, LB and Jacob, KT (2005) Internal Displacement Reactions in Multicomponent Oxides: Part I. Line Compounds with Narrow Homogeneity Range. In: Metallurgical and Materials Transactions A, 36A (10). pp. 2695-2703. As a model of an internal displacement reaction involving a ternary oxide line compound, the following reaction was studied at 1273 K as a function of time, t: $Fe + NiTiO_3 = Ni + FeTiO_3$. Both polycrystalline and single-crystal materials were used as the starting $NiTiO_3$ oxide. During the reaction, the Ni in the oxide compound is displaced by Fe and it precipitates as a $\gamma$-(Ni-Fe) alloy. The reaction preserves the starting ilmenite structure. The product oxide has a constant Ti concentration across the reaction zone, with variation in the concentrations of Fe and Ni, consistent with the ilmenite composition. In the case of single-crystal $NiTiO_3$ as the starting oxide, the $\gamma$ alloy has a layered structure, and the layer separation is suggestive of Liesegang-type precipitation. In the case of polycrystalline $NiTiO_3$ as the starting oxide, the alloy precipitates mainly along grain boundaries, with some particles inside the grains. A concentration gradient exists in the alloy across the reaction zone and the composition is >95 at. pct Ni at the reaction front. The parabolic rate constant for the reaction is $k_p = 1.3 \times 10^{-12}\ \mathrm{m}^2\,\mathrm{s}^{-1}$ and is nearly the same for both single-crystal and polycrystalline oxides.
CommonCrawl
Is it possible to have a $3 \times 3$ matrix that is both orthogonal and skew-symmetric? I know it has something to do with the odd order of the matrix, and that it is not possible to have such a matrix. But what is the reason?

Orthogonal matrices have their eigenvalues on the unit circle. Skew-symmetric matrices have their eigenvalues on the imaginary axis. Matrices with real entries have complex-conjugate pairs of eigenvalues. The only points where the unit circle intersects the imaginary axis are $i$ and $-i$, which make up one perfect complex-conjugate pair. But your matrix needs $3$ eigenvalues, so we are missing one. It cannot have a complex partner, so it must be on the real axis. The only point where the real axis intersects the imaginary axis is $0$, which is not on the unit circle. You have three constraints, but they never meet at one point.

No: an orthogonal matrix has determinant $\pm 1$, whereas a skew-symmetric matrix of order 3 has determinant $0$.

No, it is not possible, for real matrices. Let $A$ be an $n \times n$ real matrix, with $n$ odd. Suppose it is skew-symmetric, that is, $A^T = -A$. Then $A^T A = -A^2$. If the matrix is also orthogonal, $I = -A^2$. Now take determinants: since $n$ is odd, $\det(-A^2) = (-1)^n \det(A)^2 = -\det(A)^2$, so $$\det(A)^2 + 1 = 0.$$ No real number squares to $-1$, giving a contradiction, since we assumed $A$ to be a real matrix.
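For a numerical sanity check, a sketch in Python/NumPy, including the even-dimensional example that shows why odd order is essential:

```python
import numpy as np

# In even dimension both properties can hold at once: rotation by 90 degrees.
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.allclose(Q.T @ Q, np.eye(2)), np.allclose(Q.T, -Q))   # True True

# In odd dimension it is impossible: any 3x3 skew-symmetric matrix is singular
# (det A = det A^T = det(-A) = (-1)^3 det A, hence det A = 0), while an
# orthogonal matrix must have determinant +1 or -1.
A = np.array([[ 0.0,  1.0, -2.0],
              [-1.0,  0.0,  3.0],
              [ 2.0, -3.0,  0.0]])
print(np.linalg.det(A))   # 0.0 (up to rounding)
```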
CommonCrawl
is anyone certain of what question 4 means? (4) Which elements commute with every other element? If an element, say $a\in G$, commutes with every other element, then this means $ax=xa$ for all $x\in G$. It's like being abelian, but you're only checking whether an individual element commutes with everything else. If the group is already abelian, then ALL the elements commute with all the other elements. However, in a nonabelian group, some elements may still commute with every other element. Look at the group table and see if there are any rows that are identical to the corresponding column. Do you see why this is checking what I'm asking? Maybe the answer is "none." There is always at least one element that commutes with everything. thanks Dana we were lost at that question. I don't know if I was sleeping in class or what, but how do we know when to either add or multiply in a Cayley table? This is making me have a difficult time trying to interpret the Cayley table for $H$ in the Sage lab. Shaun, the answer is that it doesn't matter what the operation is. The table tells you how to combine any two elements of the group regardless of the operation. Since I alluded to what each of these groups is, you can figure out what the operation is, but you do not need that information to answer any of the questions. You can get everything that you need from the table. By the way, the x0, x1, etc. notation is short for $x_0, x_1$, etc. I was wondering if anyone could explain to me how to figure out if there is a non-trivial subgroup; I think I'm just stuck on how to read the graph it gave me. Now that we've covered a little bit of chapter 3, the easiest thing to do is to find the cyclic subgroup generated by something. Pick any of the non-identity elements. Find the cyclic subgroup generated by that element. If you get the whole group, try a different element. Hint: you should end up with a subgroup of order 2 or 3. I'll accept Sage lab 2 anytime up until midnight tonight if you are still working on it.
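If it helps with the lab, here is a small Python sketch of exactly that row-vs-column check (the table below is a made-up 3-element example, not the lab's $H$):

```python
# Cayley table as a dict of dicts: table[a][b] = a*b.  This toy table is a
# hypothetical cyclic group of order 3, so every element will pass the check.
table = {
    'e': {'e': 'e', 'a': 'a', 'b': 'b'},
    'a': {'e': 'a', 'a': 'b', 'b': 'e'},
    'b': {'e': 'b', 'a': 'e', 'b': 'a'},
}

# An element commutes with everything exactly when its row equals its column.
center = [a for a in table
          if all(table[a][x] == table[x][a] for x in table)]
print(center)   # here the group is abelian, so all of ['e', 'a', 'b']
```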
CommonCrawl
And also, if we have built the 496 of $SO(32)$ as a symmetrized pairing of 16 + 16 "particles and antiparticles", it can be further split into $256 + 240$, which is a more informal statement of the above branchings, and again looks like two copies of $SO(16)$. This kind of coincidence is usually mentioned as lore (say, Baez's TWF and similar) but rarely is more substance given. Is there more content here, then? Such as actually defining $E_8$ as some action in those vector spaces that are also representations of subgroups of $SO(32)$? And vice versa? Also, is $SO(16) \times SO(16)$ the only maximal common subgroup useful for this sort of description? They share rank and order. What can you say about the difference of their Dynkin diagrams? Due diligence: re-read your Slansky. I very strongly suspect you'd find more takers in the math cousin of this site.
CommonCrawl
I've been recently acquainted with a statistical technique of amazing utility and versatility that has its roots in matrix decomposition, a basic (though profound) concept in linear algebra. For the purposes of this discussion, we're going to consider it a very elegant way of taking a large, confusing dataset with many variables and transforming it so that you can find patterns based on the correlations among the variables, thus allowing you to describe your data with fewer of them. Though it comes in several flavors which go by various names, we'll call it Principal Component Analysis (PCA), and later I'm going to show you how you can use it to implement a sort of computer vision/face recognition thing using either Matlab or GNU Octave. Before this, though, we need to be comfortable with two concepts: (co)variance and eigen-things. If you are already, SKIP TO PCA or SKIP TO EIGENFACES (this is a very long post).

Notice that here, the numerator is exactly what we did above, but the denominator is (n-1) instead of just n. In stat-speak, that's because, in our case, we aren't interested in estimating population variance from our sample; if we were, dividing by (n-1) would give a better (unbiased) estimate. In this example, we are effectively treating our sample as a population. Seen in this way, variance is just a special case of covariance, where you calculate the covariance of the dimension with itself.

The plot on the left shows a much neater, tighter relationship, with changes in one variable corresponding closely to changes in the other (varying together; heavier people also tend to be taller people); the one on the right, while still having positive covariance (more hours spent on homework tends to result in higher grades), doesn't look quite as tight. A negative covariance would mean that the variables do not change together; that increases in one are associated with decreases in the other. The covariance for height and weight is 92.03 and the covariance for homework and grades is 5.13; while we should be pretty convinced by this disparity that height & weight vary together more closely than do homework & grades, in order to confirm this the covariances should be standardized. To do this, we divide each covariance by the standard deviations of its two variables, resulting in the correlation coefficient, Pearson's r. For height vs. weight, r = 0.77 and for grades vs. homework, r = 0.33, confirming our observations. These will be crucial later, so keep them in mind.

$Mv=xv$, where $x$ is a scalar (in our examples it will happen to be an integer). An example should make this clear. Say we have the square matrix $\begin{pmatrix} 4 & 1 \\ 6 & 3 \end{pmatrix}$; we want to find a vector [a b] and scalar x satisfying the equation above. Any such vector [a b] we call an eigenvector of this matrix, and any such scalar x an eigenvalue. Here, [1 2] is an eigenvector and 6 is an eigenvalue of our matrix. These aren't easy to come by, and solving for them by hand is usually infeasible. An $n \times n$ matrix will have n eigenvalues (counting multiplicity, over the complex numbers). In this case, 1 is the other, and an example of an eigenvector is [1 -3]. See?

OK, that's probably enough to "get" PCA; just remember covariance matrices and eigenvectors, and you're set. PCA was independently proposed by Karl Pearson (of correlation fame) and Harold Hotelling in the early 1900s.
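To tie the two concepts together, a quick NumPy sketch (the data are made-up numbers, not the article's; the matrix is the $2\times 2$ example above, recovered from the eigenpairs quoted in the text):

```python
import numpy as np

# Covariance and correlation for two toy variables (hypothetical numbers).
height = np.array([60.0, 65.0, 70.0, 72.0, 75.0])
weight = np.array([115.0, 140.0, 155.0, 170.0, 185.0])
cov_hw = np.cov(height, weight)[0, 1]        # divides by (n-1), as discussed
r_hw = np.corrcoef(height, weight)[0, 1]     # standardized covariance: Pearson's r
print(cov_hw, r_hw)

# Eigen-things for the 2x2 example matrix.
M = np.array([[4.0, 1.0],
              [6.0, 3.0]])
vals, vecs = np.linalg.eig(M)                # columns of vecs are eigenvectors
print(vals)                                  # 6 and 1 (order may vary)
print(M @ np.array([1.0, 2.0]))              # [6. 12.] = 6 * [1 2], so Mv = 6v
```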
It is used to turn a set of possibly correlated variables into a smaller set of uncorrelated variables, the idea being that a high-dimensional dataset is often described by correlated variables, and therefore only a few meaningful dimensions account for most of the information. PCA finds the directions with the greatest variance in the data, which are called principal components. You won't always get good results when you reduce the dimensionality of your data, especially if it's just to get a 2D/3D graph; sometimes there is just no simpler underlying structure.

I'll walk you through one using data from a class I'm taking; for now, I'm just going to use R so I can provide some visualizations. There are three dimensions here (grades, parents' education, and homework); note that this is lame/inappropriate data for this sort of technique, but it'll suffice for illustration. The OLS best-fitting plane is shown; it minimizes the squared deviations of actual Grades from Grades predicted from Parent's Education and Homework (Hours/Day). A single component accounts for ~90% of the variance in the data! We'll keep the others around, just to see what happens, but at this point you might eliminate weaker components. This is a still from the interactive plot which is much more convincing, so run it yourself! The eigenvectors are the axes of the transformed data, thus providing a better characterization.

In summary, we have transformed our data so that it is expressed in terms of the patterns between them (lines that best describe the relationships among the variables); essentially, we have classified our data points as combinations of the contributions from all three lines, which can be thought of as representing the best possible coordinate system for the data: the greatest variance of some projection of the data lies on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, etc.

Many face-recognition techniques treat the pixels of each face-picture as a vector of values; this one does PCA on many such vectors to abstract characteristic features which can then be used to classify new faces. Think of an image as a matrix of pixels; let's restrict our attention to grayscale for clarity. Each pixel in a grayscale image is assigned an intensity value from 0 to 255, where "0" is completely black and "255" is completely white. The pictures I am using are 92px by 112px, meaning that there are 92 pixels along each of 112 rows, or equivalently, 112px in each of 92 columns. So, each image comprises a total of 112 * 92 = 10,304 pixels. You can think of each face as a point in a space of 10,304 dimensions! Or rather, you can't very easily think of this, but that's exactly what we've got!

…where each $px$ is a single pixel with a grayscale intensity value ranging from 0 to 255. At this point, we have a vector full of numbers… a single dimension. Now, the idea is you take a bunch of these image vectors (made from different images), slap them all into a matrix, normalize them, generate a covariance matrix, find the eigenvectors/values for the matrix, and use these values to measure the difference between a new image and the originals. If the distance is small enough (per some threshold value), then a match condition is satisfied! I'll show you how all of this works in GNU Octave, but the code should work in Matlab too. In Octave, you need to have the images package installed.
First, we need a "training set" of small grayscale images, all the same size. Many computer vision databases exist, with many such sets to choose from. I'm using 200 images from the AT&T face database (20 subjects, 10 images each). Good! Now we subtract the mean image (the pixel-wise average of all training images) from each image in our training set and compute the covariance matrix exactly as shown in the discussion above on PCA. The clincher is that for any distribution of data with n variables, we can describe them with a basis of eigenvectors, and because these are necessarily orthogonal, the variables will be uncorrelated. Below are the eigenfaces*, and boy are they ghastly! Think of them as the pixel representation of each eigenvector formed from the covariance matrix of all images; these faces represent the most similar parts of some faces, and the most dramatic differences between others. *BTW, I think the first one was mistakenly replaced by the mean image in this picture.

Now that we have facespace, how do we go about recognizing a new face? The recognition procedure is as follows: once we have projected every sample image into the eigenface subspace, we have a bunch of points in a 20-dimensional space. When given a new image (a new picture of someone in the training set), all we need do is project it into face space and find the smallest distance between the new point and each of the projections of the sample images: of the people pictured in the training set, this gives us the one that best matches the input picture. These graphs show the distance of each input image to each person in the training set (along the x-axis, 1-20); both matched! The first input image had the shortest distance to other images of that person, and so did the second (see the red dot in the figure below)!

I decided to try a picture of myself that was already in black and white, to see if it could reconstruct it; with a large enough data set, it could produce a perfect reconstruction, much in the way that Fourier transforms/decompositions work. Not good at all, really, is it? I probably screwed something up… this is one of my first forays into GNU Octave and I'm just fumbling my way through someone else's Matlab code. Still, we were able to positively classify two untrained images! That's not too bad for a first go.

What could this technique be used for, practically? It's actually pretty old, and has been largely supplanted by newer, more accurate recognition methods. One immediate use for eigenfaces would be to implement a face-recognition password system for your computer, like these guys did. You can also use it for face detection, not just recognition.
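For readers who would rather follow along in Python than Octave, here is a hedged sketch of the whole pipeline using the standard "small matrix" eigenfaces trick (NumPy only; the loader and all variable names are my assumptions, not the article's code):

```python
import numpy as np

# Assumed input: X is (num_images, 10304), each row a flattened 92x112
# grayscale face as described above; labels[i] names the subject in row i.
def train_eigenfaces(X, num_components=20):
    mean = X.mean(axis=0)
    A = X - mean                        # subtract the mean image from each face
    # Small-matrix trick: eigenvectors of the (num_images x num_images)
    # matrix A A^T map to eigenvectors of the huge covariance A^T A via A.T.
    vals, vecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(vals)[::-1][:num_components]
    faces = A.T @ vecs[:, order]        # columns are the eigenfaces
    faces /= np.linalg.norm(faces, axis=0)
    return mean, faces

def project(x, mean, faces):
    return faces.T @ (x - mean)         # coordinates of x in "face space"

def recognize(x, X, labels, mean, faces):
    weights = (X - mean) @ faces        # projections of all training images
    d = np.linalg.norm(weights - project(x, mean, faces), axis=1)
    return labels[int(np.argmin(d))]    # nearest neighbour in face space
```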
CommonCrawl
A modification of the surface layer of a Ti6Al4V titanium alloy is carried out using ultrasonic impact treatment (UIT) with addition of Al$_2$O$_3$ and Cr$_2$O$_3$ powders to the deformation zone. As shown by means of x-ray diffraction phase analysis and optical and scanning electron microscopies, composite surface layers are formed during the UIT-induced severe plastic deformation. The microhardness of the composite layers is 2 times higher than that of the matrix alloy. The high-temperature oxidation of composite layers containing Cr$_2$O$_3$ particles and the Cr$_2$O$_3$ + Al$_2$O$_3$ mixture leads to strengthening of the underlying layers due to the formation of a solid solution of oxygen in the $\alpha$-phase, which is not observed in the case of a layer/coating formed with addition of Al$_2$O$_3$. According to the gravimetric analysis of samples during cyclic high-temperature oxidation in air (20 cycles of 5 hours at a temperature of 550°C), it is concluded that the composite layer/coating saturated with Al$_2$O$_3$ particles has the highest heat resistance. This is due to the close values of the thermal expansion coefficients of the Al$_2$O$_3$ coating and the Ti6Al4V alloy, as opposed to the behaviour of the bare alloy and the other composite layers, which are destroyed during the cyclic heating–cooling process. Key words: ultrasonic impact treatment (UIT), composite layers, oxide powders, coating, microhardness, heat resistance.
CommonCrawl
I have a set of $n$ agents and a set of $n$ tasks, and I need to assign each agent to exactly one task such that a cost is minimised. Some agents are incompatible with some tasks. I have an implementation of the Hungarian Algorithm which takes about a minute to solve for my $640 \times 640$ matrix. For forbidden assignments, I set the cost to $\infty$. (There always exists a feasible solution in my problem.) I've also set it up as a binary program in CPLEX, which takes about 9 seconds to solve the same problem. The BIP model excludes forbidden assignments outright by omitting those variables. I haven't yet investigated setting it up as a network model in CPLEX, but that will likely be my next step. There is, however, a performance cost in communicating with CPLEX, so I'm sure a dedicated algorithm should get better performance. This bipartite matching problem is a kernel within another iterative search algorithm, so it must run as fast as possible. Are there any algorithms that I can implement that will outperform the Hungarian Algorithm in this case? Or do you have any other suggestions on how I can improve the performance of this kernel?

You might try one of the auction-based algorithms for bipartite matchings. (See e.g. lecture notes describing a simple variant here: https://staff.fnwi.uva.nl/n.s.walton/Notes/Bertsekas_Auction.pdf but more optimizations are possible.)
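For reference, SciPy ships a fast linear-assignment solver that handles matrices of this size in well under a second; a sketch with forbidden assignments masked by a large finite cost (assuming, as stated, that a feasible solution always exists):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

n = 640
rng = np.random.default_rng(0)
cost = rng.random((n, n))                 # stand-in for the real cost matrix
forbidden = rng.random((n, n)) < 0.3      # stand-in for the incompatibilities

# Use a large *finite* cost instead of infinity so the arithmetic stays safe;
# this is sound as long as a feasible all-allowed assignment exists.
BIG = 1e9
masked = np.where(forbidden, BIG, cost)

rows, cols = linear_sum_assignment(masked)   # optimal agent-to-task assignment
print(masked[rows, cols].sum())
```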
CommonCrawl
A system of expressing the number of rows and columns of a matrix in mathematical form is called the order of a matrix. The order of a matrix denotes the arrangement of elements as the number of rows and columns in a matrix, so it is also known as the dimension of a matrix. It is usually written as the number of rows multiplied by the number of columns, but it is read as "number of rows by number of columns". One important fact: the dimension of the matrix tells you the number of elements of the matrix, which can be obtained by multiplying the number of rows by the number of columns. In a general matrix, the elements are arranged in $m$ rows and $n$ columns, so the order of the matrix is $m \times n$, read as "$m$ by $n$". There is only one row and one column in matrix $A$. The matrix $A$ is called a matrix of order $1 \times 1$ and read as a one by one matrix; simply, it is a matrix of order $1$. There is one row and three columns in matrix $B$, so it is called a matrix of order $1 \times 3$ and read as a one by three matrix. The elements of matrix $C$ are arranged in $2$ rows and $2$ columns. It is called a matrix of order $2 \times 2$ and read as a two by two matrix, or simply a matrix of order $2$. Matrix $D$ is formed by $3$ rows and $4$ columns; therefore, the order of the matrix is $3 \times 4$, read as a three by four matrix. The total number of elements in matrix $D = 3 \times 4 = 12$.
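For instance, in NumPy the order of a matrix is exactly its shape (a small illustrative sketch, not part of the original lesson):

```python
import numpy as np

B = np.array([[1, 2, 3]])           # 1 row, 3 columns: order 1 x 3
D = np.arange(12).reshape(3, 4)     # 3 rows, 4 columns: order 3 x 4
print(B.shape)                      # (1, 3)
print(D.shape)                      # (3, 4)
print(D.size)                       # 12 elements, i.e. 3 * 4
```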
CommonCrawl
We study dark matter (DM) which is cosmologically long-lived because of standard model (SM) symmetries. In these models an approximate stabilizing symmetry emerges accidentally, in analogy with baryon and lepton number in the renormalizable SM. Adopting an effective theory approach, we classify DM models according to representations of $SU(3)_C\times SU(2)_L\times U(1)_Y \times U(1)_B\times U(1)_L$, allowing for all operators permitted by symmetry, with weak scale DM and a cutoff at or below the Planck scale. We identify representations containing a neutral long-lived state, thus excluding dimension four and five operators that mediate dangerously prompt DM decay into SM particles. The DM relic abundance is obtained via thermal freeze-out or, since effectively stable DM often carries baryon or lepton number, asymmetry sharing through the very operators that induce eventual DM decay. We also incorporate baryon and lepton number violation with a spurion that parameterizes hard breaking by arbitrary units. However, since proton stability precludes certain spurions, a residual symmetry persists, maintaining the cosmological stability of certain DM representations. Finally, we survey the phenomenology of effectively stable DM as manifested in probes of direct detection, indirect detection, and proton decay. CC is supported by a DOE Early Career Award DE-SC0010255 and a Sloan Research Fellowship. DS is supported in part by U.S. Department of Energy grant DE-FG02-92ER40701 and by the Gordon and Betty Moore Foundation through Grant No. 776 to the Caltech Moore Center for Theoretical Cosmology and Physics.
CommonCrawl
Let $\frak m$ be an infinite cardinal. We denote by $C_\frak m$ the collection of all $\frak m$-representable Boolean algebras. Further, let $C_\frak m^0$ be the collection of all generalized Boolean algebras $B$ such that for each $b\in B$, the interval $[0,b]$ of $B$ belongs to $C_\frak m$. In this paper we prove that $C_\frak m^0$ is a radical class of generalized Boolean algebras. Further, we investigate some related questions concerning lattice ordered groups and generalized $MV$-algebras.
CommonCrawl
In the summer of 2018, Unit 42 released reporting regarding activity in the Middle East surrounding a cluster of activity using similar tactics, techniques, and procedures (TTPs), in which we named the adversary group DarkHydrus. This group was observed using tactics such as registering typosquatting domains for security or technology vendors, abusing open-source penetration testing tools, and leveraging novel file types as anti-analysis techniques. Since that initial reporting, we had not observed new activity from DarkHydrus until recently, when 360TIC published a tweet and subsequent research discussing delivery documents that appeared to be attributed to DarkHydrus. In the process of analyzing the delivery documents, we were able to collect additional associated samples, uncover additional functionality of the payloads including the use of the Google Drive API, and confirm the strong likelihood of attribution to DarkHydrus. We have notified Google of our findings.

We collected a total of three DarkHydrus delivery documents installing a new variant of the RogueRobin trojan. These three documents were extremely similar to each other and are all macro-enabled Excel documents with .xlsm file extensions. None of the known documents contain a lure image or message to instruct the recipient to click the Enable Content button necessary to run the macro, as seen in Figure 1. While we cannot confirm the delivery mechanism, it is likely that the instructions to click the Enable Content button were provided during delivery, such as in the body of a spear-phishing email. Without the delivery mechanism we cannot confirm the exact time these delivery documents were used in an attack; however, the observed timestamps within these three delivery documents give us an idea of when the DarkHydrus actors created them. While the creation times were timestomped to a default time of 2006-09-16 00:00:00Z, commonly observed in malicious documents, the Last Modified times were still available and suggest that DarkHydrus created these documents in December 2018 and January 2019. Table 1 shows the breakdown of timestamps and their associated sample hashes.

The use of the legitimate regsvr32.exe application to run a .sct file is an AppLocker bypass technique originally discovered by Casey Smith (@subtee), which eventually resulted in a Metasploit module. The WINDOWSTEMP.ps1 script is a dropper that decodes an embedded executable using base64 and decompresses it with the System.IO.Compression.GzipStream object. The script saves the decoded and decompressed executable to %APPDATA%\Microsoft\Windows\Templates\WindowsTemplate.exe and creates an LNK shortcut at %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup\OneDrive.lnk to persistently run WindowsTemplate.exe each time Windows starts up.

The WindowsTemplate.exe executable is a new variant of RogueRobin written in C#. In our original blog on DarkHydrus, we analyzed a PowerShell-based payload we named RogueRobin. While performing the analysis of the delivery documents using the .sct file AppLocker bypass, we noticed the C# payload was functionally similar to the original RogueRobin payload. The similarities between the PowerShell and C# variants of RogueRobin suggest that the DarkHydrus group ported their code to a compiled variant. The C# variant of RogueRobin attempts to detect if it is executing in a sandbox environment using the same commands as the PowerShell variant of RogueRobin.
The series of commands, as seen in Table 2, includes checks for virtualized environments, low memory, and processor counts, in addition to checks for common analysis tools running on the system. The Trojan also checks to see if a debugger is attached to its process and will exit if it detects the presence of a debugger.

gwmi win32_computersystem: uses this query to check the system information for the string "VMware".
gwmi -query "Select TotalPhysicalMemory from Win32_ComputerSystem": uses this query to check whether the total physical memory is less than 2,900,000,000 bytes.
gwmi -Class win32_Processor | select NumberOfCores: uses this query to check whether the total number of CPU cores is less than 1.
Get-Process | select Company: checks whether any running processes have "Wireshark" or "Sysinternals" as the company name.

Like the original version, the C# variant of RogueRobin uses DNS tunneling to communicate with its C2 server using a variety of different DNS query types. Just like in the sandbox checks, the Trojan checks for an attached debugger each time it issues a DNS query; if it does detect a debugger, it will issue a DNS query to resolve 676f6f646c75636b.gogle[.]co. The domain is legitimate and owned by Google. The subdomain 676f6f646c75636b is a hex-encoded string which decodes to goodluck. This DNS query likely exists as a note to researchers, or possibly as an anti-analysis measure, as it will only trigger if the researcher has already patched the initial debugger check to move on to the C2 function. Figure 2 shows the code responsible for detecting the attached debugger and issuing the corresponding DNS request. Additionally, the RogueRobin Trojan uses the regular expressions in Table 3 to confirm that the DNS response contains the appropriate data for it to extract information from.

The C# variant, like its PowerShell relative, will issue DNS queries to determine which query types can successfully communicate with its C2 servers. Figure 3 shows the RogueRobin payload issuing DNS requests to resolve custom-crafted subdomains of its C2 domains using TXT, SOA, MX, CNAME, SRV, A and AAAA query types. The domains in the test queries, such as aqhpc.akdns[.]live, have subdomains that are generated by substituting the digits in the Trojan's process ID with the characters seen in Table 4 (for example, qhp for the PID 908) and surrounding these characters with the static characters a and c (see the sketch after the command list below). The C2 server can respond to any of the query types to provide a unique identifier value that the Trojan will store in a variable and use in future DNS requests. The generated subdomain is then subjected to a number-to-character substitution function that is the inverse of Table 4, which effectively converts all the digits in the subdomain into characters. The Trojan checks the response to this query using the regular expressions in Table 3. If it receives a non-cancelling response, the Trojan will extract data from the DNS responses and treat it as commands. Table 5 shows the commands that the C# variant of RogueRobin can handle, which are extremely similar to those of the previously analyzed PowerShell variant.

^\$x_mode: turns the alternative 'x_mode' on to use the alternative C2 channel. If preceded by "OFF", it turns 'x_mode' off; otherwise the command is newline-delimited with settings to use this alternative C2 functionality.
^\$fileUpload: this command should be followed by a string that will be used as a path to save a new file to the system. This command will then reach out to the C2 server to obtain the data to save to this file path.
^showconfig: creates a pipe-delimited ("|") string that contains the sample's settings, including the list of C2 domains and available DNS query types.
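As an illustration of the subdomain scheme described above, a Python sketch; note that only the three digit mappings implied by the published PID 908 → qhp example are pinned down here, and the rest of Table 4's alphabet is a placeholder, not the actor's real substitution table:

```python
# Only 9->q, 0->h and 8->p are confirmed by the PID 908 -> "qhp" example;
# any other digit falls through to '?' rather than guessing the real alphabet.
DIGIT_MAP = {'9': 'q', '0': 'h', '8': 'p'}

def test_subdomain(pid, c2_domain):
    encoded = ''.join(DIGIT_MAP.get(d, '?') for d in str(pid))
    # the encoded PID is wrapped in the static characters 'a' and 'c'
    return "a{}c.{}".format(encoded, c2_domain)

print(test_subdomain(908, "akdns.live"))   # aqhpc.akdns.live (defanged in the text)
```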
A command that was not available in the original PowerShell variant of RogueRobin but is available in the new C# variant is x_mode. This command is particularly interesting, as it enables an alternative command and control channel that uses the Google Drive API. The x_mode command is disabled by default, but when enabled via a command received from the DNS tunneling channel, it allows RogueRobin to receive a unique identifier and to get jobs by using Google Drive API requests. In x_mode, RogueRobin uploads a file to the Google Drive account and continually checks the file's modification time to see if the actor has made any changes to it. The actor will first modify the file to include a unique identifier that the Trojan will use for future communications. The Trojan will treat all subsequent changes to the file made by the actor as jobs and will treat them as commands, which it will handle with the same command handler seen in Table 5.

To use Google Drive, the x_mode command received from the C2 server via DNS tunneling will be followed by a newline-delimited list of settings needed to interact with the Google Drive account. Figure 4 shows the code in RogueRobin that handles the x_mode command, specifically splitting the command data on newlines and using the resulting array to set variables used as x_mode settings. As seen in Figure 4, the settings are stored in the variables seen in Table 6, which are used to authenticate to the actor-controlled Google account before uploading and downloading files from Google Drive. To obtain an OAuth access token to authenticate to the actor-provided Google account, the Trojan sends an HTTP POST request to a URL stored in the gdo2t variable, with grant_type, client_id, client_secret, and refresh_token fields added to the HTTP header and in the POST data, as sketched below. As seen in Figure 5, the values for these fields are set to variables initially set upon issuing of the x_mode command. In one RogueRobin sample (SHA256: f1b2bc0831…), the author did not use the Google Drive URL provided by the actor when issuing the x_mode command, and instead included a hardcoded Google Drive URL, as seen in Figure 6. This is the only instance we observed where a hardcoded Google Drive URL was included in RogueRobin, which may suggest that the author overlooked this during testing.

The Trojan splits the matching data, specifically the subdomain, on a separator that is a character between r and v, and uses the data before the separator to get the sequence number and a Boolean value (0 or 1) indicating whether more data is expected. It will use the data after the separator as the string that it will subject to the command handler seen in Table 5.
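The token request is a standard OAuth 2.0 refresh-token exchange; a generic Python sketch of that flow (the endpoint and all credential values here are illustrative assumptions, not extracted from the sample, which reads its endpoint from the gdo2t setting):

```python
import requests

# Generic OAuth 2.0 refresh-token exchange of the kind described above.
TOKEN_URL = "https://oauth2.googleapis.com/token"   # assumed endpoint
payload = {
    "grant_type": "refresh_token",
    "client_id": "<client-id>",
    "client_secret": "<client-secret>",
    "refresh_token": "<refresh-token>",
}
resp = requests.post(TOKEN_URL, data=payload)
access_token = resp.json()["access_token"]
# Subsequent Drive API requests carry "Authorization: Bearer <access_token>".
```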
The initial list of C2 domains released by 360TIC associated with 513813af15… appeared thematically very similar to previous DarkHydrus activity, using domain names visually similar to those of well-known technology vendors or service providers. This list was further expanded upon by ClearSky Security (here, here and here) in a series of tweets that provided additional similar domain names also likely linked to DarkHydrus. To better understand how these domains are related to DarkHydrus, we began visually mapping the relationships between the list of domains, which can be seen in Figure 7.

The diagram shows the DarkHydrus group using a consistent naming schema and structure in their infrastructure. They register a multitude of domains and set up nameservers to use as the primary DNS for their C2 domains. For this campaign, we are able to cluster the adversary infrastructure via the specific nameservers that were deployed for the C2s. The brackets in Figure 7 show the distinct clustering of the infrastructure into three groups. We were able to retrieve live payloads associated with two of the clusters. A third cluster was also shared by ClearSky Security, but we were unable to associate a live payload with it. Although the third cluster does not appear to have any direct relationships to the other two clusters, it is still highly probable that this cluster is related to the two other clusters via the structuring of domains with custom nameservers. In addition, the domain names themselves were extremely similar, with some examples being exactly the same but on a different top-level domain. The two sets of nameservers we were able to associate with the retrieved payloads were tbs1/tbs2.microsoftonline.services and tvs1/tvs2.trafficmanager.live. The distribution of C2 domains and their nameservers can be seen in Table 7.

The third cluster of domains had six different nameservers associated with them, but unlike the other two clusters, these were all directly tied to each other. Each of the domains appeared to have rotated through the six nameservers, but oddly, one of the nameservers that several of the domains had rotated through did not appear to be currently registered. Examining historical IP resolutions revealed a common IP between the active nameservers, 107.175.75[.]123. This IP is of particular interest, as historical domain resolutions revealed that it had also resolved to the domain hotmai1l[.]com in the past, which was a domain we had previously identified as having a high likelihood of association with DarkHydrus infrastructure. This IP also belongs to the same service provider and class B network range as another IP we had associated with DarkHydrus, 107.175.150[.]113, which specifically resolved to a domain name containing a victim organization's name.

The DarkHydrus group continues their operations and adds new techniques to their playbook. Recent DarkHydrus delivery documents revealed the group abusing open-source penetration testing techniques such as the AppLocker bypass. The payloads installed by these delivery documents show that the DarkHydrus actors ported their previous PowerShell-based RogueRobin code to an executable variant, which is behavior that has been commonly observed with other adversary groups operating in the Middle East, such as OilRig. Lastly, the new variant of RogueRobin is capable of using the Google Drive cloud service for its C2 channel, suggesting that DarkHydrus may be shifting to abusing legitimate cloud services for their infrastructure.
CommonCrawl
Convex functions are an important and fundamental class of objective functions in the study of mathematical optimization. Their properties make them the simplest, yet non-trivial, objective functions for optimization. Convex functions are those functions for which every local minimum is also a global minimum. Furthermore, strictly convex functions have at most one local minimum, which is also the global minimum. Given that most machine learning algorithms end up optimizing some cost function (i.e., objective function), knowing about convexity helps in understanding the properties of these algorithms. For example, support vector machines (SVMs), linear regression, ridge regression, and lasso regression give rise to convex cost functions, and maximum likelihood estimation of logistic regression gives rise to a concave objective function. On the other hand, neural network based models deal with non-convex optimization (i.e., neither convex nor concave). This post presents the definition and properties of convex functions.

A function $f$ is convex if, for any two points $x_1, x_2$ in its domain and any point $x_i$ between them, $f(x_i) \leq f(x_1) + \frac{f(x_2)-f(x_1)}{x_2-x_1}(x_i - x_1)$. The r.h.s. of the above equation is the $y$ value corresponding to $x_i$ on the line segment joining the points $(x_1, f(x_1))$ and $(x_2, f(x_2))$. The following figure shows the property of convex functions graphically: $y_s$ is the r.h.s. of the inequality and $y_c$ is the l.h.s. This should hold for the whole segment and for all such possible segments. We can simplify the equation by substituting $x_i$ with $t x_1 + (1-t) x_2$ for $t \in [0, 1]$. For $t=0$ and $t=1$, the value of $x_i$ is $x_2$ and $x_1$ respectively. For the other values of $t \in (0, 1)$, $x_i$ ranges over $(x_1, x_2)$. The substitution gives $f(t x_1 + (1-t) x_2) \leq t f(x_1) + (1-t) f(x_2)$. Note: the above inequality is called Jensen's inequality.

We can symmetrically define concave functions as those where the line segment lies below the curve. Similarly, we can also define strictly concave functions. $f(x)$ is a convex function iff $-f(x)$ is a concave function.

For strictly convex functions, the (unique) local minimum is also the global minimum. For convex functions there can be multiple local minima; however, the function maps all of them to the same value, and these are also global minima. Iterative algorithms like gradient descent will therefore approach the global minimum, as long as the other requirements of gradient descent are met, e.g., existence of the gradient.

All linear functions are both convex and concave: they satisfy the defining inequalities of both convex and concave functions (with equality), though they are neither strictly convex nor strictly concave. If a function is of a single variable and its second derivative exists, then the function is convex iff the second derivative is greater than or equal to zero on the whole domain. Symmetrically, for concave functions, the second derivative is less than or equal to zero for all values in the domain. For the strict counterparts, a strictly positive (respectively, strictly negative) second derivative is sufficient, though not necessary; for example, $x^4$ is strictly convex even though its second derivative vanishes at zero.
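As a quick numerical illustration of the chord definition above (a sketch of mine, not from the post), one can test Jensen's inequality on a grid of point pairs; a passing test is only evidence of convexity on the sampled region, not a proof.

```python
import numpy as np

def is_convex_on_grid(f, xs, num_t=11, tol=1e-9):
    """Check f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2) on all sampled pairs."""
    ts = np.linspace(0.0, 1.0, num_t)
    for x1 in xs:
        for x2 in xs:
            for t in ts:
                lhs = f(t * x1 + (1 - t) * x2)
                rhs = t * f(x1) + (1 - t) * f(x2)
                if lhs > rhs + tol:   # tolerance guards against float noise
                    return False
    return True

xs = np.linspace(-3, 3, 25)
print(is_convex_on_grid(lambda x: x**2, xs))  # True: x^2 is convex
print(is_convex_on_grid(np.sin, xs))          # False: sin is not convex on [-3, 3]
```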
CommonCrawl
We compute the entropy of entanglement in the ground states of a general class of quantum spin-chain Hamiltonians --- those that are related to quadratic forms of Fermi operators --- between the first $N$ spins and the rest of the system in the limit of infinite total chain length. We show that the entropy can be expressed in terms of averages over the classical compact groups and establish an explicit correspondence between the symmetries of a given Hamiltonian and those characterizing the Haar measure of the associated group. These averages are either Toeplitz determinants or determinants of combinations of Toeplitz and Hankel matrices. Recent generalizations of the Fisher-Hartwig conjecture are used to compute the leading order asymptotics of the entropy as $N\rightarrow\infty$. This is shown to grow logarithmically with $N$. The constant of proportionality is determined explicitly, as is the next (constant) term in the asymptotic expansion. The logarithmic growth of the entropy was previously predicted on the basis of numerical computations and conformal-field-theoretic calculations. In these calculations the constant of proportionality was determined in terms of the central charge of the Virasoro algebra. Our results therefore lead to an explicit formula for this charge. We also show that the entropy is related to solutions of ordinary differential equations of Painlev\'e type. In some cases these solutions can be evaluated to all orders using recurrence relations.
CommonCrawl
This question concerns a system of equations that arise in the study of one-soliton solutions to the Davey-Stewartson equation. Can one prove that $n_1(z)=n_2(z)=0$ if one assumes a priori that $n_1$ and $n_2$ belong to $L^p(R^2)$ for all $p>2$ (including $p=\infty$)? For this purpose one can assume that the limits above exist.
CommonCrawl
Let $R$ be any ring with identity. An element $a \in R$ is called nil-clean if $a=e+n$, where $e$ is an idempotent element and $n$ is a nilpotent element. In this paper we give necessary and sufficient conditions for a $2\times 2$ matrix over an integral domain $R$ to be nil-clean.
CommonCrawl
The index of refraction of air at optical frequencies is around 1.0002, so the speed of light in air is about $2.9973\times10^8$ m/s, as compared to $2.9979\times10^8$ m/s in vacuum. The same is true for a gas like air: compressing it raises the refractive index. For any particular material the relation between density and refractive index is very clear and nearly linear. Also, if you look at a number of liquids like alcohol, water and ether, the denser liquids often have higher refractive indices.
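The speed in air follows directly from $v = c/n$; a one-line check (added here for completeness):

```python
c_vacuum = 2.9979e8        # speed of light in vacuum, m/s
n_air = 1.0002             # refractive index of air at optical frequencies
v_air = c_vacuum / n_air   # v = c / n
print(f"{v_air:.4e} m/s")  # ~2.9973e8 m/s, slightly slower than in vacuum
```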
CommonCrawl
We analyze the nonlinear stochastic heat equation driven by heavy-tailed noise in free space and arbitrary dimension. The existence of a solution is proved even if the noise only has moments up to an order strictly smaller than its Blumenthal-Getoor index. In particular, this includes all stable noises with index $\alpha<1+2/d$. Although we cannot show uniqueness, the constructed solution is natural in the sense that it is the limit of the solutions to approximative equations obtained by truncating the big jumps of the noise or by restricting its support to a compact set in space. Under growth conditions on the nonlinear term we can further derive moment estimates of the solution, uniformly in space. Finally, the techniques are shown to apply to Volterra equations with kernels bounded by generalized Gaussian densities. This includes, for instance, a large class of uniformly parabolic stochastic PDEs.
CommonCrawl
Abstract. If $P,Q:[0,\infty)\to[0,\infty)$ are increasing functions and $T$ is the Calderón operator defined on positive or decreasing functions, then optimal modular inequalities $\int P(Tf)\leq C\int Q(f)$ are proved. If $P=Q$, the condition on $P$ is both necessary and sufficient for the modular inequality. In addition, we establish general interpolation theorems for modular spaces. 1991 Mathematics Subject Classification. Primary 46M35; Secondary 46E30.
CommonCrawl
Determine the angular speed at which the sprinkler will rotate freely.

Question: A lawn sprinkler with two nozzles of diameter 3 mm each is connected across a water tap. The nozzles are at distances of 40 cm and 30 cm from the centre of the tap. The rate of flow of water through the tap is $100\ cm^3/s$. The nozzles discharge water in the downward direction. Determine the angular speed at which the sprinkler will rotate freely.

The jets of water coming out of nozzles A and B have a velocity of 7.074 m/s. These jets exert a reaction force in the opposite direction, i.e., the force exerted by the jets is in the upward direction, and the torques they exert oppose each other: the torque at B acts in the anti-clockwise direction and the torque at A in the clockwise direction. But the torque at B is greater than the torque at A, and hence the sprinkler, if free, will rotate in the anticlockwise direction.

Here $\omega \times x_A$ is added to $V_A$, as $V_A$ and the tangential velocity due to rotation $(\omega \times x_A)$ are in the same direction. The moment of momentum of the fluid entering the sprinkler is zero, and there is no external torque applied on the sprinkler. Hence the resultant external torque is zero.
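A short numerical sketch of the free-rotation condition (added here; the closing balance equation is the standard textbook one, reconstructed since the page above omits the final algebra):

```python
import math

Q = 100e-6 / 2            # flow per nozzle, m^3/s (tap flow split equally)
d = 3e-3                  # nozzle diameter, m
rA, rB = 0.30, 0.40       # nozzle distances from the centre, m (B is the longer arm)

area = math.pi / 4 * d**2
V = Q / area              # jet velocity relative to the nozzle, ~7.074 m/s

# Free rotation: the net moment of momentum of the two jets is zero,
#   (V - w*rB)*rB = (V + w*rA)*rA   =>   w = V*(rB - rA) / (rA**2 + rB**2)
w = V * (rB - rA) / (rA**2 + rB**2)
print(f"V = {V:.3f} m/s, omega = {w:.2f} rad/s")  # ~2.83 rad/s
```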
CommonCrawl
Each week we will discuss a research problem in the field of geometric analysis. In this talk, I will discuss a problem which originates in complex analysis but is really a problem in non-linear elliptic PDE. We study the least gradient problem in two special cases. Let $M$ be a von Neumann algebra and let $(L_p(M), \|.\|_p)$, $1 \leq p < \infty$ be Haagerup's $L_p$-space on $M$. I am going to define functions $f(A,B)$ of noncommuting self-adjoint operators $A$ and $B$.
CommonCrawl
A man sells a table for Rs. 4200 at 25% loss. At what price must he sell to get a profit of 25%?

Find the odd statement out in relation to a triangle.
- The longest side is opposite to the greatest angle.
- Exterior angle of a triangle = the sum of interior opposite angles.
- The sum of any 2 sides is greater than the 3rd side.

If P means '$\div$', R means '$\times$', Q means '$+$' and S means '$-$', then 48 P 8 Q 6 R 9 S 31 = ?
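Worked solutions (added here; the original page lists only the questions): a 25% loss at a selling price of Rs. 4200 means the cost price is $4200 / 0.75 = 5600$, so a 25% profit requires a selling price of $5600 \times 1.25 = 7000$, i.e. Rs. 7000. For the symbol-substitution question, $48 \div 8 + 6 \times 9 - 31 = 6 + 54 - 31 = 29$.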
CommonCrawl
You are asked to dissect an $N \times N$ square into polyomino pieces such that each piece shares a portion of its boundary with exactly $D$ other pieces, and no piece has area exceeding $N$. This can be achieved for $D \le 5$. Find the smallest square for $D=5$. Credit: inspired by this puzzle.

It was definitely fun to try and find this! Took me a while. Excellent puzzle.
- It has been proven in the other puzzle that at least 12 tiles would be needed.
- It was clear that each piece has to have at least 2 squares.
- I was intuitively convinced that the solution would have a 4-fold rotational symmetry. (No proof for that whatsoever. More a 'feeling'.) So I set out with trying patterns in this symmetry. If I get 1/4th of the pieces in place, the other 3/4th would be correct automatically.
- I figured the rim would be the difficult part, as long-stretched tiles are needed, and their length is limited by the square size.
- So I first tried to create a 7x7 square with fitting tiles to the border, leaving room for 5 connections.
- And then it was just a matter of extending inwards and realizing that the number of tiles is not yet enough. Adding 4 colours did the trick.

It is also possible with rectangles.
CommonCrawl
I've learnt to roughly draw graphs of various functions, like isoquants of the Cobb-Douglas function, i.e., $k=\sqrt{q}/L$. Here the first derivative is negative so it's downward sloping, and the second derivative is positive so it is convex to the origin. Now if the short-run cost function is $C = (w/k)q^2 + (rk)$, then the average cost is $AVC = (w/k)q + (rk)/q$. The first derivative is $(w/k) - (rk)/q^2$, but how do I know if it's positive or negative?

The term $(rk)/q$ drops as $q$ increases, and diverges when $q$ is small. The term $(w/k)q$ is a linear term with slope $\alpha = w/k$: it is small for small $q$ and large for large $q$. In this particular case, one of the terms grows while the other shrinks, so in extreme cases only one matters. The question is where the point is at which one becomes more relevant than the other. If you notice, above I always use the expressions small and large, but these are relative words. You can actually find a value $q^*$ at which these two terms are equal, and this defines the regions in which each term dominates.
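To make the crossover explicit (a short derivation added in the question's notation, not part of the original answer): the two terms are equal when $(w/k)q^* = (rk)/q^*$, i.e. $q^{*2} = rk^2/w$, so $q^* = k\sqrt{r/w}$. For $q < q^*$ the hyperbolic term dominates and $AVC$ falls; for $q > q^*$ the linear term dominates and $AVC$ rises. Equivalently, the first derivative $(w/k) - (rk)/q^2$ is negative below $q^*$ and positive above it, so the average cost curve is U-shaped with its minimum at $q^*$.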
CommonCrawl
Abstract: A nonperturbative analysis of the one-particle excitation of the electron-positron field is carried out in the paper. The standard form of quantum electrodynamics (QED) is used, but the coupling constant $\alpha_0$ is supposed to be of a large value ($\alpha_0 \gg 1$). It is shown that in this case the quasi-particle excitation can be produced together with a non-zero scalar component of the electromagnetic field. Self-consistent equations for a spatially localized charge distribution coupled with an electromagnetic field are derived. Soliton-like solutions with nonzero charge for these equations are calculated numerically. The solution proves to be unique if the coupling constant is fixed. This leads to the condition of charge quantization if non-overlapping $n$-soliton states are considered. It is also proved that the dispersion law of the soliton-like excitation is consistent with Lorentz invariance of the QED equations.
CommonCrawl
Epidemiology is the study of the distribution and spread of disease or illness at the population level. In a survival study, is interval censoring simplifiable to midtime imputation? how to model variable of no interest to account for interactions with that variable? Can variable be both a moderator and partial mediator? How to calculate the odds ratio if one of the groups is "0" in a case-control study? Can I use rates (e.g., crude rates) as an explanatory variable (covariate) in regression? Propensity score adjustment for multilevel exposure? A situation where to consider row totals in a $2\times 2$ contingency table fixed or not? In epidemiology, how to perform direct standardization when demographics don't exist in some populations? Is there a name for disingenuous correlation? Can I use logistic (or Poisson) regression to model epidemiological data with incomplete (but generalizable) knowledge about the population? Why is it important to have a large cohort or sample size in epidemiological studies? How is the prevalence of a rare disease estimated? Age-adjustment (and its realization with mgcv): add age as covariate to the regression or do an internal standardization?
CommonCrawl
Abstract: We consider the problem of clustering a graph $G$ into two communities by observing a subset of the vertex correlations. Specifically, we consider the inverse problem with observed variables $Y=B_G x \oplus Z$, where $B_G$ is the incidence matrix of a graph $G$, $x$ is the vector of unknown vertex variables (with a uniform prior) and $Z$ is a noise vector with Bernoulli$(\varepsilon)$ i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. Without noise, exact recovery (up to global flip) of $x$ is possible if and only if the graph $G$ is connected, with a sharp threshold at the edge probability $\log(n)/n$ for Erdős-Rényi random graphs. The first goal of this paper is to determine how the edge probability $p$ needs to scale to allow exact recovery in the presence of noise. Defining the degree (oversampling) rate of the graph by $\alpha =np/\log(n)$, it is shown that exact recovery is possible if and only if $\alpha >2/(1-2\varepsilon)^2+ o(1/(1-2\varepsilon)^2)$. In other words, $2/(1-2\varepsilon)^2$ is the information theoretic threshold for exact recovery at low-SNR. In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. For a deterministic graph $G$, defining the degree rate as $\alpha=d/\log(n)$, where $d$ is the minimum degree of the graph, it is shown that the proposed method achieves the rate $\alpha> 4((1+\lambda)/(1-\lambda)^2)/(1-2\varepsilon)^2+ o(1/(1-2\varepsilon)^2)$, where $1-\lambda$ is the spectral gap of the graph $G$.
CommonCrawl
Kavitha, V and Dutt, Narayana D and Pradhan, N (1995) Nonlinear Prediction of Chaotic Electrical Activity of the Brain. In: 14th Conference of the Biomedical Engineering Society, Engineering in Medicine and Biomedical Engineering, An International Meeting, 15-18 February 1995, New Delhi, India, 3/39-3/40.

The random-looking brain electrical activity patterns recorded as EEG are currently understood to be the outcome of a chaotic process. This study addresses the problem of nonlinear prediction of chaotic EEG data using a simplex method. A fixed length of EEG data is taken and a multidimensional attractor in phase space is reconstructed from the time series. The first N points on the attractor serve as the base for making predictions for the subsequent points. For a given point $x_i$ ($i > N$), the E+1 closest neighbors are determined. The predicted value is obtained by keeping track of where the neighbors moved, giving them an exponential weight depending on the original distance. As the embedding dimension is increased, the predicted time series correlates better with the real signal until about twice the expected value of the correlation dimension D, and then the correlation falls off. The results indicate that the EEG prediction falls off rapidly for longer prediction lengths, thus validating the chaotic nature of the EEG signal. Also, as the frequency of the signal increases, the prediction length decreases. The effect of the predictive parameters on different EEG activities is discussed.
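A minimal sketch of the prediction scheme described above (delay embedding, $E+1$ nearest neighbours, exponentially distance-weighted forecast); the parameter names and the synthetic test signal are mine, not from the paper.

```python
import numpy as np

def delay_embed(x, E, tau=1):
    """Reconstruct an E-dimensional attractor from a scalar series."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(E)])

def simplex_correlation(x, E, n_base, horizon=1):
    """Correlation between simplex predictions and the true series."""
    pts = delay_embed(x, E)
    base, preds, actual = pts[:n_base], [], []
    for t in range(n_base, len(pts) - horizon):
        d = np.linalg.norm(base - pts[t], axis=1)
        nb = np.argsort(d)[: E + 1]                    # E+1 closest neighbours
        w = np.exp(-d[nb] / max(d[nb].min(), 1e-12))   # exponential distance weights
        future = nb + (E - 1) + horizon                # where those neighbours moved
        preds.append(np.sum(w * x[future]) / w.sum())
        actual.append(x[t + (E - 1) + horizon])
    return np.corrcoef(preds, actual)[0, 1]

# toy chaotic series: the logistic map
x = np.empty(2000); x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])

for E in (1, 2, 3, 5, 8):
    print(E, round(simplex_correlation(x, E, n_base=1000), 3))
```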
CommonCrawl
A fair coin is tossed repeatedly until 5 consecutive heads occur. What is the expected number of coin tosses?

Is the product of two Gaussian random variables also a Gaussian? Say I have $X \sim \mathcal N(a, b)$ and $Y\sim \mathcal N(c, d)$. Is $XY$ also normally distributed? Is the answer any different if we know that $X$ and $Y$ are independent?

Suppose we have $v$ and $u$, both independent and exponentially distributed random variables with parameters $\mu$ and $\lambda$, respectively. How can we calculate the pdf of $v-u$?

Let $X_1$ and $X_2$ be two continuous r.v.; my question is: what is the p.d.f. of $Z=X_1/X_2$?

Boy Born on a Tuesday - is it just a language trick?

Probability that a random binary matrix is invertible?

Given an infinite number of monkeys and an infinite amount of time, would one of them write Hamlet?

What is the probability of a coin landing tails 7 times in a row in a series of 150 coin flips? If you were to flip a coin 150 times, what is the probability that it would land tails 7 times in a row? How about 6 times in a row? Is there some formula that can calculate this probability?

Suppose I have a line segment of length $L$. I now select two points at random along the segment. What is the expected value of the distance between the two points, and why?

If I have two variables $X$ and $Y$ which randomly take on values uniformly from the range $[a,b]$ (all values equally probable), what is the expected value for $\max(X,Y)$?

Basically, on average, how many times do you have to roll a fair six-sided die before getting two consecutive sixes?

How to show convergence in probability implies convergence a.s. in this case? Assume that $X_1,\cdots,X_n$ are independent r.v., not necessarily iid. Let $S_n=X_1+\cdots+X_n$, and suppose that $S_n$ converges in probability. How can we show that $S_n$ converges a.s.?
CommonCrawl
Abstract: Many applications of stereo depth estimation in robotics require the generation of accurate disparity maps in real time under significant computational constraints. Current state-of-the-art algorithms force a choice between either generating accurate mappings at a slow pace, or quickly generating inaccurate ones, and additionally these methods typically require far too many parameters to be usable on power- or memory-constrained devices. Motivated by these shortcomings, we propose a novel approach for disparity prediction in the anytime setting. In contrast to prior work, our end-to-end learned approach can trade off computation and accuracy at inference time. Depth estimation is performed in stages, during which the model can be queried at any time to output its current best estimate. Our final model can process 1242$ \times $375 resolution images within a range of 10-35 FPS on an NVIDIA Jetson TX2 module with only marginal increases in error -- using two orders of magnitude fewer parameters than the most competitive baseline. The source code is available at this https URL .
CommonCrawl
Registration and tea breaks will be held in the Allen Barton Forum (also in the CBE building). A list of all participants, if you're as bad at remembering names as I am, is available here.

The MSI Workshop on Low-Dimensional Topology & Quantum Algebra will be held October 31-November 4 (Monday-Friday) at the Australian National University. The workshop is part of the MSI's Special Year program on Algebra and Topology. The conference poster is available. Registration is now open. Please register to let us know you're coming!

Computation, experimentation and conjectures have played a driving role in low-dimensional topology right from the cradle of modern topology in the work of Poincaré. There is a successful history of replacing non-constructive existence proofs by practical solutions, and heuristic methods by rigorous algorithms. In this introductory talk, I will describe some of the key techniques used to study a 3-dimensional space, including finding essential surfaces in the space, constructing a geometric structure on it, and computing key invariants. If time permits, I will also outline some open problems on 3-manifolds and related fields.

This talk is an introduction to the Vassiliev filtration on knots, chord diagram spaces, finite type invariants and universal finite type invariants, and the universal finite type invariant of knots called the Kontsevich integral.

In a cooriented contact manifold, a positive Legendrian isotopy is a Legendrian isotopy evolving in the positive transverse direction to the contact plane. Their global behavior differs from that of Legendrian isotopies and is closer to that of propagating waves. In this talk I will explain how to use information in the Floer complex associated to a pair of Lagrangian cobordisms (recently constructed in a collaboration with G. Dimitroglou Rizell, P. Ghiggini and R. Golovko) to give obstructions to certain positive loops of some Legendrian submanifolds. This will recover previously known obstructions and exhibit more examples. This is work in progress with V. Colin and G. Dimitroglou Rizell.

Topological recursion is a machinery that arose in physics but has recently found widespread application to diverse areas of mathematics. Whenever a problem is governed by the topological recursion, there is usually a quantum curve lurking about as well. We will demonstrate this circle of ideas using a toy example from very low-dimensional topology before discussing connections with the volume and AJ conjectures from quantum topology.

This talk is based on joint work with Dror Bar-Natan. I will explain the parallels and differences between the theories of finite type invariants for classical and welded knotted objects. In particular, a version of the Alexander polynomial is a universal finite type invariant for welded knots.

The 3d-index of a hyperbolic knot complement $M$ is a powerful invariant introduced by the physicists Dimofte, Gaiotto and Gukov, giving a collection of formal power series in $q$ with integer coefficients, indexed by a pair of integers. When both integers are zero, the constant term is 1 and we show that the coefficient of $q$ can be expressed in terms of the numbers of genus 2 normal and almost normal surfaces in a suitable ideal triangulation of $M$. Further, this coefficient also has a purely topological description, in terms of the numbers of isotopy classes of genus 2 Heegaard surfaces and genus 2 incompressible surfaces in $M$.
We will give examples illustrating these results, and sketch an approach to the proof, which also gives algorithms for counting isotopy classes of the above genus 2 surfaces.

The slope conjectures of Garoufalidis and Kalfagianni-Tran predict that the highest and lowest degrees of the coloured Jones polynomial contain topological information about essential surfaces properly embedded in the knot exterior. Uniformly twisted knots were introduced recently by Ozawa, and these knots contain a pair of essential surfaces in the knot exterior; one a spanning surface, and the other a coiled surface. We show that these two surfaces coincide with the two Jones surfaces for many semi-adequate knots with integral Jones slopes. This allows us to complete the proof that all knots with up to 9 crossings satisfy the strong slope conjecture.

Every contact 3-manifold is locally contactomorphic to the standard contact $\mathbb R^3$, but this fact does not necessarily produce large charts that cover the manifold efficiently. I'll describe joint work with Dave Gay which uses an open book decomposition of a contact manifold to produce a particularly efficient collection of such contactomorphisms, together with simple combinatorial data describing how to reconstruct the contact 3-manifold from these charts. We use this construction to define front projections for Legendrian knots and links in arbitrary contact 3-manifolds, generalising existing constructions of front projections for Legendrian knots in $S^3$ and universally tight lens spaces.

The symmetric group $S_n$ has a faithful $(n-1)$-dimensional representation, and much of the rich combinatorics of permutations (e.g. the study of reduced expressions and Bruhat order) can be understood via linear algebra using this representation. One can study the braid group $B_n$ somewhat analogously, at the cost of embracing one more level of categorical abstraction. The goal of this talk will be to explain (to topologists!) what one learns about the braid group $B_n$ by studying a faithful action of $B_n$ on a triangulated category.

In the Bordered Heegaard Floer homology theory of Lipshitz-Ozsváth-Szabó, an algebra known as a "strand algebra" is associated to a surface. Zarev defined a generalisation of this algebra for Bordered Sutured Heegaard Floer homology. These algebras, while describing the behaviour of holomorphic discs near the boundary of a 3-manifold, can be defined purely combinatorially as a differential graded algebra of combinatorial "strand diagrams". In recent work, we have shown that this algebra can be given an elementary description in terms of contact geometry. In particular, its homology is the algebra of a contact category. In this talk we will describe some of the ideas involved.

Using the Kontsevich integral, we construct a functor $Z$ from the category $B$ of "bottom tangles in handlebodies" to a certain category $\mathcal A$ of "Jacobi diagrams in handlebodies". Thus we show that the completion of $B$ with respect to the Vassiliev filtration is isomorphic to $\mathcal A$, and we give a presentation of the latter as a symmetric monoidal category. If time allows, we will also explain how $Z$ relates to the LMO functor, which is a kind of TQFT derived from the Le-Murakami-Ohtsuki invariant of homology 3-spheres.

Expository talk about the curve complex and its relevance to the study of 3-manifolds.

When a 3-manifold with torus boundary is Dehn filled, and the result is not hyperbolic, the Dehn filling is called exceptional.
Exceptional Dehn fillings have been studied topologically and geometrically for many years. The 6-theorem implies that if a Dehn filling is exceptional, then the length of the slope of the filling is at most 6. Agol showed that 6 is a sharp bound for toroidal fillings. However, there are several other types of exceptional Dehn fillings, and the optimal bounds on the lengths of corresponding slopes are unknown. In this talk, we will construct hyperbolic 3-manifolds with reducible Dehn fillings with the longest known slopes, and discuss lengths of other exceptional Dehn fillings. We give conjectured and experimental bounds on their slope lengths. This is joint work with Neil Hoffman.

The diagrammatic version of the Jones polynomial, based on the Kauffman bracket skein module, extends to knots in any 3-manifold. In the case of thickened surfaces, it can be endowed with the structure of an algebra by stacking. The case of the torus is of particular interest, and C. Frohman and R. Gelca exhibited in 1998 a basis of the skein module for which the multiplication is governed by the particularly simple "product-to-sum" formula. I'll present a diagrammatic proof of this formula that highlights the role of the Chebyshev polynomials, before turning to categorification perspectives and their interactions with representation theory.

This is joint work with Nathan Dunfield, Stavros Garoufalidis, Craig Hodgson, Neil Hoffman, and Henry Segerman. The 3d index is an amazing set of $q$-series associated to a 1-efficient ideal triangulation of a cusped 3-manifold. Since it is a topological invariant, a key problem is to understand what topological information is contained in the coefficients. Currently we are writing up a proof that the coefficient of the linear term of the simplest version of the 3d index counts the number of isotopy classes of incompressible surfaces of genus 2 minus the number of isotopy classes of Heegaard splittings of genus 2. I will attempt to explain several key technical issues in the proof. The first is that normal and almost normal representatives of incompressible surfaces and Heegaard splittings in isotopy classes can be organised into graphs. For the special case of genus 2, in the presence of a strict angle structure on the triangulation, each such graph is a tree and can be viewed as a Morse complex for the isotopy class. The challenge is to extend this approach to all genera, which involves multi-parameter sweepout theory. We have some ideas about the quadratic term of the index, involving isotopy classes of genus 2 and 3 surfaces of various types and sweepouts up to 4 parameters. In particular, the number of isotopy classes of genus 3 splittings should be part of this count.

Experimental computations show that the Khovanov homology of a link tends to have an abundance of torsion. However, torsion of order two appears more frequently than torsion of other orders. We give a partial explanation of this observation, at least in the first and/or last few homological gradings of Khovanov homology. There is a partial isomorphism between the Khovanov homology of a link and the chromatic polynomial categorification of a certain graph related to a diagram of the link. We show that the chromatic polynomial categorification contains only torsion of order two, and consequently, Khovanov homology can only contain torsion of order two in the gradings where the partial isomorphism is defined. This is joint work with Adam Lawrence.
It is well-known that any two $n$-vertex triangulations of the 2-sphere are connected by a sequence of edge flips. In other words, the Pachner graph of $n$-vertex 2-sphere triangulations is connected. In this article, we study various induced subgraphs of this graph. In particular, we prove that the subgraph induced by the set of $n$-vertex flag 2-spheres distinct from the double cone is still connected. In contrast, we show that the subgraph induced by the $n$-vertex stacked spheres has at least as many components as there are cubic trees on $n/3$ vertices.

Dave Gay and Rob Kirby recently introduced trisections of smooth 4-manifolds arising from their study of broken Lefschetz fibrations and Morse 2-functions. Dave asked us if this could be established using triangulations. We have done this and extended the theory to all dimensions. The idea is to split a $2k$- or $(2k+1)$-manifold into $k$ handlebodies, such that intersections of the handlebodies have special properties. The splitting can be viewed as mapping the manifold into a $k$-simplex and pulling back a decomposition into dual cubes. I'll outline the construction, give some applications and conclude with open questions. This is joint work with Hyam Rubinstein.

This event is sponsored by the Australian Mathematical Sciences Institute (AMSI) and the Australian Mathematical Society. AMSI allocates a travel allowance annually to each of its member universities. Students or early career researchers from AMSI member universities without access to a suitable research grant or other source of funding may apply to the Head of Mathematical Sciences for subsidy of travel and accommodation out of the departmental travel allowance. For more information about AMSI travel funding click here. Please contact the organisers if you have any questions about funding! Contact Joan Licata at [email protected] or Scott Morrison at [email protected] for more information.
CommonCrawl
It is often the case that there are several studies measuring the same parameter. Naturally, it is of interest to provide a systematic way to combine the information from these studies. Examples of such situations include clinical trials, key comparison trials and other problems of practical importance. Singh et al. (2005) provide a compelling framework for combining information from multiple sources using the framework of confidence distributions. In this paper we investigate the feasibility of using the Dempster-Shafer recombination rule on this problem. We derive a practical combination rule and show that, under the assumption of asymptotic normality, the Dempster-Shafer combined confidence distribution is asymptotically equivalent to one of the methods proposed in Singh et al. (2005). Numerical studies and comparisons for the common mean problem and the odds ratio in $2\times 2$ tables are included.

Electron. J. Statist., Volume 6 (2012), 1943-1966.

References:
Bender, R., Berg, G. and Zeeb, H. (2005). Tutorial: Using confidence curves in medical research. Biometrical Journal, 47, 237-247.
Birnbaum, A. (1961). Confidence curves: An omnibus technique for estimation and testing statistical hypotheses. J. Amer. Statist. Assoc., 56, 246-249.
Casella, G. and Berger, R. L. (2002). Statistical Inference. Pacific Grove, CA: Wadsworth and Brooks/Cole Advanced Books and Software, 2nd edn.
Cox, D. (1958). Some problems with statistical inference. The Annals of Mathematical Statistics, 29, 357-372.
Dempster, A. P. (2008). The Dempster-Shafer Calculus for Statisticians. International Journal of Approximate Reasoning, 48, 365-377.
Durrett, R. (2005). Probability: Theory and Examples. Duxbury Advanced Series. Brooks/Cole, third edn.
Efron, B. (1993). Bayes and likelihood calculations from confidence intervals. Biometrika, 80, 3-26.
Efron, B. (1998). R. A. Fisher in the 21st century. Statist. Sci., 13, 95-122. With comments and a rejoinder by the author.
Fisher, R. (1973). Statistical Methods and Scientific Inference (3rd edition). New York: Hafner Press.
Hannig, J. (2009). On Generalized Fiducial Inference. Statistica Sinica, 19, 491-544.
Hannig, J. (2012). Generalized Fiducial Inference via Discretization. Statistica Sinica.
Hannig, J., Iyer, H. K. and Patterson, P. (2006). Fiducial generalized confidence intervals. Journal of the American Statistical Association, 101, 254-269.
Liu, D., Liu, R. and Xie, M. (2011). Exact meta-analysis approach for the common odds ratio of 2 by 2 tables with rare events. Technical Report, Department of Statistics, Rutgers University.
Normand, S.-L. (1999). Meta-analysis: formulating, evaluating, combining, and reporting. Statistics in Medicine, 18, 321-359.
Schweder, T. and Hjort, N. L. (2002). Confidence and likelihood. Scand. J. Statist., 29, 309-332. Large structured models in applied sciences; challenges for statistics (Grimstad, 2000).
Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton, New Jersey: Princeton University Press.
Singh, K., Xie, M. and Strawderman, W. E. (2001). Confidence distributions - concept, theory and applications. Tech. rep., Department of Statistics, Rutgers University. Updated 2004.
Singh, K., Xie, M. and Strawderman, W. E. (2005). Combining information from independent sources through confidence distributions. The Annals of Statistics, 33, 159-183.
Singh, K., Xie, M. and Strawderman, W. E. (2007). Confidence distribution (CD)-distribution estimator of a parameter. IMS Lecture Notes-Monograph Series, 54, 132-150.
Strawderman, W. and Rukhin, A. (2010). Simultaneous estimation and reduction of nonconformity in inter-laboratory studies. J. R. Statist. Soc. B, 72, 219-234.
Tian, L., Cai, T., Pfeffer, M. A., Piankov, N., Cremieux, P. Y. and Wei, L. J. (2009). Exact and efficient inference procedure for meta-analysis and its application to the analysis of independent $2\times 2$ tables with all available data but without artificial continuity correction. Biostatistics, 10, 275-281.
Wang, J. C.-M., Hannig, J. and Iyer, H. K. (2012). Fiducial prediction intervals. Journal of Statistical Planning and Inference, 142, 1980-1990.
Webb, K. S., Carter, D. and Wolff Briche, C. S. J. (2003). CCQM-K21: Key comparison of the determination of pp$'$-DDT in fish oil, final report. Metrologia, 40, tech. suppl. 08004.
Xie, M. and Singh, K. (2012). Confidence distribution, the frequentist distribution estimator of a parameter - a review. International Statistical Review. To appear (invited review article with discussions).
Xie, M., Singh, K. and Strawderman, W. E. (2011). Confidence distributions and a unified framework for meta-analysis. Journal of the American Statistical Association, 106, 320-333.
Zhang, J. and Liu, C. (2011). Dempster-Shafer inference with weak beliefs. Statistica Sinica, 21, 475-494.
CommonCrawl
De Rham cohomology and homotopy Frobenius manifolds (Mar 22 2012, revised May 25 2012). We endow the de Rham cohomology of any Poisson or Jacobi manifold with a natural homotopy Frobenius manifold structure. This result relies on a minimal model theorem for multicomplexes and a new kind of a Hodge degeneration condition.

Central invariants revisited (Nov 28 2016). We give a new proof of the statement of Dubrovin-Liu-Zhang that the Miura-equivalence classes of the deformations of semi-simple bi-Hamiltonian structures of hydrodynamic type are parametrized by the so-called central invariants.

Special cases of the orbifold version of Zvonkine's $r$-ELSV formula (May 30 2017). We prove the orbifold version of Zvonkine's $r$-ELSV formula in two special cases: the case of $r=2$ (complete $3$-cycles) for any genus $g\geq 0$ and the case of any $r\geq 1$ for genus $g=0$.

Quantum spectral curve for the Gromov-Witten theory of the complex projective line (Dec 18 2013, revised Feb 11 2014). We construct the quantum curve for the Gromov-Witten theory of the complex projective line.

Combinatorics of binomial decompositions of the simplest Hodge integrals (Oct 31 2003). We reduce the calculation of the simplest Hodge integrals to some sums over decorated trees. Since Hodge integrals are already calculated, this gives a proof of a rather interesting combinatorial theorem and a new representation of Bernoulli numbers.

Intersections in genus 3 and the Boussinesq hierarchy (Jul 24 2003). In this note we prove that the enlarged Witten's conjecture is true in the case of the Boussinesq hierarchy for correlators in genus 3 with descendants only at one point.

Bs Mixing, Lifetime Difference and Rare Decays at the Tevatron (May 16 2005). Recent results on Bs mixing, lifetime difference and rare decays obtained by the CDF and D0 collaborations using the data samples collected at the Tevatron Collider in the period 2002-2005 are presented.

The growth of polynomials orthogonal on the unit circle with respect to a weight w that satisfies w,1/w \in L^\infty(T) (Nov 01 2016). We consider a weight $w$ on the unit circle $T$ with $w, 1/w \in L^\infty(T)$ and prove that the corresponding orthonormal polynomials can grow.
CommonCrawl
I'm analyzing a large problem with a large $N \times M$ data matrix $A$, where $N$ is the number of observations, $M$ is the number of explanatory variables, and $N \gg M$. I'd like to perform single-pass linear regression on this data set against a scalar response variable. But the challenge is that I'm only able to load individual $N \times 1$ columns of $A$ at a time (and loading row-by-row is not possible for this problem). So the question is, is there some way to compute $A^T A$ for linear regression using less than $O(NM)$ memory by loading columns of $A$ one at a time?

Edit: Thanks for your interest. I'm looking for an efficient algorithm that makes a single pass through the data. I'd like to compute $A^T A$ exactly if possible. Each column contains 200 million rows and is stored in a separate compressed file that disallows partial reads. Production systems need the data in this format. There are 60 thousand columns. The data is on a storage server, and it takes 10 seconds to load one column onto a compute server in the same data center and a week to make a single pass through the data. Each compute server has 1 TB of memory. I'm currently working in parallel batches of 2 M rows.
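One possible approach (a sketch of mine, not from the thread): since $(A^T A)_{ij}$ is the dot product of columns $i$ and $j$, the Gram matrix $G = A^T A$ and the vector $A^T y$ can be accumulated from columns loaded in blocks, using $O(Nb + M^2)$ memory for a block size $b$ instead of $O(NM)$. Note this trades the strict single pass for multiple block passes; with 200 million rows, all 60,000 columns cannot be resident at once, so some re-reading is unavoidable given this storage format. `load_column` below is a hypothetical reader for the compressed column files.

```python
import numpy as np

def load_column(j):
    """Hypothetical reader: decompress and return column j as a length-N array."""
    raise NotImplementedError

def gram_and_xty(M, y, block=8):
    """Accumulate G = A^T A and b = A^T y, holding at most 2*block columns in RAM."""
    G = np.zeros((M, M))
    b = np.zeros(M)
    starts = range(0, M, block)
    for s in starts:
        X_s = np.column_stack([load_column(j) for j in range(s, min(s + block, M))])
        b[s : s + X_s.shape[1]] = X_s.T @ y
        for t in starts:
            if t < s:
                continue  # G is symmetric, so fill the upper triangle only
            X_t = X_s if t == s else np.column_stack(
                [load_column(j) for j in range(t, min(t + block, M))]
            )
            B = X_s.T @ X_t
            G[s : s + B.shape[0], t : t + B.shape[1]] = B
            G[t : t + B.shape[1], s : s + B.shape[0]] = B.T
    return G, b

# Afterwards, solve the normal equations G @ beta = b, e.g. np.linalg.solve(G, b).
```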
CommonCrawl
Meireles, B., A. Usié, P. Barbosa, A. Margarida Fortes, A. Folgado, I. Chaves, I. Carrasquinho, R. Lourenço Costa, S. Gonçalves, R. Teresa Teixeira, et al., "Characterization of the cork formation and production transcriptome in Quercus cerris$\times$ suber hybrids", Physiology and molecular biology of plants, vol. 24, no. 4: Springer, pp. 535–549, 2018. Ramos, A. Marcos, A. Usié, P. Barbosa, P. M. Barros, T. Capote, I. Chaves, F. Simões, I. Abreu, I. Carrasquinho, C. Faro, et al., "The draft genome sequence of cork oak", Scientific data, vol. 5: Nature Publishing Group, pp. 180069, 2018. Usié, A., F. Simões, P. Barbosa, B. Meireles, I. Chaves, S. Gonçalves, A. Folgado, M. H. Almeida, J. Matos, and A. M. Ramos, "Comprehensive analysis of the cork oak (Quercus suber) transcriptome involved in the regulation of bud sprouting", Forests, vol. 8, no. 12: Multidisciplinary Digital Publishing Institute, pp. 486, 2017.
CommonCrawl
Riccardo Adami, Diego Noja, Nicola Visciglia. Constrained energy minimization and ground states for NLS with point defects. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1155-1188. doi: 10.3934/dcdsb.2013.18.1155.
François Alouges, Antonio DeSimone, Luca Heltai, Aline Lefebvre-Lepot, Benoît Merlet. Optimally swimming stokesian robots. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1189-1215. doi: 10.3934/dcdsb.2013.18.1189.
Brahim Amaziane, Leonid Pankratov, Andrey Piatnitski. The existence of weak solutions to immiscible compressible two-phase flow in porous media: The case of fields with different rock-types. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1217-1251. doi: 10.3934/dcdsb.2013.18.1217.
Yanzhao Cao, Song Chen, A. J. Meir. Analysis and numerical approximations of equations of nonlinear poroelasticity. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1253-1273. doi: 10.3934/dcdsb.2013.18.1253.
Shu Dai, Dong Li, Kun Zhao. Finite-time quenching of competing species with constrained boundary evaporation. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1275-1290. doi: 10.3934/dcdsb.2013.18.1275.
Wei Ding, Wenzhang Huang, Siroj Kansakar. Traveling wave solutions for a diffusive SIS epidemic model. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1291-1304. doi: 10.3934/dcdsb.2013.18.1291.
Daniel Ginsberg, Gideon Simpson. Analytical and numerical results on the positivity of steady state solutions of a thin film equation. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1305-1321. doi: 10.3934/dcdsb.2013.18.1305.
Giovanni F. Gronchi, Chiara Tardioli. The evolution of the orbit distance in the double averaged restricted 3-body problem with crossing singularities. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1323-1344. doi: 10.3934/dcdsb.2013.18.1323.
Christopher Grumiau, Marco Squassina, Christophe Troestler. On the Mountain-Pass algorithm for the quasi-linear Schrödinger equation. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1345-1360. doi: 10.3934/dcdsb.2013.18.1345.
Christian Klein, Benson Muite, Kristelle Roidot. Numerical study of blow-up in the Davey-Stewartson system. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1361-1387. doi: 10.3934/dcdsb.2013.18.1361.
Guirong Liu, Yuanwei Qi. Sign-changing solutions of a quasilinear heat equation with a source term. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1389-1414. doi: 10.3934/dcdsb.2013.18.1389.
Tadele Mengesha, Qiang Du. Analysis of a scalar nonlocal peridynamic model with a sign changing kernel. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1415-1437. doi: 10.3934/dcdsb.2013.18.1415.
Ben Niu, Weihua Jiang. Dynamics of a limit cycle oscillator with extended delay feedback. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1439-1458. doi: 10.3934/dcdsb.2013.18.1439.
Weiran Sun, Min Tang. A relaxation method for one dimensional traveling waves of singular and nonlocal equations. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1459-1491. doi: 10.3934/dcdsb.2013.18.1459.
Huiqing Zhu, Runchang Lin. $L^\infty$ estimation of the LDG method for 1-d singularly perturbed convection-diffusion problems. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1493-1505. doi: 10.3934/dcdsb.2013.18.1493.
Xiaoling Zou, Dejun Fan, Ke Wang. Stationary distribution and stochastic Hopf bifurcation for a predator-prey system with noises. Discrete & Continuous Dynamical Systems - B, 2013, 18(5): 1507-1519. doi: 10.3934/dcdsb.2013.18.1507.
CommonCrawl
Abstract: We analyze $L^2$-regularization of a class of linear-quadratic optimal control problems with an additional $L^1$-control cost depending on a parameter $\beta$. To deal with this nonsmooth problem we use an augmentation approach known from linear programming in which the number of control variables is doubled. It is shown that if the optimal control for a given $\beta^* \ge 0$ is bang-zero-bang, the solutions are continuous functions of the parameter $\beta$ and the regularization parameter $\alpha$. Moreover we derive error estimates for the Euler discretization.
CommonCrawl
For statistical questions involving the Jacobian matrix (or determinant) of first partial derivatives. For purely mathematical questions about the Jacobian it is better to ask at math SE https://math.stackexchange.com/. When transforming 2+ continuous random variables, you use a Jacobian matrix and compute the determinant. Do you also compute the Jacobian for discrete random variables? How to understand Jacobian Matrix from the geometric perspective? Shouldn't it be Jacobian Descent? What are some interesting parameterizations of $4 \times 4$ correlation matrices, and also perhaps their associated jacobians? Derivation of change of variables of a probability density function? Change of Variable technique for two variables?
CommonCrawl
Patterns can be read from files to generate the pattern in question; unless otherwise specified, the file format is inferred from the extension. Patterns can also be displayed in the embedded LifeViewers in the latter case.

`lifelib` supports, among other features:

- Shifting, rotating, and reflecting patterns.
- Convolutions (either using inclusive or exclusive disjunction).
- Getting and setting individual cells or arrays thereof.
- Pattern-matching capabilities such as find and replace.

Patterns support a number of operators:

- Two patterns can be added (elementwise bitwise OR) by using `|` or `+`; for patterns viewed as sets of cells, this coincides with union / disjunction.
- Intersection / conjunction is available through the operator `&` and its in-place form `&=`.
- The Kronecker product of two patterns can be performed using `*`.
- Patterns can be transformed, e.g. a rotation using `pattern("rccw", 0, 0)`.
- A pattern can be advanced by the specified number of generations.
- Rectangles can be filled randomly, with cell states given by the keys in the dictionary and probabilities given by the proportions in the dictionary; for example, to fill that rectangle with 30-percent density.
- Cells can be read and written through three syntaxes: the third returns and supports assignment from a numpy array of the same length, while the other two syntaxes allow pairs of arbitrarily large ints to be used.

The walkthrough example loads a small pattern (called Lidka) and runs it 30000 generations in Conway's Game of Life. It prints both the initial and final populations. Now, let us walk through what happens in the code. Loading the pattern is cheap, so it does not take a perceptible amount of time. The first use compiles the `lifelib` source code into a shared library (when the operating system is Windows, it's actually a DLL masquerading under the extension .so), so expect a delay when you run this for the first time. In the hashlife representation, each distinct subtree is stored once in the compressed container, and the population is obtained from the pattern by recursively walking the quadtree. That's it! You've now simulated your very first pattern in Hashlife! Later sections give examples of more complex and interesting applications of `lifelib`.

Whether you are running `lifelib` in Windows or POSIX, getting started with `lifelib` is straightforward: it works on Windows, Mac OS X, and Linux, and you can use either Python 2 or 3. Make sure your machine has the correct system requirements before commencing. Installation unpacks `lifelib` into a user-local directory. This covers basic `lifelib` usage; other features are documented in this README file.

Several iterator families are provided; owing to its generality, hashlife is recommended as the default algorithm. All universe types (vectorised or tile-based) are fully compatible with every `lifelib` iterator, and scale from a small $`32 \times 32`$ universe to an unbounded universe. The vectorised iterators use the instruction set supported by your processor; lookup-table iterators process several cells at a time using a lookup table. Larger neighbourhoods are supported with a radius up to 7 (i.e. $`15 \times 15`$), and one family uses the vectorised lifelike iterator as a subroutine. Rule families include:

- **isogeny**: A multistate generalisation of isotropic.
- **gltl**: A multistate generalisation of Larger than Life, Generations rules and outer-totalistic cellular automata, supporting neighbourhoods with a radius up to 5 (i.e. $`11 \times 11`$).
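A sketch of the Lidka walkthrough described above, using the `lifelib` Python API as I recall it (`load_rules`, `lifetree`, `pattern`, and generation indexing); exact names may differ between versions, and the RLE string here is a stand-in rather than the real Lidka pattern.

```python
import lifelib

sess = lifelib.load_rules("b3s23")   # compile/load Conway's Game of Life
lt = sess.lifetree(memory=1000)      # a hashlife universe with a memory cap

# Stand-in RLE; the real example would load Lidka from a file instead.
lidka = lt.pattern("bo$obo$o2bo!")
print("Initial population:", lidka.population)

lidka_30k = lidka[30000]             # advance 30000 generations via hashlife
print("Final population:", lidka_30k.population)
```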
CommonCrawl
Abstract: In this paper we study a particular multidimensional deconvolution problem. The distribution of the noise is assumed to be of the form $G(dx) = (1-\alpha)\delta(dx) + \alpha g(x)\,dx$, where $\delta$ is the Dirac mass at $0 \in R^d$, $g : R^d \to [0, \infty)$ is a density, and $\alpha \in [0, \frac{1}{2})$. We propose a new estimation procedure, which is not based on a Fourier approach, but on a fixed point method. The performances of the procedure are studied over isotropic Besov balls for $L^p$ loss functions, $1 \leq p < \infty$. A numerical study illustrates the method.
CommonCrawl
APS - 20th Biennial Conference of the APS Topical Group on Shock Compression of Condensed Matter - Event - Phase transitions of titanium under dynamic loading.

Abstract: J7.00003: Phase transitions of titanium under dynamic loading.

Information on sound velocity, which characterizes substance behavior under conditions of shock compression followed by release, is required for the formulation of equations of state. A kink in the dependence of sound velocity on pressure is associated with structural transitions, including melting, in the shock-compressed substance. Investigations of the ($\alpha$-$\omega$) titanium phase transition revealed significant discrepancy in the measured values of the transition: pressures of phase-transition completion varied from $\approx$17.5 to 22 GPa under dynamic compression. The interaction of Ti with the majority of elements makes it possible to produce many alloys with various properties. By the structure formed on annealing, titanium alloy VT-20 is classified as a pseudo $\alpha$-alloy, with its structure presented by the $\alpha$-phase and an insignificant quantity of the $\beta$-phase. The authors present results of sound velocity measurements in shock-compressed samples of VT1-0 titanium and VT-20. In titanium, kinks were recorded in the dependence of sound velocity on pressure at pressures of 20-40 and 60-90 GPa; these kinks can be explained by phase transitions. X-ray structural analysis revealed the presence of the $\omega$-phase in samples recovered after loading in steel ampoules by pressures in the range from 9 to 23 GPa. The onset of VT-20 alloy melting corresponds to a pressure of 130 GPa on the shock adiabat.
CommonCrawl
We introduce a variational approach to the Hele-Shaw flow $D_t k=\triangle u+f\chi$, $f\geq 0$ in $R^N$, where $k$ is the characteristic function of an open set $O(t)$ in $R^N$ and $u(t,\cdot) \in H^1_0(O(t))$ solves $-\triangle u(t,\cdot)=f$ in $O(t)$. By choosing a time step $\tau_j=\tau/2^j$ and iteratively solving a variational problem in $R^N$, we construct a staircase family of open sets and a corresponding family of functions: as $j\to\infty$, both sets and functions converge increasingly, at fixed time, to a weak solution of the problem. When the latter is not unique, the solution thus obtained is characterized by a minimality property, with respect to set inclusion, at fixed time. We also prove several monotonicity results for the solutions thus obtained, with respect to both the initial set and the forcing term $f$. In particular, these monotonicity properties imply that $O(t)$ has finite perimeter for every $t$, provided that $O(0)$ has finite perimeter. Finally, under very mild assumptions, we prove that the number of connected components is non-increasing, that $O(t)$ is connected for large $t$, and that it tends to fill the whole of $R^N$.
CommonCrawl
Proof by contradiction is a valid deduction sequent in propositional logic. If, by making an assumption $\phi$, we can infer a contradiction as a consequence, then we may infer $\neg \phi$. The conclusion does not depend upon the assumption $\phi$. If we know that by making an assumption $\phi$ we can deduce a contradiction, then it must be the case that $\phi$ cannot be true. Thus it provides a means of introducing a negation into a sequent. Proof by Contradiction is also known as not-introduction, and can be seen abbreviated as $\neg \mathcal I$ or $\neg_i$. However, there are technical reasons why this form of abbreviation is suboptimal on this website, and PBC (if abbreviation is needed at all) is to be preferred. It is also known as proof of negation, i.e., a proof that some (positive!) assumption is not true. Some sources do not explicitly distinguish between Proof by Contradiction and Reductio ad Absurdum, which starts with a negative assumption ($\neg \phi$). Both can be referred to as indirect proof, but Reductio ad Absurdum is rejected by the intuitionistic school.
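As a machine-checked illustration (added here; ProofWiki's own formulation is the sequent above), the rule can be stated and proved in Lean 4 without any classical axioms, which makes concrete the remark that proof of negation, unlike Reductio ad Absurdum, is intuitionistically acceptable:

```lean
-- Proof by contradiction / not-introduction: if assuming φ lets us derive
-- both ψ and ¬ψ (a contradiction), then ¬φ holds. No classical axioms are
-- used, so the intuitionistic school accepts this rule.
example (φ ψ : Prop) (h₁ : φ → ψ) (h₂ : φ → ¬ψ) : ¬φ :=
  fun hφ => absurd (h₁ hφ) (h₂ hφ)
```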
CommonCrawl
diverges to $\infty$ a.s. as $t \downarrow 0$. The typical requirement here is that $X$ be of bounded variation with non-zero drift. An important component in the analyses is the way the largest positive and negative jumps interact with each other. The analysis allows for the possibility of ties in large jump sizes.
CommonCrawl
by Hugh Duncan. Published on 15 November 2018.

Back in 2012 I searched the Internet for polygons that lacked a bit of an edge, but found nothing, so I worked on them myself. Let us have a look at the thinking process behind these little-known polygons.

Look! The internal angle has now become negative! A (regular) 3/2-sided polygon contains angles, each of which is -60°! What new monster have we created?! This is where we go down the rabbit hole, if we haven't already gone there, that is! Okay, let's draw it!

Consider the first side as being drawn horizontally from left to right (see above). The second side would start with a dotted line extending from the right end of the first line to find the original direction (east) and hence, next, the external angle. To draw this angle, one can now measure anti-clockwise from the dotted line until 240° is reached. Note the second side has now crossed over the first side and ends 60° below the first line. The second side can now be drawn 60° below the first. The crossing over explains why the internal angle is -60°. I have left a 'loop' at the vertex, to show that it has overturned the first side. The third side can be drawn as before, creating another angle of -60°, including the loop of course. Before we join the second and third, we first have to turn again through 240° anti-clockwise, creating a third loop. Three sides have been drawn and two turns taken (3 lots of 240° = 720° = 2 turns) to complete a closed polygon; hence that is 3/2 sides per turn, or 1.5 as expected. This polygon looks like an equilateral triangle. It is like taking an iron bar and over-bending it into three equal parts.

From left to right: the triangle, the 3/2-agon, the anti-triangle and the anti-3/2-agon.

Compared to the normal +3 sided equilateral triangle, the triamisagon looks the same, and if one ignores the loops at each vertex, the +3/2-agon is the 3-sider but flipped upside down. A triangle of +3 sides has its arrows pointing anti-clockwise, the convention used in maths for a positive rotation. At first glance, the 3/2-agon has the arrows seemingly pointing clockwise, but this is because of the loops at each vertex. If one was a car following a route of this shape, then the car always turns anti-clockwise.

Does anything like these shapes exist out there in the real world? Well yes it does! Who remembers the old Knitting Nancy? See figure below of me in action! Slip roads off motorways and most model car racing kits allow one to include a few loops. Clothes pegs, the Adobe logo and hand exercisers contain them too.

A complete turn.

We start drawing with one straight, horizontal edge going right (east). At the end, we now turn through 360° anticlockwise. This is a complete revolution, so we are actually facing the original direction. Not only that, we have created a little loop at this end (see diagram below). As we have not (yet) met up with the end of any edge, we draw a second edge from this point. This 'second' side continues in the same direction as the first, to the right (east). Now we turn through 360° again, and once more we are still facing east, with a second loop. If we repeat this, the process will continue, forever adding another side, doing a little pirouette, then edge, loop, edge, loop, edge, loop. We find that we made one side after one complete turn, two sides after two turns and so on, so $n$ sides in $n$ revolutions. In other words, this is one side per turn, 1/1, hence it is a one-sided polygon by our chosen definition!
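A quick way to reproduce these drawings (my sketch, not from the article): for a $p/q$-gon, draw $p$ sides, turning through the external angle of $360q/p$ degrees anticlockwise after each one, so that $q$ full revolutions are made in total.

```python
import math

def fractional_polygon(p, q, side=1.0):
    """Return the vertices of a p/q-gon: p sides, turning 360*q/p degrees
    anticlockwise at each vertex (q full revolutions in total)."""
    ext = 2 * math.pi * q / p          # external angle in radians
    x = y = heading = 0.0
    pts = [(x, y)]
    for _ in range(p):
        x += side * math.cos(heading)
        y += side * math.sin(heading)
        pts.append((x, y))
        heading += ext                 # the anticlockwise turn (loops not drawn)
    return pts

pts = fractional_polygon(3, 2)         # the 3/2-agon discussed above
print(pts[0], pts[-1])                 # start and end coincide, up to rounding
```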
We could reduce the loops to an infinitely small size and our one-side-per-turn shape would indeed look like one long side that goes off into the distance in each direction. As we look at it, the area below the looping line would be the 'inside' of the shape while the area above it would be the 'outside'. See diagram below: I have tried to picture this in some other real-world way. The closest I have come is to imagine that our two-dimensional surface is perhaps a cylindrical surface. Imagine taking a sheet of A4 paper and drawing a horizontal line across the middle of the paper. We then take the sheet and roll it into a cylinder, such that the right hand edge of the paper joins onto the left hand edge. Our 2D universe is the outside surface of this paper. The line we drew now makes a simple 'belt' or equator round the middle. Voila, a one-sided polygon. Above the line is the outside, while below this line is the inside. See below left.

Would the real one-sided polygon stand up? According to Wolfram, a polygon with one side is called a henagon. There is no shape that exists in Euclidean space with one (straight) edge. It is typically drawn as a circular line that joins back up with itself (see right). The circumference of the henagon is just one edge and when drawn like this, it clearly has an inside and an outside. I would like to suggest that my equator around a cylindrical 2D world and my infinite line with equally spaced loops should also be acceptable alternatives.

This is turning through more than one revolution between sides. Crazy! Draw a horizontal line pointing east. At the end, turn anti-clockwise through 480° and draw a second line. Note we covered a complete revolution and then managed another 120°. See below, second shape from the left. Notice also that the little loop we drew to distinguish these low-sided shapes is now on the inside! Now turn through another 480° anti-clockwise and draw a third side. Like the bending of the iron bar, we have made a full loop plus a bit more, and this loop is also inside the shape. All we have to do is make our final spin of 480° and join onto the start of the first side (see below, second shape from the left). We have made 3 sides but we have turned through $3 \times 480 = 1440°$. This is four complete turns (1440/360 = 4), so 3 sides in four turns would be written as 3/4, i.e. 3/4 of a side per turn. This looks like the equilateral triangle (and our 3/2-sided polygon, which also looked like an equilateral triangle), but this time with the little loops inside the vertices.

If we were to repeat the same process for other simple fractions between 2/3 and 1, we get the next section of polygons that look like our original ones with more than two sides, but with the identifying internal loops. See a selection of them above. Not surprisingly, there will be a similar set of anti-polygons between -1 and -2/3 on the other side of zero that will look like the set above, but as 'holes' with the equivalent shapes. And turn again (2/3 down to 1/2), and so on…

Let's keep going before the maths police stop us. Take $n = 3/5$, so the external angle is $360 \times 5/3 = 600°$. We draw our starting line and turn 600° anti-clockwise, which is one complete turn of 360° and then a further 240°. We do this two more times and we get a shape with 3 sides which has turned $3 \times 600 = 1800°$, or five revolutions, i.e. three sides in five turns or 3/5 sides per turn. See below.
Note we have what looks like a triangle again, but the loops are on the outside again and they are double loops. These double outside loopers are found between 2/3 sides and 1/2 sides. A selection of shapes from this range is shown below. This system ends at 1/2 with a string of one-siders with double loops between them.

There's a pattern to these fractional polygons below two sides and all the way to zero. The next batch would have double loops inside, the one after that triple outside loops, then triple inside loops and so on. This continues all the way down to zero sides, each time adding an outer or inner loop, and each stage taking up less and less of the polygonic number line. A few of those shapes closer to zero are shown below. These systems flip-flop between outside and inside loopers, gaining an additional loop each time, and the switching happens more frequently as we approach zero, but hopefully the trend is clear.

It is now time to bring the topic to a close. A zeragon is a point, a vertex, and has no sides, so technically it can have no angle 'inside' as there is no inside. As with the monagon and those negative-sided polygons, the outside angle is taken to be negative. As seen in the diagram above, there is a complete 360° all the way round the outside of the point, so being outside it is -360° rather than +360°, and this too obeys the general equation.

Hugh Duncan graduated from UCL in 1980 having studied astronomy. He teaches physics and maths in the International School of Nice and is currently writing a popular science book on the topic of fractograms.
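For readers who want to experiment, here is a minimal sketch (my own illustration, not from the article) that generates the vertices of such a polygon turtle-style, assuming the article's convention that a p/q-gon has p straight sides with an external turn of 360q/p degrees between them:

    import math

    def fractional_polygon_vertices(p, q, side=1.0):
        """Vertices of a p/q-gon: p unit sides, turning anticlockwise
        through an exterior angle of 360*q/p degrees after each side."""
        turn = 2 * math.pi * q / p
        x = y = heading = 0.0          # start at the origin, heading east
        pts = [(x, y)]
        for _ in range(p):
            x += side * math.cos(heading)
            y += side * math.sin(heading)
            pts.append((x, y))
            heading += turn            # the little 'loops' live at these turns
        return pts

    # The 3/2-agon: three sides, two full turns; the path closes on itself.
    for vx, vy in fractional_polygon_vertices(3, 2):
        print(f"({vx:+.3f}, {vy:+.3f})")

Plotting the points for (3, 2) gives the over-bent triangle described above, while (5, 2) gives the familiar pentagram.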
CommonCrawl
If $\alpha = I$, the problem becomes a classical variable-projection problem in nonlinear least squares, which has been studied for decades. The benefit of $\alpha = I$ is that the nonlinear and linear parts become separable through the pseudo-inverse. I can simply (blindly) use an optimization package to solve this problem by naive gradient descent; it looks like a neural-network training process, without leveraging the structure of the problem. It works for toy problems, e.g. when $n, r$ are less than 3. The downside is that, since it doesn't assume any structure to leverage, the computation is costly. I want to know which category this problem belongs to, and whether a better algorithm exists to solve it.
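For what it's worth, here is a minimal sketch of the variable-projection idea for the $\alpha = I$ case: eliminate the linear coefficients with a least-squares solve inside the objective, and hand only the nonlinear parameters to a generic optimizer. The exponential model and all the names below are illustrative assumptions, not taken from the question.

    import numpy as np
    from scipy.optimize import minimize

    def varpro_objective(theta, y, phi):
        # For fixed nonlinear parameters theta, the optimal linear
        # coefficients are the least-squares (pseudo-inverse) solution.
        Phi = phi(theta)                              # design matrix, shape (m, r)
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        r = y - Phi @ c
        return r @ r                                  # squared residual norm

    # Toy separable model: y ~ c1*exp(-t1*x) + c2*exp(-t2*x)
    x = np.linspace(0.0, 1.0, 50)
    y = 2 * np.exp(-1.0 * x) + 3 * np.exp(-4.0 * x) + 0.01 * np.random.randn(50)
    phi = lambda th: np.column_stack([np.exp(-th[0] * x), np.exp(-th[1] * x)])

    res = minimize(varpro_objective, x0=[0.5, 2.0], args=(y, phi), method="Nelder-Mead")
    print(res.x)   # estimates of the nonlinear parameters (t1, t2)

Optimizing over the nonlinear parameters alone is usually far better conditioned than descending on all parameters jointly, which is exactly the structure the naive gradient-descent approach fails to exploit.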
CommonCrawl
Abstract: We study the thermal properties of QCD in the presence of a small quark chemical potential $\mu$. Derivatives of the phase transition point with respect to $\mu$ are computed at $\mu=0$ for 2 and 3 flavors of p-4 improved staggered fermions on a $16^3\times4$ lattice. Moreover we contrast the case of isoscalar and isovector chemical potentials, quantify the effect of $\mu\not=0$ on the equation of state, and comment on the screening effect by dynamical quarks and the complex phase of the fermion determinant in QCD with $\mu\not=0$.
CommonCrawl
Here the method I took was to employ integration by parts and then call on special functions, but can this equally be achieved with, say, a Feynman trick or another form of integral transform? Edits: correction of the original limit observation (now removed); correction of not stating the region of convergence for $\alpha$; correction of 1/sqrt to sqrt in the final line. Thanks to the commenters for pointing these out. Here is a method that relies on using a double integral.
CommonCrawl
Consider the conjugacy classes of the symmetric group $S_N$. Each conjugacy class consists of permutations that have the same cycle structure. We see that the number of possible cycle structures is given by the number of ways of partitioning $N$. Given some cycle structure, how can one calculate the number of elements, $n$, in the class pertaining to said structure? Is this formula found by considering a binomial coefficient? I would very much appreciate it if someone could explain to me each term in this equation, specifically the denominator. There are at least two possible proofs, one by enumeration and another using the exponential formula, in which the class size is extracted as a coefficient from an EGF.
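For concreteness, the standard count for the class with $m_j$ cycles of length $j$ in $S_N$ is $N!/\prod_j j^{m_j}\, m_j!$: the denominator divides out the $j$ cyclic rotations of each $j$-cycle and the $m_j!$ reorderings of cycles of equal length. A short check of this formula (my own sketch, not from the thread):

    from math import factorial
    from collections import Counter

    def class_size(cycle_type):
        """Number of permutations in S_N with the given cycle type,
        e.g. cycle_type = [2, 1, 1] for transpositions in S_4."""
        N = sum(cycle_type)
        m = Counter(cycle_type)             # m[j] = number of j-cycles
        denom = 1
        for j, mj in m.items():
            denom *= j**mj * factorial(mj)  # j^m_j rotations, m_j! reorderings
        return factorial(N) // denom

    print(class_size([2, 1, 1]))   # 6 transpositions in S_4
    print(class_size([4]))         # 6 four-cycles in S_4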
CommonCrawl
Q. Is it possible to trap all the light from one point source by a finite collection of two-sided disjoint segment mirrors? I posed this question in several forums before (e.g., here and in an earlier MO question), and it has remained unsolved. But I've recently become re-interested in it. Let me first clarify the question. It seems best to treat the mirrors as open segments (i.e., not including their endpoints), but insist that they are disjoint as closed segments. And the point source of light should be disjoint from the closed segments. $6$ mirrors. A light ray starts at the center and exits (green) after $46$ reflections. Of course a finite number of rays can be trapped periodically, and less obviously a finite number of rays can be trapped nonperiodically. But it seems quite impossible to trap all rays from a single fixed point. Because of segment disjointness, there are paths to $\infty$, and it seems likely that some ray will hew closely enough to some path to escape to $\infty$. So I believe the answer to my question is No. Perhaps an application of Poincaré recurrence could lead to a proof, but I cannot see it. Related: Can we trap light in a polygonal room?
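To experiment with configurations like the six-mirror example, all one needs is ray-segment intersection plus specular reflection. A minimal sketch (mine, with hypothetical names); iterating reflect_step until it returns None counts the reflections before the ray escapes:

    import numpy as np

    def reflect_step(p, d, segments, eps=1e-12):
        """Advance a ray (point p, unit direction d) to the nearest open
        segment mirror and reflect it; return None if the ray escapes."""
        p, d = np.asarray(p, float), np.asarray(d, float)
        best_t, best = np.inf, None
        for a, b in segments:
            a, b = np.asarray(a, float), np.asarray(b, float)
            e = b - a
            M = np.array([[d[0], -e[0]], [d[1], -e[1]]])
            if abs(np.linalg.det(M)) < eps:
                continue                       # ray parallel to the mirror
            t, s = np.linalg.solve(M, a - p)   # solve p + t d = a + s e
            if t > eps and eps < s < 1 - eps and t < best_t:
                best_t, best = t, (a, b)
        if best is None:
            return None                        # escaped to infinity
        a, b = best
        q = p + best_t * d                     # hit point on the mirror
        n = np.array([-(b - a)[1], (b - a)[0]])
        n = n / np.linalg.norm(n)              # unit normal to the segment
        return q, d - 2 * np.dot(d, n) * n     # specular reflection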
CommonCrawl
In many healthcare settings, intuitive decision rules for risk stratification can support effective hospital resource allocation. This paper introduces a novel variant of decision tree algorithms that produces a chain of decisions, not a general tree. Our algorithm, $\alpha$-Carving Decision Chain (ACDC), sequentially carves out ``pure'' subsets of the majority class examples. The resulting chain of decision rules yields a pure subset of the minority class examples. Our approach is particularly effective in exploring large and class-imbalanced health datasets. Moreover, ACDC provides an interactive interpretation in conjunction with visual performance metrics such as the Receiver Operating Characteristic curve and the lift chart.
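Since the abstract only names the idea, here is a toy sketch of a "carving" decision chain under my own assumptions (greedy single-feature threshold rules; an illustration of the concept, not the authors' exact ACDC):

    import numpy as np

    def decision_chain(X, y, max_rules=5, purity=0.95):
        """Greedily peel off near-pure majority-class (y == 0) regions
        with one-feature threshold rules; chain on the remainder."""
        idx = np.arange(len(y))
        rules = []
        for _ in range(max_rules):
            if len(idx) == 0:
                break
            best = None
            for j in range(X.shape[1]):
                for thr in np.percentile(X[idx, j], [10, 25, 50, 75, 90]):
                    for side in (-1, 1):
                        mask = side * (X[idx, j] - thr) <= 0
                        if not mask.any():
                            continue
                        pure = np.mean(y[idx][mask] == 0)   # majority purity
                        if pure >= purity and (best is None or mask.sum() > best[0]):
                            best = (mask.sum(), j, thr, side, mask)
            if best is None:
                break
            _, j, thr, side, mask = best
            rules.append((j, thr, side))
            idx = idx[~mask]                    # the remainder gets the next rule
        return rules, idx                       # idx is minority-enriched

    X = np.random.randn(2000, 4)
    y = ((X[:, 0] > 1.5) & (X[:, 1] > 1.0)).astype(int)   # rare positive class
    rules, rest = decision_chain(X, y)
    print(len(rules), "rules; remainder", len(rest), "with", y[rest].sum(), "positives")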
CommonCrawl
In mathematics, a differential operator is an operator defined as a function of the differentiation operator.
CommonCrawl
Is the U(1) gauge theory in 2+1D dual to a U(1) or an integer XY model? I understand that the current is conserved for an obvious reason. But why is the flux corresponding to a global $U(1)$ symmetry? What is this global $U(1)$ symmetry? If we go to the dual version, i.e. Wilson-Fisher without gauge field but just global symmetry, then the U(1) symmetry simply corresponds to boson number conservation. Now go back to Abelian-Higgs: you can ask what's the dual version of the previous global U(1) symmetry; you would then notice that the role of the previous U(1) symmetry is played by flux conservation now. This means your U(1) gauge field is non-compact, and monopole operators are forbidden (at least in this simple duality of continuum theories). Just remember that here the monopole operator is dual to the previous boson operator; then other things are clear. I think the topological current is conserved automatically by its definition. A Noether current is conserved on-shell, but here it is conserved automatically. Thank you. Could you explain what you mean by a non-compact gauge field? "Non-compact"... OK... Are you from high energy? I am just a new student in theoretical physics. I know what compact means in topology but I don't know what a non-compact gauge field is. I see. You might find page 8 and footnote 4 helpful in this paper: arXiv 1703.02426, which explains lots of details, and importantly, the origin of non-compactness in duality. Note that the meaning of "noncompact" Kite is using is different from how high energy theorists use it and is different from the fact that $U(1)$ is compact. Whether you have monopoles in the theory or not has nothing to do with the fact that the gauge group is $U(1)$ or $R$. $U(1)$ vs. $R$ is the difference between having quantized electric charges or not. In theories with rotational symmetry (and therefore quantized angular momentum, since SO(n) is compact), having magnetic monopoles implies electric charge quantization, but that's the only relation.
CommonCrawl
Seven mathematicians met up one week. The first mathematician shook hands with all the others. The second one shook hands with all the others apart from the first one (since they had already shaken hands). The third one shook hands with all the others apart from the first and the second mathematicians, and so on, until everyone had shaken hands with everyone else. How many handshakes were there altogether? The next week, eight mathematicians met. How many handshakes took place this time? Sam is trying to work out how many handshakes there would be if 20 mathematicians met. He says that since each mathematician shakes hands 19 times, there must be $20 \times 19$ handshakes altogether. Helen disagrees; she worked out $19 + 18 + 17 + ... + 2 + 1$ and got a different answer. What is wrong with Sam's reasoning? How should he modify his method? One day, 161 mathematicians met. How many handshakes took place this time? Can you describe a quick way of working out the number of handshakes for any size of mathematical gathering? Could there be exactly 4851 handshakes at a gathering where everyone shakes hands? How many mathematicians would there be? What about the following numbers of handshakes? You may wish to try the problems Picturing Triangle Numbers and Mystic Rose. Can you see why we chose to publish these three problems together? You may also be interested in reading the article Clever Carl, the story of a young mathematician who came up with an efficient method for adding lots of consecutive numbers.
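A quick sketch (not part of the original problem page) that checks the triangular-number formula and inverts it for the 4851-handshake question:

    def handshakes(n):
        # Each pair shakes hands exactly once; Sam's 20 * 19 counts each twice.
        return n * (n - 1) // 2

    print(handshakes(7), handshakes(8), handshakes(20), handshakes(161))

    def people_for(h):
        """Smallest gathering size giving exactly h handshakes, if any."""
        n = 2
        while handshakes(n) < h:
            n += 1
        return n if handshakes(n) == h else None

    print(people_for(4851))   # 99, since 99 * 98 / 2 = 4851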
CommonCrawl
The bounded arithmetic theory $C^0_2$ of Johannsen and Pollett (LICS'98), which is closely related to the complexity class DLogTime-uniform $TC^0$, is extended by a function symbol and axioms for integer division, which is not known to be in DLogTime-uniform $TC^0$. About this extended theory $C^0_2[div]$, two main results are proved:
1. The $\Sigma^b_1$-definable functions of $C^0_2[div]$ are exactly Constable's class $K$, a function algebra whose precise complexity-theoretic nature is yet to be determined. This also yields the new upper bound $K \subseteq$ DLogTime-uniform $TC^0[div]$.
2. The $\Sigma^b_1$-theorems of $C^0_2[div]$ do not have Craig interpolants of polynomial circuit size, unless the Diffie-Hellman key exchange protocol is insecure.
CommonCrawl
Specifically, devise schedules and an arrangement for two electromechanical 24-hour light timers to control the flow of power from an outlet to a light bulb. The challenge is to obtain the following repeated lighting pattern with the largest possible number D, while N is a nonzero constant other than $24 - D$, beginning when the outlet's power is switched on. The example above shows much of what is allowed, although one timer is missing. Each timer repeatedly cycles through its schedule of 24 intervals that last an hour each. A socket multiplier is not strictly a one-way component. Power can flow out through its plug and in through its sockets. This even allows for power to flow between pieces plugged into a multiplier's sockets, whether or not the multiplier is itself plugged into anything. When the timer is ON (in a time interval where the switch is closed), then the light can be powered in reverse (supplying power the "wrong" way). When the timer is OFF (in a time interval where the switch is open), supplying power the "wrong" way does nothing. Whether or not the timer continues to click will depend entirely on whether or not the timer is powered from the "right" direction. When the timer is transitioning from ON to OFF or vice versa, if the timer is not powered from the "right" direction, then maybe it will just blow up. So it always has to be powered from the right direction during a transition, but may be powered from the wrong direction also. D=12, N=4, although I'm not sure that this is the best solution you can get. This involves 2 socket multipliers, 1 wire (male-to-male), and the outlet-timers-light setup. In this setup, the light will be on when either timer A OR B is set ON, and only off when both A and B are set OFF. This basically puts timer A on a 32-hour cycle, since it will become unpowered for four hours at 2am and 2pm every "day". And since timer B is essentially on an 8-hour cycle, it behaves exactly the same through each cycle of timer A, so we don't need to worry about them becoming de-synced. The 32-hour cycle of timer A is below. I've put dots above each hour when timer B will be off, and parentheses around all the hours that timer A will be off; the light bulb will be on only when these coincide (the hours indicated by a hyphen). We start with an on-time for 10 hours. Next timer1 will be off for 12 hours so timer2 won't move. Then timer1 will switch on for 12 hours (starting at interval 24) and timer2 will go through 12 hours of off-state. Timer1 will switch off for another 12 hours. Finally timer1 will switch on again, run for 1 hour during interval 24, and starting with interval 1 timer2 will also switch on, thereby closing the cycle. This obviously works because one timer acts as a wire and the other as an 11-on, 1-off timer with 12-hour period. I claim that no period of daylight over 11 hours can be done with a non-24-hour cycle.
This can give us at most $D=11$, $N=1$ (given how we set the non-bypassed timer). If the non-bypassed timer was backwards with respect to the image at the top of this post, it wouldn't reliably be able to turn itself off and the light would be constantly on. The 2nd diagram shows two timers in parallel - this means that the light will be on with the effect of a single timer - the result of timer A && timer B. This can give us at most $D=11$, $N=1$ (given the net effect of timer A && timer B). If one of the timers was backwards, again, it wouldn't be able to turn itself off and the light would be constantly on. Finally, we have just one circuit to work with - the one with two timers in series. None of the timers can be backwards because otherwise they couldn't reliably turn themselves on/off and they wouldn't be able to switch out of their initial state, leading to either a one-timer problem ($D=11$, $N=1$ at best) or a permanently fixed-state light. Assume $D>11$. Then because hours are discrete, $D\ge12$. First, the two timers each have just one ON period (maybe spanning 24-1) - we will now prove this. If the light is on during an ON period, the period must be at least 12 hours long. So there can't be two ON periods where the light is on. So, what is the point of the first timer having an ON period without the light actually being on? To advance the second timer! But if that was the case, we could just tack the length of the 'useless' ON period onto the end of the previous period - this actually reduces, or keeps the same, the number of collisions between the first and second timers switching on/off at the same time. Since the second timer only operates when the first timer does, the light has to be on for at least part of the time the second timer is in an ON period. The computer search didn't return any results with $D>11$. I do realise how the motor works now, so the impossibility proof is for posterity.
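To sanity-check schedules like those above, here is a minimal simulator (my own sketch) for the series arrangement, under the stated assumption that timer 2 advances only while timer 1 feeds it power:

    def simulate(t1, t2, hours=240):
        """t1, t2: 24 booleans each (ON/OFF per dial hour).
        Timer 1 is always powered and always advances; timer 2
        advances, and can pass power, only while timer 1 is ON."""
        p1 = p2 = 0
        lit = []
        for _ in range(hours):
            on1, on2 = t1[p1], t2[p2]
            lit.append(on1 and on2)
            p1 = (p1 + 1) % 24
            if on1:
                p2 = (p2 + 1) % 24
        return lit

    t1 = [True] * 24                      # timer 1 acting as a plain wire
    t2 = ([True] * 11 + [False]) * 2      # 11-on, 1-off with a 12-hour period
    print("".join("#" if x else "." for x in simulate(t1, t2)[:48]))

The printed pattern repeats 11 lit hours then 1 dark hour, i.e. the $D=11$, $N=1$ baseline that the impossibility argument says cannot be beaten.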
CommonCrawl
Ecares - Eric Marchand, Sherbrooke U. Abstract: This talk will address the estimation of predictive densities and their efficiency as measured by frequentist risk. For Kullback-Leibler, $\alpha$-divergence, $L_1$ and $L_2$ losses, we review several recent findings that bring into play improvements by scale expansion, as well as duality relationships with point estimation and point prediction problems. A range of models is studied, including multivariate normal with both known and unknown covariance structure, scale mixtures of normals, and Gamma, as well as models with restrictions on the parameter space.
CommonCrawl
Given a space $X$, find the minimal amount of area needed to enclose a fixed volume $v$. If the space $X$ has a simple structure or many symmetries, the problem can be completely solved and the "optimal shapes" can be explicitly described (e.g. Euclidean space and the sphere). In the general case, however, one cannot hope to obtain a complete solution to the problem, and a comparison result is already satisfactory. Probably the most popular result in this direction is the Lévy-Gromov isoperimetric inequality. During the talk, we will show that a sharp isoperimetric inequality à la Lévy-Gromov holds true in the class of essentially non-branching metric measure spaces $(X,\mathsf d,\mathfrak m)$ with $\mathfrak m(X)=1$ satisfying the so-called measure contraction property, the latter being a condition that, in a synthetic way, encodes bounds on the Ricci curvature and on the dimension of the space. Measure-theoretic rigidity is also obtained. This is a joint work with prof. Fabio Cavalletti.
CommonCrawl
Abstract: Recently attention has been drawn to practical problems with the use of unbounded Pareto distributions, for instance when there are natural upper bounds that truncate the probability tail. Aban, Meerschaert and Panorska (2006) derived the maximum likelihood estimator for the Pareto tail index of a truncated Pareto distribution with a right truncation point $T$. The Hill (1975) estimator is then obtained by letting $T \to \infty$. The problem of extreme value estimation under right truncation was also introduced in Nuyts (2010), who proposed a similar estimator for the tail index and considered trimming of the number of extreme order statistics. Given that in practice one does not always know whether the distribution is truncated or not, we discuss estimators for the Pareto index and extreme quantiles both under truncated and non-truncated Pareto-type distributions. We also propose a truncated Pareto QQ-plot in order to help decide between a truncated and a non-truncated case. In this way we extend the classical extreme value methodology by adding the truncated Pareto-type model with truncation point $T \to \infty$ as the sample size $n \to \infty$. Finally we present some practical examples, asymptotics and simulation results.
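As a point of reference for the non-truncated case, here is a minimal sketch of the Hill (1975) estimator mentioned above (my own illustration; it applies no truncation adjustment):

    import numpy as np

    def hill_estimator(data, k):
        """Estimate the Pareto tail index alpha from the k largest
        order statistics: alpha_hat = 1 / mean(log X_(i) - log X_(k+1))."""
        x = np.sort(np.asarray(data, float))[::-1]     # descending order
        return 1.0 / np.mean(np.log(x[:k]) - np.log(x[k]))

    # Pareto(alpha = 2) sample via inverse CDF: X = U^(-1/alpha), X >= 1
    u = np.random.rand(10000)
    print(hill_estimator(u ** (-0.5), k=500))          # roughly 2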
CommonCrawl
Are you a veteran rumba dancer? Have you never danced before in your life? No worries--all are welcome at Cuban Rumba Jam! Come dance like no one's watching in the streets of Berkeley. We will meet at the MLK Student Union just outside the Cal Student Store at 3 PM to head down together. Look for the person with the BIO sign! This year, the EECS Career Fair is Wednesday September 5th - just a week away! In partnership with HKN, this resume critique event will give you the opportunity to have your resume reviewed by a recruiter or engineer in a safe and supportive environment. Tech employers/recruiters will provide you a 'speed' 10-15 minute review of your printed resume in prep for the fair the next day. In the first hour, we discuss the problem of interpolation for curves in projective space: When does there exist a curve of degree d and genus g passing through n general points in $\mathbb P^r$? Come and learn about Microsoft Internships from your classmates who have experienced them. Experience the Garden in a whole new way as you reconnect with your body and with nature in this 60-minute yoga walk. Through gentle movements, standing poses, and breathing exercises, we will walk through the Garden paths, pause at vistas and groves, and awaken your senses. This class is open to all bodies and led by Eugenia Park, a yoga instructor-Ayurveda wellness counselor, mediator, and dancer. Come learn about corn at this month's Discovery Station. This event is co-sponsored by HKN and organized in collaboration with the EECS department. Nicaragua is in the middle of its worst political crisis in decades. The Solidarity Caravan is a group of Nicaraguan activists touring the United States to educate about the current situation and inspire support for the people of Nicaragua. They will speak on the continuing peaceful civic resistance to the human rights violations occurring throughout Nicaragua. Members of the Solidarity Caravan for Peace speak in Europe. This talk will examine several paleoenvironmental studies from the Maya lowlands as a basis for developing a broader context from which to view the rise and fall of prehispanic Maya settlement. Join us for our weekly coffee break! Celebrate the start of September and the Bay Area's 'Indian Summer' with a fresh cup of coffee and conversation at the International House Cafe. The right to a job has been part of U.S. policy debates before. In this talk, Spriggs will discuss what a job guarantee would solve, and what problems would remain. Woon-Yeoung Cheon has been acclaimed as one of South Korea's most daring and provocative literary voices. In Farewell, Circus (2018), Cheon's nightmarish, grotesque style is movingly mixed with a dreamy tone to create a story as much about an individual woman's personal quest for freedom as it is about disability, marginalization, and transnational migration. This is the first of eight papers that will make up a lecture series entitled "Digital Humanities and the Ancient World." The event is co-sponsored by the AHMA Colloquium and the Townsend Center for the Humanities. Please join us for our Annual Reception! Come meet the Institute staff and find out about our upcoming events. Riot Cal alumni, former Riot tech interns, and engineers from various teams are here to share their journey into the games industry and their tips, tricks, lessons learned throughout their experiences. We'll also cover aspects of the application and interview process so you're prepared for the next step in your career! 
Join Karen Ross, Secretary of the California Department of Food and Agriculture; the Goldman School of Public Policy; and the Berkeley Food Institute for a discussion on immigration and the future of agriculture in California. WED, SEPT 5, 6:30pm. Join us for a talk co-sponsored by the Center for Japanese Studies. Go Hasegawa will speak about his practice & approach of exploring new possibilities & building new connections. Also live streaming in 106 Wurster. Open to all! Biologically Inspired Design is becoming a leading paradigm for the development of new technologies.
CommonCrawl
A plane graph is a finite simple graph with a fixed embedding into the two-sphere. The embedding induces an embedding on the minors of a plane graph (i.e. graphs obtained by successive removal of vertices, removal of edges, and contraction of edges). In other words, we may consider minors of plane graphs to be plane graphs themselves. These minors are called surface minors. The surface-minor relation is an ordering on plane graphs, which is finer than the minor relation: a plane graph may be a minor of another plane graph, without being a surface minor (indeed, if a graph has several non-isomorphic embeddings into the two-sphere, then these are examples of that behavior). Is the surface-minor ordering of plane graphs a well-quasi-ordering? That is to say, is there among any infinite collection of plane graphs a pair of two plane graphs, one of which is a surface minor of the other? This seems a very natural question to me, yet I couldn't find an answer in the literature. One would assume that this question must be answered in one of Robertson and Seymour's papers, since both the notion of well-quasi-orderings and the notion of surface minors are pretty central in them. If one restricts oneself to plane trees, the answer is positive: this is a version of Kruskal's Tree Theorem. The answer is also positive if one restricts oneself to 3-connected plane graphs, since Whitney's Theorem says that they have a unique embedding, and so the surface-minor and the minor relation coincide. Of course, one can ask this question not only about the two-sphere, but about any other fixed (not necessarily orientable) closed surface. Please note this question is not a direct corollary of the Robertson-Seymour theorem that the minor ordering on finite graphs is a well-quasi-ordering; and it does not have a direct relation to Kuratowski's Theorem, which gives the two forbidden minors for planar graphs (not to be confused with the plane graphs this question is about). This is a partial answer, for the case when the given sequence $G_1,G_2,\dots$ of plane graphs has unbounded treewidth. In such a case, for every $n$ there is an $i$ such that $G_i$ contains the $n\times n$ grid as a minor, and thus also as a plane minor. The rest follows from the simple fact that $G_1$ is a plane minor of a sufficiently large plane grid.
CommonCrawl
Title: What is the probability that a random integral quadratic form in $n$ variables has an integral zero? Abstract: We show that the density of quadratic forms in $n$ variables over $\mathbb Z_p$ that are isotropic is a rational function of $p$, where the rational function is independent of $p$, and we determine this rational function explicitly. When real quadratic forms in $n$ variables are distributed according to the Gaussian Orthogonal Ensemble (GOE) of random matrix theory, we determine explicitly the probability that a random such real quadratic form is isotropic (i.e., indefinite). As a consequence, for each $n$, we determine an exact expression for the probability that a random integral quadratic form in $n$ variables is isotropic (i.e., has a nontrivial zero over $\mathbb Z$), when these integral quadratic forms are chosen according to the GOE distribution. In particular, we find an exact expression for the probability that a random integral quaternary quadratic form has an integral zero; numerically, this probability is approximately $98.3\%$.
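The GOE part of the statement is easy to explore numerically: sample symmetric Gaussian matrices and count how often the associated real quadratic form is indefinite, i.e. has eigenvalues of both signs. A rough sketch under my own normalization conventions:

    import numpy as np

    def goe(n, rng):
        # Symmetrized Gaussian matrix: N(0,1) diagonal, N(0,1/2) off-diagonal.
        a = rng.standard_normal((n, n))
        return (a + a.T) / 2.0

    def prob_indefinite(n, trials=20000, seed=0):
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(trials):
            ev = np.linalg.eigvalsh(goe(n, rng))
            hits += ev[0] < 0 < ev[-1]      # mixed signs <=> isotropic over R
        return hits / trials

    print(prob_indefinite(4))   # share of indefinite random quaternary forms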
CommonCrawl
It's well known that R is memory-based software, meaning that datasets must be copied into memory before being manipulated. For small or medium scale datasets, this doesn't cause any trouble. However, when you need to deal with larger ones, for instance, financial time series or log data from the Internet, the consumption of memory is always a nuisance. Just to give a simple illustration, you can put the following code into R to allocate a matrix named x and a vector named y. If I try to run a regression on x and y with the built-in function lm(), I get an out-of-memory error. In R, each numeric value occupies 8 bytes, so we can estimate that x and y will only occupy 5000000 * 7 * 8 / 1024 ^ 2 bytes = 267 MB, far less than the total memory size of 2 GB. However, the memory is still used up since lm() will compute many variables apart from x and y, for example, the fitted values and residuals. This runs successfully on my machine and the process is very fast, only about 0.6 seconds (I use an optimized Rblas.dll, download here). Nevertheless, if the sample size is larger, the matrix operation may also become unavailable. To provide an estimate: when the sample size is as large as 2 GB / 7 / 8 bytes = 38347922, x and y themselves will swallow all the memory, let alone the other temporary variables created in the computation. So how can we cope with this problem? One approach to avoid too much consumption of memory is to use a database system and execute SQL statements on it. A database stores data on the hard disk and uses a small buffer to run SQL, so you don't need to worry about the memory; it's just a matter of how long it takes to accomplish the computation. R supports many database systems, among which SQLite is the lightest and the most convenient. There is an RSQLite package in R that allows you to read/write data from/to an SQLite database, as well as execute SQL statements on it and fetch results back to R. Therefore, if we can "translate" our algorithm into SQL statements, then the size of data we can deal with will only depend on the hard disk size and the execution time we can tolerate. I use a lot of rm() and gc() calls to remove unused temporary variables and free the memory. When all is done, you'll find a regression.db file in your working directory whose size is about 320 MB. Then comes the most important step: translating the regression algorithm into SQL. The key observation is that the least-squares estimate is $\hat\beta = (X^\prime X)^{-1} X^\prime y$, and that however large the sample size $n$ is, $X^\prime X$ is always of size $(p+1) \times (p+1)$ and $X^\prime y$ of size $(p+1) \times 1$. If the number of variables is not very large, the inversion and multiplication of matrices of that size can be easily handled by R, so our main target is to compute $X^\prime X$ and $X^\prime y$ in SQL. The two approaches agree up to rounding error. The computation takes about 17 seconds, far more than the matrix operation, but it consumes nearly no extra memory to accomplish the computation, which is a typical example of "trading time for space". Furthermore, you may have noticed that the computation of sum(x0*x0), sum(x0*x1), ..., sum(x5*x5) can be parallelized by opening several connections to the database simultaneously, so if you have a multi-processor server, you may drastically reduce the time after some rearrangement. The whole source code can be downloaded here.
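For readers who prefer to see the trick outside R, here is a sketch using Python's sqlite3 module: accumulate $X^\prime X$ and $X^\prime y$ with SQL sums, then solve the small system in memory. The table and column names (regdata, x0..x5, y) are assumptions mirroring the post's schema:

    import sqlite3
    import numpy as np

    cols = [f"x{i}" for i in range(6)]
    con = sqlite3.connect("regression.db")

    xtx = np.empty((6, 6))
    xty = np.empty(6)
    for i, ci in enumerate(cols):
        for j, cj in enumerate(cols[i:], start=i):
            s = con.execute(f"SELECT SUM({ci}*{cj}) FROM regdata").fetchone()[0]
            xtx[i, j] = xtx[j, i] = s        # X'X is symmetric: 21 sums, not 36
        xty[i] = con.execute(f"SELECT SUM({ci}*y) FROM regdata").fetchone()[0]

    beta = np.linalg.solve(xtx, xty)         # solve (X'X) beta = X'y
    print(beta)

Exploiting symmetry means only 21 of the 36 cross-sums are computed, and each SUM query can be farmed out to its own database connection, which is exactly the parallelization mentioned above.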
CommonCrawl
I am calculating value-weighted returns with monthly dividends reinvested, and for some reason when I sum the daily returns, some are a little bit off from the monthly returns. Is this normal? I've solved part a, but am struggling with b and c. $x_m$ is the market portfolio vector, and I think $T$ should be a diagonal matrix. Any hints greatly appreciated! Can anyone explain how Hull gets from the stock returns to continuously compounded stock returns? Does anyone know why the SMB data published in the 3-factor and 5-factor data files on French's website are different? Which one should be used then? Does it make sense to combine different modified durations?
CommonCrawl
When they meet each month and share their progress, they are discouraged if another friend fails their resolution, making it less likely they keep their own resolution. Suppose that once a friend fails, it reduces the chance the others succeed by $10$ percentage points. For example, if $2$ friends have failed so far during the year, the chance each of the others keeps their resolution during the next month is $90\% - 2 \times 10\% = 70\%$. Challenge: In answering the second question, you'll notice that a lot of casework is needed. For those of you with some programming experience, try to write a program to help answer/approximate the chance that all five friends report failing during the April, May, June, etc., meeting. In general, try to find the probability that all five friends report failing after $12$ months. Your program can either simulate the problem (to come up with an approximate answer) or calculate the cases directly (to come up with an exact answer).
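Here is one possible simulation (a sketch under my own modeling assumption: each month, every surviving friend independently succeeds with probability $0.9 - 0.1 \times$ the number of friends who had already failed when the month began):

    import random

    def all_failed_by(months=12, friends=5, base=0.9, penalty=0.1, trials=100000):
        """Estimate P(all friends have failed within `months` meetings)."""
        count = 0
        for _ in range(trials):
            failed = 0
            for _ in range(months):
                p = max(0.0, base - penalty * failed)
                # each surviving friend fails this month with probability 1 - p
                failed += sum(random.random() > p for _ in range(friends - failed))
                if failed == friends:
                    break
            count += failed == friends
        return count / trials

    print(all_failed_by())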
CommonCrawl
Here $y$ is the response vector, $X$ is the design matrix with (say) $p+1$ columns (the first column in this matrix is a vector of all ones corresponding to the intercept), $\beta=(\beta_0,\beta_1,\ldots,\beta_p)^T$ is the vector of regression coefficients and $\varepsilon$ is the random error. In this model with $p$ regressors or predictor variables, the parameters $\beta_j,\,j=0,1,\ldots,p$ are simply called the regression coefficients. In fact, $\beta_j$ represents the expected change in the response $y$ per unit change in $x_j$ when all of the remaining regressor variables $x_i\,(i\ne j)$ are held constant. For this reason, the parameters $\beta_j,\,j=1,2,\ldots,p$ are often called partial regression coefficients. The parameter $\beta_0$ is of course separately called the intercept. In simple linear regression we have $p=1$ and the regression coefficient $\beta_1$ is simply called the slope.
CommonCrawl
Where do we need the axiom of choice in Riemannian geometry? A friend of mine is a differential geometer, and keeps insisting that he doesn't need the axiom of choice for the things he does. I'm fairly certain that's not true, though I haven't dug into the details of Riemannian geometry (or the real analysis it's based on, or the topology, or the theory of vector spaces, etc...) to try and find a theorem or construction that uses the axiom of choice, or one of its logical equivalents. So do you know of any result in Riemannian geometry that needs the axiom of choice? There should be some somewhere; I particularly suspect that one or more are hidden in the basic topology results one uses. It looks to me like the Arzelà--Ascoli theorem needs at least some weak form of choice. (I have started an MO question to clarify this.) One often uses this in geometry; for example, to guarantee the existence of minimizing geodesics connecting pairs of points. Edit: See Andres Caicedo's answer on MO (at above link). The answer is affirmative. Also, the database list of equivalents he mentions contains some very innocuous-looking statements that I bet your friend has never thought twice about using. It is very possible that your colleague generally avoids using the axiom of choice, because manifolds are relatively concrete objects. Assuming that your friend is studying only separable manifolds, or better yet only compact ones, that makes it possible to do many constructions very explicitly, without using the axiom of choice. This is similar to the way that various principles of analysis that require the axiom of choice in general do not require the axiom of choice when they are applied to Euclidean spaces. So you're right that it may be easier to find AC in the background results. The trouble is that many of these background results are studied in far more generality than they are used. For example, suppose an analyst uses the fact that $[0,1]\times[0,1]$ is compact. This fact follows from Tychonoff's theorem, which for general topological spaces does require the axiom of choice. But we could prove the compactness of the unit square more directly, avoiding the axiom of choice altogether (this relies on the separability of the square, in particular). So sometimes the axiom of choice is used for convenience, by the invocation of a very general result, but it could be avoided if necessary. If we were to study only smooth manifolds that had already been embedded into Euclidean space, I suspect that we would be able to do pretty much everything in Zermelo-Fraenkel set theory without the axiom of choice. But it would take careful attention in the proofs to make sure that we replace choice-based techniques with alternative methods. I know this isn't a direct answer to the question, but I think it's relevant since it explains a caveat with possible answers: just because a general result requires AC doesn't mean that AC is required for all consequences of that result.
I highly doubt that a proof can be crafted without some reasonably serious functional analysis; the standard approach uses elliptic theory and Sobolev theory, which requires the Banach-Alaoglu theorem, which in turn requires the Tychonoff theorem. In general I would wager that any result which involves geometric PDE theory (including some results in, say, minimal surface theory which genuinely are Riemannian) is going to demand the axiom of choice at some level. Other than that, you might check out the work of Alexander Nabutovsky. He has obtained some really serious results about the structure of geodesics and on the moduli space of Riemannian metrics using techniques from logic and computability theory; I wouldn't be surprised if AC is hiding somewhere.
CommonCrawl
Abstract. We use Renormalization Group methods to prove detailed long time asymptotics for the solutions of the complex Ginzburg-Landau equations with initial data approaching, as $x\rightarrow\pm\infty$, different spiralling stationary solutions. A universal pattern is formed, depending only on this asymptotics at spatial infinity.
CommonCrawl
The determinant and trace (and characteristic polynomial coefficients) are well-known similarity invariants of a matrix. There are more if we only allow permutation similarities (swapping a pair of rows, and swapping the corresponding pair of columns). How many independent invariants does an $n \times n$ matrix have? I assume it would be $n^2$. How many invariants (polynomials of the matrix's elements) must be known to determine the matrix up to permutation-similarity? I don't know if it's too much to ask for a general formula for all the polynomials. This equation cannot be used, in general, to reduce the system to 4 equations, because of the possibility that one factor on the left is zero, putting the other factor out of reach. (This is similar to the equation of a line, $Ax + By = C$; there are 3 parameters, but only 2 degrees of freedom. This is fixed by $x \cos\alpha + y \sin\alpha = D$, but I don't want to use trig functions in the matrix context.) So all 5 invariants are necessary to determine the matrix. In the $n \times n$ case, $n$ invariants will be the elementary symmetric polynomials of the matrix's diagonal elements, others will depend only on the off-diagonal elements, and others will be mixed. This is one way of classifying the invariants. We could refine this classification by specifying the number of on-diagonal and off-diagonal factors in each term.
CommonCrawl
In this paper we first study the stability of Ritz-Volterra projection (see below) and its maximum norm estimates, and then we use these results to derive some $L^\infty$ error estimates for finite element methods for parabolic integro-differential equations. Qun Lin, Qi-ding Zhou: Superconvergence Theory of Finite Element Methods. Book to appear.
CommonCrawl
When reporting on laptop or desktop screens, most catalogues only give the diagonal length. Additionally, we know that nearly all computer screens these days are (approximately) 16 to 9. How can we use this information to figure out the width and height of the screen? We use trigonometry. Consider the graphic on the right: we want to know $w$ and $h$; we know $d$ and that $w/h = 16/9$. But we have to know $\alpha$... or do we? After all, $\cot(\alpha)$ equals $w/h$. Since we know that $w/h = 16/9$, we can take the inverse cotangent to get $\alpha$. Pretty awesome, isn't it? Even more awesome, this angle is the same for all screen sizes, as long as they have the ratio 16/9. However, just for the fun of it, let's also do it another way. Note that the triangle above has a right angle. Therefore, it follows from the Pythagorean theorem that the square of the hypotenuse is equal to the sum of the squares of the legs (the other two sides): $d^2 = w^2 + h^2$. Writing $w = 16k$ and $h = 9k$ gives $w = 16d/\sqrt{337}$ and $h = 9d/\sqrt{337}$. This is all you need to calculate $w$ and $h$. This only holds for 16/9 screens! I have calculated some of these values for you. Enjoy!
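The whole calculation fits in a few lines; here is a small helper (my own sketch) that works for any aspect ratio:

    import math

    def screen_size(diagonal, ratio_w=16, ratio_h=9):
        """Width and height from the diagonal, via d^2 = w^2 + h^2
        with w : h = ratio_w : ratio_h (so w = 16*d/sqrt(337) for 16/9)."""
        k = diagonal / math.hypot(ratio_w, ratio_h)
        return ratio_w * k, ratio_h * k

    w, h = screen_size(15.6)        # a common laptop diagonal, in inches
    print(f"{w:.2f} x {h:.2f}")     # about 13.60 x 7.65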
CommonCrawl
There are several approaches for using computers in deriving mathematical proofs. For their illustration, we provide an in-depth study of using computer support for proving one complex combinatorial conjecture -- correctness of a strategy for the chess KRK endgame. The final, machine verifiable, result presented in this paper is that there is a winning strategy for white in the KRK endgame generalized to $n \times n$ board (for natural $n$ greater than $3$). We demonstrate that different approaches for computer-based theorem proving work best together and in synergy and that the technology currently available is powerful enough for providing significant help to humans deriving complex proofs.
CommonCrawl
We report deduction of the Eliashberg function $\alpha^2 F(\theta,\omega)$ at energy $\omega$ and along momentum cuts at angles $\theta$ normal to the Fermi surface from high-resolution laser angle-resolved photoemission spectroscopy on slightly underdoped Bi2212 in the normal and superconducting states. Our principal result is that despite the angle dependence of the extracted single-particle self-energy, the Eliashberg function in the normal state collapses onto a single function of $\omega$ independent of the angle. It has a peak around 0.05 eV, flattens out above 0.1 eV with an angle-dependent cut-off. The cut-off energy is given by the intrinsic value of about 0.4 eV or the energy of the band bottom in direction $\theta$, whichever is lower. These results are consistent only with fluctuation spectra which have a correlation length of the lattice constant or shorter. In the superconducting state, the deduced $\alpha^2 F(\theta,\omega)$ exhibits a new peak around 0.015 eV in addition to the 0.05 eV peak and flat spectrum as in the normal state. Both peaks become enhanced as $T$ is lowered or the angle moves away from the nodal direction. The implication of these findings is discussed.
CommonCrawl
I'm new to Sage, and I've been struggling to get started with (what I thought) should be a basic construction. I have an $8$-element commutative ring $R$ which is constructed as a quotient of a polynomial ring in two variables. I need to examine all of the quotients of the right $R$-module $R\times R$. I tried to use M=R^2 and got something that looked promising, but when I tried to use the quotient_module method, I kept getting errors. I saw in the docs for that method that quotient_module isn't fully supported, so I started looking at the CombinatorialFreeModule class too. Can someone recommend an idiomatic way to accomplish the task? I have been plagued by NotImplemented errors and a myriad of other error messages every step of the way, even when just attempting to find a method to list all elements of my $8$-element ring. All the examples I've seen really look like they stick to basic linear algebra, or free $\mathbb Z$-modules. I just want to do something similar for my small ring of $8$ elements. list(S) # <-- NotImplementedError("object does not support iteration") I noticed it worked for the univariate case though. What's a good way to recover the elements? Had the same problem with a univariate polynomial ring over $F_2$ mod $(x^3)$. Obviously the messages are informative enough about what they think is wrong. But this seems like such an elementary task... is there some other class that can handle such a construction? Indeed it would be helpful to have your example at hand. @vdelecroix I've improved the question with more details, now that I have the thing in front of me. Indeed, the class QuotientRing_generic that is used in your example is not intended to work well in the finite situation. It was designed with polynomial rings over ZZ or QQ in mind. This is just a lack of interest from the current programmers of Sage. As you might know, Sage is developed by volunteers and any contribution is more than welcome, see http://doc.sagemath.org/html/en/developer/. @vdelecroix I would be happy to contribute a module to better handle finite rings and their free modules (I am a professional Python programmer), but it will be a while before I understand the conventions in the codebase. Thanks for taking the time to comment. My function is too general to be efficient. In your situation, you have a vector space and there might be some ways to take advantage of it. In particular, you can manually implement an isomorphism between the vector space GF(2)^4 and your ring and start playing with that.
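Along the lines of that last comment, here is one way to enumerate the elements by hand through the GF(2)-vector-space structure, using Sage's plain Python API. The ideal $(x^2, y^2, xy)$ below is only a stand-in that happens to give an 8-element ring; substitute whatever ideal defines yours:

    # Sage session
    from itertools import product

    R = PolynomialRing(GF(2), ['x', 'y'])
    x, y = R.gens()
    S = R.quotient(R.ideal([x**2, y**2, x*y]))   # GF(2)-basis: 1, xbar, ybar
    xb, yb = S.gens()

    elements = [S(a) + S(b)*xb + S(c)*yb for a, b, c in product([0, 1], repeat=3)]
    assert len(set(map(str, elements))) == 8     # manual stand-in for list(S)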
CommonCrawl
Abstract: A distance-based method to reconstruct a phylogenetic tree with $n$ leaves takes a distance matrix, $n \times n$ symmetric matrix with $0$s in the diagonal, as its input and reconstructs a tree with $n$ leaves using tools in combinatorics. A safety radius is a radius from a tree metric (a distance matrix realizing a true tree) within which the input distance matrices must all lie in order to satisfy a precise combinatorial condition under which the distance-based method is guaranteed to return a correct tree. A stochastic safety radius is a safety radius under which the distance-based method is guaranteed to return a correct tree within a certain probability. In this paper we investigated stochastic safety radii for the neighbor-joining (NJ) method and balanced minimal evolution (BME) method for $n = 5$.
CommonCrawl
Warped accretion discs are expected in many protostellar binary systems. In this paper, we study the long-term evolution of disc warp and precession for discs with dimensionless thickness $H/r$ larger than their viscosity parameter $\alpha$, such that bending waves can propagate and dominate the warp evolution. For small warps, these discs undergo approximately rigid-body precession. We derive analytical expressions for the warp/twist profiles of the disc and the alignment timescale for a variety of models. Applying our results to circumbinary discs, we find that these discs align with the orbital plane of the binary on a timescale comparable to the global precession time of the disc, and typically much smaller than its viscous timescale. We discuss the implications of our finding for the observations of misaligned circumbinary discs (such as KH 15D) and circumbinary planetary systems (such as Kepler-413); these observed misalignments provide useful constraints on the uncertain aspects of the disc warp theory. On the other hand, we find that circumstellar discs can maintain large misalignments with respect to the plane of the binary companion over their entire lifetime. We estimate that inclination angles larger than $\sim 20^\circ$ can be maintained for typical disc parameters. Overall, our results suggest that while highly misaligned circumstellar discs in binaries are expected to be common, such misalignments should be rare for circumbinary discs. These expectations are consistent with current observations of protoplanetary discs and exoplanets in binaries, and can be tested with future observations.
CommonCrawl
A fundamental operation in computational geometry is determining whether two objects touch. For example, in a game that involves shooting, we want to determine if a player's shot hits a target. A shot is a two dimensional point, and a target is a two dimensional enclosed area. A shot hits a target if it is inside the target. The boundary of a target is inside the target. Since it is possible for targets to overlap, we want to identify how many targets a shot hits. The figure above illustrates the targets (large unfilled rectangles and circles) and shots (filled circles) of the sample input. The origin $(0, 0)$ is indicated by a small unfilled circle near the center. Input starts with an integer $1 \leq m \leq 30$ indicating the number of targets. Each of the next $m$ lines begins with the word rectangle or circle and then a description of the target boundary. A rectangular target's boundary is given as four integers $x_1~ y_1~ x_2~ y_2$, where $x_1<x_2$ and $y_1<y_2$. The points $(x_1,y_1)$ and $(x_2,y_2)$ are the bottom-left and top-right corners of the rectangle, respectively. A circular target's boundary is given as three integers $x~ y~ r$. The center of the circle is at $(x,y)$ and the $0<r\leq 1\, 000$ is the radius of the circle. After the target descriptions is an integer $1 \leq n \leq 100$ indicating the number of shots that follow. The next $n$ lines each contain two integers $x~ y$, indicating the coordinates of a shot. All $x$ and $y$ coordinates for targets and shots are in the range $[-1\, 000,1\, 000]$. For each of the $n$ shots, print the total number of targets the shot hits.
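A straightforward solution (one possible sketch) checks each shot against every target, treating boundaries as inside for both shapes:

    import sys

    def main():
        data = sys.stdin.read().split()
        i = 0
        m = int(data[i]); i += 1
        targets = []
        for _ in range(m):
            kind = data[i]; i += 1
            if kind == "rectangle":
                x1, y1, x2, y2 = map(int, data[i:i + 4]); i += 4
                targets.append(("r", (x1, y1, x2, y2)))
            else:
                x, y, r = map(int, data[i:i + 3]); i += 3
                targets.append(("c", (x, y, r)))
        n = int(data[i]); i += 1
        for _ in range(n):
            sx, sy = int(data[i]), int(data[i + 1]); i += 2
            hits = 0
            for kind, t in targets:
                if kind == "r":
                    x1, y1, x2, y2 = t
                    hits += x1 <= sx <= x2 and y1 <= sy <= y2
                else:
                    x, y, r = t
                    hits += (sx - x) ** 2 + (sy - y) ** 2 <= r * r
            print(hits)

    main()

Comparing squared distances keeps the circle test in exact integer arithmetic, so shots on the boundary are counted correctly.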
CommonCrawl
How many keys are used for symmetric key encryption? How many keys are required for public key encryption? The encryption-by-XOR method is known to be extremely difficult to use properly (basically, the key has to be as long as the data to encrypt, which is very often highly impractical; but using a shorter key means reusing some parts of the key, which opens many deadly weaknesses). Asymmetric encryption (or public-key cryptography) uses a separate key for encryption and decryption. The private key is used to decrypt messages from other users. In asymmetric or public key encryption, different keys are used for encryption and decryption; usually only 2 keys are used per user, a public key and a private key. Asymmetric cryptography, using a key pair for each user, needs $n$ key pairs (that is, $2n$ keys) for $n$ users. Asymmetric encryption uses at least 2 keys, hence the asymmetry. Symmetric cryptography, by contrast, needs $n(n-1)/2$ keys for $n$ users to communicate pairwise. The sender encrypts the message using the recipient's public key. Each subject S has a publicly disclosed key K. One of the two keys is a public key, which anyone can use to encrypt a message for the owner of that key. An encryption system in which the sender and receiver of a message share a single, common key that is used to encrypt and decrypt the message is called symmetric. The basics: while symmetric encryption makes use of a single key for both encryption and decryption, asymmetric encryption uses different keys for encryption and decryption. The Data Encryption Standard (DES) is a symmetric-key block cipher published by the National Institute of Standards and Technology (NIST). Contrast this with public-key cryptology, which utilizes two keys: a public key to encrypt messages and a private key to decrypt them. ECC is a public key encryption technique based on elliptic curves. DES is an implementation of a Feistel cipher. The secret cryptographic key is called the symmetric key. Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for both encryption of plaintext and decryption of ciphertext; the keys may be identical or there may be a simple transformation to go between the two keys. For a group of 1000 users, symmetric encryption requires a separate key for each pair, $1000 \times 999 / 2 = 499{,}500$ keys (not $1000 \times 1000$), while asymmetric encryption needs just $2000$, each user having one public and one private key. For every user, there is 1 private key and 1 public key. Which cryptography system generates encryption keys that could be used with DES, AES, IDEA, RC5 or any other symmetric cryptography solution? Diffie-Hellman. How many keys are used with asymmetric or public key cryptography? Two: one key is used for encryption and the other is used for decryption. The two keys are referred to as the public key and the private key. The public key is published for the world to see, and the keys for encryption and decryption are not the same, so they are not "symmetric". In 3DES, 3 distinct keys are used: K1, K2 and K3.
First encrypt with K1, then Decrypt with K2 and finally Encrypt with K3 so actual key length used in 3DES is 168 excludin 8 bit party from each 64 bits means (56+56+56 becomes 168). Public key cryptography is a kind of asymmetric cryptography. Usual symmetric encryption systems are much more complex, and strive at providing "your money worth" from the key. Automatically sync Active Directory passwords in real-time across Office 365, Salesforce and more. Kousik Nandy rightly stated that DHE (Diffie-Hellman Exchange) can be used for this purpose with protection of RSA keys …. Anyone can use the encryption key (public key) to encrypt a message. However, decryption keys (private keys) are secret. This way only the intended receiver can decrypt the message. The most common asymmetric encryption algorithm is RSA; however, we will discuss algorithms later in this …. Symmetric-key cryptography In symmetric-key cryptography, we encode our plain text by mangling it with a secret key. Decryption requires knowledge of the same key, and reverses the mangling. A secret key, which can be a number, a word, or just a string of random letters, is applied to the text of a message to change the content in a particular way. It applies a public key for encryption, while a private key is used for decryption. Symmetric Encryption Don't #6: Don't share a single key across many devices A wise man once said that a secret is something you tell one other person. I'm not sure he realized it, but what he was saying is this: don't put the same symmetric key into a large number of devices (or software instances, etc.) if you want it to remain secret. Symmetric key cryptography is any cryptographic algorithm that is based on a shared key that is used to encrypt or decrypt text/cyphertext, in contract to asymmetric key cryptography, where the encryption and decryption keys are linked by different. The symmetric key will then be transmitted to the receiver where the receiver will first use its private key, and then the senders public key to decrypt the original message, which is the shared symmetric key. At this pont, the sender and the receiver are able to encrypt data by using the shared symmetric key, and transfer files very efficiently. It uses 16 round Feistel structure. Though, key length is 64-bit, DES has an effective. Symmetric encryption, also referred to as conventional encryption or single key encryption was the only type of encryption in use prior to the development of public-key encryption in 1976. How many keys would Public-key Encryption require to protect group N. Public Key Encryption requires 2n keys or two keys per person in group N. Public key encryption also does not require 'pre sharing' the secret key before communication may start. Symmetric key encryption is a type of encryption in which the same cryptographic key is used for both encryption and decryption. The requirement for this encryption method is, both the parties need to have access to the cryptographic key using which the data is encrypted and decrypted. Elliptic Curve Cryptography (ECC) is gaining favor with many security experts as an alternative to RSA for implementing public-key cryptography. Public key cryptography involves two keys: a private key that can be used to encrypt, decrypt, and digitally sign files, and a public key that can be used to encrypt and a verify digital signatures. More on this in the Symmetric and Asymmetric keys section. 
The public key is included in the encryption of the message, and the private key is used to decrypt it. Public-key cryptography involves two related keys for each recipient involved - a private key which is a secret known only by the recipient, and a related public key which is known by all senders. Keys used in this algorithm may be up to 256 bits in length and as a symmetric technique, only one key is needed. Twofish is regarded as one of the fastest of its kind, and ideal for use in both hardware and software environments. What type of key or keys are used in symmetric cryptography How many keys are used with symmetric key cryptography Which of the following is not true concerning symmetric key cryptography This preview has intentionally blurred sections. Tech giant IBM partnered with shipping leviathan Maersk to develop a blockchain-based shipping platform over a year ago. Our IBM SPSS Statistics Level 1 v2 exam prep has taken up a large part of market. The IBM SPSS® software platform offers advanced statistical analysis, a vast library of machine-learning algorithms, text analysis, open-source extensibility, integration with big data and seamless deployment into applications. Which statement indicates that IBM SPSS Statistics is the best fit.
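As a concrete illustration of the hybrid pattern described above, here is a minimal sketch using Python's third-party `cryptography` package; the 2048-bit key size and the message are illustrative assumptions, and this is a sketch of the idea rather than a vetted production recipe:

```python
# Hybrid encryption sketch: a symmetric Fernet key encrypts the payload,
# and the recipient's RSA public key encrypts ("wraps") that symmetric key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's key pair: the public half may be shared with anyone.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the message with a fresh symmetric key ...
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"attack at dawn")

# ... then encrypt the symmetric key itself with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(sym_key, oaep)

# Recipient: unwrap the symmetric key with the private key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"attack at dawn"
```

The design point is that the slow asymmetric operation touches only the short symmetric key, while the bulk data is handled by the fast symmetric cipher.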
CommonCrawl
Edit (Nov 1, 2015): Bounty awarded, but the full question (i.e., what is the optimal strategy) remains open at the time of this update.

Given a list of positive integers $1, \ldots, n$, two players (red and blue) alternate turns. On each turn, the player picks and circles (in their color) any un-circled number $k$ that has at least one proper divisor yet to be circled; their opponent responds by circling (in their own color) all remaining proper divisors of $k$. Then the roles switch (colors adjusted accordingly), and the game continues until there are no legal moves remaining. At that point, each player adds up their circled numbers, and the greater sum wins.

Question: What is the optimal strategy in the Factor Game?

My guess is that, for large enough $n$, picking the number that nets the most points on each individual turn is optimal. For example, if $n = 49$, then the first player would pick $47$ (to net $47 - 1 = 46$) and their opponent would respond with $49$ (to net $49 - 7 = 42$). But this strategy is admittedly short-sighted. I am hoping for a description of an optimal strategy and a proof that it is optimal (for $n$ large enough), but I would also be interested in special cases, heuristic arguments for what an optimal strategy would be, and heuristic arguments for why this problem is difficult.

Motivation: This game is sometimes played in elementary schools (or, in my classes, with elementary school teachers) to explore concepts like prime, composite, divisor, and proper divisor. In such settings, the numbers are usually presented in an array - often a square array - and so my particular interest is when $n$ is a square. You can play the Factor Game through the NCTM (National Council of Teachers of Mathematics) Illuminations site if you would like to experience it for yourself.

The greedy strategy may be described, as stated in the OP, for large enough $n$, as locally optimal number-picking so as to net the most points on each individual turn. With this approach, it is possible to avoid the "play low" game described below, along with its obvious pitfalls. A direct search gives $5, 7, 11, 25$ as the winning opening numbers for player $1$ in the given range (hence, choose $25$ as the opening play, since it is the largest odd semiprime in that range), and checking the resulting score margins gives all positive values. Modifying the first choice to $2$ (as opposed to the largest prime in the range) in the cases where the "short-sighted" solution fails appears to yield the winning strategy; strat[n] for any $n\geq30$ should then force a win for player $1$ for most $n$. Note that player $2$ plays the locally optimal strategy here; though I would be surprised if an alternative strategy could force a win for player $2$, that possibility is yet to be explored. Of course, the game is effectively reversed if player $1$ chooses $1$ to start, thereby allowing player $2$ to go first; if player $2$ plays by the same strategy, he will choose $2$, and so on, making the game rather absurd. A rule that neither player may forfeit a turn should therefore be adhered to if a sensible solution is to be arrived at.
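For small boards, optimal play can be computed exactly by a memoized minimax (negamax) search over the set of uncircled numbers. A minimal Python sketch follows; it is my own illustration, not code from the discussion above, and since the state space grows as $2^n$ it is only feasible for small $n$:

```python
from functools import lru_cache

def proper_divisors(k):
    """All divisors of k except k itself (so proper_divisors(1) == [])."""
    return [d for d in range(1, k) if k % d == 0]

def best_margin_from_start(n):
    """Best achievable score margin (mover minus opponent) on the board 1..n."""
    @lru_cache(maxsize=None)
    def margin(remaining):
        best = None
        for k in remaining:
            divs = [d for d in proper_divisors(k) if d in remaining]
            if not divs:
                continue  # k is legal only if an uncircled proper divisor remains
            nxt = remaining - {k} - set(divs)
            # The mover nets k - sum(divs); then the roles swap (negamax).
            value = k - sum(divs) - margin(nxt)
            best = value if best is None else max(best, value)
        return best if best is not None else 0  # no legal moves: game over
    return margin(frozenset(range(1, n + 1)))

# e.g. best_margin_from_start(10) > 0 would mean the first player
# can force a win on the board 1..10.
```

Unpicked numbers score for nobody, which is why the terminal value is $0$; conjectured strategies such as the greedy one can be checked against this exact solver wherever the search is tractable.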
CommonCrawl
Is the Couvreur et al. polynomial time attack on McEliece practical? "We give a polynomial time attack on the McEliece public key cryptosystem based on algebraic geometry codes. Roughly speaking, this attack runs in $O(n^4)$ operations in $\mathbb F_q$, where $n$ denotes the code length. Compared to previous attacks, it allows one to recover a decoding algorithm for the public key even for codes from high genus curves." Is this a practical attack against currently used implementations of the McEliece cryptosystem, with security parameters such as those recommended by Bernstein, Lange and Peters (2008)? If the answer is yes, then how much do we need to increase the security parameters $n, k, t$ to be safe? Or do we need to switch to another code/curve entirely? My understanding is that the attack only works against McEliece with algebraic geometry codes. The paper by Bernstein, Lange and Peters recommends parameters for McEliece with binary Goppa codes, so the attack does not apply against those parameters.
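To see why a genuinely polynomial $O(n^4)$ attack is devastating for the code families it covers, a back-of-envelope estimate suffices; the $10^9$ field operations per second throughput and the neglect of constant factors are assumptions:

```python
# Rough running-time estimate for an O(n^4) attack at an assumed
# 1e9 field operations per second, ignoring constant factors.
for n in (1024, 2048, 4096):  # illustrative code lengths
    ops = n ** 4
    print(f"n = {n}: ~{ops:.2e} ops, ~{ops / 1e9 / 3600:.1f} hours")
```

Even at $n = 4096$ this comes out to a few days on a single core, which is why merely increasing $n$ cannot rescue an algebraic-geometry-code instantiation; only switching code families (e.g., to the binary Goppa codes that the attack does not cover) restores security.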
CommonCrawl