source_id | question | response | metadata
---|---|---|---|
16,832 | I am looking for a lightweight source control system for use on "hobby" projects with only one person (myself) working on the project. Does anyone have any suggestions? Ideally it should interface with Visual Studio either natively or through another plug-in; outside of that, anything that works would be nice to replace Gmail as source control. | You can use assembla.com to host your project. They offer Subversion, Git and Mercurial hosting. I personally use their Subversion hosting for a free and private one-man project. As an added bonus, you also get a wiki and a ticketing system, which can help you manage your stuff. And the best thing is that you don't have to set up your own Subversion server and it is hosted off-site. It's really good for a free service. Personally, I use TortoiseSVN as my client, but it isn't integrated into Visual Studio. For the integration, you can try VisualSVN (not free) or AnkhSVN (free). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/16832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1185/"
]
} |
16,833 | I need to periodically download, extract and save the contents of http://data.dot.state.mn.us/dds/det_sample.xml.gz to disk. Anyone have experience downloading gzipped files with C#? | To compress:

```csharp
using (FileStream fStream = new FileStream(@"C:\test.docx.gzip", FileMode.Create, FileAccess.Write))
{
    using (GZipStream zipStream = new GZipStream(fStream, CompressionMode.Compress))
    {
        byte[] inputfile = File.ReadAllBytes(@"c:\test.docx");
        zipStream.Write(inputfile, 0, inputfile.Length);
    }
}
```

To decompress:

```csharp
using (FileStream fInStream = new FileStream(@"c:\test.docx.gz", FileMode.Open, FileAccess.Read))
{
    using (GZipStream zipStream = new GZipStream(fInStream, CompressionMode.Decompress))
    {
        using (FileStream fOutStream = new FileStream(@"c:\test1.docx", FileMode.Create, FileAccess.Write))
        {
            byte[] tempBytes = new byte[4096];
            int i;
            while ((i = zipStream.Read(tempBytes, 0, tempBytes.Length)) != 0)
            {
                fOutStream.Write(tempBytes, 0, i);
            }
        }
    }
}
```

Taken from a post I wrote last year that shows how to decompress a gzip file using C# and the built-in GZipStream class: http://blogs.msdn.com/miah/archive/2007/09/05/zipping-files.aspx As for downloading it, you can use the standard WebRequest or WebClient classes in .NET; a download-and-decompress sketch follows this entry. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/16833",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1786/"
]
} |
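A hedged sketch of the downloading step mentioned above, combining WebClient with GZipStream. The URL is the one from the question; the local paths are illustrative assumptions (the directory must exist):

```csharp
using System.IO;
using System.IO.Compression;
using System.Net;

class GzFetcher
{
    static void Main()
    {
        // Hypothetical paths; adjust for your environment.
        const string url = "http://data.dot.state.mn.us/dds/det_sample.xml.gz";
        const string gzPath = @"C:\temp\det_sample.xml.gz";
        const string xmlPath = @"C:\temp\det_sample.xml";

        using (WebClient client = new WebClient())
        {
            client.DownloadFile(url, gzPath); // fetch the gzipped payload
        }

        using (FileStream inStream = File.OpenRead(gzPath))
        using (GZipStream zip = new GZipStream(inStream, CompressionMode.Decompress))
        using (FileStream outStream = File.Create(xmlPath))
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = zip.Read(buffer, 0, buffer.Length)) > 0)
            {
                outStream.Write(buffer, 0, read); // write the decompressed XML
            }
        }
    }
}
```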
16,860 | Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are: running the tests becomes automatable and repeatable, and you can test at a much more granular level than point-and-click testing via a GUI. (Rytmis) My question is, what are the current "best practices" in terms of tools, as well as when and where to use unit testing as part of your daily coding? Let's try to be somewhat language agnostic and cover all the bases. | OK, here are some best practices from someone who doesn't unit test as much as he should... cough. Make sure your tests test one thing and one thing only. Write unit tests as you go, preferably before you write the code you are testing against. Do not unit test the GUI. Separate your concerns. Minimise the dependencies of your tests. Mock behaviour with mocks. A minimal single-assertion example follows this entry. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/16860",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/149/"
]
} |
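To make the "test one thing and one thing only" advice above concrete, here is a minimal hedged sketch of a single-assertion test. NUnit and the Calculator class are illustrative assumptions, not part of the original answer:

```csharp
using NUnit.Framework;

// Hypothetical class under test.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoPositiveNumbers_ReturnsTheirSum()
    {
        var calculator = new Calculator();

        int result = calculator.Add(2, 3);

        // Exactly one behaviour is verified per test.
        Assert.AreEqual(5, result);
    }
}
```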
16,861 | What is the best way to sanitize user input for a Python-based web application? Is there a single function to remove HTML characters and any other necessary character combinations to prevent an XSS or SQL injection attack? | Here is a snippet that will remove all tags not on the white list, and all tag attributes not on the attributes whitelist (so you can't use onclick). It is a modified version of http://www.djangosnippets.org/snippets/205/, with the regex on the attribute values to prevent people from using href="javascript:...", and other cases described at http://ha.ckers.org/xss.html (e.g. <a href="ja&#x09;vascript:alert('hi')"> or <a href="ja vascript:alert('hi')">, etc.). As you can see, it uses the (awesome) BeautifulSoup library.

```python
import re
from urlparse import urljoin
from BeautifulSoup import BeautifulSoup, Comment

def sanitizeHtml(value, base_url=None):
    rjs = r'[\s]*(&#x.{1,7})?'.join(list('javascript:'))
    rvb = r'[\s]*(&#x.{1,7})?'.join(list('vbscript:'))
    re_scripts = re.compile('(%s)|(%s)' % (rjs, rvb), re.IGNORECASE)
    validTags = 'p i strong b u a h1 h2 h3 pre br img'.split()
    validAttrs = 'href src width height'.split()
    urlAttrs = 'href src'.split()  # Attributes which should have a URL
    soup = BeautifulSoup(value)
    for comment in soup.findAll(text=lambda text: isinstance(text, Comment)):
        comment.extract()  # Get rid of comments
    for tag in soup.findAll(True):
        if tag.name not in validTags:
            tag.hidden = True
        attrs = tag.attrs
        tag.attrs = []
        for attr, val in attrs:
            if attr in validAttrs:
                val = re_scripts.sub('', val)  # Remove scripts (vbs & js)
                if attr in urlAttrs:
                    val = urljoin(base_url, val)  # Calculate the absolute url
                tag.attrs.append((attr, val))
    return soup.renderContents().decode('utf8')
```

As the other posters have said, pretty much all Python db libraries take care of SQL injection, so this should pretty much cover you. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/16861",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2019/"
]
} |
16,891 | As you may know, in VS 2008 ctrl + tab brings up a nifty navigator window with a thumbnail of each file. I love it, but there is one tiny thing that is annoying to me about this feature: the window stays around after releasing the ctrl key. When doing an alt + tab in Windows, you can hit tab to get to the item you want (while still holding down the alt key), and then when you find what you want, lifting up on the alt key selects that item. I wish VS 2008 would do the same. For me, when I lift off of ctrl, the window is still there. I have to hit enter to actually select the item. I find this annoying. Does anyone know how to make VS 2008 dismiss the window on the release of the ctrl key? | You probably have the text-to-speech narrator enabled. http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2467648&SiteID=1&mode=1 Just uncheck all checkboxes under "Text-To-Speech" in the Narrator software. To open Narrator using the keyboard, press CTRL + ESC, press R, type "narrator", and then press Enter. This one drove me crazy for several months until I found this posting. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/16891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1789/"
]
} |
16,918 | I've been looking for a decent guide to Haskell for some time, but haven't been able to find one that seems interesting enough to read through and/or makes sense. I've had prior exposure to Haskell a few years back, but I can't remember much about it. I remember the "Aha!"-feeling was incredible when I finally got it, and it was actually fun to play with, so I'm looking to rediscover the lost art of Haskell. I'm familiar with Ruby and its functional programming tricks, so I think I'm not completely in the dark. Any links? | Some good places to start are:

- The Gentle Introduction To Haskell
- Problem Solving in Haskell
- Happy Learn Haskell Tutorial

Other resources:

- Interesting blog entry on a Study plan for Haskell via the Wayback Machine
- HaskellWiki
- Generic Haskell User Guide (PDF) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/16918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2018/"
]
} |
16,939 | I am reading a binary file into a parsing program. I will need to iterate through the file and look for certain markers so I can split the file up and pass those parts into their respective object’s constructors. Is there an advantage to holding the file as a stream, either MemoryStream or FileStream, or should it be converted into a byte[] array? Keith | A byte[] or MemoryStream will both require bringing the entire file into memory. A MemoryStream is really a wrapper around an underlying byte array. The best approach is to have two FileStream (one for input and one for output). Read from the input stream looking for the pattern used to indicate the file should be separated while writing to the current output file. You may want to consider wrapping the input and output files in a BinaryReader and BinaryWriter respectively if they add value to your scenario. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/16939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1048/"
]
} |
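A hedged sketch of the two-FileStream approach described in the entry above: read the input stream, watch for a marker, and switch output files when it appears. The marker byte and paths are illustrative assumptions, and a buffered BinaryReader would be faster in practice than this byte-at-a-time loop:

```csharp
using System.IO;

class FileSplitter
{
    static void Main()
    {
        const byte marker = 0xFF; // hypothetical record separator
        int part = 0;

        using (FileStream input = File.OpenRead(@"C:\data\input.bin"))
        {
            FileStream output = File.Create(@"C:\data\part" + part + ".bin");
            int b;
            while ((b = input.ReadByte()) != -1) // -1 signals end of stream
            {
                if (b == marker)
                {
                    // Start a new output file at each marker.
                    output.Dispose();
                    part++;
                    output = File.Create(@"C:\data\part" + part + ".bin");
                }
                else
                {
                    output.WriteByte((byte)b);
                }
            }
            output.Dispose();
        }
    }
}
```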
16,963 | Curious if others feel the same as me. To me, controls such as datagrid/gridview/formview/etc. are great for presentations or demos only. To take the time to tweak these controls and override their default behavior (hooking into their silly events etc.) is a big headache. The only control that I use is the repeater, since it offers me the most flexibility over the others. In short, they are pretty much bloatware. I'd rather weave my own HTML/CSS and use my own custom paging queries. Again, if you need to throw up a quick page these controls are great (especially if you are trying to woo people into the ease of .NET development). I must be in the minority, otherwise MS wouldn't have dedicated so much development time to these types of controls... | Anyone that thinks nobody uses *Grid controls has clearly never worked on an internal corporate webapp. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/16963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1368/"
]
} |
16,971 | I would like to have a nice template for doing this in development. How do I reset an increment identity's starting value in SQL Server? | DBCC CHECKIDENT('TableName', RESEED, 0) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/16971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1976/"
]
} |
16,991 | I've been using Eclipse with RDT (not RadRails) a lot lately, and I'm quite happy with it, but I'm wondering if you guys know any decent alternatives. I know NetBeans also supports Ruby these days, but I'm not sure what it has to offer over Eclipse. Please list any features you think are brilliant or useful when suggesting an IDE; it makes it easier to compare. Also, I said Ruby, not Rails. While Rails support is a plus, I prefer things to be non-Rails-centric. It should also be available on Linux and optionally Solaris. | Have you tried Aptana? It's based on Eclipse and they have a sweet Rails plugin. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/16991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2018/"
]
} |
17,017 | How do I convert a DateTime structure to its equivalent RFC 3339 formatted string representation and/or parse this string representation back to a DateTime structure? The RFC-3339 date-time format is used in a number of specifications such as the Atom Syndication Format. | This is an implementation in C# of how to parse and convert a DateTime to and from its RFC-3339 representation. The only restriction it has is that the DateTime is in Coordinated Universal Time (UTC).

```csharp
using System;
using System.Globalization;

namespace DateTimeConsoleApplication
{
    /// <summary>
    /// Provides methods for converting <see cref="DateTime"/> structures to and from the equivalent RFC 3339 string representation.
    /// </summary>
    public static class Rfc3339DateTime
    {
        //============================================================
        // Private members
        //============================================================
        #region Private Members
        /// <summary>
        /// Private member to hold array of formats that RFC 3339 date-time representations conform to.
        /// </summary>
        private static string[] formats = new string[0];

        /// <summary>
        /// Private member to hold the DateTime format string for representing a DateTime in the RFC 3339 format.
        /// </summary>
        private const string format = "yyyy-MM-dd'T'HH:mm:ss.fffK";
        #endregion

        //============================================================
        // Public Properties
        //============================================================
        #region Rfc3339DateTimeFormat
        /// <summary>
        /// Gets the custom format specifier that may be used to represent a <see cref="DateTime"/> in the RFC 3339 format.
        /// </summary>
        /// <value>A <i>DateTime format string</i> that may be used to represent a <see cref="DateTime"/> in the RFC 3339 format.</value>
        /// <remarks>
        /// <para>
        /// This method returns a string representation of a <see cref="DateTime"/> that
        /// is precise to the three most significant digits of the seconds fraction; that is, it represents
        /// the milliseconds in a date and time value. The <see cref="Rfc3339DateTimeFormat"/> is a valid
        /// date-time format string for use in the <see cref="DateTime.ToString(String, IFormatProvider)"/> method.
        /// </para>
        /// </remarks>
        public static string Rfc3339DateTimeFormat
        {
            get { return format; }
        }
        #endregion

        #region Rfc3339DateTimePatterns
        /// <summary>
        /// Gets an array of the expected formats for RFC 3339 date-time string representations.
        /// </summary>
        /// <value>
        /// An array of the expected formats for RFC 3339 date-time string representations
        /// that may be used in the <see cref="DateTime.TryParseExact(String, string[], IFormatProvider, DateTimeStyles, out DateTime)"/> method.
        /// </value>
        public static string[] Rfc3339DateTimePatterns
        {
            get
            {
                if (formats.Length > 0)
                {
                    return formats;
                }
                else
                {
                    formats = new string[11];

                    // Rfc3339DateTimePatterns
                    formats[0] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffffK";
                    formats[1] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'ffffffK";
                    formats[2] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffK";
                    formats[3] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'ffffK";
                    formats[4] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffK";
                    formats[5] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'ffK";
                    formats[6] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fK";
                    formats[7] = "yyyy'-'MM'-'dd'T'HH':'mm':'ssK";

                    // Fall back patterns
                    formats[8] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffffK"; // RoundtripDateTimePattern
                    formats[9] = DateTimeFormatInfo.InvariantInfo.UniversalSortableDateTimePattern;
                    formats[10] = DateTimeFormatInfo.InvariantInfo.SortableDateTimePattern;

                    return formats;
                }
            }
        }
        #endregion

        //============================================================
        // Public Methods
        //============================================================
        #region Parse(string s)
        /// <summary>
        /// Converts the specified string representation of a date and time to its <see cref="DateTime"/> equivalent.
        /// </summary>
        /// <param name="s">A string containing a date and time to convert.</param>
        /// <returns>A <see cref="DateTime"/> equivalent to the date and time contained in <paramref name="s"/>.</returns>
        /// <remarks>
        /// The string <paramref name="s"/> is parsed using formatting information in the <see cref="DateTimeFormatInfo.InvariantInfo"/> object.
        /// </remarks>
        /// <exception cref="ArgumentNullException"><paramref name="s"/> is a <b>null</b> reference (Nothing in Visual Basic).</exception>
        /// <exception cref="FormatException"><paramref name="s"/> does not contain a valid RFC 3339 string representation of a date and time.</exception>
        public static DateTime Parse(string s)
        {
            //------------------------------------------------------------
            // Validate parameter
            //------------------------------------------------------------
            if (s == null)
            {
                throw new ArgumentNullException("s");
            }

            DateTime result;
            if (Rfc3339DateTime.TryParse(s, out result))
            {
                return result;
            }
            else
            {
                throw new FormatException(String.Format(null, "{0} is not a valid RFC 3339 string representation of a date and time.", s));
            }
        }
        #endregion

        #region ToString(DateTime utcDateTime)
        /// <summary>
        /// Converts the value of the specified <see cref="DateTime"/> object to its equivalent string representation.
        /// </summary>
        /// <param name="utcDateTime">The Coordinated Universal Time (UTC) <see cref="DateTime"/> to convert.</param>
        /// <returns>A RFC 3339 string representation of the value of the <paramref name="utcDateTime"/>.</returns>
        /// <remarks>
        /// <para>
        /// This method returns a string representation of the <paramref name="utcDateTime"/> that
        /// is precise to the three most significant digits of the seconds fraction; that is, it represents
        /// the milliseconds in a date and time value.
        /// </para>
        /// <para>
        /// While it is possible to display higher precision fractions of a second component of a time value,
        /// that value may not be meaningful. The precision of date and time values depends on the resolution
        /// of the system clock. On Windows NT 3.5 and later, and Windows Vista operating systems, the clock's
        /// resolution is approximately 10-15 milliseconds.
        /// </para>
        /// </remarks>
        /// <exception cref="ArgumentException">The specified <paramref name="utcDateTime"/> object does not represent a <see cref="DateTimeKind.Utc">Coordinated Universal Time (UTC)</see> value.</exception>
        public static string ToString(DateTime utcDateTime)
        {
            if (utcDateTime.Kind != DateTimeKind.Utc)
            {
                throw new ArgumentException("utcDateTime");
            }

            return utcDateTime.ToString(Rfc3339DateTime.Rfc3339DateTimeFormat, DateTimeFormatInfo.InvariantInfo);
        }
        #endregion

        #region TryParse(string s, out DateTime result)
        /// <summary>
        /// Converts the specified string representation of a date and time to its <see cref="DateTime"/> equivalent.
        /// </summary>
        /// <param name="s">A string containing a date and time to convert.</param>
        /// <param name="result">
        /// When this method returns, contains the <see cref="DateTime"/> value equivalent to the date and time
        /// contained in <paramref name="s"/>, if the conversion succeeded,
        /// or <see cref="DateTime.MinValue">MinValue</see> if the conversion failed.
        /// The conversion fails if the s parameter is a <b>null</b> reference (Nothing in Visual Basic),
        /// or does not contain a valid string representation of a date and time.
        /// This parameter is passed uninitialized.
        /// </param>
        /// <returns><b>true</b> if the <paramref name="s"/> parameter was converted successfully; otherwise, <b>false</b>.</returns>
        /// <remarks>
        /// The string <paramref name="s"/> is parsed using formatting information in the <see cref="DateTimeFormatInfo.InvariantInfo"/> object.
        /// </remarks>
        public static bool TryParse(string s, out DateTime result)
        {
            //------------------------------------------------------------
            // Attempt to convert string representation
            //------------------------------------------------------------
            bool wasConverted = false;
            result = DateTime.MinValue;

            if (!String.IsNullOrEmpty(s))
            {
                DateTime parseResult;
                if (DateTime.TryParseExact(s, Rfc3339DateTime.Rfc3339DateTimePatterns, DateTimeFormatInfo.InvariantInfo, DateTimeStyles.AdjustToUniversal, out parseResult))
                {
                    result = DateTime.SpecifyKind(parseResult, DateTimeKind.Utc);
                    wasConverted = true;
                }
            }

            return wasConverted;
        }
        #endregion
    }
}
``` | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17017",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2029/"
]
} |
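A short usage sketch of the Rfc3339DateTime class above (assumes a using directive for its namespace; the console output shown is illustrative):

```csharp
using System;
using DateTimeConsoleApplication;

class Rfc3339Demo
{
    static void Main()
    {
        // Format the current UTC time as RFC 3339, then parse it back.
        DateTime now = DateTime.UtcNow;
        string rfc3339 = Rfc3339DateTime.ToString(now); // e.g. "2008-08-20T06:19:04.123Z"
        DateTime roundTripped = Rfc3339DateTime.Parse(rfc3339);
        Console.WriteLine(roundTripped.Kind); // Utc

        // TryParse avoids the FormatException for untrusted input.
        DateTime parsed;
        if (!Rfc3339DateTime.TryParse("not-a-date", out parsed))
        {
            Console.WriteLine("Not a valid RFC 3339 date-time.");
        }
    }
}
```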
17,032 | Resharper certainly thinks so, and out of the box it will nag you to convert

```csharp
Dooberry dooberry = new Dooberry();
```

to

```csharp
var dooberry = new Dooberry();
```

Is that really considered the best style? | It's of course a matter of style, but I agree with Dare: C# 3.0 Implicit Type Declarations: To var or not to var? I think using var instead of an explicit type makes your code less readable. In the following code:

```csharp
var result = GetUserID();
```

What is result? An int, a string, a GUID? Yes, it matters, and no, I shouldn't have to dig through the code to know. It's especially annoying in code samples. Jeff wrote a post on this, saying he favors var. But that guy's crazy! I'm seeing a pattern for stackoverflow success: dig up old CodingHorror posts and (Jeopardy style) phrase them in terms of a question. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1853/"
]
} |
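A hypothetical side-by-side illustration of the readability trade-off discussed above; GetUserID is a stand-in for the answer's example:

```csharp
using System;
using System.Collections.Generic;

class Dooberry { }

class VarDemo
{
    static int GetUserID() { return 42; } // hypothetical helper

    static void Main()
    {
        // Type is obvious from the right-hand side; var just removes duplication.
        var dooberry = new Dooberry();
        var names = new Dictionary<string, List<int>>();

        // Type is not obvious; the reader must check GetUserID's signature.
        var result = GetUserID();   // int? string? Guid?
        int userId = GetUserID();   // explicit and self-documenting

        Console.WriteLine("{0} {1} {2} {3}", dooberry, names.Count, result, userId);
    }
}
```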
17,106 | We are developing an application that involves a substantial amount of XML transformations. We do not have any proper input test data per se, only DTD or XSD files. We'd like to generate our test data ourselves from these files. Is there an easy/free way to do that? Edit: There are apparently no free tools for this, and I agree that OxygenXML is one of the best tools for this. | In Visual Studio 2008 SP1 and later, the XML Schema Explorer can create an XML document with some basic sample data: (1) open your XSD document, (2) switch to the XML Schema Explorer, (3) right-click the root node and choose "Generate Sample Xml". | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/17106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1428/"
]
} |
17,125 | I know what yield does, and I've seen a few examples, but I can't think of real life applications. Have you used it to solve some specific problem? (Ideally some problem that cannot be solved some other way.) | I realise this is an old question (pre Jon Skeet?) but I have been considering this question myself just lately. Unfortunately the current answers here (in my opinion) don't mention the most obvious advantage of the yield statement. The biggest benefit of the yield statement is that it allows you to iterate over very large lists with much more efficient memory usage than using, say, a standard List. For example, let's say you have a database query that returns 1 million rows. You could retrieve all rows using a DataReader and store them in a List, therefore requiring list_size * row_size bytes of memory. Or you could use the yield statement to create an Iterator and only ever store one row in memory at a time. In effect this gives you the ability to provide a "streaming" capability over large sets of data. Moreover, in the code that uses the Iterator, you use a simple foreach loop and can decide to break out from the loop as required. If you do break early, you have not forced the retrieval of the entire set of data when you only needed the first 5 rows (for example). (A sketch of this pattern follows this entry.) Regarding "ideally some problem that cannot be solved some other way": the yield statement does not give you anything you could not do using your own custom iterator implementation, but it saves you needing to write the often complex code required. There are very few problems (if any) that can't be solved more than one way. Here are a couple of more recent questions and answers that provide more detail: Yield keyword value added? Is yield useful outside of LINQ? | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17125",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1782/"
]
} |
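A minimal sketch of the streaming pattern described above, using yield return over a data reader. The connection handling and table name are illustrative assumptions:

```csharp
using System.Collections.Generic;
using System.Data;

static class CustomerQueries
{
    // Streams one row at a time instead of buffering a million-row List.
    public static IEnumerable<string> ReadNames(IDbConnection connection)
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            command.CommandText = "SELECT Name FROM Customers"; // hypothetical table
            using (IDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Only the current row is held in memory; if the caller
                    // breaks out of its foreach early, no further rows are read.
                    yield return reader.GetString(0);
                }
            }
        }
    }
}
```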
17,140 | How do you run an external program and pass it command line parameters using C? If you have to use operating system API, include a solution for Windows, Mac, and Linux. | It really depends on what you're trying to do, exactly, as it's (a) OS dependent and (b) not quite clear what you're trying to do. Nevertheless, I'll try to provide some information for you to decide. On UNIX, fork() creates a clone of your process from the place where you called fork. Meaning, if I have the following process:

```c
#include <unistd.h>
#include <stdio.h>

int main()
{
    printf( "hi 2 u\n" );
    int mypid = fork();
    if( 0 == mypid )
        printf( "lol child\n" );
    else
        printf( "lol parent\n" );
    return( 0 );
}
```

The output will look as follows:

```
hi 2 u
lol child
lol parent
```

When you fork(), the pid returned in the child is 0, and the pid returned in the parent is the child's pid. Notice that "hi 2 u" is only printed once... by the parent. execve() and its family of functions are almost always used with fork(). execve() and the like replace the current process image with the program whose name you pass to it. execve() is almost always used with fork(), where you fork a child process; if you're the parent you do whatever you need to keep doing, and if you're the child you exec a new process. execve() is also almost always used with waitpid() -- waitpid takes a pid of a child process and, quite literally, waits until the child terminates and returns the child's exit status to you. Using this information, you should be able to write a very basic shell; one that takes process names on the command line and runs processes you tell it to. Of course, shells do more than that, like piping input and output, but you should be able to accomplish the basics using fork(), execve() and waitpid(). NOTE: This is *nix specific! This will NOT work on Windows. Hope this helped. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/432/"
]
} |
17,170 | I know that IList is the interface and List is the concrete type, but I still don't know when to use each one. What I'm doing now is: if I don't need the Sort or FindAll methods, I use the interface. Am I right? Is there a better way to decide when to use the interface or the concrete type? | There are two rules I follow: (1) accept the most basic type that will work, and (2) return the richest type your user will need. So when writing a function or method that takes a collection, write it not to take a List, but an IList<T>, an ICollection<T>, or IEnumerable<T>. The generic interfaces will still work even for heterogeneous lists because System.Object can be a T too. Doing this will save you headache if you decide to use a Stack or some other data structure further down the road. If all you need to do in the function is foreach through it, IEnumerable<T> is really all you should be asking for. On the other hand, when returning an object out of a function, you want to give the user the richest possible set of operations without them having to cast around. So in that case, if it's a List<T> internally, return a copy as a List<T>. (A small sketch follows this entry.) | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/17170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1560/"
]
} |
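A small sketch of the two rules from the answer above; the type and method names are illustrative:

```csharp
using System.Collections.Generic;

static class OrderService
{
    // Rule 1: accept the most basic type that works -- this method only
    // iterates, so IEnumerable<T> is all it should ask for.
    public static decimal TotalValue(IEnumerable<decimal> orderAmounts)
    {
        decimal total = 0;
        foreach (decimal amount in orderAmounts)
        {
            total += amount;
        }
        return total;
    }

    // Rule 2: return the richest type the caller will need.
    public static List<decimal> LoadAmounts()
    {
        // Hypothetical data source; a real version might read from a database.
        return new List<decimal> { 12.50m, 99.99m };
    }
}
```

Because TotalValue only asks for IEnumerable<decimal>, callers can pass a List<decimal>, an array, a Stack<decimal>, or a LINQ query result without any conversion.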
17,172 | I've tried to do this several times with no luck. After reading this post, it made me interested in doing this again. So can anyone tell me why the following doesn't work?

```php
<?php
$guest = 1;
$editor = 2;
$admin = 4;

$user = $editor;

if( $user == ($editor | $admin) ) {
    echo "Test";
}
?>
```

| Use the bitwise OR operator (|) to set bits, use the AND operator (&) to check bits. Your code should look like this:

```php
<?php
$guest = 1;
$editor = 2;
$admin = 4;

$user = $editor;

if( $user & ($editor | $admin) ) {
    echo "Test";
}
?>
```

If you don't understand binary and exactly what the bitwise operators do, you should go learn it. You'll understand how to do this much better. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/40/"
]
} |
17,175 | I'll take away the obvious one here: mic and webcam support. Other than that, if you ran the Silverlight team, what would your highest priority be for Silverlight v.Next? Disclaimer: If we get some good responses, I'll pass them along to folks I know on the Silverlight team. UPDATE : The best place to report Silverlight feature requests now is the UserVoice site: http://silverlight.uservoice.com/ | Full cross-platform support for Windows, Mac and Linux with complete feature parity for each OS. ;) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17175",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5/"
]
} |
17,228 | I develop C++ applications in a Linux environment. The tools I use every day include Eclipse with the CDT plugin, gdb and valgrind. What tools do other people use? Is there anything out there for Linux that rivals the slickness of Microsoft Visual Studio? | I use a bunch of terminal windows. I have vim running on interesting source files, make and g++ output on another for compiler errors, or a gdb session for runtime errors. If I need help finding definitions I run cscope and use vim's cscope support to jump around. Eclipse CDT is my second choice. It's nice but huge, ungainly and slow compared to vim. Using terminal windows and vim is very flexible because I do not need to carry 400 MB of Java around with me, and I can use SSH sessions from anywhere. I use valgrind when I need to find a memory issue. I use strace to watch what my software is doing on a system call level. This lets me clean up really stupid code that calls time(0) four times in a row or makes too many calls to poll() or non-blocking read(), or things like calling read() on a socket to read 1 byte at a time. (That is super inefficient and lazy!) I use objdump -d to inspect the machine code, especially for performance sensitive inner loops. That is how I find things like the slowness of the array index operator on strings compared to using iterators. I use oprofile to try to find hot spots in optimized code; I find that it often works a little better than gprof, and it can do things like look for data and instruction cache misses. That can show you where to drop some helpful prefetch hints using GCC's __builtin_prefetch. I tried to use it to find hot mis-predicted branches as well, but couldn't get that to work for me. Update: I've found that perf works way better than oprofile. At least on Linux. Learn to use perf and love it as I do. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1449/"
]
} |
17,231 | I was looking at http://tldp.org/LDP/abs/html/why-shell.html and was struck by: When not to use shell scripts ... Mission-critical applications upon which you are betting the future of the company Why not? | Using shell scripts is fine when you're using their strengths. My company has some class 5 soft switches and the call processing code and the provisioning interface is written in java. Everything else is written in KSH - DB dumps for backups, pruning, log file rotation, and all the automated reporting. I would argue that all those support functions, though not directly related to call-path, are mission critical. Especially the DB interaction. If something went wrong with the DB-interaction code and dumped the call routing tables it could put us out of business. But nothing ever does go wrong, because shell scripts are the perfect language for stuff like this. They're small, they're well understood, manipulating files is their strength, and they're stable. It's not like KSH09 is going to be a complete rewrite because someone thinks it should compile to byte code, so it's a stable interface. Frankly, the provisioning interface written in Java goes wonky fairly often and the shell scripts have never messed up that I can remember. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17231",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1438/"
]
} |
17,250 | I am creating a ZIP file with ZipFile in Python 2.5. It works OK so far:

```python
import zipfile, os

locfile = "test.txt"
loczip = os.path.splitext(locfile)[0] + ".zip"
zip = zipfile.ZipFile(loczip, "w")
zip.write(locfile)
zip.close()
```

But I couldn't find how to encrypt the files in the ZIP file. I could use system and call PKZIP -s, but I suppose there must be a more "Pythonic" way. I'm looking for an open source solution. | I created a simple library to create a password-encrypted zip file in python - here.

```python
import pyminizip

compression_level = 5  # 1-9
pyminizip.compress("src.txt", "dst.zip", "password", compression_level)
```

The library requires zlib. I have checked that the file can be extracted in WINDOWS/MAC. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/394/"
]
} |
17,320 | I am a student studying software development, and I feel programming, in general, is too broad of a subject to try to know everything. To be proficient, you have to decide which areas to focus your learning and understanding on. Certain skill sets synergize with each other, like data-driven web development and SQL experience. However, all the Win32 API experience in the world may not directly apply to Linux development. This leads me to believe, as a beginning programmer, I should start deciding where I want to specialize after I have a general understanding of the basic principles of software development. This is a multi-part question really: What are the common specializations within computer programming and software development? Which of these specializations have more long-term value, both as a foundation for other specializations and/or as marketable skills? Which skill sets complement each other? Are there any areas of specialization that hinder your ability to develop other areas of specialization? | Ben, almost all seasoned programmers are still students of programming. You never stop learning when you are a developer. But if you are really starting off on your career, you should be the least worried about the specialization thing. No API, framework, or skill set you pick now will guarantee you a long-term existence in the field. Technology changes a lot, and you should be versatile and flexible enough to learn anything. The knowledge you acquire on one platform/API/framework doesn't die off; you can apply the skills to the next great platform/API/framework. That being said, you should just stop worrying about the future and concentrate on the basics: data structures, algorithm analysis and design, compiler design, and operating system design are the bare minimum stuff you need. Further, you should be willing to go back and read the books in those fields at any time in your career. That's all that is required. Good luck. Sorry if I sounded like a big-ass advisor; but that's what I think. :-) | {
"source": [
"https://Stackoverflow.com/questions/17320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1880/"
]
} |
17,333 | What would be the most efficient way to compare two double or two float values? Simply doing this is not correct:

```cpp
bool CompareDoubles1 (double A, double B)
{
    return A == B;
}
```

But something like:

```cpp
bool CompareDoubles2 (double A, double B)
{
    diff = A - B;
    return (diff < EPSILON) && (-diff < EPSILON);
}
```

seems to waste processing. Does anyone know a smarter float comparer? | Be extremely careful using any of the other suggestions. It all depends on context. I have spent a long time tracing bugs in a system that presumed a==b if |a-b|<epsilon. The underlying problems were:

- The implicit presumption in an algorithm that if a==b and b==c then a==c.
- Using the same epsilon for lines measured in inches and lines measured in mils (.001 inch). That is a==b but 1000a!=1000b. (This is why AlmostEqual2sComplement asks for the epsilon or max ULPS.)
- The use of the same epsilon for both the cosine of angles and the length of lines!
- Using such a compare function to sort items in a collection. (In this case using the builtin C++ operator == for doubles produced correct results.)

Like I said: it all depends on context and the expected size of a and b. By the way, std::numeric_limits<double>::epsilon() is the "machine epsilon". It is the difference between 1.0 and the next value representable by a double. I guess that it could be used in the compare function but only if the expected values are less than 1. (This is in response to @cdv's answer...) Also, if you basically have int arithmetic in doubles (here we use doubles to hold int values in certain cases) your arithmetic will be correct. For example 4.0/2.0 will be the same as 1.0+1.0. This is as long as you do not do things that result in fractions (4.0/3.0) or do not go outside of the size of an int. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/17333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2057/"
]
} |
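The answer above warns that one absolute epsilon rarely fits values of different magnitudes. As a hedged illustration (written in C# here, though the question is C++; the idea is language-neutral), a relative comparison scales the tolerance with the operands:

```csharp
using System;

static class FloatCompare
{
    // Relative comparison: the tolerance scales with the operands, so it
    // behaves consistently for lines measured in inches or in mils.
    // Caveat: values very close to zero still need an absolute floor.
    public static bool NearlyEqual(double a, double b, double relativeTolerance)
    {
        double diff = Math.Abs(a - b);
        double scale = Math.Max(Math.Abs(a), Math.Abs(b));
        return diff <= relativeTolerance * scale;
    }

    static void Main()
    {
        Console.WriteLine(NearlyEqual(1000.0, 1000.0000001, 1e-9)); // True
        Console.WriteLine(NearlyEqual(0.001, 0.002, 1e-9));         // False
    }
}
```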
17,352 | In relation to this question on Using OpenGL extensions , what's the purpose of these extension functions? Why would I want to use them? Further, are there any tradeoffs or gotchas associated with using them? | The OpenGL standard allows individual vendors to provide additional functionality through extensions as new technology is created. Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions. Each vendor has an alphabetic abbreviation that is used in naming their new functions and constants. For example, NVIDIA's abbreviation (NV) is used in defining their proprietary function glCombinerParameterfvNV() and their constant GL_NORMAL_MAP_NV. It may happen that more than one vendor agrees to implement the same extended functionality. In that case, the abbreviation EXT is used. It may further happen that the Architecture Review Board "blesses" the extension. It then becomes known as a standard extension, and the abbreviation ARB is used. The first ARB extension was GL_ARB_multitexture, introduced in version 1.2.1. Following the official extension promotion path, multitexturing is no longer an optionally implemented ARB extension, but has been a part of the OpenGL core API since version 1.3. Before using an extension a program must first determine its availability, and then obtain pointers to any new functions the extension defines. The mechanism for doing this is platform-specific and libraries such as GLEW and GLEE exist to simplify the process. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/803/"
]
} |
17,359 | I've created a script that runs every night on my Linux server that uses mysqldump to back up each of my MySQL databases to .sql files and packages them together as a compressed .tar file. The next step I want to accomplish is to send that tar file through email to a remote email server for safekeeping. I've been able to send the raw script in the body of an email by piping the backup text file to mailx like so:

```
$ cat mysqldbbackup.sql | mailx [email protected]
```

cat echoes the backup file's text, which is piped into the mailx program with the recipient's email address passed as an argument. While this accomplishes what I need, I think it could be one step better: is there any way, using shell scripts or otherwise, to attach the compressed .tar file to an outgoing email message? This would beat having to deal with very long email messages which contain header data and often have word-wrapping issues etc. | None of the mutt ones worked for me. It was thinking the email address was part of the attachment. Had to do:

```
echo "This is the message body" | mutt -a "/path/to/file.to.attach" -s "subject of message" -- [email protected]
```

| {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/17359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1339/"
]
} |
17,370 | I've been using OpenGL extensions on Windows the painful way. Is GLEW the easier way to go? How do I get started with it? | Yes, the OpenGL Extension Wrangler Library (GLEW) is a painless way to use OpenGL extensions on Windows. Here's how to get started on it:

1. Identify the OpenGL extension and the extension APIs you wish to use. OpenGL extensions are listed in the OpenGL Extension Registry.
2. Check if your graphics card supports the extensions you wish to use.
3. Download and install the latest drivers and SDKs for your graphics card. Recent versions of the NVIDIA OpenGL SDK ship with GLEW. If you're using this, then you don't need to do some of the following steps.
4. Download GLEW and unzip it.
5. Add the GLEW bin path to your Windows PATH environment variable. Alternatively, you can also place glew32.dll in a directory where Windows picks up its DLLs.
6. Add the GLEW include path to your compiler's include directory list.
7. Add the GLEW lib path to your compiler's library directory list.
8. Instruct your compiler to use glew32.lib during linking. If you're using Visual C++ compilers then one way to do this is by adding the following line to your code:

```cpp
#pragma comment(lib, "glew32.lib")
```

9. Add a #include <GL/glew.h> line to your code. Ensure that this is placed above the includes of other GL header files. (You may actually not need the GL header file includes if you include glew.h.)
10. Initialize GLEW using glewInit() after you've initialized GLUT or GL. If it fails, then something is wrong with your setup:

```cpp
if (GLEW_OK != glewInit())
{
    // GLEW failed!
    exit(1);
}
```

11. Check if the extension(s) you wish to use are now available through GLEW. You do this by checking a boolean variable named GLEW_your_extension_name which is exposed by GLEW. Example:

```cpp
if (!GLEW_EXT_framebuffer_object)
{
    exit(1);
}
```

That's it! You can now use the OpenGL extension calls in your code just as if they existed naturally for Windows. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1630/"
]
} |
17,411 | How can you make the display frames per second be independent from the game logic? That is, so the game logic runs at the same speed no matter how fast the video card can render. | I think the question reveals a bit of misunderstanding of how game engines should be designed. Which is perfectly OK, because they are damn complex things that are difficult to get right ;) You are under the correct impression that you want what is called Frame Rate Independence. But this does not only refer to Rendering Frames. A Frame in single threaded game engines is commonly referred to as a Tick. Every Tick you process input, process game logic, and render a frame based on the results of the processing. What you want to do is be able to process your game logic at any FPS (Frames Per Second) and have a deterministic result. This becomes a problem in the following case: check input; input is key 'W', which means we move the player character forward 10 units: playerPosition += 10; Now since you are doing this every frame, if you are running at 30 FPS you will move 300 units per second. But if you are instead running at 10 FPS, you will only move 100 units per second. And thus your game logic is not Frame Rate Independent. Happily, to solve this problem and make your gameplay logic Frame Rate Independent is a rather simple task. First, you need a timer which will count the time each frame takes to render. This number in terms of seconds (so 0.001 seconds to complete a Tick) is then multiplied by whatever it is that you want to be Frame Rate Independent. So in this case, when holding 'W': playerPosition += 10 * frameTimeDelta; ("delta" is a fancy word for "change in something"). So your player will move some fraction of 10 in a single Tick, and after a full second of Ticks, you will have moved the full 10 units. (A loop sketch follows this entry.) However, this will fall down when it comes to properties where the rate of change also changes over time, for example an accelerating vehicle. This can be resolved by using a more advanced integrator, such as "Verlet". Multithreaded Approach: If you are still interested in an answer to your question (since I didn't answer it but presented an alternative), here it is. Separating Game Logic and Rendering into different threads. It has its drawbacks, though. Enough so that the vast majority of Game Engines remain single threaded. That's not to say there is only ever one thread running in so-called single threaded engines. But all significant tasks are usually in one central thread. Some things like Collision Detection may be multithreaded, but generally the Collision phase of a Tick blocks until all the threads have returned, and the engine is back to a single thread of execution. Multithreading presents a whole, very large class of issues, even some performance ones since everything, even containers, must be thread safe. And Game Engines are very complex programs to begin with, so it is rarely worth the added complication of multithreading them. Fixed Time Step Approach: Lastly, as another commenter noted, having a fixed-size time step, and controlling how often you "step" the game logic, can also be a very effective way of handling this with many benefits. Linked here for completeness, but the other commenter also links to it: Fix Your Time Step | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17411",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
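A hedged skeleton of the frame-rate-independent Tick described above, using a Stopwatch for the frame timer; the stubbed methods and the one-second run are illustrative assumptions:

```csharp
using System.Diagnostics;

class GameLoop
{
    static double playerPosition;

    static void Main()
    {
        Stopwatch timer = Stopwatch.StartNew();
        double previous = timer.Elapsed.TotalSeconds;

        // Run the loop for one second as a stand-in for a real game session.
        while (timer.Elapsed.TotalSeconds < 1.0)
        {
            double now = timer.Elapsed.TotalSeconds;
            double frameTimeDelta = now - previous; // seconds since the last Tick
            previous = now;

            ProcessInput();
            Update(frameTimeDelta);
            Render();
        }
    }

    static void ProcessInput() { /* poll keyboard, etc. */ }

    static void Update(double frameTimeDelta)
    {
        // 10 units per second regardless of how fast the loop spins.
        playerPosition += 10.0 * frameTimeDelta;
    }

    static void Render() { /* draw the frame */ }
}
```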
17,434 | I have been reading through the C++ FAQ and was curious about the friend declaration. I personally have never used it, however I am interested in exploring the language. What is a good example of using friend? Reading the FAQ a bit longer, I like the idea of the << >> operator overloading and adding it as a friend of those classes. However I am not sure how this doesn't break encapsulation. When can these exceptions stay within the strictness that is OOP? | Firstly (IMO) don't listen to people who say friend is not useful. It IS useful. In many situations you will have objects with data or functionality that are not intended to be publicly available. This is particularly true of large codebases with many authors who may only be superficially familiar with different areas. There ARE alternatives to the friend specifier, but often they are cumbersome (cpp-level concrete classes/masked typedefs) or not foolproof (comments or function name conventions). On to the answer: the friend specifier allows the designated class access to protected data or functionality within the class making the friend statement. For example, in the below code anyone may ask a child for their name, but only the mother and the child may change the name. You can take this simple example further by considering a more complex class such as a Window. Quite likely a Window will have many function/data elements that should not be publicly accessible, but ARE needed by a related class such as a WindowManager.

```cpp
class Child
{
    // Mother class members can access the private parts of class Child.
    friend class Mother;

public:
    string name( void );

protected:
    void setName( string newName );
};
```

| {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/17434",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/716/"
]
} |
17,469 | Try loading this normal .jpg file in Internet Explorer 6.0. I get an error saying the picture won't load. Try it in any other browser and it works fine. What's wrong? The .jpg file is just a normal picture sitting on the web server. I can even create a simple web page: <a href="http://www.zodiacwheels.com/images/wheels/blackout_thumb.jpg">blah</a> and use right click + save target as with IE6 to save it to my desktop, and it's a valid JPG file. However, it won't load in the browser! Why?! I even tried checking the header response and MIME type and it looks fine:

```
andy@debian:~$ telnet www.zodiacwheels.com 80
Trying 72.167.174.247...
Connected to zodiacwheels.com.
Escape character is '^]'.
HEAD /images/wheels/blackout_thumb.jpg HTTP/1.1
Host: www.zodiacwheels.com

HTTP/1.1 200 OK
Date: Wed, 20 Aug 2008 06:19:04 GMT
Server: Apache
Last-Modified: Wed, 20 Aug 2008 00:29:36 GMT
ETag: "1387402-914ac-48ab6570"
Accept-Ranges: bytes
Content-Length: 595116
Content-Type: image/jpeg
```

The site needs to be able to work with IE6. How come it won't load a simple .jpg file? | The JPG you uploaded is in CMYK; IE and Firefox versions before 3 can't read these. Open it using Photoshop (or anything similar, I'm sure GIMP would work too) and resave it in RGB. edit: Further Googling makes me suspect that CMYK isn't really a part of the JPEG standard, but can be shoehorned in there. That's why some software does not consider the file valid. It does however open just fine in Photoshop CS3, and shows a CMYK colorspace. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17469",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/432/"
]
} |
17,483 | Is anyone aware of a language feature or technique in C++ to prevent a child class from overriding a particular method in the parent class?

```cpp
class Base {
public:
    bool someGuaranteedResult() { return true; }
};

class Child : public Base {
public:
    bool someGuaranteedResult() { return false; /* Haha I broke things! */ }
};
```

Even though it's not virtual, this is still allowed (at least in the Metrowerks compiler I'm using); all you get is a compile time warning about hiding non-virtual inherited function X. | A couple of ideas: (1) make your function private; (2) do not make your function virtual. This doesn't actually prevent the function from being shadowed by another definition, though. Other than that, I'm not aware of a language feature that will lock away your function in such a way that prevents it from being hidden and still able to be invoked through a pointer/reference to the child class. Good luck! | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17483",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1366/"
]
} |
17,533 | For my C# app, I don't want to always prompt for elevation on application start, but if they choose an output path that is UAC protected then I need to request elevation. So, how do I check if a path is UAC protected, and then how do I request elevation mid-execution? | The best way to detect if they are unable to perform an action is to attempt it and catch the UnauthorizedAccessException. However, as @DannySmurf correctly points out, you can only elevate a COM object or a separate process. There is a demonstration application within the Windows SDK Cross Technology Samples called UAC Demo. This demonstration application shows a method of executing actions with an elevated process. It also demonstrates how to find out if a user is currently an administrator. (A sketch of the attempt-then-elevate flow follows this entry.) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17533",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1147/"
]
} |
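A hedged sketch of the attempt-then-elevate flow described above. The output path and the helper executable are illustrative assumptions; the "runas" verb is what triggers the UAC prompt for the new process:

```csharp
using System;
using System.Diagnostics;
using System.IO;

static class ElevationDemo
{
    static void Main()
    {
        string outputPath = @"C:\Program Files\MyApp\output.txt"; // likely UAC-protected

        try
        {
            File.WriteAllText(outputPath, "hello");
        }
        catch (UnauthorizedAccessException)
        {
            // We can't elevate the current process mid-run, so start an elevated
            // helper process instead and let it perform the write.
            ProcessStartInfo info = new ProcessStartInfo
            {
                FileName = @"C:\MyApp\ElevatedWriter.exe", // hypothetical helper
                Arguments = "\"" + outputPath + "\"",
                UseShellExecute = true,
                Verb = "runas" // triggers the UAC elevation prompt
            };

            try
            {
                Process.Start(info).WaitForExit();
            }
            catch (System.ComponentModel.Win32Exception)
            {
                // Thrown when the user declines the UAC prompt.
            }
        }
    }
}
```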
17,576 | In C#, when you implement an interface, all members are implicitly public. Wouldn't it be better if we could specify the accessibility modifier (protected or internal; except private, of course), or should we just use an abstract class instead? | If an interface is internal, all its members will be internal to the assembly. If a nested interface is protected, only the subclasses of the outer class could access that interface. Internal members for an interface outside of its declaring assembly would be pointless, as would protected members for an interface outside of its declaring outer class. The point of an interface is to describe a contract between an implementing type and users of the interface. Outside callers aren't going to care and shouldn't have to care about implementation, which is what internal and protected members are for. For protected members that are called by a base class, abstract classes are the way to go for specifying a contract between base classes and classes that inherit from them. But in this case, implementation details are usually very relevant, unless it's a degenerate pure abstract class (where all members are abstract), in which case protected members are useless. In that case, go with an interface and save the single base class for implementing types to choose. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/718/"
]
} |
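A small sketch of the internal-interface case mentioned in the entry above; the type names are illustrative:

```csharp
// Visible only within the declaring assembly; outside callers never see
// this contract, so its members are effectively internal too.
internal interface IAuditable
{
    void WriteAuditEntry(string message); // implicitly public within the interface
}

public class Order : IAuditable
{
    // Explicit implementation keeps the member off Order's public surface;
    // it is reachable only through an IAuditable reference.
    void IAuditable.WriteAuditEntry(string message)
    {
        System.Console.WriteLine("audit: " + message);
    }
}
```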
17,586 | Word wrap is one of the must-have features in a modern text editor. How should word wrap be handled? What is the best algorithm for word wrap? If text is several million lines, how can I make word wrap very fast? Why do I need the solution? Because my projects must draw text with various zoom levels and a simultaneously beautiful appearance. The running environment is Windows Mobile devices: a maximum 600 MHz CPU with a very small memory size. How should I handle line information? Let's assume the original data has three lines:

```
THIS IS LINE 1.
THIS IS LINE 2.
THIS IS LINE 3.
```

Afterwards, the broken text will be shown like this:

```
THIS IS
LINE 1.
THIS IS
LINE 2.
THIS IS
LINE 3.
```

Should I allocate three more lines? Or any other suggestions? | Here is a word-wrap algorithm I've written in C#. It should be fairly easy to translate into other languages (except perhaps for IndexOfAny).

```csharp
static char[] splitChars = new char[] { ' ', '-', '\t' };

private static string WordWrap(string str, int width)
{
    string[] words = Explode(str, splitChars);

    int curLineLength = 0;
    StringBuilder strBuilder = new StringBuilder();
    for (int i = 0; i < words.Length; i += 1)
    {
        string word = words[i];
        // If adding the new word to the current line would be too long,
        // then put it on a new line (and split it up if it's too long).
        if (curLineLength + word.Length > width)
        {
            // Only move down to a new line if we have text on the current line.
            // Avoids situation where wrapped whitespace causes empty lines in text.
            if (curLineLength > 0)
            {
                strBuilder.Append(Environment.NewLine);
                curLineLength = 0;
            }

            // If the current word is too long to fit on a line even on its own then
            // split the word up.
            while (word.Length > width)
            {
                strBuilder.Append(word.Substring(0, width - 1) + "-");
                word = word.Substring(width - 1);
                strBuilder.Append(Environment.NewLine);
            }

            // Remove leading whitespace from the word so the new line starts flush to the left.
            word = word.TrimStart();
        }
        strBuilder.Append(word);
        curLineLength += word.Length;
    }

    return strBuilder.ToString();
}

private static string[] Explode(string str, char[] splitChars)
{
    List<string> parts = new List<string>();
    int startIndex = 0;
    while (true)
    {
        int index = str.IndexOfAny(splitChars, startIndex);

        if (index == -1)
        {
            parts.Add(str.Substring(startIndex));
            return parts.ToArray();
        }

        string word = str.Substring(startIndex, index - startIndex);
        char nextChar = str.Substring(index, 1)[0];
        // Dashes and the likes should stick to the word occurring before it. Whitespace doesn't have to.
        if (char.IsWhiteSpace(nextChar))
        {
            parts.Add(word);
            parts.Add(nextChar.ToString());
        }
        else
        {
            parts.Add(word + nextChar);
        }

        startIndex = index + 1;
    }
}
```

It's fairly primitive - it splits on spaces, tabs and dashes. It does make sure that dashes stick to the word before it (so you don't end up with stack\n-overflow), though it doesn't favour moving small hyphenated words to a newline rather than splitting them. It does split up words if they are too long for a line. It's also fairly culturally specific, as I don't know much about the word-wrapping rules of other cultures. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1556/"
]
} |
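A short usage sketch of the WordWrap method above, assuming the methods are hosted in a class (named TextUtil here as a hypothetical choice) and made accessible to the caller (they are declared private in the answer):

```csharp
using System;

class WordWrapDemo
{
    static void Main()
    {
        // Wrapping the question's sample line at width 7 gives the
        // two-line result shown in the question.
        Console.WriteLine(TextUtil.WordWrap("THIS IS LINE 1.", 7));
        // Output:
        // THIS IS
        // LINE 1.
    }
}
```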
17,612 | Programmatic solution of course... | http://www.daveamenta.com/2008-05/c-delete-a-file-to-the-recycle-bin/ From above: using Microsoft.VisualBasic;string path = @"c:\myfile.txt";FileIO.FileSystem.DeleteDirectory(path, FileIO.UIOption.OnlyErrorDialogs, RecycleOption.SendToRecycleBin); | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17612",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/580/"
]
} |
17,624 | I have a table with a 'filename' column. I recently performed an insert into this column but in my haste forgot to append the file extension to all the filenames entered. Fortunately they are all '.jpg' images. How can I easily update the 'filename' column of these inserted fields (assuming I can select the recent rows based on known id values) to include the '.jpg' extension? | The solution is:

```sql
UPDATE tablename
SET [filename] = RTRIM([filename]) + '.jpg'
WHERE id > 50
```

RTRIM is required because otherwise the [filename] column in its entirety will be selected for the string concatenation, i.e. if it is a varchar(20) column and the filename is only 10 letters long then it will still select those 10 letters and then 10 spaces. This will in turn result in an error as you try to fit 20 + 3 characters into a 20-character-long field. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/364/"
]
} |
17,645 | Am I correct in assuming that the only difference between "windows files" and "unix files" is the linebreak? We have a system that has been moved from a Windows machine to a Unix machine and are having troubles with the format. I need to automate the translation between unix/windows before the files get delivered to the system in our "transportsystem". I'll probably need something to determine the current format and something to transform it into the other format. If it's just the newline that's the big difference then I'm considering just reading the files with java.io. As far as I know, they are able to handle both with readLine. And then just write each line back with while (line = readline) print(line + NewlineInOtherFormat)... Summary:

- samjudson: This is only a difference in text files, where UNIX uses a single Line Feed (LF) to signify a new line, Windows uses a Carriage Return/Line Feed (CRLF) and Mac uses just a CR.
- to which Cebjyre elaborates: OS X uses LF, the same as UNIX - MacOS 9 and below did use CR though.
- Mo: There could also be a difference in character encoding for national characters. There is no "unix encoding" but many Linux variants use UTF-8 as the default encoding. Mac OS (which is also a Unix) uses its own encoding (MacRoman). I am not sure what the Windows default encoding is.
- McDowell: In addition to the new-line differences, the byte-order mark can cause problems if files are treated as Unicode on Windows.
- Cheekysoft: However, another set of problems that you may come across can be related to single/multi-byte character encodings. If you see strange unexpected chars (not at end-of-line) then this could be the reason. Especially if you see square boxes, question marks, upside-down question marks, extra characters or unexpected accented characters.
- Sadie: On Unix, files that start with a . are hidden. On Windows, it's a filesystem flag that you probably don't have easy access to. This may result in files that are supposed to be hidden now becoming visible on the client machines. File permissions vary between the two. You will probably find, when you copy files onto a Unix system, that the files now belong to the user that did the copying and have limited rights. You'll need to use chown/chmod to make sure the correct users have access to them.

There exist tools to help with the problem:

- pauldoo: If you are just interested in the content of text files, then yes the line endings are different. Take a look at something like dos2unix, it may be of help here.
- Cheekysoft: As pauldoo suggests, tools like dos2unix can be very useful. Note that these may be on your linux/unix system as fromdos or tofrodos, or perhaps even as the general purpose toolbox recode.

Help for Java coding:

- Cheekysoft: When writing to files or reading from files (that you are in control of), it is often worth specifying the encoding to use, as most Java methods allow this. However, also ensuring that the system locale matches can save a lot of pain.

| This is only a difference in text files, where UNIX uses a single Line Feed (LF) to signify a new line, Windows uses a Carriage Return/Line Feed (CRLF) and Mac uses just a CR. For binary files there should be no difference (i.e. a JPEG on a Windows machine will be byte for byte the same as the same JPEG on a Unix box). A byte-level normalization sketch follows this entry. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17645",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/86/"
]
} |
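A rough sketch of the readLine/rewrite idea from the question. BufferedReader.readLine() accepts LF, CR and CRLF alike, so one loop normalises any input; the file names are placeholders, and the try-with-resources form assumes Java 7 or later:

import java.io.*;

public class NewlineConverter {
    public static void main(String[] args) throws IOException {
        String separator = "\n"; // use "\r\n" when producing Windows files
        try (BufferedReader in = new BufferedReader(new FileReader("in.txt"));
             BufferedWriter out = new BufferedWriter(new FileWriter("out.txt"))) {
            String line;
            while ((line = in.readLine()) != null) { // strips CR, LF or CRLF
                out.write(line);
                out.write(separator);
            }
        }
    }
}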
17,717 | We are currently using MySQL for a product we are building, and are keen to move to PostgreSQL as soon as possible, primarily for licensing reasons. Has anyone else done such a move? Our database is the lifeblood of the application and will eventually be storing TBs of data, so I'm keen to hear about experiences of performance improvements/losses, major hurdles in converting SQL and stored procedures, etc. Edit: Just to clarify to those who have asked why we don't like MySQL's licensing. We are developing a commercial product which (currently) depends on MySQL as a database back-end. Their license states we need to pay them a percentage of our list price per installation, and not a flat fee. As a startup, this is less than appealing. | Steve, I had to migrate my old application the other way around, that is PgSQL->MySQL. I must say, you should consider yourself lucky ;-) Common gotchas are: Pg's SQL is actually pretty close to the language standard, so you may suffer from the MySQL dialect you already know MySQL quietly truncates varchars that exceed max length, whereas Pg complains - a quick workaround is to have these columns as 'text' instead of 'varchar' and use triggers to truncate long lines double quotes are used instead of reverse apostrophes boolean fields are compared using IS and IS NOT operators, however MySQL-compatible INT(1) with = and <> is still possible there is no REPLACE, use a DELETE/INSERT combo Pg is pretty strict on enforcing foreign key integrity, so don't forget to use ON DELETE CASCADE on references if you use PHP with PDO, remember to pass a parameter to the lastInsertId() method - it should be the sequence name, which is usually built this way: [tablename]_[primarykeyname]_seq I hope that helps at least a bit. Have lots of fun playing with Postgres! | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1693/"
]
} |
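To illustrate the lastInsertId() gotcha from the list above, a small PDO sketch (the connection details, table and sequence name are hypothetical):

<?php
$pdo = new PDO('pgsql:host=localhost;dbname=app', 'user', 'secret');
$pdo->exec("INSERT INTO users (name) VALUES ('alice')");
// On Postgres the sequence name must be passed explicitly,
// following the [tablename]_[primarykeyname]_seq convention:
$id = $pdo->lastInsertId('users_id_seq');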
17,732 | There's a discussion going on over at comp.lang.c++.moderated about whether or not assertions, which in C++ only exist in debug builds by default, should be kept in production code or not. Obviously, each project is unique, so my question here is not so much whether assertions should be kept, but in which cases this is recommendable/not a good idea. By assertion, I mean: A run-time check that tests a condition which, when false, reveals a bug in the software. A mechanism by which the program is halted (maybe after really minimal clean-up work). I'm not necessarily talking about C or C++. My own opinion is that if you're the programmer, but don't own the data (which is the case with most commercial desktop applications), you should keep them on, because a failing assertion shows a bug, and you should not go on with a bug, with the risk of corrupting the user's data. This forces you to test strongly before you ship, and makes bugs more visible, thus easier to spot and fix. What's your opinion/experience? Cheers, Carl See related question here Responses and Updates Hey Graham, An assertion is an error, pure and simple, and therefore should be handled like one. Since an error should be handled in release mode then you don't really need assertions. That's why I prefer the word "bug" when talking about assertions. It makes things much clearer. To me, the word "error" is too vague. A missing file is an error, not a bug, and the program should deal with it. Trying to dereference a null pointer is a bug, and the program should acknowledge that something smells like bad cheese. Hence, you should test the pointer with an assertion, but the presence of the file with normal error-handling code. Slightly off-topic, but an important point in the discussion. As a heads-up, if your assertions break into the debugger when they fail, why not. But there are plenty of reasons a file could not exist that are completely outside of the control of your code: read/write rights, disk full, USB device unplugged, etc. Since you don't have control over it, I feel assertions are not the right way to deal with that. Carl Thomas, Yes, I have Code Complete, and must say I strongly disagree with that particular advice. Say your custom memory allocator screws up, and zeroes a chunk of memory that is still used by some other object. It happens to zero a pointer that this object dereferences regularly, and one of the invariants is that this pointer is never null, and you have a couple of assertions to make sure it stays that way. What do you do if the pointer suddenly is null? Do you just if() around it, hoping that it works? Remember, we're talking about product code here, so there's no breaking into the debugger and inspecting the local state. This is a real bug on the user's machine. Carl | Assertions are comments that do not become outdated. They document which theoretical states are intended, and which states should not occur. If the code is changed so that the allowed states change, the developer is soon informed and needs to update the assertion. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/17732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2095/"
]
} |
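One common compromise in this debate is a verify-style macro that, unlike assert(), is not compiled out by NDEBUG. A minimal sketch, not any particular library's API:

#include <cstdio>
#include <cstdlib>

// Stays active in release builds, halting before data is corrupted.
#define VERIFY(cond)                                          \
    do {                                                      \
        if (!(cond)) {                                        \
            std::fprintf(stderr, "bug: %s failed at %s:%d\n", \
                         #cond, __FILE__, __LINE__);          \
            std::abort();                                     \
        }                                                     \
    } while (0)

int main() {
    int* p = 0;
    VERIFY(p != 0); // fires in both debug and release builds
    return 0;
}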
17,772 | This is probably best shown with an example. I have an enum with attributes: public enum MyEnum { [CustomInfo("This is a custom attrib")] None = 0, [CustomInfo("This is another attrib")] ValueA, [CustomInfo("This has an extra flag", AllowSomething = true)] ValueB,} I want to get to those attributes from an instance: public CustomInfoAttribute GetInfo( MyEnum enumInput ) { Type typeOfEnum = enumInput.GetType(); //this will be typeof( MyEnum ) //here is the problem, GetField takes a string // the .ToString() on enums is very slow FieldInfo fi = typeOfEnum.GetField( enumInput.ToString() ); //get the attribute from the field return fi.GetCustomAttributes( typeof( CustomInfoAttribute ), false ). FirstOrDefault() //Linq method to get first or null as CustomInfoAttribute; //use as operator to convert} As this is using reflection I expect some slowness, but it seems messy to convert the enum value to a string (which reflects the name) when I already have an instance of it. Does anyone have a better way? | This is probably the easiest way. A quicker way would be to statically emit the IL code using DynamicMethod and ILGenerator. I've only used this for PropertyInfo, but I can't see why you couldn't emit CustomAttributeInfo as well. For example, here is code to emit a getter from a property: public delegate object FastPropertyGetHandler(object target); private static void EmitBoxIfNeeded(ILGenerator ilGenerator, System.Type type){ if (type.IsValueType) { ilGenerator.Emit(OpCodes.Box, type); }}public static FastPropertyGetHandler GetPropertyGetter(PropertyInfo propInfo){ // generates a dynamic method to generate a FastPropertyGetHandler delegate DynamicMethod dynamicMethod = new DynamicMethod( string.Empty, typeof (object), new Type[] { typeof (object) }, propInfo.DeclaringType.Module); ILGenerator ilGenerator = dynamicMethod.GetILGenerator(); // loads the object into the stack ilGenerator.Emit(OpCodes.Ldarg_0); // calls the getter ilGenerator.EmitCall(OpCodes.Callvirt, propInfo.GetGetMethod(), null); // creates code for handling the return value EmitBoxIfNeeded(ilGenerator, propInfo.PropertyType); // returns the value to the caller ilGenerator.Emit(OpCodes.Ret); // converts the DynamicMethod to a FastPropertyGetHandler delegate // to get the property FastPropertyGetHandler getter = (FastPropertyGetHandler) dynamicMethod.CreateDelegate(typeof(FastPropertyGetHandler)); return getter;} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/905/"
]
} |
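Short of emitting IL, caching the reflection result removes most of the per-call cost. A sketch reusing the MyEnum and CustomInfoAttribute types from the question (not thread-safe as written):

using System.Collections.Generic;
using System.Linq;
using System.Reflection;

static class EnumInfoCache
{
    static readonly Dictionary<MyEnum, CustomInfoAttribute> cache =
        new Dictionary<MyEnum, CustomInfoAttribute>();

    public static CustomInfoAttribute GetInfo(MyEnum enumInput)
    {
        CustomInfoAttribute attrib;
        if (!cache.TryGetValue(enumInput, out attrib))
        {
            // Pay the ToString() and reflection cost only once per value.
            FieldInfo fi = typeof(MyEnum).GetField(enumInput.ToString());
            attrib = fi.GetCustomAttributes(typeof(CustomInfoAttribute), false)
                       .FirstOrDefault() as CustomInfoAttribute;
            cache[enumInput] = attrib;
        }
        return attrib;
    }
}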
17,806 | I am currently developing a .NET application, which consists of 20 projects. Some of those projects are compiled using .NET 3.5, some others are still .NET 2.0 projects (so far no problem). The problem is that if I include an external component I always get the following warning: Found conflicts between different versions of the same dependent assembly. What exactly does this warning mean and is there maybe a possibility to exclude this warning (like using #pragma disable in the source code files)? | This warning means that two projects reference the same assembly (e.g. System.Windows.Forms ) but the two projects require different versions. You have a few options: Recompile all projects to use the same versions (e.g. move all to .Net 3.5). This is the preferred option because all code is running with the versions of dependencies they were compiled with. Add a binding redirect . This will suppress the warning. However, your .Net 2.0 projects will (at runtime) be bound to the .Net 3.5 versions of dependent assemblies such as System.Windows.Forms . You can quickly add a binding redirect by double-clicking on the error in Visual Studio. Use CopyLocal=true . I'm not sure if this will suppress the warning. It will, like option 2 above, mean that all projects will use the .Net 3.5 version of System.Windows.Forms. Here are a couple of ways to identify the offending reference(s): You can use a utility such as the one found at https://gist.github.com/1553265 Another simple method is to set Build output verbosity (Tools, Options, Projects and Solutions, Build and Run, MSBuild project build output verbosity, Detailed) and after building, search the output window for the warning, and look at the text just above it. (Hat tip to pauloya who suggested this in the comments on this answer). | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/17806",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2078/"
]
} |
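For option 2, the redirect in app.config looks roughly like this; the assembly name, public key token and version numbers below are illustrative placeholders, not values for any real dependency:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="SomeDependency"
                          publicKeyToken="32ab4ba45e0a69a1"
                          culture="neutral" />
        <!-- requests for any older version resolve to 3.5.0.0 -->
        <bindingRedirect oldVersion="0.0.0.0-3.5.0.0"
                         newVersion="3.5.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>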
17,870 | Is there a way to select data where any one of multiple conditions occurs on the same field? Example: I would typically write a statement such as: select * from TABLE where field = 1 or field = 2 or field = 3 Is there a way to instead say something like: select * from TABLE where field = 1 || 2 || 3 Any help is appreciated. | Sure thing, the simplest way is this: select foo from bar where baz in (1,2,3) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2116/"
]
} |
17,877 | Just looking for the first step basic solution here that keeps the honest people out. Thanks, Mike | | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/785/"
]
} |
17,878 | Suppose I have two applications written in C#. The first is a third party application that raises an event called "OnEmailSent". The second is a custom app that I've written that I would like to somehow subscribe to the "OnEmailSent" event of the first application. Is there any way that I could somehow attach the second application to an instance of the first application to listen for the "OnEmailSent" event? So for further clarification, my specific scenario is that we have a custom third party application written in C# that raises an "OnEmailSent" event. We can see the event exists using Reflector. What we want to do is have some other actions take place when this component sends an email. The most efficient way we can think of would be to be able to use some form of IPC as anders has suggested and listen for the OnEmailSent event being raised by the third party component. Because the component is written in C# we are toying with the idea of writing another C# application that can attach itself to the executing process and when it detects the OnEmailSent event has been raised it will execute its own event handling code. I might be missing something, but from what I understand of how remoting works, there would need to be a server defining some sort of contract that the client can subscribe to. I was more thinking about a scenario where someone has written a standalone application like Outlook for example, that exposes events that I would like to subscribe to from another application. I guess the scenario I'm thinking of is the .NET debugger and how it can attach to executing assemblies to inspect the code whilst it's running. | In order for two applications (separate processes) to exchange events, they must agree on how these events are communicated. There are many different ways of doing this, and exactly which method to use may depend on architecture and context. The general term for this kind of information exchange between processes is Inter-process Communication (IPC) . There exist many standard ways of doing IPC, the most common being files, pipes, (network) sockets, remote procedure calls (RPC) and shared memory. On Windows it's also common to use window messages . I am not sure how this works for .NET/C# applications on Windows, but in native Win32 applications you can hook on to the message loop of external processes and "spy" on the messages they are sending . If your program generates a message event when the desired function is called, this could be a way to detect it. If you are implementing both applications yourself you can choose to use any IPC method you prefer. Network sockets and higher-level socket-based protocols like HTTP, XML-RPC and SOAP are very popular these days, as they allow you to run the applications on different physical machines as well (given that they are connected via a network). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/493/"
]
} |
17,880 | There is a rich scripting model for Microsoft Office, but not so with Apple iWork, and specifically the word processor Pages. While there are some AppleScript hooks, it looks like the best approach is to manipulate the underlying XML data. This turns out to be pretty ugly because (for example) page breaks are stored in XML. So for example, you have something like: ... we hold these truths to be self evident, that </page><page>all men are created equal, and are ... So if you want to add or remove text, you have to move the start/end tags around based on the size of the text on the page. This is pretty impossible without computing the number of words a page can hold, which seems wildly inelegant. Anybody have any thoughts on this? | | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17880",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1854/"
]
} |
17,906 | I have a rather classic UI situation - two ListBoxes named SelectedItems and AvailableItems - the idea being that the items you have already selected live in SelectedItems , while the items that are available for adding to SelectedItems (i.e. every item that isn't already in there) live in AvailableItems . Also, I have the < and > buttons to move the current selection from one list to the other (in addition to double clicking, which works fine). Is it possible in WPF to set up a style/trigger to enable or disable the move buttons depending on anything being selected in either ListBox? SelectedItems is on the left side, so the < button will move the selected AvailableItems to that list. However, if no items are selected ( AvailableItems.SelectedIndex == -1 ), I want this button to be disabled ( IsEnabled == false ) - and the other way around for the other list/button. Is this possible to do directly in XAML, or do I need to create complex logic in the codebehind to handle it? | Here's your solution. <Button Name="btn1" >click me <Button.Style> <Style> <Style.Triggers> <DataTrigger Binding ="{Binding ElementName=list1, Path=SelectedIndex}" Value="-1"> <Setter Property="Button.IsEnabled" Value="false"/> </DataTrigger> </Style.Triggers> </Style> </Button.Style> </Button> | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/17906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2122/"
]
} |
17,944 | I'm thinking in particular of how to display pagination controls, when using a language such as C# or Java. If I have x items which I want to display in chunks of y per page, how many pages will be needed? | Found an elegant solution: int pageCount = (records + recordsPerPage - 1) / recordsPerPage; Source: Number Conversion, Roland Backhouse, 2001 | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/17944",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2084/"
]
} |
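A quick sanity check of the formula in C# (the question mentions C# or Java; the integer arithmetic is identical in both):

int records = 45, recordsPerPage = 10;
int pageCount = (records + recordsPerPage - 1) / recordsPerPage;
// 45 items at 10 per page -> 5 pages; 40 -> 4; 41 -> 5; 0 -> 0
System.Console.WriteLine(pageCount); // prints 5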
17,960 | Has anyone worked out how to get PowerShell to use app.config files? I have a couple of .NET DLL's I'd like to use in one of my scripts but they expect their own config sections to be present in app.config / web.config . | Cross-referencing with this thread, which helped me with the same question: Subsonic Access To App.Config Connection Strings From Referenced DLL in Powershell Script I added the following to my script, before invoking the DLL that needs config settings, where $configpath is the location of the file I want to load: [appdomain]::CurrentDomain.SetData("APP_CONFIG_FILE", $configpath)Add-Type -AssemblyName System.Configuration See this post to ensure the configuration file specified is applied to the running context. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/17960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/419/"
]
} |
17,965 | I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails? | This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether you should dump core. If you type ulimit -c unlimited then that will tell bash that its programs can dump cores of any size. You can specify a size such as 52M instead of unlimited if you want, but in practice this shouldn't be necessary since the size of core files will probably never be an issue for you. In tcsh, you'd type limit coredumpsize unlimited | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/17965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1084/"
]
} |
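Assuming a bash shell and gdb installed, an end-to-end check looks like this; the program name is a placeholder, and the core file name can vary depending on /proc/sys/kernel/core_pattern:

ulimit -c unlimited   # allow core dumps in this shell session
./myprog              # segfaults and writes a core file
gdb ./myprog core     # then 'bt' prints the crashing stack trace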
17,980 | I've searched for this a little but I have not gotten a particularly straight answer. In C (and I guess C++), how do you determine what comes after the % when using printf? For example: double radius = 1.0;double area = 0.0;area = calculateArea( radius );printf( "%10.1f %10.2\n", radius, area ); I took this example straight from a book that I have on the C language. This does not make sense to me at all. Where do you come up with 10.1f and 10.2f ? Could someone please explain this? | http://en.wikipedia.org/wiki/Printf#printf_format_placeholders is Wikipedia's reference for format placeholders in printf. http://www.cplusplus.com/reference/clibrary/cstdio/printf.html is also helpful Basically in a simple form it's %[width].[precision][type]. Width allows you to make sure that the variable which is being printed is at least a certain length (useful for tables etc). Precision allows you to specify the precision a number is printed to (eg. decimal places etc) and the type informs C/C++ what the variable you've given it is (character, integer, double etc). Hope this helps UPDATE: To clarify using your examples: printf( "%10.1f %10.2\n", radius, area ); %10.1f (referring to the first argument: radius) means make it 10 characters long (ie. pad with spaces), and print it as a float with one decimal place. %10.2 (referring to the second argument: area) means make it 10 characters long (as above) and print with two decimal places; note that the book's %10.2 is missing the trailing f conversion specifier, without which the format is invalid, so it was presumably meant to be %10.2f. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17980",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2128/"
]
} |
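A compilable version of the book's example with the presumably intended %10.2f, so the padding is visible (calculateArea is stubbed out):

#include <stdio.h>

int main(void)
{
    double radius = 1.0;
    double area = 3.14159; /* stand-in for calculateArea(radius) */
    /* width 10, right-aligned; 1 and 2 decimal places */
    printf("%10.1f %10.2f\n", radius, area);
    /* prints:        1.0       3.14 */
    return 0;
}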
17,984 | Alright, this might be a bit of a long shot, but I am having problems getting AnkhSVN to connect from Visual Studio 2005 to an external SVN server. There is a network proxy in the way, but I can't seem to find a way in AnkhSVN to configure the proxy and it doesn't seem to be detecting the Internet Explorer proxy configuration. Is there any way to resolve this issue, or will it likely just not work? | | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/17984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1185/"
]
} |
18,010 | I asked a couple of coworkers about AnkhSVN and neither one of them was happy with it. One of them went as far as saying that AnkhSVN has messed up his devenv several times. What's your experience with AnkhSVN? I really miss having an IDE integrated source control tool. | Older AnkhSVN (pre 2.0) was very crappy and I was only using it for shiny icons in the solution explorer. I relied on Tortoise for everything except reverts. The newer Ankh is a complete rewrite (it is now using the Source Control API of the IDE) and looks & works much better. Still, I haven't forced it to do any heavy lifting. Icons are enough for me. The only gripe I have with 2.0 is the fact that it slaps its footprint onto .sln files. I always revert them lest they cause problems for co-workers who do not have Ankh installed. I don't know if my fears are groundless or not. addendum: I have been using v2.1.7141 a bit more extensively for the last few weeks and here are the new things I have to add: No ugly crashes that plagued v1.x. Yay! For some reason, "Show Changes" (diff) windows are limited to only two. Meh. Diff windows do not allow editing/reverting yet. Boo! Updates, commits and browsing are MUCH faster than Tortoise. Yay! All in all, I would not use it standalone, but once you start using it, it becomes an almost indispensable companion to Tortoise. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/18010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/781/"
]
} |
18,034 | How do I create a self signed SSL certificate for an Apache Server to use while testing a web app? | How do I create a self-signed SSL Certificate for testing purposes? from http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert : Make sure OpenSSL is installed and in your PATH. Run the following command, to create server.key and server.crtfiles: openssl req -new -x509 -nodes -out server.crt -keyout server.key These can be used as follows in your httpd.conf file: SSLCertificateFile /path/to/this/server.crtSSLCertificateKeyFile /path/to/this/server.key It is important that you are aware that this server.key does not have any passphrase. To add a passphrase to the key, you should run the following command, and enter & verify the passphrase as requested. openssl rsa -des3 -in server.key -out server.key.newmv server.key.new server.key Please backup the server.key file, and the passphrase you entered,in a secure location. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/18034",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1310/"
]
} |
18,077 | I wanted some of those spiffy rounded corners for a web project that I'm currently working on. I thought I'd try to accomplish it using javascript and not CSS in an effort to keep the requests for image files to a minimum (yes, I know that it's possible to combine all required rounded corner shapes into one image) and I also wanted to be able to change the background color pretty much on the fly. I already utilize jQuery so I looked at the excellent rounded corners plugin and it worked like a charm in every browser I tried. Being a developer however I noticed the opportunity to make it a bit more efficient. The script already includes code for detecting if the current browser supports webkit rounded corners (safari based browsers). If so it uses raw CSS instead of creating layers of divs. I thought that it would be awesome if the same kind of check could be performed to see if the browser supports the Gecko-specific -moz-border-radius-* properties and if so utilize them. The check for webkit support looks like this: var webkitAvailable = false; try { webkitAvailable = (document.defaultView.getComputedStyle(this[0], null)['-webkit-border-radius'] != undefined);} catch(err) {} That, however, did not work for -moz-border-radius so I started checking for alternatives. My fallback solution is of course to use browser detection, but that's far from recommended practice. My best solution yet is as follows. var mozborderAvailable = false;try { var o = jQuery('<div>').css('-moz-border-radius', '1px'); mozborderAvailable = $(o).css('-moz-border-radius-topleft') == '1px'; o = null;} catch(err) {} It's based on the theory that Gecko "expands" the composite -moz-border-radius to the four sub-properties -moz-border-radius-topleft -moz-border-radius-topright -moz-border-radius-bottomleft -moz-border-radius-bottomright Is there any javascript/CSS guru out there that has a better solution? (The feature request for this page is at http://plugins.jquery.com/node/3619 ) | How about this? var mozborderAvailable = false;try { if (typeof(document.body.style.MozBorderRadius) !== "undefined") { mozborderAvailable = true; }} catch(err) {} I tested it in Firefox 3 (true) and false in: Safari, IE7, and Opera. (Edit: better undefined test) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18077",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2114/"
]
} |
18,082 | What's the cleanest, most effective way to validate decimal numbers in JavaScript? Bonus points for: Clarity. Solution should be clean and simple. Cross-platform. Test cases: 01. IsNumeric('-1') => true02. IsNumeric('-1.5') => true03. IsNumeric('0') => true04. IsNumeric('0.42') => true05. IsNumeric('.42') => true06. IsNumeric('99,999') => false07. IsNumeric('0x89f') => false08. IsNumeric('#abcdef') => false09. IsNumeric('1.2.3') => false10. IsNumeric('') => false11. IsNumeric('blah') => false | @Joel's answer is pretty close, but it will fail in the following cases: // Whitespace strings:IsNumeric(' ') == true;IsNumeric('\t\t') == true;IsNumeric('\n\r') == true;// Number literals:IsNumeric(-1) == false;IsNumeric(0) == false;IsNumeric(1.1) == false;IsNumeric(8e5) == false; Some time ago I had to implement an IsNumeric function, to find out if a variable contained a numeric value, regardless of its type , it could be a String containing a numeric value (I had to consider also exponential notation, etc.), a Number object, virtually anything could be passed to that function, I couldn't make any type assumptions, taking care of type coercion (eg. +true == 1; but true shouldn't be considered as "numeric" ). I think it's worth sharing this set of 30+ unit tests made against numerous function implementations, and also share the one that passes all my tests: function isNumeric(n) { return !isNaN(parseFloat(n)) && isFinite(n);} P.S. isNaN & isFinite have a confusing behavior due to forced conversion to number. In ES6, Number.isNaN & Number.isFinite would fix these issues. Keep that in mind when using them. Update : Here's how jQuery does it now (2.2-stable) : isNumeric: function(obj) { var realStringObj = obj && obj.toString(); return !jQuery.isArray(obj) && (realStringObj - parseFloat(realStringObj) + 1) >= 0;} Update : Angular 4.3 : export function isNumeric(value: any): boolean { return !isNaN(value - parseFloat(value));} | {
"score": 13,
"source": [
"https://Stackoverflow.com/questions/18082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/29/"
]
} |
18,097 | In C#, if I have an inherited class with a default constructor, do I have to explicitly call the base class' constructor or will it be implicitly called? class BaseClass{ public BaseClass() { // ... some code }}class MyClass : BaseClass{ public MyClass() // Do I need to put ": base()" here or is it implied? { // ... some code }} | You do not need to explicitly call the base constructor, it will be implicitly called. Extend your example a little and create a Console Application and you can verify this behaviour for yourself: using System;namespace ConsoleApplication1{ class Program { static void Main(string[] args) { MyClass foo = new MyClass(); Console.ReadLine(); } } class BaseClass { public BaseClass() { Console.WriteLine("BaseClass constructor called."); } } class MyClass : BaseClass { public MyClass() { Console.WriteLine("MyClass constructor called."); } }} | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18097",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
]
} |
18,290 | Within Ruby on Rails applications database.yml is a plain text file that stores database credentials. When I deploy my Rails applications I have an after deploy callback in my Capistrano recipe that creates a symbolic link within the application's /config directory to the database.yml file. The file itself is stored in a separate directory that's outside the standard Capistrano /releases directory structure. I chmod 400 the file so it's only readable by the user who created it. Is this sufficient to lock it down? If not, what else do you do? Is anyone encrypting their database.yml files? | You'll also want to make sure that your SSH system is well secured to prevent people from logging in as your Capistrano bot. I'd suggest restricting access to password-protected key pairs. Encrypting the .yml file on the server is useless since you have to give the bot the key, which would be stored . . . on the same server. Encrypting it on your machine is probably a good idea. Capistrano can decrypt it before sending. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18290",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1450/"
]
} |
18,291 | I'm wondering how the few Delphi users here are doing unit testing, if any? Is there anything that integrates with the IDE that you've found works well? If not, what tools are you using and do you have or know of example mini-projects that demonstrate how it all works? Update: I forgot to mention that I'm using BDS 2006 Pro, though I occasionally drop into Delphi 7, and of course others may be using other versions. | DUnit is an xUnit type of unit testing framework to be used with Win32 Delphi. Since Delphi 2005, DUnit is integrated into the IDE to a certain point. Other DUnit integration tools for the Delphi IDE can be found here . DUnit comes with documentation with examples . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/18291",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1461/"
]
} |
18,292 | Trying to setup an SSH server on Windows Server 2003. What are some good ones? Preferably open source. I plan on using WinSCP as a client so a server which supports the advanced features implemented by that client would be great. | I've been using Bitvise SSH Server and it's really great. From install to administration it does it all through a GUI so you won't be putting together a sshd_config file. Plus if you use their client, Tunnelier , you get some bonus features (like mapping shares, port forwarding setup up server side, etc.) If you don't use their client it will still work with the Open Source SSH clients. It's not Open Source and it costs $39.95, but I think it's worth it. UPDATE 2009-05-21 11:10 : The pricing has changed. The current price is $99.95 per install for commercial, but now free for non-commercial/personal use. Here is the current pricing . | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1946/"
]
} |
18,407 | If I have a variable in C# that needs to be checked to determine if it is equal to one of a set of variables, what is the best way to do this? I'm not looking for a solution that stores the set in an array. I'm more curious to see if there is a solution that uses boolean logic in some way to get the answer. I know I could do something like this: int baseCase = 5;bool testResult = baseCase == 3 || baseCase == 7 || baseCase == 12 || baseCase == 5; I'm curious to see if I could do something more like this: int baseCase = 5;bool testResult = baseCase == (3 | 7 | 12 | 5); Obviously the above won't work, but I'm interested in seeing if there is something more succinct than my first example, which has to repeat the same variable over and over again for each test value. UPDATE: I decided to accept CoreyN's answer as it seems like the most simple approach. It's practical, and still simple for a novice to understand, I think. Unfortunately where I work our system uses the .NET 2.0 framework and there's no chance of upgrading any time soon. Are there any other solutions out there that don't rely on the .NET 3.5 framework, besides the most obvious one I can think of: new List<int>(new int[] { 3, 6, 7, 1 }).Contains(5); | bool b = new int[] { 3,7,12,5 }.Contains(5); | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/392/"
]
} |
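For the .NET 2.0 constraint in the update, Array.IndexOf avoids allocating a List entirely; a short sketch:

int baseCase = 5;
bool testResult = Array.IndexOf(new int[] { 3, 7, 12, 5 }, baseCase) >= 0;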
18,418 | When I am writing code in Python, I often need to remove items from a list or other sequence type based on some criteria. I haven't found a solution that is elegant and efficient, as removing items from a list you are currently iterating through is bad. For example, you can't do this: for name in names: if name[-5:] == 'Smith': names.remove(name) I usually end up doing something like this: toremove = []for name in names: if name[-5:] == 'Smith': toremove.append(name)for name in toremove: names.remove(name)del toremove This is inefficient, fairly ugly and possibly buggy (how does it handle multiple 'John Smith' entries?). Does anyone have a more elegant solution, or at least a more efficient one? How about one that works with dictionaries? | Two easy ways to accomplish just the filtering are: Using filter : names = filter(lambda name: name[-5:] != "Smith", names) Using list comprehensions: names = [name for name in names if name[-5:] != "Smith"] Note that both cases keep the values for which the predicate function evaluates to True , so you have to reverse the logic (i.e. you say "keep the people who do not have the last name Smith" instead of "remove the people who have the last name Smith"). Edit Funny... two people individually posted both of the answers I suggested as I was posting mine. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1892/"
]
} |
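For the dictionary part of the question, the same filtering idea applies. A sketch assuming the names are the keys (the dict() form predates dict comprehensions, so it also works on older Pythons):

people = {'John Smith': 34, 'Jane Doe': 28, 'Anna Smith': 41}

# Build a new dict, keeping entries whose key does not end in 'Smith'.
people = dict((name, age) for name, age in people.items()
              if not name.endswith('Smith'))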
18,449 | For those of us who use standard shared hosting packages, such as GoDaddy or Network Solutions, how do you handle datetime conversions when your hosting server (PHP) and MySQL server are in different time zones? Also, does anybody have some best practice advice for determining what time zone a visitor to your site is in and manipulating a datetime variable appropriately? | As of PHP 5.1.0 you can use the date_default_timezone_set() function to set the default timezone used by all date/time functions in a script. For MySQL (quoted from MySQL Server Time Zone Support page) Before MySQL 4.1.3, the server operates only in the system time zone set at startup. Beginning with MySQL 4.1.3, the server maintains several time zone settings, some of which can be modified at runtime. Of interest to you is per-connection setting of the time zones, which you would use at the beginning of your scripts SET timezone = 'Europe/London'; As for detecting the client timezone setting, you could use a bit of JavaScript to get and save that information to a cookie, and use it on subsequent page reads, to calculate the proper timezone. //Returns the offset (time difference) between Greenwich Mean Time (GMT) //and local time of Date object, in minutes.var offset = new Date().getTimezoneOffset(); document.cookie = 'timezoneOffset=' + escape(offset); Or you could offer users the choice to set their time zones themselves. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2056/"
]
} |
18,450 | Has anyone used Mono, the open source .NET implementation on a large or medium sized project? I'm wondering if it's ready for real world, production environments. Is it stable, fast, compatible, ... enough to use? Does it take a lot of effort to port projects to the Mono runtime, or is it really, really compatible enough to just take and run already written code for Microsoft's runtime? | There are a couple of scenarios to consider: (a) if you are porting an existing application and wondering if Mono is good enough for this task; (b) you are starting to write some new code, and you want to know if Mono is mature enough. For the first case, you can use the Mono Migration Analyzer tool (Moma) to evaluate how far your application is from running on Mono. If the evaluation comes back with flying colors, you should start on your testing and QA and get ready to ship. If your evaluation comes back with a report highlighting features that are missing or differ significantly in their semantics in Mono you will have to evaluate whether the code can be adapted, rewritten or in the worst case whether your application can work with reduced functionality. According to our Moma statistics based on user submissions (this is from memory) about 50% of the applications work out of the box, about 25% require about a week worth of work (refactoring, adapting) another 15% require a serious commitment to redo chunks of your code, and the rest is just not worth bothering porting since they are so incredibly tied to Win32. At that point, either you start from zero, or a business decision will drive the effort to make your code portable, but we are talking months worth of work (at least from the reports we have). If you are starting from scratch, the situation is a lot simpler, because you will only be using the APIs that are present in Mono. As long as you stay with the supported stack (which is pretty much .NET 2.0, plus all the core upgrades in 3.5 including LINQ and System.Core, plus any of the Mono cross-platform APIs) you will be fine. Every once in a while you might run into bugs or limitations in Mono, and you might have to work around them, but that is not different than any other system. As for portability: ASP.NET applications are the easier ones to port, as those have little to no dependencies on Win32 and you can even use SQL server or other popular databases (there are plenty of bundled database providers with Mono). Windows.Forms porting is sometimes trickier because developers like to escape the .NET sandbox and P/Invoke their brains out to configure things as useful as changing the cursor blink rate expressed as two bezier points encoded in BCD form in a wParam. Or some junk like that. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/18450",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2018/"
]
} |
18,465 | In .Net you can read a string value into another data type using either <datatype>.parse or Convert.To<DataType> . I'm not familiar with the fundamentals of parse versus convert so I am always at a loss when asked which one is better/faster/more appropriate. So - which way is best in what type of circumstances? | The Convert.ToXXX() methods are for objects that might be of the correct or similar type, while .Parse() and .TryParse() are specifically for strings: //o is actually a boxed intobject o = 12345;//unboxes itint castVal = (int) 12345;//o is a boxed enumobject o = MyEnum.ValueA;//this will get the underlying int of ValueAint convVal = Convert.ToInt32( o );//now we have a stringstring s = "12345";//this will throw an exception if s can't be parsedint parseVal = int.Parse( s );//alternatively:int tryVal;if( int.TryParse( s, out tryVal ) ) { //do something with tryVal } If you compile with optimisation flags TryParse is very quick - it's the best way to get a number from a string. However if you have an object that might be an int or might be a string Convert.ToInt32 is quicker. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18465",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/149/"
]
} |
18,524 | I have a list of integers, List<Integer> and I'd like to convert all the integer objects into Strings, thus finishing up with a new List<String> . Naturally, I could create a new List<String> and loop through the list calling String.valueOf() for each integer, but I was wondering if there was a better (read: more automatic ) way of doing it? | As far as I know, iterate and instantiate is the only way to do this. Something like (for others potential help, since I'm sure you know how to do this): List<Integer> oldList = .../* Specify the size of the list up front to prevent resizing. */List<String> newList = new ArrayList<>(oldList.size());for (Integer myInt : oldList) { newList.add(String.valueOf(myInt)); } | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/916/"
]
} |
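On Java 8 and later the same conversion collapses to a stream pipeline. A sketch assuming oldList is the List<Integer> from the answer above:

import java.util.List;
import java.util.stream.Collectors;

List<String> newList = oldList.stream()
        .map(String::valueOf)          // Integer -> String
        .collect(Collectors.toList());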
18,533 | I've found myself increasingly unsatisfied with the DataSet/DataTable/DataRow paradigm in .Net, mostly because it's often a couple of steps more complicated than what I really want to do. In cases where I'm binding to controls, DataSets are fine. But in other cases, there seems to be a fair amount of mental overhead. I've played a bit with SqlDataReader, and that seems to be good for simple jaunts through a select, but I feel like there may be some other models lurking in .Net that are useful to learn more about. I feel like all of the help I find on this just uses DataSet by default. Maybe that and DataReader really are the best options. I'm not looking for a best/worst breakdown, just curious what my options are and what experiences you've had with them. Thanks! -Eric Sipple | Since .NET 3.5 came out, I've exclusively used LINQ. It's really that good; I don't see any reason to use any of those old crutches any more. As great as LINQ is, though, I think any ORM system would allow you to do away with that dreck. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18533",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/111/"
]
} |
18,538 | I'd like some shorthand for this: Map rowToMap(row) { def rowMap = [:]; row.columns.each{ rowMap[it.name] = it.val } return rowMap;} given the way the GDK stuff is, I'd expect to be able to do something like: Map rowToMap(row) { row.columns.collectMap{ [it.name,it.val] }} but I haven't seen anything in the docs... am I missing something? or am I just way too lazy? | I recently came across the need to do exactly that: converting a list into a map. This question was posted before Groovy version 1.7.9 came out, so the method collectEntries didn't exist yet. It works exactly like the collectMap method that was proposed : Map rowToMap(row) { row.columns.collectEntries{[it.name, it.val]}} If for some reason you are stuck with an older Groovy version, the inject method can also be used (as proposed here ). This is a slightly modified version that takes only one expression inside the closure (just for the sake of character saving!): Map rowToMap(row) { row.columns.inject([:]) {map, col -> map << [(col.name): col.val]}} The + operator can also be used instead of the << . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/18538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2031/"
]
} |
18,585 | Update: Solved, with code I got it working, see my answer below for the code... Original Post As Tundey pointed out in his answer to my last question , you can bind nearly everything about a windows forms control to ApplicationSettings pretty effortlessly. So is there really no way to do this with form Size? This tutorial says you need to handle Size explicitly so you can save RestoreBounds instead of size if the window is maximized or minimized. However, I hoped I could just use a property like: public Size RestoreSize{ get { if (this.WindowState == FormWindowState.Normal) { return this.Size; } else { return this.RestoreBounds.Size; } } set { ... }} But I can't see a way to bind this in the designer (Size is notably missing from the PropertyBinding list). | I finally came up with a Form subclass that solves this, once and for all. To use it: Inherit from RestorableForm instead of Form. Add a binding in (ApplicationSettings) -> (PropertyBinding) to WindowRestoreState. Call Properties.Settings.Default.Save() when the window is about to close. Now window position and state will be remembered between sessions. Following the suggestions from other posters below, I included a function ConstrainToScreen that makes sure the window fits nicely on the available displays when restoring itself. Code // Consider this code public domain. If you want, you can even tell// your boss, attractive women, or the other guy in your cube that// you wrote it. Enjoy!using System;using System.Windows.Forms;using System.ComponentModel;using System.Drawing;namespace Utilities{ public class RestorableForm : Form, INotifyPropertyChanged { // We invoke this event when the binding needs to be updated. public event PropertyChangedEventHandler PropertyChanged; // This stores the last window position and state private WindowRestoreStateInfo windowRestoreState; // Now we define the property that we will bind to our settings. [Browsable(false)] // Don't show it in the Properties list [SettingsBindable(true)] // But do enable binding to settings public WindowRestoreStateInfo WindowRestoreState { get { return windowRestoreState; } set { windowRestoreState = value; if (PropertyChanged != null) { // If anybody's listening, let them know the // binding needs to be updated: PropertyChanged(this, new PropertyChangedEventArgs("WindowRestoreState")); } } } protected override void OnClosing(CancelEventArgs e) { WindowRestoreState = new WindowRestoreStateInfo(); WindowRestoreState.Bounds = WindowState == FormWindowState.Normal ? Bounds : RestoreBounds; WindowRestoreState.WindowState = WindowState; base.OnClosing(e); } protected override void OnLoad(EventArgs e) { base.OnLoad(e); if (WindowRestoreState != null) { Bounds = ConstrainToScreen(WindowRestoreState.Bounds); WindowState = WindowRestoreState.WindowState; } } // This helper class stores both position and state. // That way, we only have to set one binding. public class WindowRestoreStateInfo { Rectangle bounds; public Rectangle Bounds { get { return bounds; } set { bounds = value; } } FormWindowState windowState; public FormWindowState WindowState { get { return windowState; } set { windowState = value; } } } private Rectangle ConstrainToScreen(Rectangle bounds) { Screen screen = Screen.FromRectangle(WindowRestoreState.Bounds); Rectangle workingArea = screen.WorkingArea; int width = Math.Min(bounds.Width, workingArea.Width); int height = Math.Min(bounds.Height, workingArea.Height); // mmm....minimax int left = Math.Min(workingArea.Right - width, Math.Max(bounds.Left, workingArea.Left)); int top = Math.Min(workingArea.Bottom - height, Math.Max(bounds.Top, workingArea.Top)); return new Rectangle(left, top, width, height); } }} Settings Bindings References SettingsBindableAttribute INotifyPropertyChanged | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18585",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/229/"
]
} |
18,601 | Unit testing and ASP.NET web applications are an ambiguous point in my group. More often than not, good testing practices fall through the cracks and web applications end up going live for several years with no tests. The cause of this pain point generally revolves around the hassle of writing UI automation mid-development. How do you or your organization integrate best TDD practices with web application development? | Unit testing will be achievable if you separate your layers appropriately. As Rob Cooper implied, don't put any logic in your WebForm other than logic to manage your presentation . All the other stuff (logic and persistence layers) should be kept in separate classes, and then you can test those individually. To test the GUI some people like Selenium . Others complain that it is a pain to set up. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1473493/"
]
} |
18,617 | How do you configure tomcat to bind to a single ip address (localhost) instead of all addresses? | Several connectors are configured, and each connector has an optional "address" attribute where you can set the IP address. Edit tomcat/conf/server.xml . Specify a bind address for that connector: <Connector port="8080" protocol="HTTP/1.1" address="127.0.0.1" connectionTimeout="20000" redirectPort="8443" /> | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/18617",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1310/"
]
} |
18,632 | For debugging purposes in a somewhat closed system, I have to output text to a file. Does anyone know of a tool that runs on windows (console based or not) that detects changes to a file and outputs them in real-time? | Tail for Win32 Apache Chainsaw - used this with log4net logs; may require the file to be in a certain format | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/18632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2011/"
]
} |
18,655 | I really need to see some honest, thoughtful debate on the merits of the currently accepted enterprise application design paradigm. I am not convinced that entity objects should exist. By entity objects I mean the typical things we tend to build for our applications, like "Person", "Account", "Order", etc. My current design philosophy is this: All database access must be accomplished via stored procedures. Whenever you need data, call a stored procedure and iterate over a SqlDataReader or the rows in a DataTable (Note: I have also built enterprise applications with Java EE, java folks please substitute the equivalent for my .NET examples) I am not anti-OO. I write lots of classes for different purposes, just not entities. I will admit that a large portion of the classes I write are static helper classes. I am not building toys. I'm talking about large, high volume transactional applications deployed across multiple machines. Web applications, windows services, web services, b2b interaction, you name it. I have used OR Mappers. I have written a few. I have used the Java EE stack, CSLA, and a few other equivalents. I have not only used them but actively developed and maintained these applications in production environments. I have come to the battle-tested conclusion that entity objects are getting in our way, and our lives would be so much easier without them. Consider this simple example: you get a support call about a certain page in your application that is not working correctly, maybe one of the fields is not being persisted like it should be. With my model, the developer assigned to find the problem opens exactly 3 files . An ASPX, an ASPX.CS and a SQL file with the stored procedure. The problem, which might be a missing parameter to the stored procedure call, takes minutes to solve. But with any entity model, you will invariably fire up the debugger, start stepping through code, and you may end up with 15-20 files open in Visual Studio. By the time you step down to the bottom of the stack, you forgot where you started. We can only keep so many things in our heads at one time. Software is incredibly complex without adding any unnecessary layers. Development complexity and troubleshooting are just one side of my gripe. Now let's talk about scalability. Do developers realize that each and every time they write or modify any code that interacts with the database, they need to do a thorough analysis of the exact impact on the database? And not just the development copy, I mean a mimic of production, so you can see that the additional column you now require for your object just invalidated the current query plan and a report that was running in 1 second will now take 2 minutes, just because you added a single column to the select list? And it turns out that the index you now require is so big that the DBA is going to have to modify the physical layout of your files? If you let people get too far away from the physical data store with an abstraction, they will create havoc with an application that needs to scale. I am not a zealot. I can be convinced if I am wrong, and maybe I am, since there is such a strong push towards Linq to Sql, ADO.NET EF, Hibernate, Java EE, etc. Please think through your responses, if I am missing something I really want to know what it is, and why I should change my thinking. [Edit] It looks like this question is suddenly active again, so now that we have the new comment feature I have commented directly on several answers. Thanks for the replies, I think this is a healthy discussion. I probably should have been more clear that I am talking about enterprise applications. I really can't comment on, say, a game that's running on someone's desktop, or a mobile app. One thing I have to put up here at the top in response to several similar answers: orthogonality and separation of concerns often get cited as reasons to go entity/ORM. Stored procedures, to me, are the best example of separation of concerns that I can think of. If you disallow all other access to the database, other than via stored procedures, you could in theory redesign your entire data model and not break any code, so long as you maintained the inputs and outputs of the stored procedures. They are a perfect example of programming by contract (just so long as you avoid "select *" and document the result sets). Ask someone who's been in the industry for a long time and has worked with long-lived applications: how many application and UI layers have come and gone while a database has lived on? How hard is it to tune and refactor a database when there are 4 or 5 different persistence layers generating SQL to get at the data? You can't change anything! ORMs or any code that generates SQL locks your database in stone . | I think it comes down to how complicated the "logic" of the application is, and where you have implemented it. If all your logic is in stored procedures, and all your application does is call those procedures and display the results, then developing entity objects is indeed a waste of time. But for an application where the objects have rich interactions with one another, and the database is just a persistence mechanism, there can be value to having those objects. So, I'd say there is no one-size-fits-all answer. Developers do need to be aware that, sometimes, trying to be too OO can cause more problems than it solves. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/18655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1219/"
]
} |
18,671 | Anyone know a quick easy way to migrate a SQLite3 database to MySQL? | Everyone seems to start off with a few greps and perl expressions and you sorta kinda get something that works for your particular dataset, but you have no idea if it's imported the data correctly or not. I'm seriously surprised nobody's built a solid library that can convert between the two. Here is a list of ALL the differences in SQL syntax that I know about between the two file formats: The lines starting with BEGIN TRANSACTION, COMMIT, sqlite_sequence, and CREATE UNIQUE INDEX are not used in MySQL. SQLite uses CREATE TABLE/INSERT INTO "table_name" and MySQL uses CREATE TABLE/INSERT INTO table_name. MySQL doesn't use quotes inside the schema definition. MySQL uses single quotes for strings inside the INSERT INTO clauses. SQLite and MySQL have different ways of escaping strings inside INSERT INTO clauses. SQLite uses 't' and 'f' for booleans, MySQL uses 1 and 0 (a simple regex for this can fail when you have a string like 'I do, you don't' inside your INSERT INTO). SQLite uses AUTOINCREMENT, MySQL uses AUTO_INCREMENT. Here is a very basic hacked up perl script which works for my dataset and checks for many more of these conditions than other perl scripts I found on the web. No guarantees that it will work for your data, but feel free to modify and post back here (a usage example follows below).

#!/usr/bin/perl
while ($line = <>) {
    if (($line !~ /BEGIN TRANSACTION/) && ($line !~ /COMMIT/) && ($line !~ /sqlite_sequence/) && ($line !~ /CREATE UNIQUE INDEX/)) {
        if ($line =~ /CREATE TABLE \"([a-z_]*)\"(.*)/i) {
            $name = $1;
            $sub = $2;
            $sub =~ s/\"//g;
            $line = "DROP TABLE IF EXISTS $name;\nCREATE TABLE IF NOT EXISTS $name$sub\n";
        }
        elsif ($line =~ /INSERT INTO \"([a-z_]*)\"(.*)/i) {
            $line = "INSERT INTO $1$2\n";
            $line =~ s/\"/\\\"/g;
            $line =~ s/\"/\'/g;
        }
        else {
            $line =~ s/\'\'/\\\'/g;
        }
        $line =~ s/([^\\'])\'t\'(.)/$1THIS_IS_TRUE$2/g;
        $line =~ s/THIS_IS_TRUE/1/g;
        $line =~ s/([^\\'])\'f\'(.)/$1THIS_IS_FALSE$2/g;
        $line =~ s/THIS_IS_FALSE/0/g;
        $line =~ s/AUTOINCREMENT/AUTO_INCREMENT/g;
        print $line;
    }
} | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/534/"
]
} |
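A hedged usage sketch for the script above; the file names (sqlite2mysql.pl, sample.db, target_db) are illustrative, not from the original answer:

# Dump the SQLite database as SQL, convert the dialect, and load it into MySQL.
# Assumes the perl script above was saved as sqlite2mysql.pl.
sqlite3 sample.db .dump | perl sqlite2mysql.pl | mysql -u root -p target_db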
18,676 | I need to generate a random integer between 1 and n (where n is a positive whole number) to use for a unit test. I don't need something overly complicated to ensure true randomness - just an old-fashioned random number. How would I do that? | To get a random integer value between 1 and N (inclusive) you can use the following: CInt(Math.Floor(Rnd() * n)) + 1 Rnd() returns a value in the range [0, 1), so Math.Floor(Rnd() * n) yields 0 through n - 1, and adding 1 shifts that to 1 through n. (The often-quoted CInt(Math.Ceiling(Rnd() * n)) + 1 is off by one: it can return n + 1.) A C# equivalent is sketched below. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/225/"
]
} |
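For C# callers, a minimal equivalent sketch using System.Random; the bound n = 100 is purely illustrative and not from the original answer:

using System;

class RandomExample
{
    static void Main()
    {
        int n = 100;                     // upper bound, inclusive (illustrative value)
        var rng = new Random();
        int value = rng.Next(1, n + 1);  // Next's upper bound is exclusive, hence n + 1
        Console.WriteLine(value);
    }
}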
18,685 | Can anyone tell me how I can display a status message like "12 seconds ago" or "5 minutes ago" etc in a web page? | Here is the php code for the same:

function time_since($since) {
    $chunks = array(
        array(60 * 60 * 24 * 365 , 'year'),
        array(60 * 60 * 24 * 30 , 'month'),
        array(60 * 60 * 24 * 7, 'week'),
        array(60 * 60 * 24 , 'day'),
        array(60 * 60 , 'hour'),
        array(60 , 'minute'),
        array(1 , 'second')
    );

    for ($i = 0, $j = count($chunks); $i < $j; $i++) {
        $seconds = $chunks[$i][0];
        $name = $chunks[$i][1];
        if (($count = floor($since / $seconds)) != 0) {
            break;
        }
    }

    $print = ($count == 1) ? '1 '.$name : "$count {$name}s";
    return $print;
}

The function takes the number of seconds as input and outputs text such as "10 seconds" or "1 minute" (a usage example follows below). | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18685",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184/"
]
} |
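A short usage sketch for the function above; the timestamp value is invented for illustration:

// $posted is the Unix timestamp recorded when the item was created.
$posted = strtotime('2008-08-21 14:00:00');
echo time_since(time() - $posted) . ' ago';  // e.g. "5 minutes ago"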
18,717 | As far as I know, foreign keys (FK) are used to aid the programmer to manipulate data in the correct way. Suppose a programmer is actually doing this in the right manner already, then do we really need the concept of foreign keys? Are there any other uses for foreign keys? Am I missing something here? | Foreign keys help enforce referential integrity at the data level, regardless of which application or script touches the table (a small sketch follows below). They can also improve join performance when the key columns are indexed: some engines (MySQL's InnoDB, for example) index them automatically, while others (such as SQL Server) leave that indexing to you. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/18717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184/"
]
} |
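A minimal T-SQL sketch of the enforcement point; the table and column names are invented for illustration:

CREATE TABLE Customers (
    CustomerID int NOT NULL PRIMARY KEY
);

CREATE TABLE Orders (
    OrderID    int NOT NULL PRIMARY KEY,
    CustomerID int NOT NULL
        REFERENCES Customers (CustomerID)  -- the foreign key
);

-- Fails: there is no customer 42, so the database rejects the orphan row
-- no matter which application (or ad-hoc script) attempts the insert.
INSERT INTO Orders (OrderID, CustomerID) VALUES (1, 42);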
18,754 | I'm writing some documentation in Markdown, and creating a separate file for each section of the doc. I would like to be able to convert all the files to HTML in one go, but I can't find anyone else who has tried the same thing. I'm on a Mac, so I would think a simple bash script should be able to handle it, but I've never done anything in bash and haven't had any luck. It seems like it should be simple to write something so I could just run: markdown-batch ./*.markdown Any ideas? | This is how you would do it in Bash (the quotes around "$i" keep file names with spaces intact): for i in ./*.markdown; do perl markdown.pl --html4tags "$i" > "$i.html"; done Of course, you need the Markdown script. (A variant that also strips the .markdown extension from the output name is sketched below.) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/18754",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2185/"
]
} |
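A hedged variant of the loop above that names foo.markdown's output foo.html rather than foo.markdown.html; it assumes the same markdown.pl script is in the working directory:

#!/bin/bash
# Convert every .markdown file in the current directory to .html.
for f in ./*.markdown; do
    perl markdown.pl --html4tags "$f" > "${f%.markdown}.html"
done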
18,764 | Since both a Table Scan and a Clustered Index Scan essentially scan all records in the table, why is a Clustered Index Scan supposedly better? As an example - what's the performance difference between the following when there are many records?:

declare @temp table(
    SomeColumn varchar(50)
)

insert into @temp
select 'SomeVal'

select * from @temp

-----------------------------

declare @temp table(
    RowID int not null identity(1,1) primary key,
    SomeColumn varchar(50)
)

insert into @temp
select 'SomeVal'

select * from @temp | In a table without a clustered index (a heap table), data pages are not linked together - so traversing pages requires a lookup into the Index Allocation Map. A clustered table, however, has its data pages linked in a doubly linked list - making sequential scans a bit faster. Of course, in exchange, you have the overhead of dealing with keeping the data pages in order on INSERT, UPDATE, and DELETE. A heap table, however, requires a second write to the IAM. If your query has a RANGE operator (e.g.: SELECT * FROM TABLE WHERE Id BETWEEN 1 AND 100), then a clustered table (being in a guaranteed order) would be more efficient - as it could use the index pages to find the relevant data page(s); a sketch of this case follows below. A heap would have to scan all rows, since it cannot rely on ordering. And, of course, a clustered index lets you do a CLUSTERED INDEX SEEK, which is pretty much optimal for performance...a heap with no indexes would always result in a table scan. So: For your example query where you select all rows, the only difference is the doubly linked list a clustered index maintains. This should make your clustered table just a tiny bit faster than a heap with a large number of rows. For a query with a WHERE clause that can be (at least partially) satisfied by the clustered index, you'll come out ahead because of the ordering - so you won't have to scan the entire table. For a query that is not satisfied by the clustered index, you're pretty much even...again, the only difference being that doubly linked list for sequential scanning. In either case, you're suboptimal. For INSERT, UPDATE, and DELETE a heap may or may not win. The heap doesn't have to maintain order, but does require a second write to the IAM. I think the relative performance difference would be negligible, but also pretty data dependent. Microsoft has a whitepaper which compares a clustered index to an equivalent non-clustered index on a heap (not exactly the same as I discussed above, but close). Their conclusion is basically to put a clustered index on all tables. I'll do my best to summarize their results (again, note that they're really comparing a non-clustered index to a clustered index here - but I think it's relatively comparable): INSERT performance: clustered index wins by about 3% due to the second write needed for a heap. UPDATE performance: clustered index wins by about 8% due to the second lookup needed for a heap. DELETE performance: clustered index wins by about 18% due to the second lookup needed and the second delete needed from the IAM for a heap. single SELECT performance: clustered index wins by about 16% due to the second lookup needed for a heap. range SELECT performance: clustered index wins by about 29% due to the random ordering for a heap. concurrent INSERT: heap table wins by 30% under load due to page splits for the clustered index. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18764",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/357/"
]
} |
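A short T-SQL sketch of the range case discussed above; the schema is invented for illustration:

-- With a clustered index on Id, rows are physically ordered by Id, so this
-- range predicate can seek to Id = 1000 and then read pages sequentially.
CREATE TABLE Orders (
    Id    int   NOT NULL,
    Total money NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (Id)
);

SELECT Id, Total
FROM Orders
WHERE Id BETWEEN 1000 AND 1100;  -- clustered index seek, not a full scan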
18,765 | I'm currently working on creating a new C# project that needs to interact with an older C++ application. There is an error enumeration that already exists in the C++ app that I need to use in the C# app. I don't want to just re declare the enumeration in C# because that could cause sync issues down the line if the files aren't updated together. All that being said my question is this: Is there a way for me to take an enumeration declared like so: typedef enum{ eDEVICEINT_ERR_FATAL = 0x10001 ...} eDeviceIntErrCodes; and use it in a C# program like so: eDeviceIntErrCodes.eDEVICEINT_ERR_FATAL | Check out the PInvoke Interop Assistant tool http://www.codeplex.com/clrinterop/Release/ProjectReleases.aspx?ReleaseId=14120 . It's a useful tool for generating PInvoke signatures for native methods. If I feed it your enum it generates this code. There is a command line version of the tool included so you could potentially build an automated process to keep the C# definition of the enum up to date whenever the C++ version changes. public enum eDeviceIntErrCodes { /// eDEVICEINT_ERR_FATAL -> 0x10001 eDEVICEINT_ERR_FATAL = 65537, } | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2191/"
]
} |
18,783 | When you have a query or stored procedure that needs performance tuning, what are some of the first things you try? | Here is the handy-dandy list of things I always give to someone asking me about optimisation. We mainly use Sybase, but most of the advice will apply across the board. SQL Server, for example, comes with a host of performance monitoring / tuning bits, but if you don't have anything like that (and maybe even if you do) then I would consider the following... 99% of problems I have seen are caused by putting too many tables in a join. The fix for this is to do half the join (with some of the tables) and cache the results in a temporary table. Then do the rest of the query joining on that temporary table. Query Optimisation Checklist Run UPDATE STATISTICS on the underlying tables Many systems run this as a scheduled weekly job Delete records from underlying tables (possibly archive the deleted records) Consider doing this automatically once a day or once a week. Rebuild Indexes Rebuild Tables (bcp data out/in) Dump / Reload the database (drastic, but might fix corruption) Build new, more appropriate indexes Run DBCC to see if there is possible corruption in the database Locks / Deadlocks Ensure no other processes are running in the database Especially DBCC Are you using row or page level locking? Lock the tables exclusively before starting the query Check that all processes are accessing tables in the same order Are indices being used appropriately? Joins will only use an index if both expressions are exactly the same data type An index will only be used if the first field(s) on the index are matched in the query Are clustered indices used where appropriate? range data WHERE field between value1 and value2 Small Joins are Nice Joins By default the optimiser will only consider the tables 4 at a time. This means that in joins with more than 4 tables, it has a good chance of choosing a non-optimal query plan Break up the Join Can you break up the join? Pre-select foreign keys into a temporary table Do half the join and put results in a temporary table (see the sketch below) Are you using the right kind of temporary table? #temp tables may perform much better than @table variables with large volumes (thousands of rows). Maintain Summary Tables Build with triggers on the underlying tables Build daily / hourly / etc. Build ad-hoc Build incrementally or teardown / rebuild See what the query plan is with SET SHOWPLAN ON See what's actually happening with SET STATS IO ON Force an index using the pragma: (index: myindex) Force the table order using SET FORCEPLAN ON Parameter Sniffing: Break the Stored Procedure into 2 call proc2 from proc1 allows the optimiser to choose an index in proc2 if @parameter has been changed by proc1 Can you improve your hardware? What time are you running? Is there a quieter time? Is Replication Server (or other non-stop process) running? Can you suspend it? Run it eg. hourly? | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18783",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/357/"
]
} |
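A minimal T-SQL illustration of the "break up the join" advice above; the tables and columns are invented:

-- Instead of one wide many-table join, do half the work into a temp table...
SELECT o.OrderID, o.CustomerID, c.Region
INTO #OrderRegion
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID;

-- ...then join the remaining tables against the much smaller temp table.
SELECT t.OrderID, t.Region, d.ProductID, d.Quantity
FROM #OrderRegion t
JOIN OrderDetails d ON d.OrderID = t.OrderID;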
18,803 | In college I've had numerous design and UML oriented courses, and I recognize that UML can be used to benefit a software project, especially use-case mapping, but is it really practical? I've done a few co-op work terms, and it appears that UML is not used heavily in the industry. Is it worth the time during a project to create UML diagrams? Also, I find that class diagrams are generally not useful, because it's just faster to look at the header file for a class. Specifically which diagrams are the most useful? Edit: My experience is limited to small, under 10 developer projects. Edit: Many good answers, and though not the most verbose, I believe the one selected is the most balanced. | In a sufficiently complex system there are some places where some UML is considered useful. The useful diagrams for a system vary by applicability. But the most widely used ones are: Class Diagrams State Diagrams Activity Diagrams Sequence Diagrams There are many enterprises who swear by them and many who outright reject them as an utter waste of time and effort. It's best not to go overboard and think what's best for the project you are on and pick the stuff that is applicable and makes sense. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2134/"
]
} |
18,836 | I'm looking for shell scripts files installed on my system, but find doesn't work: $ find /usr -name *.sh But I know there are a ton of scripts out there. For instance: $ ls /usr/local/lib/*.sh/usr/local/lib/tclConfig.sh /usr/local/lib/tkConfig.sh Why doesn't find work? | Try quoting the wildcard: $ find /usr -name \*.sh or: $ find /usr -name '*.sh' If you happen to have a file that matches *.sh in the current working directory, the wildcard will be expanded before find sees it. If you happen to have a file named tkConfig.sh in your working directory, the find command would expand to: $ find /usr -name tkConfig.sh which would only find files named tkConfig.sh. If you had more than one file that matches *.sh , you'd get a syntax error from find : $ cd /usr/local/lib$ find /usr -name *.shfind: bad option tkConfig.shfind: path-list predicate-list Again, the reason is that the wildcard expands to both files: $ find /usr -name tclConfig.sh tkConfig.sh Quoting the wildcard prevents it from being prematurely expanded. Another possibility is that /usr or one of its subdirectories is a symlink. find doesn't normally follow links, so you might need the -follow option: $ find /usr -follow -name '*.sh' | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1438/"
]
} |
18,858 | Does anyone here know of good batch file code indenters or beautifiers? Specifically for PHP, JS and SGML-languages. Preferably with options as to style. | | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18858",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2118/"
]
} |
18,861 | So I am writing a registration form and I need the display name to be only numbers, letters and underscores. Have a look at my code and tell me what I'm doing wrong. <form method="post" action="/" onsubmit="return check_form()"> <input type="text" id="display-name" name="display-name" maxlength="255" /> <input type="submit" /></form><script type="text/javascript"><!-- var name_regex = /^([a-zA-Z0-9_])+/ function check_form() { if (!name_regex.test(document.forms[0].elements[0].value)) { document.forms[0].elements[0].focus() alert("Your display name may only contain letters, numbers and underscores") return false } }--></script> It's obviously been trimmed down to not include anything not related to the problem but even this snippet doesn't work. | Your regex /^([a-zA-Z0-9_])+/ Looks for Start of string(check), followed by 1 or more letters, numbers, or underscore (check) And then whatever comes after it doesn't matter. This regex will match anything at all so long as it begins with a letter, number, or underscore If you put a $ at the end, then it will work - $ matches 'end of string', so the only way it can match is if there are only numbers, letters, and underscores between the start and end of the string. /^([a-zA-Z0-9_])+$/ Secondly, I'd suggest using document.getElementById('display-name').value instead of document.forms as it won't break if you rearrange the HTML, and is more 'the commonly accepted standard of what to do' | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18861",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/428190/"
]
} |
18,912 | I know in JavaScript, objects double as hashes, but I have been unable to find a built-in function to get the keys: var h = {a:'b', c:'d'}; I want something like var k = h.keys(); // k = ['a', 'c']; It is simple to write a function myself to iterate over the items and add the keys to an array that I return, but is there a standard cleaner way to do that? I keep feeling it must be a simple built in function that I missed but I can't find it! | There is a function in modern JavaScript (ECMAScript 5) called Object.keys performing this operation:

var obj = { "a" : 1, "b" : 2, "c" : 3};
alert(Object.keys(obj)); // will output ["a", "b", "c"]

Compatibility details can be found here . On the Mozilla site there is also a snippet for backward compatibility:

if (!Object.keys) Object.keys = function(o) {
    if (o !== Object(o))
        throw new TypeError('Object.keys called on non-object');
    var ret = [], p;
    for (p in o)
        if (Object.prototype.hasOwnProperty.call(o, p)) ret.push(p);
    return ret;
}; | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/18912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/238/"
]
} |
18,918 | I'm testing an ASP.NET site. When I execute it, it starts the ASP.NET Development Server and opens up a page. Now I want to test it in the intranet I have. Can I use this server or do I need to configure IIS on this machine? Do I need to configure something for it to work? I've changed the localhost to the correct IP and I opened up the firewall. Thanks | Yes you can! And you don't need IIS. Just use a simple Java TCP tunnel. Download this Java app & just tunnel the traffic back. http://jcbserver.uwaterloo.ca/cs436/software/tgui/tcpTunnelGUI.shtml In command prompt, you'd then run the java app like this... Let's assume you want external access on port 80 and your standard debug environment runs on port 1088... java -jar tunnel.jar 80 localhost 1088 (Also answered here: Accessing asp. net development server external to VM ) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1013/"
]
} |
18,932 | I need to remove duplicate rows from a fairly large SQL Server table (i.e. 300,000+ rows). The rows, of course, will not be perfect duplicates because of the existence of the RowID identity field.

MyTable
    RowID int not null identity(1,1) primary key,
    Col1 varchar(20) not null,
    Col2 varchar(2048) not null,
    Col3 tinyint not null

How can I do this? | Assuming no nulls, you GROUP BY the unique columns, and SELECT the MIN (or MAX) RowId as the row to keep. Then, just delete everything that didn't have a row id (note the T-SQL DELETE ... FROM form, which lets the delete target join against the subquery):

DELETE MyTable
FROM MyTable
LEFT OUTER JOIN (
    SELECT MIN(RowId) as RowId, Col1, Col2, Col3
    FROM MyTable
    GROUP BY Col1, Col2, Col3
) as KeepRows ON MyTable.RowId = KeepRows.RowId
WHERE KeepRows.RowId IS NULL

In case you have a GUID instead of an integer, you can replace MIN(RowId) with CONVERT(uniqueidentifier, MIN(CONVERT(char(36), MyGuidColumn))) | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/18932",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/357/"
]
} |
18,952 | Every project invariably needs some type of reporting functionality. From a foreach loop in your language of choice to a full blown BI platform. To get the job done what tools, widgets, platforms has the group used with success, frustration and failure? | For knocking out fairly "run of the mill" reports, SQL Reporting Services is really quite impressive. For complicated analysis, loading the data (maybe pre-aggregated) into an Excel Pivot table is usually adequate for most users. I've found you can spend a lot of time (and money) building a comprehensive "ad-hoc" reporting suite and after the first month or two of "wow factor", 99% of the reports generated will be the same report with minor differences in a fixed set of parameters. Don't accept when a user says they want "ad-hoc" reports without specifying what goals and targets they're looking for. They are just fishing and they need to actually spend as much time on THINKING about THEIR reporting requirements as YOU would have to spend BUILDING their solution. I've spent too much time building "the system that can report everything" and for it to become out of date or out of favour before it was finished. Much better to get the quick wins out of the way as quick as possible and then spend time "systemising" the most important reports. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/18952",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1293/"
]
} |
18,984 | What are your opinions on developing for the command line first, then adding a GUI on after the fact by simply calling the command line methods? eg. W:\ todo AddTask "meeting with John, re: login peer review" "John's office" "2008-08-22" "14:00" loads todo.exe and calls a function called AddTask that does some validation and throws the meeting in a database. Eventually you add in a screen for this:

============================================================
Event:    [meeting with John, re: login peer review]
Location: [John's office]
Date:     [Fri. Aug. 22, 2008]
Time:     [ 2:00 PM]

[Clear] [Submit]
============================================================

When you click submit, it calls the same AddTask function. Is this considered: a good way to code, something just for the newbies, or horrendous? Addendum: I'm noticing a trend here for "shared library called by both the GUI and CLI executables." Is there some compelling reason why they would have to be separated, other than maybe the size of the binaries themselves? Why not just call the same executable in different ways: "todo /G" when you want the full-on graphical interface "todo /I" for an interactive prompt within todo.exe (scripting, etc) plain old "todo <function>" when you just want to do one thing and be done with it. Addendum 2: It was mentioned that "the way [I've] described things, you [would] need to spawn an executable every time the GUI needs to do something." Again, this wasn't my intent. When I mentioned that the example GUI called "the same AddTask function," I didn't mean the GUI called the command line program each time. I agree that would be totally nasty. I had intended (see first addendum) that this all be held in a single executable, since it was a tiny example, but I don't think my phrasing necessarily precluded a shared library. Also, I'd like to thank all of you for your input. This is something that keeps popping back in my mind and I appreciate the wisdom of your experience. | I would go with building a library with a command line application that links to it. Afterwards, you can create a GUI that links to the same library. Calling a command line from a GUI spawns external processes for each command and is more disruptive to the OS. Also, with a library you can easily do unit tests for the functionality. But as long as your functional code is separate from your command-line interpreter, you can reuse the source for a GUI without needing both front ends at once to perform an operation. (A minimal sketch of the shared-library layout follows below.) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/18984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1588/"
]
} |
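A minimal C# sketch of the shared-library layout the answer recommends; the names TodoLibrary and AddTask echo the question's example, and everything else is illustrative:

// TodoLibrary: one assembly holding the real logic, referenced by both front ends.
namespace TodoLibrary
{
    public static class Tasks
    {
        public static void AddTask(string eventName, string location,
                                   string date, string time)
        {
            // validation and the database insert live here, and only here
        }
    }
}

// CLI front end: todo.exe AddTask "meeting" "John's office" "2008-08-22" "14:00"
public static class Program
{
    public static void Main(string[] args)
    {
        if (args.Length == 5 && args[0] == "AddTask")
            TodoLibrary.Tasks.AddTask(args[1], args[2], args[3], args[4]);
    }
}

// GUI front end: the Submit button's handler calls the very same method.
// submitButton.Click += (s, e) =>
//     TodoLibrary.Tasks.AddTask(eventBox.Text, locationBox.Text,
//                               dateBox.Text, timeBox.Text);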
18,985 | I am writing a batch script in order to beautify JavaScript code. It needs to work on both Windows and Linux . How can I beautify JavaScript code using the command line tools? | First, pick your favorite Javascript based Pretty Print/Beautifier. I prefer the one at http://jsbeautifier.org/ , because it's what I found first. Download its file https://github.com/beautify-web/js-beautify/blob/master/js/lib/beautify.js Second, download and install The Mozilla group's Java based Javascript engine, Rhino . "Install" is a little bit misleading; Download the zip file, extract everything, place js.jar in your Java classpath (or Library/Java/Extensions on OS X). You can then run scripts with an invocation similar to this:

java -cp js.jar org.mozilla.javascript.tools.shell.Main name-of-script.js

Use the Pretty Print/Beautifier from step 1 to write a small shell script that will read in your javascript file and run it through the Pretty Print/Beautifier from step one. For example:

//original code
(function() { ... js_beautify code ... }());

//new code
print(global.js_beautify(readFile(arguments[0])));

Rhino gives javascript a few extra useful functions that don't necessarily make sense in a browser context, but do in a console context. The function print does what you'd expect, and prints out a string. The function readFile accepts a file path string as an argument and returns the contents of that file. You'd invoke the above something like:

java -cp js.jar org.mozilla.javascript.tools.shell.Main beautify.js file-to-pp.js

You can mix and match Java and Javascript in your Rhino run scripts, so if you know a little Java it shouldn't be too hard to get this running with text-streams as well. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/18985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486/"
]
} |
19,014 | I want to use Lucene (in particular, Lucene.NET) to search for email address domains. E.g. I want to search for "@gmail.com" to find all emails sent to a gmail address. Running a Lucene query for "*@gmail.com" results in an error, asterisks cannot be at the start of queries. Running a query for "@gmail.com" doesn't return any matches, because "[email protected]" is seen as a whole word, and you cannot search for just parts of a word. How can I do this? | No one gave a satisfactory answer, so we started poking around Lucene documentation and discovered we can accomplish this using custom Analyzers and Tokenizers. The answer is this: create a WhitespaceAndAtSymbolTokenizer and a WhitespaceAndAtSymbolAnalyzer, then recreate your index using this analyzer. Once you do this, a search for "@gmail.com" will return all gmail addresses, because it's seen as a separate word thanks to the Tokenizer we just created. Here's the source code, it's actually very simple:

class WhitespaceAndAtSymbolTokenizer : CharTokenizer
{
    public WhitespaceAndAtSymbolTokenizer(TextReader input)
        : base(input)
    {
    }

    protected override bool IsTokenChar(char c)
    {
        // Make whitespace characters and the @ symbol be indicators of new words.
        return !(char.IsWhiteSpace(c) || c == '@');
    }
}

internal class WhitespaceAndAtSymbolAnalyzer : Analyzer
{
    public override TokenStream TokenStream(string fieldName, TextReader reader)
    {
        return new WhitespaceAndAtSymbolTokenizer(reader);
    }
}

That's it! Now you just need to rebuild your index and do all searches using this new Analyzer. For example, to write documents to your index:

IndexWriter index = new IndexWriter(indexDirectory, new WhitespaceAndAtSymbolAnalyzer());
index.AddDocument(myDocument);

Performing searches should use the analyzer as well (note the search runs against the IndexSearcher, not the Query object):

IndexSearcher searcher = new IndexSearcher(indexDirectory);
Query query = new QueryParser("TheFieldNameToSearch", new WhitespaceAndAtSymbolAnalyzer()).Parse("@gmail.com");
Hits hits = searcher.Search(query); | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/536/"
]
} |
19,035 | I am working with both amq.js (ActiveMQ) and Google Maps . I load my scripts in this order <head> <meta http-equiv="content-type" content="text/html;charset=UTF-8" /> <title>AMQ & Maps Demo</title> <!-- Stylesheet --> <link rel="stylesheet" type="text/css" href="style.css"></link> <!-- Google APIs --> <script type="text/javascript" src="http://www.google.com/jsapi?key=abcdefg"></script> <!-- Active MQ --> <script type="text/javascript" src="amq/amq.js"></script> <script type="text/javascript">amq.uri='amq';</script> <!-- Application --> <script type="text/javascript" src="application.js"></script></head> However in my application.js it loads Maps fine but I get an error when trying to subscribe to a Topic with AMQ. AMQ depends on prototype which the error console in Firefox says object is not defined. I think I have a problem with using the amq object before the script is finished loading. Is there a way to make sure both scripts load before I use them in my application.js? Google has this nice function call google.setOnLoadCallback(initialize); which works great. I'm not sure amq.js has something like this. | Is there a way to make sure both scripts load before I use them in my application.js? JavaScript files should load sequentially and block so unless the scripts you are depending on are doing something unusual all you should need to do is load application.js after the other files. Non-blocking JavaScript Downloads has some information about how scripts load (and discusses some techniques to subvert the blocking). | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/19035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1992/"
]
} |
19,058 | Example: select ename from emp where hiredate = to_date('01/05/81','dd/mm/yy') and select ename from emp where hiredate = to_date('01/05/81','dd/mm/rr') return different results | http://oracle.ittoolbox.com/groups/technical-functional/oracle-dev-l/difference-between-yyyy-and-rrrr-format-519525 YY allows you to retrieve just two digits of a year, for example, the 99 in 1999. The other digits (19) are automatically assigned to the current century. RR converts two-digit years into four-digit years by rounding. 50-99 are stored as 1950-1999, and dates ending in 00-49 are stored as 2000-2049. RRRR accepts a four-digit input (although not required), and converts two-digit dates as RR does. YYYY accepts 4-digit inputs but doesn't do any date converting. Essentially, your first example will assume that 81 is 2081 whereas the RR one assumes 1981. So the first example should not return any rows as you most likely did not hire any guys after May 1 2081 yet :-) (A quick demonstration follows below.) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/19058",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1782/"
]
} |
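A quick Oracle SQL sketch of the difference; the commented output assumes the current date falls in 2008, as in the original discussion:

SELECT TO_CHAR(TO_DATE('01/05/81', 'DD/MM/YY'), 'DD-MON-YYYY') AS yy_result,
       TO_CHAR(TO_DATE('01/05/81', 'DD/MM/RR'), 'DD-MON-YYYY') AS rr_result
FROM dual;
-- yy_result: 01-MAY-2081  (YY fills in the current century)
-- rr_result: 01-MAY-1981  (RR rounds 50-99 back into the previous century)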
19,122 | Does anyone know how bash handles sending data through pipes? cat file.txt | tail -20 Does this command print all the contents of file.txt into a buffer, which is then read by tail? Or does this command, say, print the contents of file.txt line by line, and then pause at each line for tail to process, and then ask for more data? The reason I ask is that I'm writing a program on an embedded device that basically performs a sequence of operations on some chunk of data, where the output of one operation is sent off as the input of the next operation. I would like to know how linux (bash) handles this so please give me a general answer, not specifically what happens when I run "cat file.txt | tail -20". EDIT: Shog9 pointed out a relevant Wikipedia Article, this didn't lead me directly to the article but it helped me find this: http://en.wikipedia.org/wiki/Pipeline_%28Unix%29#Implementation which did have the information I was looking for. I'm sorry for not making myself clear. Of course you're using a pipe and of course you're using stdin and stdout of the respective parts of the command. I had assumed that was too obvious to state. What I'm asking is how this is handled/implemented. Since both programs cannot run at once, how is data sent from stdin to stdout? What happens if the first program generates data significantly faster than the second program? Does the system just run the first command until either it's terminated or its stdout buffer is full, and then move on to the next program, and so on in a loop until no more data is left to be processed, or is there a more complicated mechanism? | I decided to write a slightly more detailed explanation. The "magic" here lies in the operating system. Both programs do start up at roughly the same time, and run at the same time (the operating system assigns them slices of time on the processor to run) as every other simultaneously running process on your computer (including the terminal application and the kernel). So, before any data gets passed, the processes are doing whatever initialization is necessary. In your example, tail is parsing the '-20' argument and cat is parsing the 'file.txt' argument and opening the file. At some point tail will get to the point where it needs input and it will tell the operating system that it is waiting for input. At some other point (either before or after, it doesn't matter) cat will start passing data to the operating system using stdout. This goes into a buffer in the operating system. The next time tail gets a time slice on the processor after some data has been put into the buffer by cat, it will retrieve some amount of that data (or all of it) which leaves the buffer on the operating system. When the buffer is empty, at some point tail will have to wait for cat to output more data. If cat is outputting data much faster than tail is handling it, the buffer will expand. cat will eventually be done outputting data, but tail will still be processing, so cat will close and tail will process all remaining data in the buffer. The operating system will signal tail that there is no more incoming data with an EOF. Tail will process the remaining data. In this case, tail is probably just receiving all the data into a circular buffer of 20 lines, and when it is signalled by the operating system that there is no more incoming data, it then dumps the last twenty lines to its own stdout, which just gets displayed in the terminal.
Since tail is a much simpler program than cat, it will likely spend most of the time waiting for cat to put data into the buffer. On a system with multiple processors, the two programs will not just be sharing alternating time slices on the same processor core, but likely running at the same time on separate cores. To get into a little more detail, if you open some kind of process monitor (operating system specific) like 'top' in Linux you will see a whole list of running processes, most of which are effectively using 0% of the processor. Most applications, unless they are crunching data, spend most of their time doing nothing. This is good, because it allows other processes to have unfettered access to the processor according to their needs. This is accomplished in basically three ways. A process could get to a sleep(n) style instruction where it basically tells the kernel to wait n milliseconds before giving it another time slice to work with. Most commonly a program needs to wait for something from another program, like 'tail' waiting for more data to enter the buffer. In this case the operating system will wake up the process when more data is available. Lastly, the kernel can preempt a process in the middle of execution, giving some processor time slices to other processes. 'cat' and 'tail' are simple programs. In this example, tail spends most of its time waiting for more data on the buffer, and cat spends most of its time waiting for the operating system to retrieve data from the harddrive. The bottleneck is the speed (or slowness) of the physical medium that the file is stored on. That perceptible delay you might detect when you run this command for the first time is the time it takes for the read heads on the disk drive to seek to the position on the harddrive where 'file.txt' is. If you run the command a second time, the operating system will likely have the contents of file.txt cached in memory, and you will not likely see any perceptible delay (unless file.txt is very large, or the file is no longer cached.) Most operations you do on your computer are IO bound, which is to say that you are usually waiting for data to come from your harddrive, or from a network device, etc. (A tiny C demonstration of the kernel pipe buffer follows below.) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/19122",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/306/"
]
} |
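A tiny C sketch of the mechanism described above; the parent plays 'cat' and the child plays 'tail', with error handling trimmed for brevity:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    pipe(fds);              /* fds[0] = read end, fds[1] = write end */

    if (fork() == 0) {      /* child: the "tail" side */
        char buf[64];
        ssize_t n;
        close(fds[1]);
        /* read() blocks until the parent writes or closes the pipe */
        while ((n = read(fds[0], buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';
            printf("child got: %s", buf);
        }
        return 0;           /* read() returned 0: writer closed, i.e. EOF */
    }

    close(fds[0]);          /* parent: the "cat" side */
    const char *line = "hello through the kernel buffer\n";
    write(fds[1], line, strlen(line));  /* data sits in the OS pipe buffer */
    close(fds[1]);          /* signals EOF to the reader */
    wait(NULL);
    return 0;
}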
19,132 | I'm asking with regards to c#, but I assume it's the same in most other languages. Does anyone have a good definition of expressions and statements and what the differences are? | Expression: Something which evaluates to a value. Example: 1+2/x Statement: A line of code which does something. Example: GOTO 100 In the earliest general-purpose programming languages, like FORTRAN, the distinction was crystal-clear. In FORTRAN, a statement was one unit of execution, a thing that you did. The only reason it wasn't called a "line" was because sometimes it spanned multiple lines. An expression on its own couldn't do anything... you had to assign it to a variable. 1 + 2 / X is an error in FORTRAN, because it doesn't do anything. You had to do something with that expression: X = 1 + 2 / X FORTRAN didn't have a grammar as we know it today—that idea was invented, along with Backus-Naur Form (BNF), as part of the definition of Algol-60. At that point the semantic distinction ("have a value" versus "do something") was enshrined in syntax: one kind of phrase was an expression, and another was a statement, and the parser could tell them apart. Designers of later languages blurred the distinction: they allowed syntactic expressions to do things, and they allowed syntactic statements that had values. The earliest popular language example that still survives is C. The designers of C realized that no harm was done if you were allowed to evaluate an expression and throw away the result. In C, every syntactic expression can be made into a statement just by tacking a semicolon onto the end: 1 + 2 / x; is a totally legit statement even though absolutely nothing will happen. Similarly, in C, an expression can have side-effects—it can change something. 1 + 2 / callfunc(12); because callfunc might just do something useful. Once you allow any expression to be a statement, you might as well allow the assignment operator (=) inside expressions. That's why C lets you do things like callfunc(x = 2); This evaluates the expression x = 2 (assigning the value of 2 to x) and then passes that (the 2) to the function callfunc . This blurring of expressions and statements occurs in all the C-derivatives (C, C++, C#, and Java), which still have some statements (like while ) but which allow almost any expression to be used as a statement (in C# only assignment, call, increment, and decrement expressions may be used as statements; see Scott Wisniewski's answer ). Having two "syntactic categories" (which is the technical name for the sort of thing statements and expressions are) can lead to duplication of effort. For example, C has two forms of conditional, the statement form if (E) S1; else S2; and the expression form E ? E1 : E2 And sometimes people want duplication that isn't there: in standard C, for example, only a statement can declare a new local variable—but this ability is useful enough that the GNU C compiler provides a GNU extension that enables an expression to declare a local variable as well. Designers of other languages didn't like this kind of duplication, and they saw early on that if expressions can have side effects as well as values, then the syntactic distinction between statements and expressions is not all that useful—so they got rid of it. Haskell, Icon, Lisp, and ML are all languages that don't have syntactic statements—they only have expressions. Even the classic structured looping and conditional forms are considered expressions, and they have values—but not very interesting ones. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/19132",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
19,147 | Using C# and WPF under .NET (rather than Windows Forms or console), what is the correct way to create an application that can only be run as a single instance? I know it has something to do with some mythical thing called a mutex, but rarely can I find someone that bothers to stop and explain what one of these is. The code needs to also inform the already-running instance that the user tried to start a second one, and maybe also pass any command-line arguments if any existed. | Here is a very good article regarding the Mutex solution. The approach described by the article is advantageous for two reasons. First, it does not require a dependency on the Microsoft.VisualBasic assembly. If my project already had a dependency on that assembly, I would probably advocate using the approach shown in another answer . But as it is, I do not use the Microsoft.VisualBasic assembly, and I'd rather not add an unnecessary dependency to my project. Second, the article shows how to bring the existing instance of the application to the foreground when the user tries to start another instance. That's a very nice touch that the other Mutex solutions described here do not address. UPDATE As of 8/1/2014, the article I linked to above is still active, but the blog hasn't been updated in a while. That makes me worry that eventually it might disappear, and with it, the advocated solution. I'm reproducing the content of the article here for posterity. The words belong solely to the blog owner at Sanity Free Coding . Today I wanted to refactor some code that prohibited my application from running multiple instances of itself. Previously I had used System.Diagnostics.Process to search for an instance of my myapp.exe in the process list. While this works, it brings on a lot of overhead, and I wanted something cleaner. Knowing that I could use a mutex for this (but never having done it before) I set out to cut down my code and simplify my life. In the class of my application main I created a static named Mutex : static class Program{ static Mutex mutex = new Mutex(true, "{8F6F0AC4-B9A1-45fd-A8CF-72F04E6BDE8F}"); [STAThread] ...} Having a named mutex allows us to stack synchronization across multiple threads and processes which is just the magic I'm looking for. Mutex.WaitOne has an overload that specifies an amount of time for us to wait. Since we're not actually wanting to synchronize our code (more just check if it is currently in use) we use the overload with two parameters: Mutex.WaitOne(Timespan timeout, bool exitContext) . WaitOne returns true if it is able to enter, and false if it isn't. In this case, we don't want to wait at all; If our mutex is being used, skip it, and move on, so we pass in TimeSpan.Zero (wait 0 milliseconds), and set the exitContext to true so we can exit the synchronization context before we try to acquire a lock on it. Using this, we wrap our Application.Run code inside something like this: static class Program{ static Mutex mutex = new Mutex(true, "{8F6F0AC4-B9A1-45fd-A8CF-72F04E6BDE8F}"); [STAThread] static void Main() { if(mutex.WaitOne(TimeSpan.Zero, true)) { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); mutex.ReleaseMutex(); } else { MessageBox.Show("only one instance at a time"); } }} So, if our app is running, WaitOne will return false, and we'll get a message box.
Instead of showing a message box, I opted to utilize a little Win32 to notify my running instance that someone forgot that it was already running (by bringing itself to the top of all the other windows). To achieve this I used PostMessage to broadcast a custom message to every window (the custom message was registered with RegisterWindowMessage by my running application, which means only my application knows what it is) then my second instance exits. The running application instance would receive that notification and process it. In order to do that, I overrode WndProc in my main form and listened for my custom notification. When I received that notification I set the form's TopMost property to true to bring it up on top. Here is what I ended up with: Program.cs static class Program{ static Mutex mutex = new Mutex(true, "{8F6F0AC4-B9A1-45fd-A8CF-72F04E6BDE8F}"); [STAThread] static void Main() { if(mutex.WaitOne(TimeSpan.Zero, true)) { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); mutex.ReleaseMutex(); } else { // send our Win32 message to make the currently running instance // jump on top of all the other windows NativeMethods.PostMessage( (IntPtr)NativeMethods.HWND_BROADCAST, NativeMethods.WM_SHOWME, IntPtr.Zero, IntPtr.Zero); } }} NativeMethods.cs // this class just wraps some Win32 stuff that we're going to useinternal class NativeMethods{ public const int HWND_BROADCAST = 0xffff; public static readonly int WM_SHOWME = RegisterWindowMessage("WM_SHOWME"); [DllImport("user32")] public static extern bool PostMessage(IntPtr hwnd, int msg, IntPtr wparam, IntPtr lparam); [DllImport("user32")] public static extern int RegisterWindowMessage(string message);} Form1.cs (front side partial) public partial class Form1 : Form{ public Form1() { InitializeComponent(); } protected override void WndProc(ref Message m) { if(m.Msg == NativeMethods.WM_SHOWME) { ShowMe(); } base.WndProc(ref m); } private void ShowMe() { if(WindowState == FormWindowState.Minimized) { WindowState = FormWindowState.Normal; } // get our current "TopMost" value (ours will always be false though) bool top = TopMost; // make our form jump to the top of everything TopMost = true; // set it back to whatever it was TopMost = top; }} | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/19147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/483/"
]
} |
19,151 | How would one create an iterative function (or iterator object) in python? | Iterator objects in python conform to the iterator protocol, which basically means they provide two methods: __iter__() and __next__() . The __iter__ returns the iterator object and is implicitly called at the start of loops. The __next__() method returns the next value and is implicitly called at each loop increment. This method raises a StopIteration exception when there are no more values to return, which is implicitly captured by looping constructs to stop iterating. Here's a simple example of a counter:

class Counter:
    def __init__(self, low, high):
        self.current = low - 1
        self.high = high

    def __iter__(self):
        return self

    def __next__(self):  # Python 2: def next(self)
        self.current += 1
        if self.current < self.high:
            return self.current
        raise StopIteration

for c in Counter(3, 9):
    print(c)

This will print:

3
4
5
6
7
8

This is easier to write using a generator, as covered in a previous answer:

def counter(low, high):
    current = low
    while current < high:
        yield current
        current += 1

for c in counter(3, 9):
    print(c)

The printed output will be the same. Under the hood, the generator object supports the iterator protocol and does something roughly similar to the class Counter. David Mertz's article, Iterators and Simple Generators , is a pretty good introduction. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/19151",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/145/"
]
} |
19,201 | Does anyone use Accurev for Source Control Management? We are switching (eventually) from StarTeam to Accurev. My initial impression is that the GUI tool is severely lacking, however the underlying engine, and the branches as streams concept is incredible. The biggest difficulty we are facing is assessing our own DIY tools that interfaced with starteam, and either replacing them with DIY new tools, or finding and purchasing appropriate replacements. Additionally, is anyone using the AccuWork component for Issue management? Starteam had a very nice change request system, and AccuWork does not come close to matching it. We are evaluating either using Accuwork, or buying a 3rd party package such as JIRA. Opinions? | Honestly, I feel like I need to double check to see if I'm using the same tool as these folks that seem to like Accurev. I used Subversion in my previous job and liked it a lot. We never had any issues with it to speak of, and of course the price is right. My biggest problem with Accurev is that it seems they felt the need to be different for different's sake. It uses a completely different vocabulary to express versioning concepts, that even after using it for almost 6 months, feels very foreign to me. It has no fewer than 8 or 9 states any given file can be in, compared to about around 1/2 as many for Subversion. The GUI is crappy and slow, and the IDE integration plugins are sub-par. I had assumed that at some point I would "get" Accurev and see why it's so much better, but that has yet to happen. My advice is to stay away. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/19201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1965/"
]
} |
19,314 | I've been using WWF for a while as part of an internal call center application (ASP.NET), and while learning it was a good practice in understanding how a state machine based workflow system should work, I am definitely not in love with WWF itself. In my opinion it is: Overly complex, especially for use within web apps (all that threaded runtime stuff) Immature (ever worked with that horrible designer?) Anemic in its current feature set Does anyone have a suggestion for a better .NET based workflow framework? Specifically, I am looking for the following features: State machine based (mapping states to available actions) A focus on user permissions (controlling who has access to what actions) The ability to run workflows as timed background tasks (for example, to send out reminders for items that have been sitting in a certain state for x days) That's really all I need. I don't need to be able to "drag and drop" any activities or visually design the flow. I am perfectly comfortable writing actual code once a particular action is triggered. | You could try Simple State Machine . You would have to implement access control and background timers yourself, but that shouldn't be a big deal. SSM was also built out of frustration with WF. There are some other state machine implementations on Codeplex as well. If one of them doesn't fit the bill out of the box, they are open source and should get you close enough. I wholeheartedly agree with you about state machines in WF - they aren't testable, are too complicated, the threading model is peculiar and hard to follow, and I'm not sure a visual designer could have been more poorly conceived for designing state machines graphically. I think this may be because the state machine concept feels tacked onto the WF runtime, which was designed for sequential workflows, something WF does a much better job with, in my opinion. The problem is that state machines are really not the same animal as a sequential work flow, and should have been given a first class implementation of their own, because the warping of WF to make it seem to support them turned out to be more or less unsupportable, if not actually unusable. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19314",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1436/"
]
} |
19,339 | I have a list of 2-item tuples and I'd like to convert them to 2 lists where the first contains the first item in each tuple and the second list holds the second item. For example: original = [('a', 1), ('b', 2), ('c', 3), ('d', 4)]# and I want to become...result = (['a', 'b', 'c', 'd'], [1, 2, 3, 4]) Is there a builtin function that does that? | zip is its own inverse! Provided you use the special * operator. >>> zip(*[('a', 1), ('b', 2), ('c', 3), ('d', 4)])[('a', 'b', 'c', 'd'), (1, 2, 3, 4)] The way this works is by calling zip with the arguments: zip(('a', 1), ('b', 2), ('c', 3), ('d', 4)) … except the arguments are passed to zip directly (after being converted to a tuple), so there's no need to worry about the number of arguments getting too big. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/19339",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/680/"
]
} |
19,353 | I'm tasked with building a .NET client app to detect silence in WAV files. Is this possible with the built-in Windows APIs? Or alternately, any good libraries out there to help with this? | Audio analysis is a difficult thing requiring a lot of complex math (think Fourier Transforms). The question you have to ask is "what is silence". If the audio that you are trying to edit is captured from an analog source, the chances are that there isn't any silence... there will only be areas of soft noise (line hum, ambient background noise, etc). All that said, an algorithm that should work would be to determine a minimum volume (amplitude) threshold and duration (say, < 10 dBA for more than 2 seconds) and then simply do a volume analysis of the waveform looking for areas that meet this criteria (with perhaps some filters for millisecond spikes); a rough sketch of that scan follows below. I've never written this in C#, but this CodeProject article looks interesting; it describes C# code to draw a waveform... that is the same kind of code which could be used to do other amplitude analysis. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19353",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/536/"
]
} |
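A rough C# sketch of the threshold-scan idea described above; it assumes 16-bit mono PCM samples already extracted from the WAV file, and the threshold and duration values are illustrative, not prescriptive:

using System;

static class SilenceScan
{
    // Prints (start, length) sample ranges where the amplitude stays below
    // the threshold for at least minSilentSamples consecutive samples.
    public static void FindSilence(short[] samples, short threshold, int minSilentSamples)
    {
        int runStart = -1;
        for (int i = 0; i <= samples.Length; i++)
        {
            bool quiet = i < samples.Length && Math.Abs((int)samples[i]) < threshold;
            if (quiet && runStart < 0)
            {
                runStart = i; // a quiet run begins
            }
            else if (!quiet && runStart >= 0)
            {
                if (i - runStart >= minSilentSamples)
                    Console.WriteLine("silence at sample {0}, length {1}", runStart, i - runStart);
                runStart = -1; // a loud sample (or the end of the data) closes the run
            }
        }
    }
}

// e.g. for 16-bit mono PCM at 44.1 kHz, two seconds of near-silence:
// SilenceScan.FindSilence(samples, 500, 2 * 44100);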