source_id | question | response | metadata
---|---|---|---|
11,887 | I get an Access is Denied error message when I use the strong name tool to create a new key to sign a .NET assembly. This works just fine on a Windows XP machine but it does not work on my Vista machine. Running PS C:\users\brian\Dev\Projects\BELib\BELib> sn -k keypair.snk reports: Microsoft (R) .NET Framework Strong Name Utility Version 3.5.21022.8. Copyright (c) Microsoft Corporation. All rights reserved. Failed to generate a strong name key pair -- Access is denied. What causes this problem and how can I fix it? Are you running your PowerShell or Command Prompt as an Administrator? I found this to be the first place to look until you get used to User Account Control, or until you turn User Account Control off. Yes, I have tried running PowerShell and the regular command prompt as administrator; the same error message comes up. | Given that you have already tried running PowerShell and the regular command prompt as administrator and the same error comes up, another possible solution could be that you need to give your user account access to the key container located at C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/11887",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1254/"
]
} |
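As an illustration of the advice above (granting your account access to the MachineKeys key-container folder), here is a hedged C# sketch using the .NET access-control APIs. The folder path is the one the answer cites for the XP layout; on Vista the machine key store typically lives under C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys instead, and in practice you can just as easily adjust the folder's permissions from Explorer's Security tab. Changing the ACL itself requires sufficient (elevated) rights.

```csharp
using System.IO;
using System.Security.AccessControl;
using System.Security.Principal;

class GrantMachineKeysAccess
{
    static void Main()
    {
        // Path cited in the answer (Windows XP layout); adjust for your OS.
        string path = @"C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys";

        var dirInfo = new DirectoryInfo(path);
        DirectorySecurity security = dirInfo.GetAccessControl();

        // Grant the current user read/write on the folder and the key files inside it.
        string user = WindowsIdentity.GetCurrent().Name;
        security.AddAccessRule(new FileSystemAccessRule(
            user,
            FileSystemRights.Read | FileSystemRights.Write,
            InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
            PropagationFlags.None,
            AccessControlType.Allow));

        dirInfo.SetAccessControl(security);
    }
}
```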
11,903 | My website will be using only OpenID for authentication. I'd like to pull user details down via attribute exchange, but attribute exchange seems to have caused a lot of grief for StackOverflow. What is the current state of play in the industry? Does any OpenID provider do a decent job of attribute exchange? Should I just steer away from OpenID attribute exchange altogether? How can I deal with inconsistent support for functionality? | Here on Stack Overflow, we're just using the Simple Registration extension for now, as there were some issues with Attribute Exchange (AX). The biggest was OpenID Providers (OP) not agreeing on which attribute type urls to use. The finalized spec for AX says that attribute urls should come from http://www.axschema.org/ However, some OPs, especially our favorite http://myopenid.com , recognize other urls . I wasn't going to keep a list of which ones were naughty and which were nice! The other problem was that most of the OPs I tried just didn't return information when queried with AX - I might have been doing something wrong (happens quite frequently :) ), but I had made relevant details public on my profiles and we're using the latest, most excellent .NET library, DotNetOpenId . We'll definitely revisit AX here on Stack Overflow when we get a little more time, as a seamless user experience is very important to us! | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/11903",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/257/"
]
} |
11,915 | How would you recommend handling RSS feeds in ASP.NET MVC? Using a third-party library? Using the RSS stuff in the BCL? Just making an RSS view that renders the XML? Or something completely different? | Here is what I recommend: Create a class called RssResult that inherits from the abstract base class ActionResult. Override the ExecuteResult method. ExecuteResult has the ControllerContext passed to it by the caller, and with this you can get the data and content type. Once you change the content type to RSS, you will want to serialize the data to RSS (using your own code or another library) and write it to the response. Create an action on a controller that you want to return RSS and set its return type to RssResult. Grab the data from your model based on what you want to return. Then any request to this action will receive RSS of whatever data you choose. That is probably the quickest and most reusable way of returning RSS as a response to a request in ASP.NET MVC. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/11915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/571/"
]
} |
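A hedged sketch of the RssResult class the answer describes. The answer does not specify a serializer, so the use of System.ServiceModel.Syndication below is my own choice, and the FeedController and BuildFeed helper are invented for illustration.

```csharp
using System;
using System.ServiceModel.Syndication;
using System.Web.Mvc;
using System.Xml;

// Custom ActionResult that writes a SyndicationFeed out as RSS 2.0.
public class RssResult : ActionResult
{
    private readonly SyndicationFeed _feed;

    public RssResult(SyndicationFeed feed)
    {
        _feed = feed;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        context.HttpContext.Response.ContentType = "application/rss+xml";
        using (XmlWriter writer = XmlWriter.Create(context.HttpContext.Response.Output))
        {
            new Rss20FeedFormatter(_feed).WriteTo(writer);
        }
    }
}

// Any action that returns an RssResult now serves RSS for that data.
public class FeedController : Controller
{
    public ActionResult LatestPosts()
    {
        SyndicationFeed feed = BuildFeed();   // hypothetical helper that fills the feed from your model
        return new RssResult(feed);
    }

    private SyndicationFeed BuildFeed()
    {
        return new SyndicationFeed("Latest posts", "Example feed", new Uri("http://example.com"));
    }
}
```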
11,930 | How can I determine the IP of my router/gateway in Java? I can get my IP easily enough. I can get my internet IP using a service on a website. But how can I determine my gateway's IP? This is somewhat easy in .NET if you know your way around. But how do you do it in Java? | Java doesn't make this as pleasant as other languages, unfortunately. Here's what I did:

import java.io.*;
import java.util.*;

public class ExecTest {
    public static void main(String[] args) throws IOException {
        // Ask traceroute for only the first hop, which is the gateway
        Process result = Runtime.getRuntime().exec("traceroute -m 1 www.amazon.com");
        BufferedReader output = new BufferedReader(new InputStreamReader(result.getInputStream()));
        String thisLine = output.readLine();
        StringTokenizer st = new StringTokenizer(thisLine);
        st.nextToken();
        String gateway = st.nextToken();
        System.out.printf("The gateway is %s\n", gateway);
    }
}

This presumes that the gateway is the second token and not the third. If it is the third, you need to add an extra st.nextToken(); to advance the tokenizer one more spot. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/11930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/338/"
]
} |
11,950 | I'm looking for the best way to log errors in an ASP.NET application.I want to be able to receive emails when errors occurs in my application, with detailed information about the Exception and the current Request. In my company we used to have our own ErrorMailer, catching everything in the Global.asax Application_Error. It was "Ok" but not very flexible nor configurable. We switched recently to NLog. It's much more configurable, we can define different targets for the errors, filter them, buffer them (not tried yet). It's a very good improvement. But I discovered lately that there's a whole Namespace in the .Net framework for this purpose : System.Web.Management and it can be configured in the healthMonitoring section of web.config. Have you ever worked with .Net health monitoring? What is your solution for error logging? | I use elmah . It has some really nice features and here is a CodeProject article on it. I think the StackOverflow team uses elmah also! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/11950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1130/"
]
} |
11,964 | What should I use to virtualize my desktop, vmx, xen, or vmware? Needs to work on a linux or windows host, sorry virtual pc. @Derek Park: Free as in speech, not beer. I want to be able to make a new virtual machine from my own licensed copies of windows, for that vmware is kind of expensive. | Try VirtualBox . It's free, open source, and it runs on Windows, Linux, Macintosh and OpenSolaris. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/11964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1220/"
]
} |
11,975 | Store everything in GMT? Store everything the way it was entered with an embedded offset? Do the math every time you render? Display relative times such as "1 minute ago"? | You have to store in UTC - if you don't, your historic reporting and behaviour during things like Daylight Savings goes... funny. GMT is a local time, subject to Daylight Savings relative to UTC (which is not). Presentation to users in different time-zones can be a real bastard if you're storing local time. It's easy to adjust to local if your raw data is in UTC - just add your user's offset and you're done! Joel talked about this in one of the podcasts (in a round-about way) - he said to store your data in the highest resolution possible (search for 'fidelity'), because you can always munge it when it goes out again. That's why I say store it as UTC, as local time you need to adjust for anyone who's not in that timezone, and that's a lot of hard work. And you need to store whether, for example, daylight savings was in effect when you stored the time. Yuk. Often in databases in the past I've stored two - UTC for sorting, local time for display. That way neither the user nor the computer get confused. Now, as to display: Sure, you can do the "3 minutes ago" thing, but only if you store UTC - otherwise, data entered in different timezones is going to do things like display as "-4 hours ago", which will freak people out. If you're going to display an actual time, people love to have it in their local time - and if data's being entered in multiple timezones you can only do that with ease if you're storing UTC. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/11975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1220/"
]
} |
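A minimal C# sketch of the store-UTC, convert-only-at-display approach described above. The class and member names are illustrative, not from the answer.

```csharp
using System;

public class AuditEntry
{
    public DateTime CreatedUtc { get; private set; }

    public AuditEntry()
    {
        CreatedUtc = DateTime.UtcNow;   // this UTC value is what goes into the database
    }

    // Convert only when presenting to a user, using that user's time zone.
    public DateTime ToLocal(TimeZoneInfo userZone)
    {
        return TimeZoneInfo.ConvertTimeFromUtc(CreatedUtc, userZone);
    }

    // The "3 minutes ago" style display also works straight off the UTC value.
    public string ToRelative()
    {
        TimeSpan age = DateTime.UtcNow - CreatedUtc;
        return age.TotalMinutes < 1
            ? "just now"
            : string.Format("{0:0} minutes ago", age.TotalMinutes);
    }
}
```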
11,986 | I am running OS X 10.5, Ruby 1.8.6, Rails 2.1, sqlite3-ruby 1.2.2 and I get the following error when trying to rake db:migrate on an app that works find connected to MySQL. rake aborted! no such file to load -- sqlite3/database | You have to store in UTC - if you don't, your historic reporting and behaviour during things like Daylight Savings goes... funny. GMT is a local time, subject to Daylight Savings relative to UTC (which is not). Presentation to users in different time-zones can be a real bastard if you're storing local time. It's easy to adjust to local if your raw data is in UTC - just add your user's offset and you're done! Joel talked about this in one of the podcasts (in a round-about way) - he said to store your data in the highest resolution possible (search for 'fidelity'), because you can always munge it when it goes out again. That's why I say store it as UTC, as local time you need to adjust for anyone who's not in that timezone, and that's a lot of hard work. And you need to store whether, for example, daylight savings was in effect when you stored the time. Yuk. Often in databases in the past I've stored two - UTC for sorting, local time for display. That way neither the user nor the computer get confused. Now, as to display: Sure, you can do the "3 minutes ago" thing, but only if you store UTC - otherwise, data entered in different timezones is going to do things like display as "-4 hours ago", which will freak people out. If you're going to display an actual time, people love to have it in their local time - and if data's being entered in multiple timezones you can only do that with ease if you're storing UTC. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/11986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1396/"
]
} |
12,009 | How can I pipe the new password to smbpasswd so I can automate my installation process. | Thanks to Mark I found the answer: (echo newpassword; echo confirmNewPassword) | smbpasswd -s BTW: (echo oldpasswd; echo newpasswd) | smbpasswd -s does not work. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/12009",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
]
} |
12,045 | I am currently writing a simple, timer-based mini app in C# that performs an action n times every k seconds. I am trying to adopt a test-driven development style, so my goal is to unit test all parts of the app. So, my question is: Is there a good way to unit test a timer-based class? The problem, as I see it, is that there is a big risk that the tests will take uncomfortably long to execute, since they must wait so and so long for the desired actions to happen. Especially if one wants realistic data (seconds), instead of using the minimal time resolution allowed by the framework (1 ms?). I am using a mock object for the action, to register the number of times the action was called, and so that the action takes practically no time. | What I have done is to mock the timer, and also the current system time, so that my events could be triggered immediately, but as far as the code under test was concerned the elapsed time was seconds. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/276/"
]
} |
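A sketch of the "mock the timer" idea from the answer above. ITimer, FakeTimer and RepeatingAction are made-up names for illustration; in a real app the production timer would implement the same interface, and the same injection trick works for a clock abstraction when elapsed wall-time matters.

```csharp
using System;

public interface ITimer
{
    event EventHandler Tick;
    void Start();
}

// Class under test: runs an action on each tick, up to a fixed count.
public class RepeatingAction
{
    private readonly Action _action;
    private int _remaining;

    public RepeatingAction(ITimer timer, int times, Action action)
    {
        _action = action;
        _remaining = times;
        timer.Tick += OnTick;
        timer.Start();
    }

    private void OnTick(object sender, EventArgs e)
    {
        if (_remaining > 0)
        {
            _remaining--;
            _action();
        }
    }
}

// Test double: raises Tick on demand, so a test never has to wait k seconds.
public class FakeTimer : ITimer
{
    public event EventHandler Tick;
    public void Start() { }
    public void FireTick()
    {
        if (Tick != null) Tick(this, EventArgs.Empty);
    }
}
```

A test then constructs a RepeatingAction with a FakeTimer, calls FireTick the desired number of times, and asserts on the mocked action's call count, all without any real waiting.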
12,051 | If I inherit from a base class and want to pass something from the constructor of the inherited class to the constructor of the base class, how do I do that? For example, if I inherit from the Exception class I want to do something like this:

class MyExceptionClass : Exception
{
    public MyExceptionClass(string message, string extraInfo)
    {
        //This is where it's all falling apart
        base(message);
    }
}

Basically what I want is to be able to pass the string message to the base Exception class. | Modify your constructor to the following so that it calls the base class constructor properly:

public class MyExceptionClass : Exception
{
    public MyExceptionClass(string message, string extrainfo) : base(message)
    {
        //other stuff here
    }
}

Note that a constructor is not something that you can call anytime within a method. That's the reason you're getting errors in your call in the constructor body. | {
"score": 12,
"source": [
"https://Stackoverflow.com/questions/12051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/493/"
]
} |
12,075 | I'm sure many readers on SO have used Lutz Roeder 's .NET reflector to decompile their .NET code. I was amazed just how accurately our source code could be recontructed from our compiled assemblies. I'd be interested in hearing how many of you use obfuscation, and for what sort of products? I'm sure that this is a much more important issue for, say, a .NET application that you offer for download over the internet as opposed to something that is built bespoke for a particular client. | I wouldn't worry about it too much. I'd rather focus on putting out an awesome product, getting a good user base, and treating your customers right than worry about the minimal percentage of users concerned with stealing your code or looking at the source. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1078/"
]
} |
12,088 | I wonder if anyone uses commercial/free Java obfuscators on their own commercial product. I know only about one project that actually had an obfuscating step in the ant build step for releases. Do you obfuscate? And if so, why do you obfuscate? Is it really a way to protect the code or is it just a better feeling for the developers/managers? edit: OK, to be exact about my point: Do you obfuscate to protect your IP (your algorithms, the work you've put into your product)? I won't obfuscate for security reasons, that doesn't feel right. So I'm only talking about protecting your application's code against competitors. @staffan has a good point: The reason to stay away from changing code flow is that some of those changes make it impossible for the JVM to efficiently optimize the code. In effect it will actually degrade the performance of your application. | If you do obfuscate, stay away from obfuscators that modify the code by changing code flow and/or adding exception blocks and such to make it hard to disassemble it. To make the code unreadable it is usually enough to just change all names of methods, fields and classes. The reason to stay away from changing code flow is that some of those changes make it impossible for the JVM to efficiently optimize the code. In effect it will actually degrade the performance of your application. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/12088",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/834/"
]
} |
12,144 | OK, so I don't want to start a holy-war here, but we're in the process of trying to consolidate the way we handle our application configuration files and we're struggling to make a decision on the best approach to take. At the moment, every application we distribute is using it's own ad-hoc configuration files, whether it's property files (ini style), XML or JSON (internal use only at the moment!). Most of our code is Java at the moment, so we've been looking at Apache Commons Config , but we've found it to be quite verbose. We've also looked at XMLBeans , but it seems like a lot of faffing around. I also feel as though I'm being pushed towards XML as a format, but my clients and colleagues are apprehensive about trying something else. I can understand it from the client's perspective, everybody's heard of XML, but at the end of the day, shouldn't be using the right tool for the job? What formats and libraries are people using in production systems these days, is anyone else trying to avoid the angle bracket tax ? Edit: really needs to be a cross platform solution: Linux, Windows, Solaris etc. and the choice of library used to interface with configuration files is just as important as the choice of format. | XML XML XML XML. We're talking config files here . There is no "angle bracket tax" if you're not serializing objects in a performance-intense situation. Config files must be human readable and human understandable, in addition to machine readable. XML is a good compromise between the two. If your shop has people that are afraid of that new-fangled XML technology, I feel bad for you. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1030/"
]
} |
12,159 | I have thus far avoided the nightmare that is testing multi-threaded code since it just seems like too much of a minefield. I'd like to ask how people have gone about testing code that relies on threads for successful execution, or just how people have gone about testing those kinds of issues that only show up when two threads interact in a given manner? This seems like a really key problem for programmers today, it would be useful to pool our knowledge on this one imho. | Look, there's no easy way to do this. I'm working on a project that is inherently multithreaded. Events come in from the operating system and I have to process them concurrently. The simplest way to deal with testing complex, multithreaded application code is this: If it's too complex to test, you're doing it wrong. If you have a single instance that has multiple threads acting upon it, and you can't test situations where these threads step all over each other, then your design needs to be redone. It's both as simple and as complex as this. There are many ways to program for multithreading that avoids threads running through instances at the same time. The simplest is to make all your objects immutable. Of course, that's not usually possible. So you have to identify those places in your design where threads interact with the same instance and reduce the number of those places. By doing this, you isolate a few classes where multithreading actually occurs, reducing the overall complexity of testing your system. But you have to realize that even by doing this, you still can't test every situation where two threads step on each other. To do that, you'd have to run two threads concurrently in the same test, then control exactly what lines they are executing at any given moment. The best you can do is simulate this situation. But this might require you to code specifically for testing, and that's at best a half step towards a true solution. Probably the best way to test code for threading issues is through static analysis of the code. If your threaded code doesn't follow a finite set of thread safe patterns, then you might have a problem. I believe Code Analysis in VS does contain some knowledge of threading, but probably not much. Look, as things stand currently (and probably will stand for a good time to come), the best way to test multithreaded apps is to reduce the complexity of threaded code as much as possible. Minimize areas where threads interact, test as best as possible, and use code analysis to identify danger areas. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/12159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/912/"
]
} |
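To make the answer's "make all your objects immutable" advice concrete, here is a small hedged C# sketch (the class is invented for illustration). Once constructed, an instance can be handed to any number of threads without locks, which shrinks the surface that actually needs threading tests down to the few places that still share mutable state.

```csharp
public sealed class PriceQuote
{
    private readonly string _symbol;
    private readonly decimal _price;

    public PriceQuote(string symbol, decimal price)
    {
        _symbol = symbol;
        _price = price;
    }

    public string Symbol { get { return _symbol; } }
    public decimal Price { get { return _price; } }

    // "Changes" produce a new instance instead of mutating shared state,
    // so concurrent readers can never observe a half-updated quote.
    public PriceQuote WithPrice(decimal newPrice)
    {
        return new PriceQuote(_symbol, newPrice);
    }
}
```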
12,176 | Is there any way to include the SVN repository revision number in the version string of a .NET assembly? Something like Major.Minor.SVNRev I've seen mention of doing this with something like CC.NET (although on ASP.NET actually), but is there any way to do it without any extra software? I've done similar things in C/C++ before using build batch scripts, but in was accomplished by reading the version number, then having the script write out a file called "ver.h" everytime with something to the effect of: #define MAJORVER 4#define MINORVER 23#define SOURCEVER 965 We would then use these defines to generate the version string. Is something like this possible for .NET? | Here's and C# example for updating the revision info in the assembly automatically. It is based on the answer by Will Dean, which is not very elaborate. Example : Copy AssemblyInfo.cs to AssemblyInfoTemplate.cs in the project'sfolder Properties . Change the Build Action to None for AssemblyInfoTemplate.cs. Modify the line with the AssemblyFileVersion to: [assembly: AssemblyFileVersion("1.0.0.$WCREV$")] Consider adding: [assembly: AssemblyInformationalVersion("Build date: $WCNOW=%Y-%m-%d %H:%M:%S$; Revision date: $WCDATE=%Y-%m-%d %H:%M:%S$; Revision(s) in working copy: $WCRANGE$$WCMODS?; WARNING working copy had uncommitted modifications:$.")] , which will give details about the revision status of the source the assembly was build from. Add the following Pre-build event to the project file properties: subwcrev "$(SolutionDir)." "$(ProjectDir)Properties\AssemblyInfoTemplate.cs" "$(ProjectDir)Properties\AssemblyInfo.cs" -f Consider adding AssemblyInfo.cs to the svn ignore list. Substituted revision numbers and dates will modify the file, which results in insignificant changes and revisions and $WCMODS$ will evaluate to true. AssemblyInfo.cs must, of course, be included in the project. In response to the objections by Wim Coenen, I noticed that, in contrast to what was suggested by Darryl, the AssemblyFileVersion also does not support numbers above 2^16. The build will complete, but the property File Version in the actual assembly will be AssemblyFileVersion modulo 65536. Thus, 1.0.0.65536 as well as 1.0.0.131072 will yield 1.0.0.0, etc. In this example, there is always the true revision number in the AssemblyInformationalVersion property. You could leave out step 3, if you consider this a significant issue. Edit: some additional info after having used this solution for a while. It now use AssemblyInfo.cst rather than AssemblyInfoTemplate.cs, because it will automatically have Build Action option None , and it will not clutter you Error list, but you'll loose syntax highlighting. I've added two tests to my AssemblyInfo.cst files: #if(!DEBUG) $WCMODS?#error Working copy has uncommitted modifications, please commit all modifications before creating a release build.:$ #endif #if(!DEBUG) $WCMIXED?#error Working copy has multiple revisions, please update to the latest revision before creating a release build.:$ #endif Using this, you will normally have to perform a complete SVN Update, after a commit and before you can do a successful release build. Otherwise, $WCMIXED will be true. This seems to be caused by the fact that the committed files re at head revision after the commit, but other files not. I have had some doubts whether the first parameter to subwcrev, "$(SolutionDir)", which sets the scope for checking svn version info, does always work as desired. 
Maybe it should be $(ProjectDir), if you are content that each individual assembly is at a consistent revision. Addition: to answer the comment by @tommylux, SubWCRev can be used for any file in your project. If you want to display revision info in a web page, you could use this VersionInfo template:

public class VersionInfo
{
    public const int RevisionNumber = $WCREV$;
    public const string BuildDate = "$WCNOW=%Y-%m-%d %H:%M:%S$";
    public const string RevisionDate = "$WCDATE=%Y-%m-%d %H:%M:%S$";
    public const string RevisionsInWorkingCopy = "$WCRANGE$";
    public const bool UncommitedModification = $WCMODS?true:false$;
}

Add a pre-build event just like the one for AssemblyInfo.cst and you will have easy access to all relevant Subversion info. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12176",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/194/"
]
} |
12,243 | I apologize for asking such a generalized question, but it's something that can prove challenging for me. My team is about to embark on a large project that will hopefully drag together all of the random one-off codebases that have evolved through the years. Given that this project will cover standardizing logical entities across the company ("Customer", "Employee"), small tasks, large tasks that control the small tasks, and utility services, I'm struggling to figure out the best way to structure the namespaces and code structure. Though I guess I'm not giving you enough specifics to go on, do you have any resources or advice on how to approach splitting your domains up logically ? In case it helps, most of this functionality will be revealed via web services, and we're a Microsoft shop with all the latest gizmos and gadgets. I'm debating one massive solution with subprojects to make references easier, but will that make it too unwieldy? Should I wrap up legacy application functionality, or leave that completely agnostic in the namespace (making an OurCRMProduct.Customer class versus a generic Customer class, for instance)? Should each service/project have its own BAL and DAL , or should that be an entirely separate assembly that everything references? I don't have experience with organizing such far-reaching projects, only one-offs, so I'm looking for any guidance I can get. | There's a million ways to skin a cat. However, the simplest one is always the best. Which way is the simplest for you? Depends on your requirements. But there are some general rules of thumb I follow. First, reduce the overall number of projects as much as possible. When you compile twenty times a day, that extra minute adds up. If your app is designed for extensibility, consider splitting your assemblies along the lines of design vs. implementation. Place your interfaces and base classes in a public assembly. Create an assembly for your company's implementations of these classes. For large applications, keep your UI logic and business logic separate. SIMPLIFY your solution. If it looks too complex, it probably is. Combine, reduce. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1212/"
]
} |
12,306 | I'm trying to serialize a Type object in the following way:

Type myType = typeof(StringBuilder);
var serializer = new XmlSerializer(typeof(Type));
TextWriter writer = new StringWriter();
serializer.Serialize(writer, myType);

When I do this, the call to Serialize throws the following exception: "The type System.Text.StringBuilder was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically." Is there a way for me to serialize the Type object? Note that I am not trying to serialize the StringBuilder itself, but the Type object containing the metadata about the StringBuilder class. | I wasn't aware that a Type object could be created with only a string containing the fully-qualified name. To get the fully qualified name, you can use the following:

string typeName = typeof(StringBuilder).FullName;

You can then persist this string however needed, then reconstruct the type like this:

Type t = Type.GetType(typeName);

If you need to create an instance of the type, you can do this:

object o = Activator.CreateInstance(t);

If you check the value of o.GetType(), it will be StringBuilder, just as you would expect. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/12306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/767/"
]
} |
12,319 | I'm looking for the equivalent of Windows _wfopen() under Mac OS X. Any idea? I need this in order to port a Windows library that uses wchar* for its file interface. As this is intended to be a cross-platform library, I am unable to rely on how the client application will get the file path and give it to the library. | The POSIX APIs in Mac OS X are usable with UTF-8 strings. In order to convert a wchar_t string to UTF-8, you can use the CoreFoundation framework from Mac OS X. Here is a class that wraps a UTF-8 string generated from a wchar_t string.

class Utf8
{
public:
    Utf8(const wchar_t* wsz): m_utf8(NULL)
    {
        // OS X uses 32-bit wchar
        const int bytes = wcslen(wsz) * sizeof(wchar_t);
        // comp_bLittleEndian is in the lib I use in order to detect PowerPC/Intel
        CFStringEncoding encoding = comp_bLittleEndian ? kCFStringEncodingUTF32LE : kCFStringEncodingUTF32BE;
        CFStringRef str = CFStringCreateWithBytesNoCopy(NULL, (const UInt8*)wsz, bytes, encoding, false, kCFAllocatorNull);
        const int bytesUtf8 = CFStringGetMaximumSizeOfFileSystemRepresentation(str);
        m_utf8 = new char[bytesUtf8];
        CFStringGetFileSystemRepresentation(str, m_utf8, bytesUtf8);
        CFRelease(str);
    }

    ~Utf8()
    {
        if( m_utf8 )
        {
            delete[] m_utf8;
        }
    }

public:
    operator const char*() const { return m_utf8; }

private:
    char* m_utf8;
};

Usage:

const wchar_t* wsz = L"Here is some Unicode content: éà€œæ";
const Utf8 utf8 = wsz;
FILE* file = fopen(utf8, "r");

This will work for reading or writing files. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/268/"
]
} |
12,332 | I am looking for a more technical explanation than the OS calls the function. Is there a website or book? | The .exe file (or equivalent on other platforms) contains an 'entry point' address. To a first approximation, the OS loads the relevant sections of the .EXE file into RAM, and then jumps to the entry point. As others have said, this entry point will not be 'main', but will instead be a part of the runtime library - it will do things like initialising static objects, setting up the argc and argv parameters, setting up standard input, standard output, standard error, etc. When it's done all that, it will call your main() function. When main exits, the runtime goes through an analogous process of passing your return code back to the environment, calling static destructors, calling _atexit routines, etc. If you have Microsoft tools (perhaps not the freebie ones), then you have all the runtime source, and an easy way to look at it is to put a breakpoint on the closing brace of your main() method, and single step back up into the runtime. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/12332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
12,348 | I'm running PHP 5.2.3 on Windows 2000 Server with IIS 5. I'm tryingto get cURL working, so in my php.ini file, I have this line: extension_dir ="F:\PHP\ext" And later, I have: extension=php_curl.dll The file F:\PHP\ext\php_curl.dll exists, but when I try to run any PHPscript, I get this in the error log: PHP Warning: PHP Startup: Unable to load dynamic library 'F:\PHP\ext \php_curl.dll' - The specified module could not be found. in Unknown on line 0 | Problem solved! Although the error message said The specified module could not be found , this is a little misleading -- it's not that it couldn't find php_curl.dll , but rather it couldn't find a module that php_curl.dll required. The 2 DLLs it requires are libeay32.dll and SSLeay32.dll . So, you have to put those 2 DLLs somewhere in your PATH (e.g., C:\Windows\system32 ). That's all there is to it. However, even that did not work for me initially. So I downloaded the Windows zip of the latest version of PHP, which includes all the necessary DLLs. I didn't reinstall PHP, I just copied all of the DLLs in the "ext" folder to my PHP extensions folder (as specified in the extension_dir variable in php.ini ), and I copied the versions of libeay32.dll and SSLeay32.dll from the PHP download into my System32 directory. I also did an iisreset, but I don't know if that was necessary. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/12348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1418/"
]
} |
12,368 | The .NET garbage collector will eventually free up memory, but what if you want that memory back immediately? What code do you need to use in a class MyClass to call MyClass.Dispose() and free up all the used space by variables and objects in MyClass ? | IDisposable has nothing to do with freeing memory. IDisposable is a pattern for freeing unmanaged resources -- and memory is quite definitely a managed resource. The links pointing to GC.Collect() are the correct answer, though use of this function is generally discouraged by the Microsoft .NET documentation. Edit: Having earned a substantial amount of karma for this answer, I feel a certain responsibility to elaborate on it, lest a newcomer to .NET resource management get the wrong impression. Inside a .NET process, there are two kinds of resource -- managed and unmanaged. "Managed" means that the runtime is in control of the resource, while "unmanaged" means that it's the programmer's responsibility. And there really is only one kind of managed resource that we care about in .NET today -- memory. The programmer tells the runtime to allocate memory and after that it's up to the runtime to figure out when the memory can freed. The mechanism that .NET uses for this purpose is called garbage collection and you can find plenty of information about GC on the internet simply by using Google. For the other kinds of resources, .NET doesn't know anything about cleaning them up so it has to rely on the programmer to do the right thing. To this end, the platform gives the programmer three tools: The IDisposable interface and the "using" statement in VB and C# Finalizers The IDisposable pattern as implemented by many BCL classes The first of these allows the programmer to efficiently acquire a resource, use it and then release it all within the same method. using (DisposableObject tmp = DisposableObject.AcquireResource()) { // Do something with tmp}// At this point, tmp.Dispose() will automatically have been called// BUT, tmp may still a perfectly valid object that still takes up memory If "AcquireResource" is a factory method that (for instance) opens a file and "Dispose" automatically closes the file, then this code cannot leak a file resource. But the memory for the "tmp" object itself may well still be allocated. That's because the IDisposable interface has absolutely no connection to the garbage collector. If you did want to ensure that the memory was freed, your only option would be to call GC.Collect() to force a garbage collection. However, it cannot be stressed enough that this is probably not a good idea. It's generally much better to let the garbage collector do what it was designed to do, which is to manage memory. What happens if the resource is being used for a longer period of time, such that its lifespan crosses several methods? Clearly, the "using" statement is no longer applicable, so the programmer would have to manually call "Dispose" when he or she is done with the resource. And what happens if the programmer forgets? If there's no fallback, then the process or computer may eventually run out of whichever resource isn't being properly freed. That's where finalizers come in. A finalizer is a method on your class that has a special relationship with the garbage collector. The GC promises that -- before freeing the memory for any object of that type -- it will first give the finalizer a chance to do some kind of cleanup. So in the case of a file, we theoretically don't need to close the file manually at all. 
We can just wait until the garbage collector gets to it and then let the finalizer do the work. Unfortunately, this doesn't work well in practice because the garbage collector runs non-deterministically. The file may stay open considerably longer than the programmer expects. And if enough files are kept open, the system may fail when trying to open an additional file. For most resources, we want both of these things. We want a convention to be able to say "we're done with this resource now" and we want to make sure that there's at least some chance for the cleanup to happen automatically if we forget to do it manually. That's where the "IDisposable" pattern comes into play. This is a convention that allows IDispose and a finalizer to play nicely together. You can see how the pattern works by looking at the official documentation for IDisposable . Bottom line: If what you really want to do is to just make sure that memory is freed, then IDisposable and finalizers will not help you. But the IDisposable interface is part of an extremely important pattern that all .NET programmers should understand. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/12368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1271/"
]
} |
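Since the answer points to the IDisposable pattern without showing it, here is a hedged sketch of the conventional Dispose(bool) arrangement it refers to. The class name and the IntPtr field are placeholders standing in for whatever unmanaged resource a real class would hold.

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private IntPtr _unmanagedHandle;   // stand-in for some unmanaged resource
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);     // cleanup already done, finalizer not needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // called via Dispose(): also release other managed IDisposable objects here
        }
        // release the unmanaged resource here (runs on both paths)
        _unmanagedHandle = IntPtr.Zero;
        _disposed = true;
    }

    ~ResourceHolder()                  // fallback if Dispose is never called
    {
        Dispose(false);
    }
}
```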
12,374 | We’ve found that the unit tests we’ve written for our C#/C++ code have really paid off. But we still have thousands of lines of business logic in stored procedures, which only really get tested in anger when our product is rolled out to a large number of users. What makes this worse is that some of these stored procedures end up being very long, because of the performance hit when passing temporary tables between SPs. This has prevented us from refactoring to make the code simpler. We have made several attempts at building unit tests around some of our key stored procedures (primarily testing the performance), but have found that setting up the test data for these tests is really hard. For example, we end up copying around test databases. In addition to this, the tests end up being really sensitive to change, and even the smallest change to a stored proc. or table requires a large amount of changes to the tests. So after many builds breaking due to these database tests failing intermittently, we’ve just had to pull them out of the build process. So, the main part of my questions is: has anyone ever successfully written unit tests for their stored procedures? The second part of my questions is whether unit testing would be/is easier with linq? I was thinking that rather than having to set up tables of test data, you could simply create a collection of test objects, and test your linq code in a “linq to objects” situation? (I am a totally new to linq so don’t know if this would even work at all) | I ran into this same issue a while back and found that if I created a simple abstract base class for data access that allowed me to inject a connection and transaction, I could unit test my sprocs to see if they did the work in SQL that I asked them to do and then rollback so none of the test data is left in the db. This felt better than the usual "run a script to setup my test db, then after the tests run do a cleanup of the junk/test data". This also felt closer to unit testing because these tests could be run alone w/out having a great deal of "everything in the db needs to be 'just so' before I run these tests". 
Here is a snippet of the abstract base class used for data access Public MustInherit Class Repository(Of T As Class) Implements IRepository(Of T) Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString Private mConnection As IDbConnection Private mTransaction As IDbTransaction Public Sub New() mConnection = Nothing mTransaction = Nothing End Sub Public Sub New(ByVal connection As IDbConnection, ByVal transaction As IDbTransaction) mConnection = connection mTransaction = transaction End Sub Public MustOverride Function BuildEntity(ByVal cmd As SqlCommand) As List(Of T) Public Function ExecuteReader(ByVal Parameter As Parameter) As List(Of T) Implements IRepository(Of T).ExecuteReader Dim entityList As List(Of T) If Not mConnection Is Nothing Then Using cmd As SqlCommand = mConnection.CreateCommand() cmd.Transaction = mTransaction cmd.CommandType = Parameter.Type cmd.CommandText = Parameter.Text If Not Parameter.Items Is Nothing Then For Each param As SqlParameter In Parameter.Items cmd.Parameters.Add(param) Next End If entityList = BuildEntity(cmd) If Not entityList Is Nothing Then Return entityList End If End Using Else Using conn As SqlConnection = New SqlConnection(mConnectionString) Using cmd As SqlCommand = conn.CreateCommand() cmd.CommandType = Parameter.Type cmd.CommandText = Parameter.Text If Not Parameter.Items Is Nothing Then For Each param As SqlParameter In Parameter.Items cmd.Parameters.Add(param) Next End If conn.Open() entityList = BuildEntity(cmd) If Not entityList Is Nothing Then Return entityList End If End Using End Using End If Return Nothing End FunctionEnd Class next you will see a sample data access class using the above base to get a list of products Public Class ProductRepository Inherits Repository(Of Product) Implements IProductRepository Private mCache As IHttpCache 'This const is what you will use in your app Public Sub New(ByVal cache As IHttpCache) MyBase.New() mCache = cache End Sub 'This const is only used for testing so we can inject a connectin/transaction and have them roll'd back after the test Public Sub New(ByVal cache As IHttpCache, ByVal connection As IDbConnection, ByVal transaction As IDbTransaction) MyBase.New(connection, transaction) mCache = cache End Sub Public Function GetProducts() As System.Collections.Generic.List(Of Product) Implements IProductRepository.GetProducts Dim Parameter As New Parameter() Parameter.Type = CommandType.StoredProcedure Parameter.Text = "spGetProducts" Dim productList As List(Of Product) productList = MyBase.ExecuteReader(Parameter) Return productList End Function 'This function is used in each class that inherits from the base data access class so we can keep all the boring left-right mapping code in 1 place per object Public Overrides Function BuildEntity(ByVal cmd As System.Data.SqlClient.SqlCommand) As System.Collections.Generic.List(Of Product) Dim productList As New List(Of Product) Using reader As SqlDataReader = cmd.ExecuteReader() Dim product As Product While reader.Read() product = New Product() product.ID = reader("ProductID") product.SupplierID = reader("SupplierID") product.CategoryID = reader("CategoryID") product.ProductName = reader("ProductName") product.QuantityPerUnit = reader("QuantityPerUnit") product.UnitPrice = reader("UnitPrice") product.UnitsInStock = reader("UnitsInStock") product.UnitsOnOrder = reader("UnitsOnOrder") product.ReorderLevel = reader("ReorderLevel") productList.Add(product) End While If productList.Count > 0 
Then Return productList End If End Using Return Nothing End FunctionEnd Class And now in your unit test you can also inherit from a very simple base class that does your setup / rollback work - or keep this on a per unit test basis below is the simple testing base class I used Imports System.ConfigurationImports System.DataImports System.Data.SqlClientImports Microsoft.VisualStudio.TestTools.UnitTestingPublic MustInherit Class TransactionFixture Protected mConnection As IDbConnection Protected mTransaction As IDbTransaction Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString <TestInitialize()> _ Public Sub CreateConnectionAndBeginTran() mConnection = New SqlConnection(mConnectionString) mConnection.Open() mTransaction = mConnection.BeginTransaction() End Sub <TestCleanup()> _ Public Sub RollbackTranAndCloseConnection() mTransaction.Rollback() mTransaction.Dispose() mConnection.Close() mConnection.Dispose() End SubEnd Class and finally - the below is a simple test using that test base class that shows how to test the entire CRUD cycle to make sure all the sprocs do their job and that your ado.net code does the left-right mapping correctly I know this doesn't test the "spGetProducts" sproc used in the above data access sample, but you should see the power behind this approach to unit testing sprocs Imports SampleApplication.LibraryImports System.Collections.GenericImports Microsoft.VisualStudio.TestTools.UnitTesting<TestClass()> _Public Class ProductRepositoryUnitTest Inherits TransactionFixture Private mRepository As ProductRepository <TestMethod()> _ Public Sub Should-Insert-Update-And-Delete-Product() mRepository = New ProductRepository(New HttpCache(), mConnection, mTransaction) '** Create a test product to manipulate throughout **' Dim Product As New Product() Product.ProductName = "TestProduct" Product.SupplierID = 1 Product.CategoryID = 2 Product.QuantityPerUnit = "10 boxes of stuff" Product.UnitPrice = 14.95 Product.UnitsInStock = 22 Product.UnitsOnOrder = 19 Product.ReorderLevel = 12 '** Insert the new product object into SQL using your insert sproc **' mRepository.InsertProduct(Product) '** Select the product object that was just inserted and verify it does exist **' '** Using your GetProductById sproc **' Dim Product2 As Product = mRepository.GetProduct(Product.ID) Assert.AreEqual("TestProduct", Product2.ProductName) Assert.AreEqual(1, Product2.SupplierID) Assert.AreEqual(2, Product2.CategoryID) Assert.AreEqual("10 boxes of stuff", Product2.QuantityPerUnit) Assert.AreEqual(14.95, Product2.UnitPrice) Assert.AreEqual(22, Product2.UnitsInStock) Assert.AreEqual(19, Product2.UnitsOnOrder) Assert.AreEqual(12, Product2.ReorderLevel) '** Update the product object **' Product2.ProductName = "UpdatedTestProduct" Product2.SupplierID = 2 Product2.CategoryID = 1 Product2.QuantityPerUnit = "a box of stuff" Product2.UnitPrice = 16.95 Product2.UnitsInStock = 10 Product2.UnitsOnOrder = 20 Product2.ReorderLevel = 8 mRepository.UpdateProduct(Product2) '**using your update sproc '** Select the product object that was just updated to verify it completed **' Dim Product3 As Product = mRepository.GetProduct(Product2.ID) Assert.AreEqual("UpdatedTestProduct", Product2.ProductName) Assert.AreEqual(2, Product2.SupplierID) Assert.AreEqual(1, Product2.CategoryID) Assert.AreEqual("a box of stuff", Product2.QuantityPerUnit) Assert.AreEqual(16.95, Product2.UnitPrice) Assert.AreEqual(10, Product2.UnitsInStock) Assert.AreEqual(20, 
Product2.UnitsOnOrder) Assert.AreEqual(8, Product2.ReorderLevel) '** Delete the product and verify it does not exist **' mRepository.DeleteProduct(Product3.ID) '** The above will use your delete product by id sproc **' Dim Product4 As Product = mRepository.GetProduct(Product3.ID) Assert.AreEqual(Nothing, Product4) End SubEnd Class I know this is a long example, but it helped to have a reusable class for the data access work, and yet another reusable class for my testing so I didn't have to do the setup/teardown work over and over again ;) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1078/"
]
} |
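The answer's example is VB with MSTest; as a compact illustration of the same begin-transaction-then-roll-back idea, here is a hedged C# sketch. The connection string and stored procedure name are placeholders, and the repository plumbing from the answer is omitted.

```csharp
using System.Data;
using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Base fixture: every test runs inside a transaction that is rolled back.
public abstract class TransactionFixture
{
    protected IDbConnection Connection;
    protected IDbTransaction Transaction;

    [TestInitialize]
    public void CreateConnectionAndBeginTran()
    {
        Connection = new SqlConnection("...test database connection string...");
        Connection.Open();
        Transaction = Connection.BeginTransaction();
    }

    [TestCleanup]
    public void RollbackTranAndCloseConnection()
    {
        Transaction.Rollback();   // nothing the test wrote survives
        Transaction.Dispose();
        Connection.Close();
        Connection.Dispose();
    }
}

[TestClass]
public class ProductSprocTests : TransactionFixture
{
    [TestMethod]
    public void InsertSprocShouldAddARow()
    {
        using (IDbCommand cmd = Connection.CreateCommand())
        {
            cmd.Transaction = Transaction;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "spInsertProduct";   // placeholder sproc name
            cmd.Parameters.Add(new SqlParameter("@ProductName", "TestProduct"));
            cmd.ExecuteNonQuery();
        }
        // ...then call the matching "get" sproc inside the same transaction and assert on the result...
    }
}
```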
12,476 | This is a self-explanatory question: Why does this thing bubble into my try catch's even when nothing is wrong? Why is it showing up in my log, hundreds of times? I know its a newb question, but if this site is gonna get search ranking and draw in newbs we have to ask them | This is probably coming from a Response.Redirect call. Check this link for an explanation: http://dotnet.org.za/armand/archive/2004/11/16/7088.aspx (In most cases, calling Response.Redirect(url, false) fixes the problem) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1220/"
]
} |
12,482 | I have CruiseControl.NET Version 1.4 set up on my development server. Whenever a developer checks in code, it makes a compile. Now we're at a place where we can start giving our application to the testers. We'd like to use ClickOnce to distribute the application, with the idea being that when a tester goes to test the application, they have the latest build. I can't find a way to make that happen with CruiseControl.NET. We're using MSBUILD to perform the builds. | We've done this and can give you some pointers to start. 2 things you should be aware of: MSBuild can generate the necessary deployment files for you. MSBuild won't deploy the files to the FTP or UNC share. You'll need a separate step for this. To use MSBuild to generate the ClickOnce manifests, here's the command you'll need to issue: msbuild /target:publish /p:Configuration=Release /p:Platform=AnyCPU; "c:\yourProject.csproj" That will tell MSBuild to build your project and generate ClickOnce deployment files inside the bin\Release\YourProject.publish directory. All that's left is to copy those files to the FTP/UNC share/wherever, and you're all set. You can tell CruiseControl.NET to build using those MSBuild parameters. You'll then need a CruiseControl.NET build task to take the generated deployment files and copy them to the FTP or UNC share. We use a custom little C# console program for this, but you could just as easily use a Powershell script. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/702/"
]
} |
12,489 | How do you scan a directory for folders and files in C? It needs to be cross-platform. | The following POSIX program will print the names of the files in the current directory:

#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <sys/types.h>
#include <dirent.h>

int main (void)
{
    DIR *dp;
    struct dirent *ep;

    dp = opendir ("./");
    if (dp != NULL)
    {
        while ((ep = readdir (dp)) != NULL)
            puts (ep->d_name);
        (void) closedir (dp);
        return 0;
    }
    else
    {
        perror ("Couldn't open the directory");
        return -1;
    }
}

Credit: http://www.gnu.org/software/libtool/manual/libc/Simple-Directory-Lister.html Tested in Ubuntu 16.04. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/12489",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/432/"
]
} |
12,492 | I use emacs to edit my XML files (nxml-mode), and the files, which were generated by machine, don't have any pretty formatting of the tags. I have searched for pretty printing the entire file with indentation and saving it, but wasn't able to find an automatic way. Is there a way? Or at least some editor on Linux which can do it. | You don't even need to write your own function - sgml-mode (a GNU Emacs core module) has a built-in pretty printing function called (sgml-pretty-print ...) which takes region beginning and end arguments. If you are cutting and pasting XML and you find your terminal is chopping the lines in arbitrary places, you can use this pretty printer, which fixes broken lines first. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/12492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448/"
]
} |
12,509 | ...instead of using the Atom syndication format? Atom is a well-defined , general-purpose XML syndication format. RSS is fractured into four different versions. All the major feed readers have supported Atom for as long as I can remember, so why isn't its use more prevalent? Worst of all are sites that provide feeds in both formats - what's the point?! UPDATE (18 August): Interestingly,this site itself is using Atom forits feeds rather than RSS. | The fundamental thing that the Atom creators didn't understand (and that the Atom supporters still don't understand), is that Atom isn't somehow separate from RSS. There's this idea that RSS fractured, and that somehow Atom fixes that problem. But it doesn't. Atom is just another RSS splinter. A new name doesn't change the fact that it's just one more standard competing to do the same job, a job for which any of the competing standards are sufficient. No one outside a fairly small group of people care at all which standard is used. They just want it to work. Atom, RSS 2.0, RSS 1.0, RSS 401(k), whatever. As long as it works, the users are happy. The RSS "brand" very much defines the entire feed category, though, so on the rare occasion that someone does know enough to choose, they will tend to choose RSS, because it's got "the name." They will also tend to choose RSS 2.0, because it's got the bigger number. RSS, and especially RSS 2.0, are very much entrenched in the feed "industry." Atom hasn't taken off because it doesn't bring much except a new name. Why switch away from RSS when it works just fine? And why even bother using Atom on new projects if RSS is sufficient? Switching to a new feed format mostly means extra time spent learning the new format. If nothing else Apple's exclusive use of RSS 2.0 for podcasts means that RSS 2.0 is here for the foreseeable future. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/12509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1450/"
]
} |
12,516 | This morning, I was reading Steve Yegge's: When Polymorphism Fails , when I came across a question that a co-worker of his used to ask potential employees when they came for their interview at Amazon. As an example of polymorphism in action, let's look at the classic "eval" interview question, which (as far as I know) was brought to Amazon by Ron Braunstein. The question is quite a rich one, as it manages to probe a wide variety of important skills: OOP design, recursion, binary trees, polymorphism and runtime typing, general coding skills, and (if you want to make it extra hard) parsing theory. At some point, the candidate hopefully realizes that you can represent an arithmetic expression as a binary tree, assuming you're only using binary operators such as "+", "-", "*", "/". The leaf nodes are all numbers, and the internal nodes are all operators. Evaluating the expression means walking the tree. If the candidate doesn't realize this, you can gently lead them to it, or if necessary, just tell them. Even if you tell them, it's still an interesting problem. The first half of the question, which some people (whose names I will protect to my dying breath, but their initials are Willie Lewis) feel is a Job Requirement If You Want To Call Yourself A Developer And Work At Amazon, is actually kinda hard. The question is: how do you go from an arithmetic expression (e.g. in a string) such as "2 + (2)" to an expression tree. We may have an ADJ challenge on this question at some point. The second half is: let's say this is a 2-person project, and your partner, who we'll call "Willie", is responsible for transforming the string expression into a tree. You get the easy part: you need to decide what classes Willie is to construct the tree with. You can do it in any language, but make sure you pick one, or Willie will hand you assembly language. If he's feeling ornery, it will be for a processor that is no longer manufactured in production. You'd be amazed at how many candidates boff this one. I won't give away the answer, but a Standard Bad Solution involves the use of a switch or case statment (or just good old-fashioned cascaded-ifs). A Slightly Better Solution involves using a table of function pointers, and the Probably Best Solution involves using polymorphism. I encourage you to work through it sometime. Fun stuff! So, let's try to tackle the problem all three ways. How do you go from an arithmetic expression (e.g. in a string) such as "2 + (2)" to an expression tree using cascaded-if's, a table of function pointers, and/or polymorphism? Feel free to tackle one, two, or all three. [update: title modified to better match what most of the answers have been.] 
| Polymorphic Tree Walking, Python version:

#!/usr/bin/python
class Node:
    """base class, you should not process one of these"""
    def process(self):
        raise Exception('you should not be processing a node')

class BinaryNode(Node):
    """base class for binary nodes"""
    def __init__(self, _left, _right):
        self.left = _left
        self.right = _right
    def process(self):
        raise Exception('you should not be processing a binarynode')

class Plus(BinaryNode):
    def process(self):
        return self.left.process() + self.right.process()

class Minus(BinaryNode):
    def process(self):
        return self.left.process() - self.right.process()

class Mul(BinaryNode):
    def process(self):
        return self.left.process() * self.right.process()

class Div(BinaryNode):
    def process(self):
        return self.left.process() / self.right.process()

class Num(Node):
    def __init__(self, _value):
        self.value = _value
    def process(self):
        return self.value

def demo(n):
    print n.process()

demo(Num(2))                                            # 2
demo(Plus(Num(2), Num(5)))                              # 2 + 5
demo(Plus(Mul(Num(2), Num(3)), Div(Num(10), Num(5))))   # (2 * 3) + (10 / 5)

The tests are just building up the binary trees by using constructors. Program structure:
- abstract base class: Node - all nodes inherit from this class
- abstract base class: BinaryNode - all binary operators inherit from this class; the process method does the work of evaluating the expression and returning the result
- binary operator classes: Plus, Minus, Mul, Div - two child nodes, one each for the left-side and right-side subexpressions
- number class: Num - holds a leaf-node numeric value, e.g. 17 or 42 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12516",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/25/"
]
} |
12,533 | Is there a quick one-liner to call datepart in Sql Server and get back the name of the day instead of just the number? select datepart(dw, getdate()); This will return 1-7, with Sunday being 1. I would like 'Sunday' instead of 1. | select datename(weekday, getdate()); | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12533",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1219/"
]
} |
12,556 | It looks interesting and I've played around with it some --- but the development IDE in a web browser seems like it would eventually be a nightmare. Does anyone have experience using it and what are your thoughts? | We evaluated GI a few months ago for a project but didn't end up selecting it.

The IDE-in-a-browser (which is itself built with GI) actually works surprisingly well, though there are some features you normally expect from an editor that it lacks, most notably (and irritatingly) an Undo command. It's also impossible to do things like subdocument includes (practically a necessity for team development) from the IDE, though you can do them manually in the underlying XML and the IDE will respect them.

In the end the main reason we didn't go with it was that it was difficult to make the resulting web application look as good as the designers really wanted. It was relatively easy to build functionality, but the components were very restrictive in look and feel. The way GI renders its own document model to HTML involves a lot of style attributes which makes skinning in CSS all but impossible. It seems to prefer making web applications that look like applications, instead of web applications that look like websites. So it would probably be great for building intranet-type applications where look and feel isn't a huge issue, but I probably wouldn't use it to make a public-facing site.

By the way, for those that don't know, TIBCO GI is a completely separate product from the rest of TIBCO's SOA business integration stuff - General Interface was a separate company that was acquired by TIBCO a couple of years ago.
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1341/"
]
} |
12,565 | In Ruby, what's the difference between {} and [] ? {} seems to be used for both code blocks and hashes. Are [] only for arrays? The documentation isn't very clear. | It depends on the context:

When on their own, or assigning to a variable, [] creates arrays, and {} creates hashes. e.g.

a = [1,2,3] # an array
b = {1 => 2} # a hash

[] can be overridden as a custom method, and is generally used to fetch things from hashes (the standard library sets up [] as a method on hashes, similar to fetch). There is also a convention that it is used as a class method in the same way you might use a static Create method in C# or Java. e.g.

a = {1 => 2}  # create a hash for example
puts a[1]     # same as a.fetch(1), will print 2
Hash[1,2,3,4] # this is a custom class method which creates a new hash

See the Ruby Hash docs for that last example.

This is probably the most tricky one: {} is also syntax for blocks, but only when passed to a method OUTSIDE the argument parens. When you invoke methods without parens, Ruby looks at where you put the commas to figure out where the arguments end (where the parens would have been, had you typed them).

1.upto(2) { puts 'hello' }  # it's a block
1.upto 2 { puts 'hello' }   # syntax error, ruby can't figure out where the function args end
1.upto 2, { puts 'hello' }  # the comma means "argument", so ruby sees it as a hash - this won't work because puts 'hello' isn't a valid hash
 | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/12565",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1470/"
]
} |
12,576 | What can I do to increase the performance/speed of my PHP scripts without installing software on my servers? | Profile. Profile. Profile. I'm not sure if there is anything out there for PHP, but it should be simple to write a little tool to insert profiling information in your code. You will want to profile function times and SQL query times.

So where you have a function:

function foo($stuff) {
    ...
    return ...;
}

I would change it to:

function foo($stuff) {
    trace_push_fn('foo');
    ...
    trace_pop_fn('foo');
    return ...;
}

(This is one of those cases where multiple returns in a function become a hindrance.)

And SQL:

function bar($stuff) {
    trace_push_fn('bar');
    $query = ...;
    trace_push_sql($query);
    mysql_query($query);
    trace_pop_sql($query);
    trace_pop_fn('bar');
    return ...;
}

In the end, you can generate a full trace of the program execution and use all sorts of techniques to identify your bottlenecks.
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
]
} |
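The trace_push_fn / trace_pop_fn helpers in the answer above are left unimplemented. A minimal sketch of one way to write them with microtime(); the global arrays and the report format are illustrative assumptions, not part of the original answer.

<?php
$trace_log = array();   // accumulated [name, elapsed seconds] pairs
$trace_stack = array(); // open timers, most recent call last

function trace_push_fn($name) {
    global $trace_stack;
    // remember when this function started
    $trace_stack[] = array($name, microtime(true));
}

function trace_pop_fn($name) {
    global $trace_stack, $trace_log;
    // close the most recent timer and record its duration
    list($pushedName, $start) = array_pop($trace_stack);
    $trace_log[] = array($pushedName, microtime(true) - $start);
}

function trace_report() {
    global $trace_log;
    foreach ($trace_log as $entry) {
        printf("%-30s %.4f s\n", $entry[0], $entry[1]);
    }
}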
12,592 | Is it possible to write a doctest unit test that will check that an exception is raised? For example, if I have a function foo(x) that is supposed to raise an exception if x < 0 , how would I write the doctest for that? | Yes. You can do it. The doctest module documentation and Wikipedia have an example of it.

>>> x
Traceback (most recent call last):
  ...
NameError: name 'x' is not defined
 | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/12592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/742/"
]
} |
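Applied to the question's foo(x) example, a complete doctest for the failure case might look like this. The ValueError type and message are assumptions for illustration; they just need to match whatever foo actually raises.

def foo(x):
    """Return x unchanged, rejecting negative input.

    >>> foo(4)
    4
    >>> foo(-1)
    Traceback (most recent call last):
      ...
    ValueError: x must be non-negative
    """
    if x < 0:
        raise ValueError("x must be non-negative")
    return x

if __name__ == "__main__":
    import doctest
    doctest.testmod()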
12,593 | I'm developing a C# application that uses a handful of XML files and some classes in System.Xml. A coworker insists on adding the MSXML6 redistributable to our install, along with the .NET framework, but I don't think the .NET framework uses or needs MSXML in any way. I am well aware that using MSXML from .NET is not supported, but I suppose it's theoretically possible for System.Xml itself to wrap MSXML at a low level. I haven't found anything definitive that .NET has its own implementation, but neither can I find anything to suggest it needs MSXML. Help me settle the debate. Does System.Xml use MSXML? | System.Xml doesn't use MSXML6. They are separate XML processing engines. See post here: MSXML 6.0 vs. System.Xml: Schema handling differences | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12593",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1254/"
]
} |
12,613 | I've been using Emacs's sql interactive mode to talk to the MySQL db server and gotten to enjoy it. A developer has set up another db on a new non-default port number but I don't know how to access it using sql-mysql. How do I specify a port number when I'm trying to connect to a database? It would be even better if Emacs can prompt me for a port number and just use the default if I don't specify. Any chances of that? | After digging through the sql.el file, I found a variable that allows me to specify a port when I try to create a connection. This option was added in GNU Emacs 24.1.

sql-mysql-login-params: list of login parameters needed to connect to MySQL.

I added this to my Emacs init file:

(setq sql-mysql-login-params (append sql-mysql-login-params '(port)))

The default port is 0. If you'd like to set that to the default MySQL port you can customize sql-port:

(setq sql-port 3306) ;; default MySQL port

There is a sql-*-login-params variable for all the popular RDBMS systems in GNU Emacs 24.1. sql-port is used for both MySQL and PostgreSQL.
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12613",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/680/"
]
} |
12,633 | I'm trying to parse an INI file using C++. Any tips on what is the best way to achieve this? Should I use the Windows API tools for INI file processing (with which I am totally unfamiliar), an open-source solution or attempt to parse it manually? | You can use the Windows API functions, such as GetPrivateProfileString() and GetPrivateProfileInt() . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/12633",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1467/"
]
} |
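For reference, a small sketch of the Win32 calls named in the answer above. The section names, keys, defaults, and file path here are made up for illustration.

#include <windows.h>
#include <iostream>

int main() {
    char server[256] = {0};

    // Read [database] server=... from settings.ini, with a fallback default.
    GetPrivateProfileStringA("database", "server", "localhost",
                             server, sizeof(server), "C:\\myapp\\settings.ini");

    // Read [database] port=... as an integer, defaulting to 3306.
    UINT port = GetPrivateProfileIntA("database", "port", 3306,
                                      "C:\\myapp\\settings.ini");

    std::cout << "server: " << server << ", port: " << port << "\n";
    return 0;
}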
12,647 | Is there a simple way in Perl that will allow me to determine if a given variable is numeric? Something along the lines of: if (is_number($x)){ ... } would be ideal. A technique that won't throw warnings when the -w switch is being used is certainly preferred. | Use Scalar::Util::looks_like_number() which uses the internal Perl C API's looks_like_number() function, which is probably the most efficient way to do this. Note that the strings "inf" and "infinity" are treated as numbers. Example:

#!/usr/bin/perl
use warnings;
use strict;

use Scalar::Util qw(looks_like_number);

my @exprs = qw(1 5.25 0.001 1.3e8 foo bar 1dd inf infinity);

foreach my $expr (@exprs) {
    print "$expr is", looks_like_number($expr) ? '' : ' not', " a number\n";
}

Gives this output:

1 is a number
5.25 is a number
0.001 is a number
1.3e8 is a number
foo is not a number
bar is not a number
1dd is not a number
inf is a number
infinity is a number

See also:
perldoc Scalar::Util
perldoc perlapi for looks_like_number
 | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/12647",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/872/"
]
} |
12,656 | In a project that I'm about to wrap up, I've written and implemented an object-relational mapping solution for PHP. Before the doubters and dreamers cry out "how on earth?", relax -- I haven't found a way to make late static binding work -- I'm just working around it in the best way that I possibly can. Anyway, I'm not currently using prepared statements for querying, because I couldn't come up with a way to pass a variable number of arguments to the bind_params() or bind_result() methods. Why do I need to support a variable number of arguments, you ask? Because the superclass of my models (think of my solution as a hacked-up PHP ActiveRecord wannabe) is where the querying is defined, and so the find() method, for example, doesn't know how many parameters it would need to bind. Now, I've already thought of building an argument list and passing a string to eval(), but I don't like that solution very much -- I'd rather just implement my own security checks and pass on statements. Does anyone have any suggestions (or success stories) about how to get this done? If you can help me solve this first problem, perhaps we can tackle binding the result set (something I suspect will be more difficult, or at least more resource-intensive if it involves an initial query to determine table structure). | In PHP you can pass a variable number of arguments to a function or method by using call_user_func_array . An example for a method would be: call_user_func_array(array(&$stmt, 'bindparams'), $array_of_params); The function will be called with each member in the array passed as its own argument. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12656",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1344/"
]
} |
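A sketch of how this applies to mysqli's bind_param, which is the usual case behind this question. Note that bind_param takes its variables by reference, so the array members have to be references when passed through call_user_func_array. The table and column names are made up for illustration, and error handling is omitted.

<?php
$mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');
$stmt = $mysqli->prepare('SELECT id FROM things WHERE color = ? AND size = ?');

$types  = 'ss';
$values = array('red', 'large');

// Build the argument list: the type string first, then each value by reference.
$params = array($types);
foreach ($values as $key => $value) {
    $params[] = &$values[$key];
}

call_user_func_array(array($stmt, 'bind_param'), $params);
$stmt->execute();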
12,661 | What's the most efficient way to resize large images in PHP? I'm currently using the GD function imagecopyresampled to take high resolution images, and cleanly resize them down to a size for web viewing (roughly 700 pixels wide by 700 pixels tall). This works great on small (under 2 MB) photos and the entire resize operation takes less than a second on the server. However, the site will eventually service photographers who may be uploading images up to 10 MB in size (or images up to 5000x4000 pixels in size). Doing this kind of resize operation with large images tends to increase the memory usage by a very large margin (larger images can spike the memory usage for the script past 80 MB). Is there any way to make this resize operation more efficient? Should I be using an alternate image library such as ImageMagick ? Right now, the resize code looks something like this:

function makeThumbnail($sourcefile, $endfile, $thumbwidth, $thumbheight, $quality) {
    // Takes the sourcefile (path/to/image.jpg) and makes a thumbnail from it
    // and places it at endfile (path/to/thumb.jpg).

    // Load image and get image size.
    $img = imagecreatefromjpeg($sourcefile);
    $width = imagesx($img);
    $height = imagesy($img);

    if ($width > $height) {
        $newwidth = $thumbwidth;
        $divisor = $width / $thumbwidth;
        $newheight = floor($height / $divisor);
    } else {
        $newheight = $thumbheight;
        $divisor = $height / $thumbheight;
        $newwidth = floor($width / $divisor);
    }

    // Create a new temporary image.
    $tmpimg = imagecreatetruecolor($newwidth, $newheight);

    // Copy and resize old image into new image.
    imagecopyresampled($tmpimg, $img, 0, 0, 0, 0, $newwidth, $newheight, $width, $height);

    // Save thumbnail into a file.
    imagejpeg($tmpimg, $endfile, $quality);

    // Release the memory.
    imagedestroy($tmpimg);
    imagedestroy($img);
}

 | People say that ImageMagick is much faster. The best you can do is compare both libraries and measure that:

Prepare 1000 typical images. Write two scripts -- one for GD, one for ImageMagick. Run both of them a few times. Compare the results (total execution time, CPU and I/O usage, resulting image quality).

What is best for everyone else might not be the best for you. Also, in my opinion, ImageMagick has a much better API interface.
 | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/12661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1477/"
]
} |
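If you want to try the ImageMagick route the answer recommends, a rough equivalent of the question's makeThumbnail() using the Imagick extension might look like the sketch below. It assumes the Imagick PECL extension is available, skips error handling, and is not a drop-in replacement.

<?php
function makeThumbnailImagick($sourcefile, $endfile, $thumbwidth, $thumbheight, $quality) {
    $im = new Imagick($sourcefile);

    // Fit inside the bounding box while preserving the aspect ratio.
    $im->thumbnailImage($thumbwidth, $thumbheight, true);

    $im->setImageCompressionQuality($quality);
    $im->writeImage($endfile);
    $im->clear();
}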
12,702 | I have a WCF service from which I want to return a DataTable. I know that this is often a highly-debated topic, as far as whether or not returning DataTables is a good practice. Let's put that aside for a moment. When I create a DataTable from scratch, as below, there are no problems whatsoever. The table is created, populated, and returned to the client, and all is well:

[DataContract]
public DataTable GetTbl()
{
    DataTable tbl = new DataTable("testTbl");
    for(int i=0;i<100;i++)
    {
        tbl.Columns.Add(i);
        tbl.Rows.Add(new string[]{"testValue"});
    }
    return tbl;
}

However, as soon as I go out and hit the database to create the table, as below, I get a CommunicationException "The underlying connection was closed: The connection was closed unexpectedly."

[DataContract]
public DataTable GetTbl()
{
    DataTable tbl = new DataTable("testTbl");
    //Populate table with SQL query
    return tbl;
}

The table is being populated correctly on the server side. It is significantly smaller than the test table that I looped through and returned, and the query is small and fast - there is no issue here with timeouts or large data transfer. The same exact functions and DataContracts/ServiceContracts/BehaviorContracts are being used. Why would the way that the table is being populated have any bearing on the table returning successfully? | For anyone having similar problems, I have solved my issue. It was several-fold.

As Darren suggested and Paul backed up, the Max..Size properties in the configuration needed to be enlarged. The SvcTraceViewer utility helped in determining this, but it still does not always give the most helpful error messages. It also appears that when the Service Reference is updated on the client side, the configuration will sometimes not update properly (e.g. changing config values on the server will not always properly update on the client. I had to go in and change the Max..Size properties multiple times on both the client and server sides in the course of my debugging).

For a DataTable to be serializable, it needs to be given a name. The default constructor does not give the table a name, so:

return new DataTable();

will not be serializable, while:

return new DataTable("someName");

will name the table whatever is passed as the parameter. Note that a table can be given a name at any time by assigning a string to the TableName property of the DataTable.

var table = new DataTable();
table.TableName = "someName";

Hopefully that will help someone.
 | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/12702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/940/"
]
} |
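For readers wondering what "the Max..Size properties" above refers to, this is roughly the shape of the enlarged binding configuration in app.config / web.config. The numbers and binding name are illustrative assumptions, not the poster's actual settings.

<basicHttpBinding>
  <binding name="LargeMessageBinding"
           maxReceivedMessageSize="10485760"
           maxBufferSize="10485760">
    <!-- quotas that commonly need raising when returning large DataTables -->
    <readerQuotas maxArrayLength="10485760"
                  maxStringContentLength="10485760" />
  </binding>
</basicHttpBinding>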
12,765 | This is driving me crazy. I have this one php file on a test server at work which does not work. I kept deleting stuff from it till it became <?print 'Hello';?> and it outputs Hello. If I create a new file and copy / paste the same script into it, it works! Why does this one file give me the strange characters all the time? | That's the BOM (Byte Order Mark) you are seeing. In your editor, there should be a way to force saving without a BOM, which will remove the problem. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
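If you cannot control the editor, you can also strip the BOM from the file itself. The UTF-8 BOM is the three bytes 0xEF 0xBB 0xBF at the very start of the file; a small sketch (the file name is just for illustration, and there is no error handling):

<?php
$path = 'troublesome.php';
$contents = file_get_contents($path);

// Remove a leading UTF-8 BOM if present, then save the file back.
if (substr($contents, 0, 3) === "\xEF\xBB\xBF") {
    file_put_contents($path, substr($contents, 3));
}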
12,794 | I'm trying to use jQuery to format code blocks, specifically to add a <pre> tag inside the <code> tag: $(document).ready(function() { $("code").wrapInner("<pre></pre>");}); Firefox applies the formatting correctly, but IE puts the entire code block on one line. If I add an alert alert($("code").html()); I see that IE has inserted some additional text into the pre tag: <PRE jQuery1218834632572="null"> If I reload the page, the number following jQuery changes. If I use wrap() instead of wrapInner() , to wrap the <pre> outside the <code> tag, both IE and Firefox handle it correctly. But shouldn't <pre> work inside <code> as well? I'd prefer to use wrapInner() because I can then add a CSS class to the <pre> tag to handle all formatting, but if I use wrap() , I have to put page formatting CSS in the <pre> tag and text/font formatting in the <code> tag, or Firefox and IE both choke. Not a huge deal, but I'd like to keep it as simple as possible. Has anyone else encountered this? Am I missing something? | That's the difference between block and inline elements. pre is a block level element . It's not legal to put it inside a code tag, which can only contain inline content . Because browsers have to support whatever godawful tag soup they might find on the real web, Firefox tries to do what you mean. IE happens to handle it differently, which is fine by the spec; behavior in that case is unspecified, because it should never happen. Could you instead replace the code element with the pre ? (Because of the block/inline issue, technically that should only work if the elements are inside an element with "flow" content , but the browsers might do what you want anyway.) Why is it a code element in the first place, if you want pre 's behavior? You could also give the code element pre 's whitespace preserving power with the CSS white-space: pre , but apparently IE 6 only honors that in Strict Mode . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/311/"
]
} |
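A small sketch of the CSS route the answer mentions: keep the code element, make it block-level, and preserve its whitespace with white-space: pre. The class name is made up for illustration.

/* keep the <code> element, make it block-level, and preserve whitespace */
code.preformatted {
    display: block;
    white-space: pre;
}

Then, instead of wrapInner(), just add the class:

$(document).ready(function() {
    $("code").addClass("preformatted");
});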
12,836 | We have our own ORM we use here, and provide strongly typed wrappers for all of our db tables. We also allow weakly typed ad-hoc SQL to be executed, but these queries still go through the same class for getting values out of a data reader. In tweaking that class to work with Oracle, we've come across an interesting question. Is it better to use DBNull.Value, or null? Are there any benefits to using DBNull.Value? It seems more "correct" to use null, since we've separated ourselves from the DB world, but there are implications (you can't just blindly ToString() when a value is null for example) so its definitely something we need to make a conscious decision about. | I find it better to use null, instead of DB null. The reason is because, as you said, you're separating yourself from the DB world. It is generally good practice to check reference types to ensure they aren't null anyway. You're going to be checking for null for things other than DB data, and I find it is best to keep consistency across the system, and use null, not DBNull . In the long run, architecturally I find it to be the better solution. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/489/"
]
} |
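In practice "use null" usually means translating at the data-access boundary, so DBNull never leaks past the reader or into your domain objects. A minimal hedged sketch; the column and parameter names are made up for illustration.

using System;
using System.Data;
using System.Data.SqlClient;

static class DbNullMapping
{
    // Reading: convert DBNull to a plain null as soon as the value leaves the reader.
    public static string GetStringOrNull(IDataRecord reader, string column)
    {
        int i = reader.GetOrdinal(column);
        return reader.IsDBNull(i) ? null : reader.GetString(i);
    }

    // Writing: convert back to DBNull only when building the command parameter.
    public static void AddParam(SqlCommand command, string name, string value)
    {
        command.Parameters.AddWithValue(name, (object)value ?? DBNull.Value);
    }
}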
12,843 | I have two separate mercurial repositories. At this point it makes sense that they "become one" because I want to work on the two projects simultaneously. I'd really like the two projects to each be a subdirectory in the new repository. How do I merge the two projects? Is this a good idea, or should Ikeep them separate? It seems I ought to be able to push from one repository to the other... Maybe this is really straight forward? | I was able to combine my two repositories in this way: Use hg clone first_repository to clone one of the repositories. Use hg pull -f other_repository to pull the code in from the other repository. The -f (force) flag on the pull is the key -- it says to ignore the fact that the two repositories are not from the same source. Here are the docs for this feature. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/12843",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/814/"
]
} |
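A fuller command sequence based on the answer above, including the merge step and the optional move into subdirectories that the question asks about. The repository names are placeholders; try this on copies of your repositories first.

# Start from a copy of the first repository.
hg clone project-a combined
cd combined

# Force-pull the unrelated second repository and merge the two heads.
# (-f is required because the repositories share no common history.)
hg pull -f ../project-b
hg merge
hg commit -m "Merge project-b into the combined repository"

# Optionally move files into per-project subdirectories afterwards,
# so each original project lives in its own folder.
hg rename somefile.txt project-a/somefile.txt
hg commit -m "Move project A files under project-a/"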
12,865 | Got a bluescreen in windows while cloning a mercurial repository. After reboot, I now get this message for almost all hg commands: c:\src\>hg commitwaiting for lock on repository c:\src\McVrsServer held by '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'interrupted! Google is no help. Any tips? | When "waiting for lock on repository", delete the repository file: .hg/wlock (or it may be in .hg/store/lock ) When deleting the lock file, you must make sure nothing else is accessing the repository. (If the lock is a string of zeros or blank, this is almost certainly true). | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/12865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/814/"
]
} |
12,870 | This is a nasty one for me... I'm a PHP guy working in Java on a JSP project. I know how to do what I'm attempting through too much code and a complete lack of finesse. I'd prefer to do it right. Here is the situation: I'm writing a small display to show customers what days they can water their lawns based on their watering group (ABCDE) and what time of year it is. Our seasons look like this:

Summer (5-1 to 8-31)
Spring (3-1 to 4-30)
Fall (9-1 to 10-31)
Winter (11-1 to 2-28)

An example might be: If I'm in group A, here would be my allowed times:

Winter: Mondays only
Spring: Tues, Thurs, Sat
Summer: Any Day
Fall: Tues, Thurs, Sat

If I was writing this in PHP I would use arrays like this:

//M=Monday,t=Tuesday,T=Thursday.... etc
$schedule["A"]["Winter"]='M';
$schedule["A"]["Spring"]='tTS';
$schedule["A"]["Summer"]='Any';
$schedule["A"]["Fall"]='tTS';
$schedule["B"]["Winter"]='t';

I COULD make the days arrays (array("Tuesday","Thursday","Saturday")) etc, but it is not necessary for what I'm really trying to accomplish. I will also need to setup arrays to determine what season I'm in:

$seasons["Summer"]["start"]=0501;
$seasons["Summer"]["end"]=0801;

Can anyone suggest a really cool way to do this? I will have today's date and the group letter. I will need to get out of my function a day (M) or a series of days (tTS), (Any). | You could do essentially the same code with Hashtables (or some other Map):

Hashtable<String, Hashtable<String, String>> schedule = new Hashtable<String, Hashtable<String, String>>();
schedule.put("A", new Hashtable<String, String>());
schedule.put("B", new Hashtable<String, String>());
schedule.put("C", new Hashtable<String, String>());
schedule.put("D", new Hashtable<String, String>());
schedule.put("E", new Hashtable<String, String>());
schedule.get("A").put("Winter", "M");
schedule.get("A").put("Spring", "tTS");
// Etc...

Not as elegant, but then again, Java isn't a dynamic language, and it doesn't have hashes on the language level. Note: You might be able to do a better solution, this just popped in my head as I read your question.
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/172/"
]
} |
12,890 | I have a large database of normalized order data that is becoming very slow to query for reporting. Many of the queries that I use in reports join five or six tables and are having to examine tens or hundreds of thousands of lines. There are lots of queries and most have been optimized as much as possible to reduce server load and increase speed. I think it's time to start keeping a copy of the data in a denormalized format. Any ideas on an approach? Should I start with a couple of my worst queries and go from there? | I know more about mssql than mysql, but I don't think the number of joins or number of rows you are talking about should cause you too many problems with the correct indexes in place. Have you analyzed the query plan to see if you are missing any? http://dev.mysql.com/doc/refman/5.0/en/explain.html That being said, once you are satisfied with your indexes and have exhausted all other avenues, de-normalization might be the right answer. If you just have one or two queries that are problems, a manual approach is probably appropriate, whereas some sort of data warehousing tool might be better for creating a platform to develop data cubes. Here's a site I found that touches on the subject: http://www.meansandends.com/mysql-data-warehouse/?link_body%2Fbody=%7Bincl%3AAggregation%7D

Here's a simple technique that you can use to keep denormalizing queries simple, if you're just doing a few at a time (and I'm not replacing your OLTP tables, just creating a new one for reporting purposes). Let's say you have this query in your application:

select a.name, b.address from tbla a
join tblb b on b.fk_a_id = a.id
where a.id=1

You could create a denormalized table and populate it with almost the same query:

create table tbl_ab (a_id, a_name, b_address); -- (types elided)

Notice the underscores match the table aliases you use.

insert tbl_ab
select a.id, a.name, b.address from tbla a
join tblb b on b.fk_a_id = a.id
-- no where clause because you want everything

Then to fix your app to use the new denormalized table, switch the dots for underscores.

select a_name as name, b_address as address
from tbl_ab where a_id = 1;

For huge queries this can save a lot of time and makes it clear where the data came from, and you can re-use the queries you already have. Remember, I'm only advocating this as the last resort. I bet there's a few indexes that would help you. And when you de-normalize, don't forget to account for the extra space on your disks, and figure out when you will run the query to populate the new tables. This should probably be at night, or whenever activity is low. And the data in that table, of course, will never exactly be up to date. [Yet another edit] Don't forget that the new tables you create need to be indexed too! The good part is that you can index to your heart's content and not worry about update lock contention, since aside from your bulk insert the table will only see selects.
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1430/"
]
} |
12,896 | I'm looking for good/working/simple to use PHP code for parsing raw email into parts. I've written a couple of brute force solutions, but every time, one small change/header/space/something comes along and my whole parser fails and the project falls apart. And before I get pointed at PEAR/PECL, I need actual code. My host has some screwy config or something, I can never seem to get the .so's to build right. If I do get the .so made, some difference in path/environment/php.ini doesn't always make it available (apache vs cron vs CLI). Oh, and one last thing, I'm parsing the raw email text, NOT POP3, and NOT IMAP. It's being piped into the PHP script via a .qmail email redirect. I'm not expecting SOF to write it for me, I'm looking for some tips/starting points on doing it "right". This is one of those "wheel" problems that I know has already been solved. | What are you hoping to end up with at the end? The body, the subject, the sender, an attachment? You should spend some time with RFC2822 to understand the format of the mail, but here are the simplest rules for well formed email:

HEADERS\n\nBODY

That is, the first blank line (double newline) is the separator between the HEADERS and the BODY. A HEADER looks like this:

HSTRING:HTEXT

HSTRING always starts at the beginning of a line and doesn't contain any white space or colons. HTEXT can contain a wide variety of text, including newlines as long as the newline char is followed by whitespace.

The "BODY" is really just any data that follows the first double newline. (There are different rules if you are transmitting mail via SMTP, but processing it over a pipe you don't have to worry about that).

So, in really simple, circa-1982 RFC822 terms, an email looks like this:

HEADER: HEADER TEXT
HEADER: MORE HEADER TEXT
    INCLUDING A LINE CONTINUATION
HEADER: LAST HEADER

THIS IS ANY
ARBITRARY DATA
(FOR THE MOST PART)

Most modern email is more complex than that though. Headers can be encoded for charsets or RFC2047 mime words, or a ton of other stuff I'm not thinking of right now. The bodies are really hard to roll your own code for these days, too, if you want them to be meaningful. Almost all email that's generated by an MUA will be MIME encoded. That might be uuencoded text, it might be html, it might be a uuencoded excel spreadsheet.

I hope this helps provide a framework for understanding some of the very elemental buckets of email. If you provide more background on what you are trying to do with the data I (or someone else) might be able to provide better direction.
 | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/12896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/314/"
]
} |
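A minimal sketch of the header/body split described above, for mail piped to the script on stdin (the .qmail case in the question). It only handles the simple RFC 2822 layout the answer describes, with no MIME decoding and no error handling; header unfolding follows the rule quoted above, where continuation lines start with whitespace.

<?php
$raw = file_get_contents('php://stdin');

// Split headers from body at the first blank line.
list($headerBlock, $body) = preg_split("/\r?\n\r?\n/", $raw, 2);

// Unfold continuation lines, then split into name/value pairs.
$headerBlock = preg_replace("/\r?\n[ \t]+/", ' ', $headerBlock);
$headers = array();
foreach (preg_split("/\r?\n/", $headerBlock) as $line) {
    list($name, $value) = explode(':', $line, 2);
    $headers[strtolower(trim($name))] = trim($value);
}

echo "Subject: " . $headers['subject'] . "\n";
echo "Body length: " . strlen($body) . "\n";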
12,905 | I'm experimenting with creating an add-in for Infopath 2007. The documentation is very skimpy. What I'm trying to determine is what kind of actions an add-in can take while designing a form. Most of the discussion and samples are for when the user is filling out the form. Can I, for example, add a new field to the form in the designer? Add a new item to the schema? Move a form field on the design surface? It doesn't appear so, but I can't find anything definitive. | What are you hoping to end up with at the end? The body, the subject, the sender, an attachment? You should spend some time with RFC2822 to understand the format of the mail, but here's the simplest rules for well formed email: HEADERS\n\nBODY That is, the first blank line (double newline) is the separator between the HEADERS and the BODY. A HEADER looks like this: HSTRING:HTEXT HSTRING always starts at the beginning of a line and doesn't contain any white space or colons. HTEXT can contain a wide variety of text, including newlines as long as the newline char is followed by whitespace. The "BODY" is really just any data that follows the first double newline. (There are different rules if you are transmitting mail via SMTP, but processing it over a pipe you don't have to worry about that). So, in really simple, circa-1982 RFC822 terms, an email looks like this: HEADER: HEADER TEXTHEADER: MORE HEADER TEXT INCLUDING A LINE CONTINUATIONHEADER: LAST HEADERTHIS IS ANYARBITRARY DATA(FOR THE MOST PART) Most modern email is more complex than that though. Headers can be encoded for charsets or RFC2047 mime words, or a ton of other stuff I'm not thinking of right now. The bodies are really hard to roll your own code for these days to if you want them to be meaningful. Almost all email that's generated by an MUA will be MIME encoded. That might be uuencoded text, it might be html, it might be a uuencoded excel spreadsheet. I hope this helps provide a framework for understanding some of the very elemental buckets of email. If you provide more background on what you are trying to do with the data I (or someone else) might be able to provide better direction. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/12905",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9/"
]
} |
12,927 | I am calling a vendor's Java API, and on some servers it appears that the JVM goes into a low priority polling loop after logging into the API (CPU at 100% usage). The same app on other servers does not exhibit this behavior. This happens on WebSphere and Tomcat. The environment is tricky to set up so it is difficult to try to do something like profiling within Eclipse. Is there a way to profile (or some other method of inspecting) an existing Java app running in Tomcat to find out what methods are being executed while it's in this spinwait kind of state? The app is only executing one method when it gets in this state (vendor's method). Vendor can't replicate the behavior (of course). Update: Using JConsole I was able to determine who was running and what they were doing. It took me a few hours to then figure out why it was doing it. The problem ended up being that the vendor's API jar that was being used did not match exactly to the the database configuration that it was using. It was defaulting to having tracing and performance monitoring enabled on the servers that had the slight mis-match in configuration. I used a different jar and all is well. So thanks, Joshua, for your answer. JConsole was extremely easy to setup and use to monitor an existing application. @Cringe - I did some experimenting with some of the options you suggested. I had some problems with getting JProfiler set up, it looks good (but pricey). Going forward I went ahead and added the Eclipse Profiler plugin and I'll be looking over the different open source profilers to compare functionality. | If you are using Java 5 or later, you can connect to your application using jconsole to view all running threads. jstack also will do a stack dump. I think this should still work even inside a container like Tomcat. Both of these tools are included with JDK5 and later (I assume the process needs to be at least Java 5, though I could be wrong) Update:It's also worth noting that starting with JDK 1.6 update 7 there is now a bundled profiler called VisualVM which can be launched with 'jvisualvm'. It looks like it is a java.net project , so additional info may be available at that page. I haven't used this yet but it looks useful for more serious analysis. Hope that helps | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/791/"
]
} |
12,929 | Is it in best interests of the software development industry for one framework, browser or language to win the war and become the de facto standard? On one side it takes away the challenges of cross platform, but it opens it up for a single point of failure. Would it also result in a stagnation of innovation, or would it allow the industry to focus on more important things (whatever those might be). | Defacto standards are bad because they are usually controlled by a single party. What is best for the industry is for there to be a foundation of open standards on top of which everyone can compete. The web is a perfect example. When IE won the browser war, it stagnated for years , and is only just now starting to improve because it's hemorrhaging marketshare. The Netscape years prior to that weren't much better. The CSS 2.1 standard was released ten years ago and still isn't supported well. As a consequence, web development is a Black Art of hacks and work-arounds to get websites to render consistently. My job would be a hundred times easier if I could build a website according to web standards and be confident it would display correctly. Just think of all the cool things we could have been working on instead of fixing IE's rendering errors. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/255/"
]
} |
12,936 | Does anybody have experience working with PHP accelerators such as MMCache or Zend Accelerator ? I'd like to know if using either of these makes PHP comparable to faster web-technologies. Also, are there trade offs for using these? | Note that Zend Optimizer and MMCache (or similar applications) are totally different things. While Zend Optimizer tries to optimize the program opcode, MMCache will cache the scripts in memory and reuse the precompiled code.

I did some benchmarks some time ago and you can find the results in my blog (in German though). The basic results: Zend Optimizer alone didn't help at all. Actually my scripts were slower than without the optimizer.

When it comes to caches:
* fastest: eAccelerator
* XCache
* APC

And: You DO want to install an opcode cache! For example, this chart shows the duration it took to call the WordPress homepage 10,000 times: http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png

Edit: BTW, eAccelerator contains an optimizer itself.
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/12936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/40/"
]
} |
13,021 | I've frequently encountered sites that put all of their JavaScript inside a namespace structure along the lines of: namespaces = { com : { example: { example.com's data} } However, setting this up safely with respect to other namespaced frameworks seems to require a relatively hefty amount of code (defined as > 2 lines). I was wondering whether anyone knows of a concise way to do this? Furthermore, whether there's a relatively standard/consistent way to structure it? For example, is the com namespace directly attached to the global object, or is it attached through a namespace object? [Edit: whoops, obviously {com = { ... } } wouldn't accomplish anything close to what I intended, thanks to Shog9 for pointing that out.] | Javascript doesn't have stand-alone namespaces. It has functions, which can provide scope for resolving names, and objects, which can contribute to the named data accessible in a given scope. Here's your example, corrected:

var namespaces = { com: { example: { /* example.com's data */ } } }

This is a variable namespaces being assigned an object literal. The object contains one property: com, an object with one property: example, an object which presumably would contain something interesting. So, you can type something like namespaces.com.example.somePropertyOrFunctionOnExample and it'll all work. Of course, it's also ridiculous. You don't have a hierarchical namespace, you have an object containing an object containing an object with the stuff you actually care about.

var com_example_data = { /* example.com's data */ };

That works just as well, without the pointless hierarchy. Now, if you actually want to build a hierarchy, you can try something like this:

com_example = com_example || {};
com_example.flags = com_example.flags || { active: false, restricted: true };
com_example.ops = com_example.ops || (function() {
    var launchCodes = "38925491753824"; // hidden / private
    return {
        activate: function() { /* ... */ },
        destroyTheWorld: function() { /* ... */ }
    };
})();

...which is, IMHO, reasonably concise.
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/784/"
]
} |
13,049 | What's the difference between struct and class in .NET? | In .NET, there are two categories of types, reference types and value types. Structs are value types and classes are reference types. The general difference is that a reference type lives on the heap, and a value type lives inline, that is, wherever your variable or field is defined.

A variable containing a value type contains the entire value type value. For a struct, that means that the variable contains the entire struct, with all its fields. A variable containing a reference type contains a pointer, or a reference to somewhere else in memory where the actual value resides.

This has one benefit, to begin with:
- value types always contain a value
- reference types can contain a null-reference, meaning that they don't refer to anything at all at the moment

Internally, reference types are implemented as pointers, and knowing that, and knowing how variable assignment works, there are other behavioral patterns:
- copying the contents of a value type variable into another variable copies the entire contents into the new variable, making the two distinct. In other words, after the copy, changes to one won't affect the other
- copying the contents of a reference type variable into another variable copies the reference, which means you now have two references to the same somewhere-else storage of the actual data. In other words, after the copy, changing the data through one reference will appear to affect the other as well, but only because you're really just looking at the same data in both places

When you declare variables or fields, here's how the two types differ:
- variable: value type lives on the stack, reference type lives on the stack as a pointer to somewhere in heap memory where the actual memory lives (though note Eric Lippert's article series: The Stack Is An Implementation Detail.)
- class/struct-field: value type lives completely inside the type, reference type lives inside the type as a pointer to somewhere in heap memory where the actual memory lives.
 | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/13049",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/905/"
]
} |
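A small sketch illustrating the copy-semantics difference described above; the type names are made up for illustration.

using System;

struct PointStruct { public int X; }
class PointClass { public int X; }

class CopySemanticsDemo
{
    static void Main()
    {
        var s1 = new PointStruct { X = 1 };
        var s2 = s1;              // the whole struct is copied
        s2.X = 99;
        Console.WriteLine(s1.X);  // 1: s1 is unaffected

        var c1 = new PointClass { X = 1 };
        var c2 = c1;              // only the reference is copied
        c2.X = 99;
        Console.WriteLine(c1.X);  // 99: both variables point at the same object
    }
}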
13,055 | I'm looking for a clear, concise and accurate answer. Ideally as the actual answer, although links to good explanations welcome. | Boxed values are data structures that are minimal wrappers around primitive types *. Boxed values are typically stored as pointers to objects on the heap . Thus, boxed values use more memory and take at minimum two memory lookups to access: once to get the pointer, and another to follow that pointer to the primitive. Obviously this isn't the kind of thing you want in your inner loops. On the other hand, boxed values typically play better with other types in the system. Since they are first-class data structures in the language, they have the expected metadata and structure that other data structures have. In Java and Haskell generic collections can't contain unboxed values. Generic collections in .NET can hold unboxed values with no penalties. Where Java's generics are only used for compile-time type checking, .NET will generate specific classes for each generic type instantiated at run time . Java and Haskell have unboxed arrays, but they're distinctly less convenient than the other collections. However, when peak performance is needed it's worth a little inconvenience to avoid the overhead of boxing and unboxing. * For this discussion, a primitive value is any that can be stored on the call stack , rather than stored as a pointer to a value on the heap. Frequently that's just the machine types (ints, floats, etc), structs, and sometimes static sized arrays. .NET-land calls them value types (as opposed to reference types). Java folks call them primitive types. Haskellions just call them unboxed. ** I'm also focusing on Java, Haskell, and C# in this answer, because that's what I know. For what it's worth, Python, Ruby, and Javascript all have exclusively boxed values. This is also known as the "Everything is an object" approach***. *** Caveat: A sufficiently advanced compiler / JIT can in some cases actually detect that a value which is semantically boxed when looking at the source, can safely be an unboxed value at runtime. In essence, thanks to brilliant language implementors your boxes are sometimes free. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/13055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/905/"
]
} |
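A tiny C# illustration of the boxing and unboxing described above: assigning an int to an object allocates a box on the heap, and getting the int back requires an explicit cast.

using System;

class BoxingDemo
{
    static void Main()
    {
        int i = 42;
        object boxed = i;    // boxing: the value is copied into a heap object
        int j = (int)boxed;  // unboxing: the value is copied back out

        boxed = 100;         // re-boxes a new value; i and j are unaffected
        Console.WriteLine("{0} {1} {2}", i, j, boxed);  // 42 42 100
    }
}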
13,060 | I'm looking for a clear, concise and accurate answer. Ideally as the actual answer, although links to good explanations welcome. This also applies to VB.Net, but the keywords are different - ByRef and ByVal . | By default (in C#), passing an object to a function actually passes a copy of the reference to that object. Changing the parameter itself only changes the value in the parameter, and not the variable that was specified.

void Test1(string param)
{
    param = "new value";
}

string s1 = "initial value";
Test1(s1);
// s1 == "initial value"

Using out or ref passes a reference to the variable specified in the call to the function. Any changes to the value of an out or ref parameter will be passed back to the caller. Both out and ref behave identically except for one slight difference: ref parameters are required to be initialised before calling, while out parameters can be uninitialised. By extension, ref parameters are guaranteed to be initialised at the start of the method, while out parameters are treated as uninitialised.

void Test2(ref string param)
{
    param = "new value";
}

void Test3(out string param)
{
    // Use of param here will not compile
    param = "another value";
}

string s2 = "initial value";
string s3;
Test2(ref s2);
// s2 == "new value"
// Test2(ref s3); // Passing ref s3 will not compile
Test3(out s2);
// s2 == "another value"
Test3(out s3);
// s3 == "another value"

Edit: As dp points out, the difference between out and ref is only enforced by the C# compiler, not by the CLR. As far as I know, VB has no equivalent for out and implements ref (as ByRef) only, matching the support of the CLR.
 | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/13060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/905/"
]
} |
13,109 | In php, I often need to map a variable using an array ... but I can not seem to be able to do this in a one liner. c.f. example:

// the following results in an error:
echo array('a','b','c')[$key];

// this works, using an unnecessary variable:
$variable = array('a','b','c');
echo $variable[$key];

This is a minor problem, but it keeps bugging me every once in a while ... I don't like the fact that I use a variable for nothing ;) | I wouldn't bother about that extra variable, really. If you want, though, you could also remove it from memory after you've used it:

$variable = array('a','b','c');
echo $variable[$key];
unset($variable);

Or, you could write a small function:

function indexonce(&$ar, $index) {
    return $ar[$index];
}

and call this with:

$something = indexonce(array('a', 'b', 'c'), 2);

The array should be destroyed automatically now.
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13109",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1532/"
]
} |
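Worth noting for readers on newer PHP versions (which the original poster may not have had available): since PHP 5.4 you can subscript a function's return value directly, and since PHP 5.5 array literals can be dereferenced as well, so the one-liner the question asks for is now valid syntax.

// PHP 5.5+: array literals can be dereferenced directly
echo ['a', 'b', 'c'][$key];
echo array('a', 'b', 'c')[$key];

// PHP 5.4+: the same works for a function call's return value
function letters() { return array('a', 'b', 'c'); }
echo letters()[$key];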
13,128 | I'm tired of adding ten link libraries into my project, or requiring eight of them to use my own. I'd like to take existing libraries like libpng.a, libz.a, libjpeg.a, and combine them into one single .a library. Is that possible? How about combining .lib libraries? | You could extract the object files from each library with ar x <library name> and then merge them all into a new library with ar cs <new library name> <list each extracted object file> | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1536/"
]
} |
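One caveat with the extract-and-repack approach above: if two libraries contain object files with the same name, extracting them into one directory will silently overwrite members. GNU ar's MRI script mode sidesteps that; a hedged sketch using the library names from the question:

ar -M <<EOF
create libcombined.a
addlib libpng.a
addlib libz.a
addlib libjpeg.a
save
end
EOF

ranlib libcombined.a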
13,293 | I am thinking of using a PHP framework called CodeIgniter . One of the things I am interested in is its speed. I have, however, no way to find out how fast it is, and would rather not simply take the word of their website for it. Does anybody know how I can determine its speed myself, or can someone tell me of a site that can? | Code Igniter also has some built-in benchmarking tools: http://codeigniter.com/user_guide/general/profiling.html | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13293",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1384652/"
]
} |
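To complement the profiling link above, CodeIgniter's Benchmark class also lets you time specific spans of code from a controller. A short sketch; the marker names are arbitrary and the controller is just an example.

<?php
class Welcome extends Controller {

    function index()
    {
        // Show the full profiler report (queries, memory, timings) at the bottom of the page.
        $this->output->enable_profiler(TRUE);

        $this->benchmark->mark('work_start');
        // ... the code you want to measure ...
        $this->benchmark->mark('work_end');

        echo $this->benchmark->elapsed_time('work_start', 'work_end');
    }
}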
13,343 | I've been thinking about software estimation lately, and I have a bunch of questions around time spent coding. I'm curious to hear from people who have had at least a couple years of experience developing software. When you have to estimate the amount of time you'll spend working on something, how many hours of the day do you spend coding? What occupies the other non-coding hours? Do you find you spend more or less hours than your teammates coding? Do you feel like you're getting more or less work done than they are? What are your work conditions like? Private office, shared office, team room? Coding alone or as a pair? How has your working condition changed the amount of time you spend coding each day? If you can work from home, does that help or hurt your productivity? What development methodology do you use? Waterfall? Agile? Has changing from one methodology to another had an impact on your coding hours per day? Most importantly: Are you happy with your productivity? If not, what single change would you make that would have the most impact on it? | I'm a corporate developer, the kind Joel Spolsky called "depressed" in a couple of the StackOverflow podcasts. Because my company is not a software company it has little business reason to implement many of the measures software experts recommend companies engage for developer productivity. We don't get private offices and dual 30 inch monitors. Our source control system is Microsoft Visual Source Safe. Enough said. On the other hand, I get to do a lot of things that fill out my day and add some variety to my job. I get involved in business analysis, project management, development, production support, international implementations, training support, team planning, and process improvement. I'd say I get 85% of my day to code, when I can focus and I have a major programming task. But more often I get about 50% of my day for coding. If production support (non coding-related) is heavy I may only get 15% of my day to code. Most of the companies I've worked for were not actively engaged in evaluating agile processes or test-driven development, but they didn't do a good job of waterfall either; most of their developers worked like cut-and-paste cowboys with impugnity. On occasion I do work from home and with kids, it's horrible . I'm more productive at work. My productivity is good, but could be better if the interruption factor and cost of mental context switching was removed. Production support and project management overhead both create those types of interruptions. But both are necessary parts of the job, so I don't think I can get rid of them. What I would like to consider is a restructuring of the team so that people on projects could focus on projects while the others could block the interruptions by being dedicated to support. And then swapping when the project is over. Unfortunately, no one wants to do support, so the other productivity improvement measure I'd wish for would be one of the following: Better testing tools/methodologies to speed up unit testing Better business analysis tools/skills to improve the quality of new development and limit its contributions to the production support load | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13343",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1554/"
]
} |
13,347 | We are currently working on a new version of our main application. one thing that I really wish to work on is providing support for multiple monitors. Increasingly, our target users are adding second screens to their desktops and I think our product could leverage this extra space to improve user performance. Our application is a financial package that supports leasing and fleet companies - a very specialised market. That being said, I am sure that many people with multiple monitors have a favourite bit of software that they think would be improved if it supported those extra screens better. I'm looking for some opinions on those niggles that you have with current software, and how you think they could be improved to support multi-monitor setups. My aim is to then review these and decide how I can implement them and, hopefully, provide an even better environment for my users. Your help is appreciated.Thankyou. | Few random tips: If multiple windows can be open at one time, allow users to have them on separate screens. Seems obvious, but some very popular apps (e.g. Visual Studio) fail miserably at this. Remember the position of the last opened window, and open new windows on the same screen as before. However, sometimes users switch between multiple and single-display (e.g. docking a laptop with an external CRT), so watch cover this case as well. Consider how your particular users work, and how having two maximized windows simultaneously might help. Often, there is a (fairly passive) window for reference (e.g. a web browser/help) and an active window for data entry (e.g. editor/database) that users switch between. Do not put toolboxes/toolbars on a different window than objects they operate on (it's inconvenient to move the mouse so far). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13347",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/770/"
]
} |
13,348 | In a database-centric application that is designed for multiple clients, I've always thought it was "better" to use a single database for ALL clients - associating records with proper indexes and keys. In listening to the Stack Overflow podcast, I heard Joel mention that FogBugz uses one database per client (so if there were 1000 clients, there would be 1000 databases). What are the advantages of using this architecture? I understand that for some projects, clients need direct access to all of their data - in such an application, it's obvious that each client needs their own database. However, for projects where a client does not need to access the database directly, are there any advantages to using one database per client? It seems that in terms of flexibility, it's much simpler to use a single database with a single copy of the tables. It's easier to add new features, it's easier to create reports, and it's just easier to manage. I was pretty confident in the "one database for all clients" method until I heard Joel (an experienced developer) mention that his software uses a different approach -- and I'm a little confused with his decision... I've heard people cite that databases slow down with a large number of records, but any relational database with some merit isn't going to have that problem - especially if proper indexes and keys are used. Any input is greatly appreciated! | Assume there's no scaling penalty for storing all the clients in one database; for most people, and well configured databases/queries, this will be fairly true these days. If you're not one of these people, well, then the benefit of a single database is obvious. In this situation, benefits come from the encapsulation of each client. From the code perspective, each client exists in isolation - there is no possible situation in which a database update might overwrite, corrupt, retrieve or alter data belonging to another client. This also simplifies the model, as you don't need to ever consider the fact that records might belong to another client. You also get benefits of separability - it's trivial to pull out the data associated with a given client ,and move them to a different server. Or restore a backup of that client when the call up to say "We've deleted some key data!", using the builtin database mechanisms. You get easy and free server mobility - if you outscale one database server, you can just host new clients on another server. If they were all in one database, you'd need to either get beefier hardware, or run the database over multiple machines. You get easy versioning - if one client wants to stay on software version 1.0, and another wants 2.0, where 1.0 and 2.0 use different database schemas, there's no problem - you can migrate one without having to pull them out of one database. I can think of a few dozen more, I guess. But all in all, the key concept is "simplicity". The product manages one client, and thus one database. There is never any complexity from the "But the database also contains other clients" issue. It fits the mental model of the user, where they exist alone. Advantages like being able to doing easy reporting on all clients at once, are minimal - how often do you want a report on the whole world, rather than just one client? | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/13348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1581/"
]
} |
13,362 | I've got a div that uses overflow:auto to keep the contents inside the div as it is resized and dragged around the page. I'm using some ajax to retrieve lines of text from the server, then append them to the end of the div, so the content is growing downwards. Every time this happens, I'd like to use JS to scroll the div to the bottom so the most recently added content is visible, similar to the way a chat room or command line console would work. So far I've been using this snippet to do it (I'm also using jQuery, hence the $() function):

$("#thediv").scrollTop = $("#thediv").scrollHeight;

However it's been giving me inconsistent results. Sometimes it works, sometimes not, and it completely ceases to work if the user ever resizes the div or moves the scroll bar manually. The target browser is Firefox 3, and it's being deployed in a controlled environment so it doesn't need to work in IE at all. Any ideas guys? This one's got me stumped. Thanks! | scrollHeight should be the total height of content. scrollTop specifies the pixel offset into that content to be displayed at the top of the element's client area. So you really want (still using jQuery):

$("#thediv").each(function() {
    // certain browsers have a bug such that scrollHeight is too small
    // when content does not fill the client area of the element
    var scrollHeight = Math.max(this.scrollHeight, this.clientHeight);
    this.scrollTop = scrollHeight - this.clientHeight;
});

...which will set the scroll offset to the last clientHeight worth of content.
 | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/13362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1384/"
]
} |
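The answer above sets scrollTop through raw DOM properties inside .each(); the same thing can be written with jQuery's own scrollTop() setter. A small sketch, assuming jQuery is loaded and "#thediv" is the element from the question; the browser clamps the value to the maximum scroll offset, so passing scrollHeight is enough:

function scrollToBottom(selector) {
    $(selector).each(function () {
        // this.scrollHeight is the full content height; jQuery's setter
        // assigns it to scrollTop, which the browser clamps to the maximum.
        $(this).scrollTop(this.scrollHeight);
    });
}

// Call it right after appending the lines returned by the ajax request.
scrollToBottom("#thediv");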
13,409 | The CodePlex team has a Slack time policy, and it's worked out very well for them. Jim Newkirk and myself used it to work on the xUnit.net project. Jonathan Wanagel used it to work on SvnBridge . Scott Densmore and myself used it to work on an ObjectBuilder 2.0 prototype. For others, it was a great time to explore things that were technically not on the schedule, but could eventually end up being of great use to the rest of the team. I'm so convinced of the value of this that if I'm ever running a team again, I'm going to make it part of the team culture. Have you had a formalized Slack policy on your team? How did it work out? Edited: I just realized I didn't define Slack. For those who haven't read the book, Slack is what Google's "20% time" is: you're given some slice of your day/week/month/year on which to work on things that are not necessarily directly related to your day-to-day job, but might have an indirect benefit (obviously if you work on stuff that's totally not useful for your job or your company, your manager probably won't think very well of the way you spent the time :-p). | I just want to mention Google's policy on the subject. 20% of the day should be used for private projects and research. I think it is time for managers to face the fact that most good developers are a bit lazy. If they weren't, we wouldn't have concepts like code reuse. If this laziness can be focused into a creative force, and the developers can read up on technical issues and experiment with architecture and language features, I am certain that the end result will be better code and a more satisfied developer. So, if you are a manager: Let your developers slack of now and then. Encourage them to hold small seminars with the team to discuss new ways of doing stuff. If you are a developer: Read, learn and love your craft. You have one of the best jobs in the world, as long as you are willing to put some time into learning the best ways to do your job. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1554/"
]
} |
13,434 | How can I get programmatic access to the call stack? | Try System.Diagnostics.StackTrace . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/13434",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/100/"
]
} |
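A minimal sketch of the System.Diagnostics.StackTrace suggestion in the answer above (the class and method names are made up for illustration):

using System;
using System.Diagnostics;

class CallStackDemo
{
    static void Main() { Outer(); }
    static void Outer() { Inner(); }

    static void Inner()
    {
        // Pass true to also capture file and line information where PDB
        // symbols are available.
        StackTrace trace = new StackTrace(true);

        foreach (StackFrame frame in trace.GetFrames())
        {
            Console.WriteLine(frame.GetMethod());
        }
    }
}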
13,485 | My current development project has two aspects to it. First, there is a public website where external users can submit and update information for various purposes. This information is then saved to a local SQL Server at the colo facility. The second aspect is an internal application which employees use to manage those same records (conceptually) and provide status updates, approvals, etc. This application is hosted within the corporate firewall with its own local SQL Server database. The two networks are connected by a hardware VPN solution, which is decent, but obviously not the speediest thing in the world. The two databases are similar, and share many of the same tables, but they are not 100% the same. Many of the tables on both sides are very specific to either the internal or external application. So the question is: when a user updates their information or submits a record on the public website, how do you transfer that data to the internal application's database so it can be managed by the internal staff? And vice versa... how do you push updates made by the staff back out to the website? It is worth mentioning that the more "real time" these updates occur, the better. Not that it has to be instant, just reasonably quick. So far, I have thought about using the following types of approaches: Bi-directional replication Web service interfaces on both sides with code to sync the changes as they are made (in real time). Web service interfaces on both sides with code to asynchronously sync the changes (using a queueing mechanism). Any advice? Has anyone run into this problem before? Did you come up with a solution that worked well for you? | This is a pretty common integration scenario, I believe. Personally, I think an asynchronous messaging solution using a queue is ideal. You should be able to achieve near real time synchronization without the overhead or complexity of something like replication. Synchronous web services are not ideal because your code will have to be very sophisticated to handle failure scenarios. What happens when one system is restarted while the other continues to publish changes? Does the sending system get timeouts? What does it do with those? Unless you are prepared to lose data, you'll want some sort of transactional queue (like MSMQ) to receive the change notices and take care of making sure they get to the other system. If either system is down, the changes (passed as messages) will just accumulate and as soon as a connection can be established the re-starting server will process all the queued messages and catch up, making system integrity much, much easier to achieve. There are some open source tools that can really make this easy for you if you are using .NET (especially if you want to use MSMQ). nServiceBus by Udi Dahan Mass Transit by Dru Sellers and Chris Patterson There are commercial products also, and if you are considering a commercial option see here for a list of of options on .NET. Of course, WCF can do async messaging using MSMQ bindings, but a tool like nServiceBus or MassTransit will give you a very simple Send/Receive or Pub/Sub API that will make your requirement a very straightforward job. If you're using Java, there are any number of open source service bus implementations that will make this kind of bi-directional, asynchronous messaging a snap, like Mule or maybe just ActiveMQ. You may also want to consider reading Udi Dahan' s blog, listening to some of his podcasts. Here are some more good resources to get you started. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/13485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1436/"
]
} |
13,537 | I've heard of the idea of bootstrapping a language, that is, writing a compiler/interpreter for the language in itself. I was wondering how this could be accomplished and looked around a bit, and saw someone say that it could only be done by either writing an initial compiler in a different language. hand-coding an initial compiler in Assembly, which seems like a special case of the first To me, neither of these seem to actually be bootstrapping a language in the sense that they both require outside support. Is there a way to actually write a compiler in its own language? | Is there a way to actually write a compiler in its own language? You have to have some existing language to write your new compiler in. If you were writing a new, say, C++ compiler, you would just write it in C++ and compile it with an existing compiler first. On the other hand, if you were creating a compiler for a new language, let's call it Yazzleof, you would need to write the new compiler in another language first. Generally, this would be another programming language, but it doesn't have to be. It can be assembly, or if necessary, machine code. If you were going to bootstrap a compiler for Yazzleof, you generally wouldn't write a compiler for the full language initially. Instead you would write a compiler for Yazzle-lite, the smallest possible subset of the Yazzleof (well, a pretty small subset at least). Then in Yazzle-lite, you would write a compiler for the full language. (Obviously this can occur iteratively instead of in one jump.) Because Yazzle-lite is a proper subset of Yazzleof, you now have a compiler which can compile itself. There is a really good writeup about bootstrapping a compiler from the lowest possible level (which on a modern machine is basically a hex editor), titled Bootstrapping a simple compiler from nothing . It can be found at https://web.archive.org/web/20061108010907/http://www.rano.org/bcompiler.html . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/13537",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1266/"
]
} |
13,540 | I've written a stored proc that will do an update if a record exists, otherwise it will do an insert. It looks something like this:
update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
insert into myTable (Col1, Col2) values (@col1, @col2)
My logic behind writing it in this way is that the update will perform an implicit select using the where clause and if that returns 0 then the insert will take place. The alternative to doing it this way would be to do a select and then based on the number of rows returned either do an update or insert. This I considered inefficient because if you are to do an update it will cause 2 selects (the first explicit select call and the second implicit in the where of the update). If the proc were to do an insert then there'd be no difference in efficiency. Is my logic sound here? Is this how you would combine an insert and update into a stored proc? | Your assumption is right, this is the optimal way to do it and it's called upsert/merge. Importance of UPSERT - from sqlservercentral.com: For every update in the case mentioned above we are removing one additional read from the table if we use the UPSERT instead of EXISTS. Unfortunately for an Insert, both the UPSERT and IF EXISTS methods use the same number of reads on the table. Therefore the check for existence should only be done when there is a very valid reason to justify the additional I/O. The optimized way to do things is to make sure that you have as few reads as possible on the DB. The best strategy is to attempt the update. If no rows are affected by the update then insert. In most circumstances, the row will already exist and only one I/O will be required. Edit: Please check out this answer and the linked blog post to learn about the problems with this pattern and how to make it work safe. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/13540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
]
} |
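The Edit in the answer above alludes to concurrency problems with the plain update-then-insert pattern. One commonly recommended variant wraps it in a transaction and takes key-range locks up front; this is only a sketch using the table and parameters from the question, not a drop-in replacement for every workload:

BEGIN TRAN;

UPDATE myTable WITH (UPDLOCK, SERIALIZABLE)
SET Col1 = @col1, Col2 = @col2
WHERE ID = @ID;

IF @@ROWCOUNT = 0
    INSERT INTO myTable (Col1, Col2) VALUES (@col1, @col2);

COMMIT TRAN;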
13,569 | In our place we're split between using mysqli and PDO for stuff like prepared statements and transaction support. Some projects use one, some the other. There is little realistic likelihood of us ever moving to another RDBMS. I prefer PDO for the single reason that it allows named parameters for prepared statements, and as far as I am aware mysqli does not. Are there any other pros and cons to choosing one over the other as a standard as we consolidate our projects to use just one approach? | Well, you could argue with the object oriented aspect, the prepared statements, the fact that it becomes a standard, etc. But I know that most of the time, convincing somebody works better with a killer feature. So there it is: A really nice thing with PDO is you can fetch the data, injecting it automatically in an object. If you don't want to use an ORM (because it's just a quick script) but you do like object mapping, it's REALLY cool:
class Student {
    public $id;
    public $first_name;
    public $last_name;

    public function getFullName() {
        return $this->first_name.' '.$this->last_name;
    }
}

try {
    $dbh = new PDO("mysql:host=$hostname;dbname=school", $username, $password);
    $stmt = $dbh->query("SELECT * FROM students");

    /* MAGIC HAPPENS HERE */
    $stmt->setFetchMode(PDO::FETCH_INTO, new Student);

    foreach ($stmt as $student) {
        echo $student->getFullName().'<br />';
    }

    $dbh = null;
} catch (PDOException $e) {
    echo $e->getMessage();
}
 | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/13569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/137/"
]
} |
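Since the question singles out named parameters as the reason for preferring PDO, here is a small sketch of that feature (connection details and table/column names are placeholders):

$dbh = new PDO("mysql:host=localhost;dbname=school", $username, $password);

// Named parameters: no positional counting, and values are bound safely.
$stmt = $dbh->prepare(
    "SELECT * FROM students WHERE last_name = :last_name AND age > :age"
);
$stmt->execute(array(
    ':last_name' => 'Smith',
    ':age'       => 20,
));

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    echo $row['first_name'], "\n";
}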
13,599 | Does anyone know how to transform a enum value to a human readable value? For example: ThisIsValueA should be "This is Value A". | Converting this from a vb code snippet that a certain Ian Horwill left at a blog post long ago ... i've since used this in production successfully. /// <summary> /// Add spaces to separate the capitalized words in the string, /// i.e. insert a space before each uppercase letter that is /// either preceded by a lowercase letter or followed by a /// lowercase letter (but not for the first char in string). /// This keeps groups of uppercase letters - e.g. acronyms - together. /// </summary> /// <param name="pascalCaseString">A string in PascalCase</param> /// <returns></returns> public static string Wordify(string pascalCaseString) { Regex r = new Regex("(?<=[a-z])(?<x>[A-Z])|(?<=.)(?<x>[A-Z])(?=[a-z])"); return r.Replace(pascalCaseString, " ${x}"); } (requires, 'using System.Text.RegularExpressions;') Thus: Console.WriteLine(Wordify(ThisIsValueA.ToString())); Would return, "This Is Value A". It's much simpler, and less redundant than providing Description attributes. Attributes are useful here only if you need to provide a layer of indirection (which the question didn't ask for). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1154/"
]
} |
13,615 | I need to validate an integer to know if is a valid enum value. What is the best way to do this in C#? | You got to love these folk who assume that data not only always comes from a UI, but a UI within your control! IsDefined is fine for most scenarios, you could start with: public static bool TryParseEnum<TEnum>(this int enumValue, out TEnum retVal){ retVal = default(TEnum); bool success = Enum.IsDefined(typeof(TEnum), enumValue); if (success) { retVal = (TEnum)Enum.ToObject(typeof(TEnum), enumValue); } return success;} (Obviously just drop the ‘this’ if you don’t think it’s a suitable int extension) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/13615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1154/"
]
} |
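A possible usage sketch for the TryParseEnum extension in the answer above; DayOfWeek is just a convenient built-in enum for the example, and the extension's namespace is assumed to be in scope:

int input = 3;

DayOfWeek day;
if (input.TryParseEnum(out day))
{
    Console.WriteLine("Valid value: " + day);          // Valid value: Wednesday
}
else
{
    Console.WriteLine(input + " is not defined in DayOfWeek");
}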
13,620 | (assume php5) consider <?php $foo = 'some words'; //case 1 print "these are $foo"; //case 2 print "these are {$foo}"; //case 3 print 'these are ' . $foo;?> Is there much of a difference between 1 and 2? If not, what about between 1/2 and 3? | Well, as with all "What might be faster in real life" questions, you can't beat a real life test. function timeFunc($function, $runs){ $times = array(); for ($i = 0; $i < $runs; $i++) { $time = microtime(); call_user_func($function); $times[$i] = microtime() - $time; } return array_sum($times) / $runs;}function Method1(){ $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = "these are $foo";}function Method2(){ $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = "these are {$foo}";}function Method3() { $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = "these are " . $foo;}print timeFunc('Method1', 10) . "\n";print timeFunc('Method2', 10) . "\n";print timeFunc('Method3', 10) . "\n"; Give it a few runs to page everything in, then... 0.0035568 0.0035388 0.0025394 So, as expected, the interpolation are virtually identical (noise level differences, probably due to the extra characters the interpolation engine needs to handle). Straight up concatenation is about 66% of the speed, which is no great shock. The interpolation parser will look, find nothing to do, then finish with a simple internal string concat. Even if the concat were expensive, the interpolator will still have to do it, after all the work to parse out the variable and trim/copy up the original string. Updates By Somnath: I added Method4() to above real time logic. function Method4() { $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = 'these are ' . $foo;}print timeFunc('Method4', 10) . "\n";Results were:0.00147390.00155740.00119550.001169 When you are just declaring a string only and no need to parse that string too, then why to confuse PHP debugger to parse. I hope you got my point. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/13620",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/314/"
]
} |
13,678 | I am part of a high school robotics team, and there is some debate about which language to use to program our robot. We are choosing between C (or maybe C++) and LabVIEW. There are pros for each language. C(++): Widely used Good preparation for the future (most programming positions require text-based programmers.) We can expand upon our C codebase from last year Allows us to better understand what our robot is doing. LabVIEW Easier to visualize program flow (blocks and wires, instead of lines of code) Easier to teach (Supposedly...) "The future of programming is graphical." (Think so?) Closer to the Robolab background that some new members may have. Don't need to intimately know what's going on. Simply tell the module to find the red ball, don't need to know how. This is a very difficult decision for us, and we've been debating for a while. Based on those pros for each language, and on the experience you've got, what do you think the better option is? Keep in mind that we aren't necessarily going for pure efficiency. We also hope to prepare our programmers for a future in programming. Also: Do you think that graphical languages such as LabVEIW are the future of programming? Is a graphical language easier to learn than a textual language? I think that they should be about equally challenging to learn. Seeing as we are partailly rooted in helping people learn, how much should we rely on prewritten modules, and how much should we try to write on our own? ("Good programmers write good code, great programmers copy great code." But isn't it worth being a good programmer, first?) Thanks for the advice! Edit:I'd like to emphasize this question more:The team captain thinks that LabVIEW is better for its ease of learning and teaching. Is that true? I think that C could be taught just as easily, and beginner-level tasks would still be around with C. I'd really like to hear your opinions. Is there any reason that typing while{} should be any more difficult than creating a "while box?" Isn't it just as intuitive that program flows line by line, only modified by ifs and loops, as it is intuitive that the program flows through the wire, only modified by ifs and loops!? Thanks again! Edit:I just realized that this falls under the topic of "language debate." I hope it's okay, because it's about what's best for a specific branch of programming, with certain goals. If it's not... I'm sorry... | Before I arrived, our group (PhD scientists, with little programming background) had been trying to implement a LabVIEW application on-and-off for nearly a year. The code was untidy, too complex (front and back-end) and most importantly, did not work. I am a keen programmer but had never used LabVIEW. With a little help from a LabVIEW guru who could help translate the textual progamming paradigms I knew into LabVIEW concepts it was possible to code the app in a week. The point here is that the basic coding concepts still have to be learnt, the language, even one like LabVIEW, is just a different way of expressing them . LabVIEW is great to use for what it was originally designed for. i.e. to take data from DAQ cards and display it on-screen perhaps with some minor manipulations in-between. However, programming algorithms is no easier and I would even suggest that it is more difficult. For example, in most procedural languages execution order is generally followed line by line, using pseudo mathematical notation (i.e. 
y = x*x + x + 1 ) whereas LabVIEW would implement this using a series of VI's which don't necessarily follow from each other (i.e. left-to-right) on the canvas. Moreover programming as a career is more than knowing the technicalities of coding. Being able to effectively ask for help/search for answers, write readable code and work with legacy code are all key skills which are undeniably more difficult in a graphical language such as LabVIEW. I believe some aspects of graphical programming may become mainstream - the use of sub-VIs perfectly embodies the 'black-box' principal of programming and is also used in other language abstractions such as Yahoo Pipes and the Apple Automator - and perhaps some future graphical language will revolutionise the way we program but LabVIEW itself is not a massive paradigm shift in language design, we still have while, for, if flow control, typecasting, event driven programming, even objects. If the future really will be written in LabVIEW, C++ programmer won't have much trouble crossing over. As a postcript I'd say that C/C++ is more suited to robotics since the students will no doubt have to deal with embedded systems and FPGAs at some point. Low level programming knowledge (bits, registers etc.) would be invaluable for this kind of thing. @mendicant Actually LabVIEW is used a lot in industry, especially for control systems. Granted NASA unlikely use it for on-board satellite systems but then software developement for space-systems is a whole different ball game ... | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/13678",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1615/"
]
} |
13,725 | NSInteger / NSUInteger are Cocoa-defined replacements for the regular built-in types. Is there any benefit to using the NS* types over the built-ins? Which do you prefer and why? Are NSInteger and int the same width on 32-bit / 64-bit platforms? | The way I understand it is that NSInteger et al. are architecture-safe versions of the corresponding C types. Basically their sizes vary depending on the architecture, but NSInteger, for example, is guaranteed to hold any valid pointer for the current architecture. Apple recommends that you use these to work with OS X 10.5 and onwards, and Apple's APIs will use them, so it's definitely a good idea to get into the habit of using them. They require a little more typing, but apart from that there doesn't seem to be any reason not to use them. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/13725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1043/"
]
} |
13,745 | Is there any good way to deal with the class renaming refactor from Resharper when the file is under source control and TortoiseSVN is the client? I am trying VisualSVN right now but I haven't had the need to rename anything recently. I don't want to change our repository just to try this out. Also not sure if this feature alone is worth the cost of VisualSVN. Update: I have uninstalled the trial of VisualSVN and tried AnkhSVN. It seems to provide the same functionality so far. I know this may sound trivial but the indicators seem to be lacking some functionality, it seems like they don't trickle up. (If a file in the project is different I would think the project indicator would indicate this as well.) I tend to keep my projects rolled up as much as possible, so it is hard to tell what files have changed unless the project is expanded. | TortoiseSVN 1.5 has a neat hidden feature on the check-in window: Select a missing file and a new file and right-click. One of the options will be "fix move". I tend to refactor away, and then use this to fix any files where the name has changed. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1629/"
]
} |
13,751 | How can I permanently enable line numbers in IntelliJ IDEA? | IntelliJ 14.X Onwards From version 14.0 onwards, the path to the setting dialog is slightly different, a General submenu has been added between Editor and Appearance as shown below IntelliJ 8.1.2 - 13.X From IntelliJ 8.1.2 onwards, this option is in File | Settings 1 . Within the IDE Settings section of that dialog, you'll find it under Editor | Appearance. On a Mac, these are named IntelliJ IDEA | Preferences... | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/13751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
13,763 | Is there a function like document.getElementById("FirstDiv").clear() ? | To answer the original question - there are various ways to do this, but the following would be the simplest. If you already have a handle to the child node that you want to remove, i.e. you have a JavaScript variable that holds a reference to it: myChildNode.parentNode.removeChild(myChildNode); Obviously, if you are not using one of the numerous libraries that already do this, you would want to create a function to abstract this out: function removeElement(node) { node.parentNode.removeChild(node);} EDIT: As has been mentioned by others: if you have any event handlers wired up to the node you are removing, you will want to make sure you disconnect those before the last reference to the node being removed goes out of scope, lest poor implementations of the JavaScript interpreter leak memory. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/13763",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184/"
]
} |
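The question was really about emptying an element rather than removing a single node; a small follow-up sketch using the same DOM calls as the answer above ("FirstDiv" is the id from the question):

function clearElement(node) {
    // Remove children one at a time until none are left.
    while (node.firstChild) {
        node.removeChild(node.firstChild);
    }
}

clearElement(document.getElementById("FirstDiv"));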
13,768 | So I'm not quite convinced about OpenID yet, and here is why: I already have an OpenID because I have a Blogger account. But I discovered that Blogger seems to be a poor provider when I tried to identify myself on the altdotnet page and received the following message: You must use an OpenID persona that specifies a valid email address. Let's forget the details of this little error and assume that I want to change to a different provider. So I sign up with a different provider and get a new, different OpenID - how would I switch my existing StackOverflow account to be associated with my new OpenID? I understand this would be easy if I had my own domain set up to delegate to a provider, because I could just change the delegation. Assume I do not have my own domain. | Ideally Stack Overflow would allow you to change your OpenID. OTOH, ideally you would have set up OpenID delegation on your own site, and used that to identify yourself. With delegation, you would need only change which service you delegate to. You'd still be identified by your own URL that you control. But that doesn't help now unless Stack Overflow lets you change it. Most sites tie OpenIDs to real accounts, and would let you switch or at least add additional OpenIDs. Doesn't seem like you could map OpenIDs to accounts 1:1 unless the result of access is totally trivial; otherwise you're in a situation like this where you lose your existing questions, answers, and reputation for switching IDs. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13768",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/48281/"
]
} |
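For reference, the delegation setup mentioned in the question is just a few link tags in the head of a page on your own domain; the provider URLs below are placeholders for whichever provider you pick (the first pair is OpenID 1.x, the second pair OpenID 2.0):

<link rel="openid.server"    href="https://www.myopenid.com/server" />
<link rel="openid.delegate"  href="https://example-user.myopenid.com/" />
<link rel="openid2.provider" href="https://www.myopenid.com/server" />
<link rel="openid2.local_id" href="https://example-user.myopenid.com/" />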
13,786 | Are we supposed to find workarounds in our web applications so that they will work in every situation? Is it time to do away with IE6 programming? | This depends so much on the context of the application, and of its users. There are two key aspects: what browsers are your users using; and how important is it that they can access/interact with your site. The first part is generally easily established, if you have an existing version with stats (Google Analytics or similar is simple and great) or you have access to such data from a similar app / product. The latter is a little harder to decide. If you're developing a publicly available, ad-sponsored site for example, it's just a numbers game - work out how much of your audience you lose and factor what that's worth against the additional development time. If, however, you're doing something specifically at the request of a group of users - like an enterprise web app for example - you may be stuck with what those users are browsing with. In my experience those two things can change significantly for different apps. We've got web apps still (stats from last week) with close to 70% IE6 usage (20% IE7, the rest split between IE5.5 and FF2) and others with close to 0% IE6. For relatively obvious reasons, the latter are the kind of apps where losing a few users isn't so important. Having said all that, we generally find it easy to support IE6 (and IE5.5 as others point out) simply because we've been doing so for a while. Yes, it's a pain and yes, it takes more time, but often not too much. There are very few situations where having to support IE6 drastically changes what kind of development you do - it just means a little more work. The other nice benefit of supporting it (and testing for it) is that you generally end up doing better all-round browser and quirks testing as a result of the polarity of IE6's behaviours. You need to decide whether or not you're supposed to find workarounds, based on the requirements of your app/product. The fact that it's IE6 isn't really that relevant - this kind of problem happens all the time in other situations, it just so happens that IE6 is a great example of the costs and implications of mixed standards, versioning and legacy support. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13786",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184/"
]
} |
13,848 | Using TortoiseSVN against VisualSVN I deleted a source file that I should not have deleted. Now this isn't a train smash because I can get the file back from the daily backup. However I would like to undelete it from SVN (VisualSVN) so that I can get the history back. However I can't work out how to do that. Anybody know how to undelete a file from VisualSVN either using the VisualSVN interface or the latest version of TortoiseSVN? | What you have to do is the following:
1. Right-click on the folder where you think it is.
2. Choose Show Log under TortoiseSVN.
3. Find the checkin that the file was deleted in.
4. Go down the list and find the file.
5. Select Revert changes for this version to undelete it. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13848",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
]
} |
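The command-line equivalent of the steps above, in case TortoiseSVN is not handy; the repository URL, file name and revision numbers are placeholders. Copying the file from the revision just before the delete brings it back with its history intact:

svn copy "http://server/svn/repo/trunk/Foo.cs@1234" Foo.cs
svn commit -m "Resurrect Foo.cs deleted in r1235"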
13,857 | I've been reading a lot about closures and I think I understand them, but without clouding the picture for myself and others, I am hoping someone can explain closures as succinctly and clearly as possible. I'm looking for a simple explanation that might help me understand where and why I would want to use them. | Closure on closures
Objects are data with methods attached, closures are functions with data attached.
def make_counter():
    i = 0
    def counter():  # counter() is a closure
        nonlocal i
        i += 1
        return i
    return counter

c1 = make_counter()
c2 = make_counter()
print(c1(), c1(), c2(), c2())
# -> 1 2 1 2
 | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/13857",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1649/"
]
} |
13,927 | When I've registered an object foo to receive KVO notifications from another object bar (using addObserver:...), if I then deallocate foo do I need to send a removeObserver:forKeyPath: message to bar in -dealloc? | You need to use -removeObserver:forKeyPath: to remove the observer before -[NSObject dealloc] runs, so yes, doing it in the -dealloc method of your class would work. Better than that though would be to have a deterministic point where whatever owns the object that's doing the observing could tell it it's done and will (eventually) be deallocated. That way, you can stop observing immediately when the thing doing the observing is no longer needed, regardless of when it's actually deallocated. This is important to keep in mind because the lifetime of objects in Cocoa isn't as deterministic as some people seem to think it is. The various Mac OS X frameworks themselves will send your objects -retain and -autorelease , extending their lifetime beyond what you might otherwise think it would be. Furthermore, when you make the transition to Objective-C garbage collection, you'll find that -finalize will run at very different times — and in very different contexts — than -dealloc did. For one thing, finalization takes place on a different thread, so you really can't safely send -removeObserver:forKeyPath: to another object in a -finalize method. Stick to memory (and other scarce resource) management in -dealloc and -finalize , and use a separate -invalidate method to have an owner tell an object you're done with it at a deterministic point; do things like removing KVO observations there. The intent of your code will be clearer and you will have fewer subtle bugs to take care of. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/13927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1512/"
]
} |
13,941 | I'd like to have a python program alert me when it has completed its task by making a beep noise. Currently, I use import os and then use a command line speech program to say "Process complete". I much rather it be a simple "bell." I know that there's a function that can be used in Cocoa apps, NSBeep, but I don't think that has much anything to do with this. I've also tried print(\a) but that didn't work. I'm using a Mac, if you couldn't tell by my Cocoa comment, so that may help. | Have you tried:
import sys
sys.stdout.write('\a')
sys.stdout.flush()
That works for me here on Mac OS 10.5.
Actually, I think your original attempt works also with a little modification: print('\a') (You just need the single quotes around the character sequence). | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/13941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1615/"
]
} |
13,963 | I want to parse a config file sorta thing, like so: [KEY:Value] [SUBKEY:SubValue] Now I started with a StreamReader, converting lines into character arrays, when I figured there's gotta be a better way. So I ask you, humble reader, to help me. One restriction is that it has to work in a Linux/Mono environment (1.2.6 to be exact). I don't have the latest 2.0 release (of Mono), so try to restrict language features to C# 2.0 or C# 1.0. | I considered it, but I'm not going to use XML. I am going to be writing this stuff by hand, and hand editing XML makes my brain hurt. :') Have you looked at YAML? You get the benefits of XML without all the pain and suffering. It's used extensively in the ruby community for things like config files, pre-prepared database data, etc. Here's an example:
customer:
  name: Orion
  age: 26
  addresses:
    - type: Work
      number: 12
      street: Bob Street
    - type: Home
      number: 15
      street: Secret Road
There appears to be a C# library here, which I haven't used personally, but yaml is pretty simple, so "how hard can it be?" :-) I'd say it's preferable to inventing your own ad-hoc format (and dealing with parser bugs) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/13963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/61/"
]
} |
14,008 | I've been looking for some good genetic programming examples for C#. Anyone knows of good online/book resources? Wonder if there is a C# library out there for Evolutionary/Genetic programming? | After developing my own Genetic Programming didactic application , I found a complete Genetic Programming Framework called AForge.NET Genetics . It's a part of the Aforge.NET library . It's licensed under LGPL. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/14008",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/877/"
]
} |
14,031 | In the past I've never been a fan of using triggers on database tables. To me they always represented some "magic" that was going to happen on the database side, far far away from the control of my application code. I also wanted to limit the amount of work the DB had to do, as it's generally a shared resource and I always assumed triggers could get to be expensive in high load scenarios. That said, I have found a couple of instances where triggers have made sense to use (at least in my opinion they made sense). Recently though, I found myself in a situation where I sometimes might need to "bypass" the trigger. I felt really guilty about having to look for ways to do this, and I still think that a better database design would alleviate the need for this bypassing. Unfortunately this DB is used by mulitple applications, some of which are maintained by a very uncooperative development team who would scream about schema changes, so I was stuck. What's the general consesus out there about triggers? Love em? Hate em? Think they serve a purpose in some scenarios? Do think that having a need to bypass a trigger means that you're "doing it wrong"? | Triggers are generally used incorrectly, introduce bugs and therefore should be avoided. Never design a trigger to do integrity constraint checking that crosses rows in a table (e.g "the average salary by dept cannot exceed X). Tom Kyte , VP of Oracle has indicated that he would prefer to remove triggers as a feature of the Oracle database because of their frequent role in bugs. He knows it is just a dream, and triggers are here to stay, but if he could he would remove triggers from Oracle, he would (along with the WHEN OTHERS clause and autonomous transactions). Can triggers be used correctly? Absolutely. The problem is - they are not used correctly in so many cases that I'd be willing to give up any perceived benefit just to get rid of the abuses (and bugs) caused by them. - Tom Kyte | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/14031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1680/"
]
} |
14,106 | A few weeks ago, I was assigned to evaluate all our programmers. I'm very uncomfortable with this since I was the one who taught everyone the shop's programming language (they all got out of college not knowing the language and as luck would have it, I'm very proficient with it.). On the evaluation, I was very biased on their performance (perfect scores). I'm glad that our programming shop doesn't require an average performance level but I heard horror stories of shops which do require an average level. My question are as follows: As a programmer, what evaluation questions would you like to see? As a manager, what evaluation questions would you like to see? As the evaluator, how can you prevent bias in your evaluation? I would love to remove the evaluation test. Is there any advantages to having an evaluation test? Any disadvantage? | Gets things done is really all you need to evaluate a developer. After that you look at the quality that the developer generates. Do they write unit tests and believe in testing and being responsible for the code they generate? Do they take initiative to fix bugs without being assigned them? Are they passionate about coding? Are they always constantly learning, trying to find better ways to accomplish a task or make a process better? These questions are pretty much how I judge developers directly under me. If they are not directly under you and you are not a direct report for them, then you really shouldn't be evaluating them. If you are assigned in evaluating those programmers that aren't under you, then you need to be proactive to answer the above questions about them, which can be hard. You can't remove the evaluation test. I know it can become tedious sometimes, but I actually enjoy doing it and it's invaluable for the developer you are evaluating. You need to be a manager that cares about how your developers do. You are a direct reflection on them and as they are of you. One question I always leave up to the developer is for them to evaluate me. The evaluation needs to be a two lane road. I have to also evaluate off a cookie cutter list of questions, which I do, but I always add the above and try to make the evaluation fun and a learning exercise during the time I have the developer one on one, it is all about the developer you are reviewing. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/14106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1599/"
]
} |
14,118 | I'm writing my first Perl app -- an AOL Instant Messenger bot that talks to an Arduino microcontroller, which in turn controls a servo that will push the power button on our sysadmin's server, which freezes randomly every 28 hours or so. I've gotten all the hard stuff done, I'm just trying to add one last bit of code to break the main loop and log out of AIM when the user types 'quit'. The problem is, if I try to read from STDIN in the main program loop, it blocks the process until input is entered, essentially rendering the bot inactive. I've tried testing for EOF before reading, but no dice... EOF just always returns false. Below is some sample code I'm working with:
while(1) {
    $oscar->do_one_loop();

    # Poll to see if any arduino data is coming in over serial port
    my $char = $port->lookfor();

    # If we get data from arduino, then print it
    if ($char) {
        print "" . $char;
    }

    # reading STDIN blocks until input is received... AAARG!
    my $a = <STDIN>;
    print $a;
    if ($a eq "exit" || $a eq "quit" || $a eq 'c' || $a eq 'q') { last; }
}

print "Signing off... ";
$oscar->signoff();
print "Done\n";
print "Closing serial port... ";
$port->close() || warn "close failed";
print "Done\n";
| The Perl built-in is select(), which is a pass-through to the select() system call, but for sane people I recommend IO::Select. Code sample:
#!/usr/bin/perl
use IO::Select;

$s = IO::Select->new();
$s->add(\*STDIN);

while (++$i) {
    print "Hiya $i!\n";
    sleep(5);
    if ($s->can_read(.5)) {
        chomp($foo = <STDIN>);
        print "Got '$foo' from STDIN\n";
    }
}
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/14118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
14,155 | Other than Notepad++, what text editor do you use to program in Windows? | Another vote for gvim ( about , download ). I think once you learn the keystrokes to control it, you won't want to use anything else. Plus, there is the added benefit of being able to use it on just about any platform, including the nice Windows port. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/14155",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1632/"
]
} |
14,165 | I'm seeing strange errors when my C++ code has min() or max() calls. I'm using Visual C++ compilers. | Check if your code is including the windows.h header file and either your code or other third-party headers have their own min() / max() definitions. If yes, then prepend your windows.h inclusion with a definition of NOMINMAX like this:
#define NOMINMAX
#include <windows.h>
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/14165",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1630/"
]
} |
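If defining NOMINMAX is not an option (for example, windows.h is pulled in by a header you do not control), an extra pair of parentheses around std::min / std::max also keeps the function-like macros from expanding; a small sketch:

#include <windows.h>
#include <algorithm>

int smallest = (std::min)(3, 5);   // the macro needs "min(" to expand, so it stays out of the way
int largest  = (std::max)(3, 5);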
14,247 | I've only done a bit of Flex development thus far, but I've preferred the approach of creating controls programmatically over mxml files, because (and please , correct me if I'm wrong!) I've gathered that you can't have it both ways -- that is to say, have the class functionality in a separate ActionScript class file but have the contained elements declared in mxml. There doesn't seem to be much of a difference productivity-wise, but doing data binding programmatically seems somewhat less than trivial. I took a look at how the mxml compiler transforms the data binding expressions. The result is a bunch of generated callbacks and a lot more lines than in the mxml representation. So here's the question: is there a way to do data binding programmatically that doesn't involve a world of hurt? | Don't be afraid of MXML. It's great for laying out views. If you write your own reusable components then writing them in ActionScript may sometimes give you a little more control, but for non-reusable views MXML is much better. It's more terse, bindings are extemely easy to set up, etc. However, bindings in pure ActionScript need not be that much of a pain. It will never be as simple as in MXML where a lot of things are done for you, but it can be done with not too much effort. What you have is BindingUtils and it's methods bindSetter and bindProperty . I almost always use the former, since I usually want to do some work, or call invalidateProperties when values change, I almost never just want to set a property. What you need to know is that these two return an object of the type ChangeWatcher , if you want to remove the binding for some reason, you have to hold on to this object. This is what makes manual bindings in ActionScript a little less convenient than those in MXML. Let's start with a simple example: BindingUtils.bindSetter(nameChanged, selectedEmployee, "name"); This sets up a binding that will call the method nameChanged when the name property on the object in the variable selectedEmployee changes. The nameChanged method will recieve the new value of the name property as an argument, so it should look like this: private function nameChanged( newName : String ) : void The problem with this simple example is that once you have set up this binding it will fire each time the property of the specified object changes. The value of the variable selectedEmployee may change, but the binding is still set up for the object that the variable pointed to before. There are two ways to solve this: either to keep the ChangeWatcher returned by BindingUtils.bindSetter around and call unwatch on it when you want to remove the binding (and then setting up a new binding instead), or bind to yourself. I'll show you the first option first, and then explain what I mean by binding to yourself. 
The currentEmployee could be made into a getter/setter pair and implemented like this (only showing the setter): public function set currentEmployee( employee : Employee ) : void { if ( _currentEmployee != employee ) { if ( _currentEmployee != null ) { currentEmployeeNameCW.unwatch(); } _currentEmployee = employee; if ( _currentEmployee != null ) { currentEmployeeNameCW = BindingUtils.bindSetter(currentEmployeeNameChanged, _currentEmployee, "name"); } }} What happens is that when the currentEmployee property is set it looks to see if there was a previous value, and if so removes the binding for that object ( currentEmployeeNameCW.unwatch() ), then it sets the private variable, and unless the new value was null sets up a new binding for the name property. Most importantly it saves the ChangeWatcher returned by the binding call. This is a basic binding pattern and I think it works fine. There is, however, a trick that can be used to make it a bit simpler. You can bind to yourself instead. Instead of setting up and removing bindings each time the currentEmployee property changes you can have the binding system do it for you. In your creationComplete handler (or constructor or at least some time early) you can set up a binding like so: BindingUtils.bindSetter(currentEmployeeNameChanged, this, ["currentEmployee", "name"]); This sets up a binding not only to the currentEmployee property on this , but also to the name property on this object. So anytime either changes the method currentEmployeeNameChanged will be called. There's no need to save the ChangeWatcher because the binding will never have to be removed. The second solution works in many cases, but I've found that the first one is sometimes necessary, especially when working with bindings in non-view classes (since this has to be an event dispatcher and the currentEmployee has to be bindable for it to work). | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/14247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/266/"
]
} |
14,278 | I'd like to provide some way of creating dynamically loadable plugins in my software.Typical way to do this is using the LoadLibrary WinAPI function to load a dll and calling GetProcAddress to get an pointer to a function inside that dll. My question is how do I dynamically load a plugin in C#/.Net application? | The following code snippet (C#) constructs an instance of any concrete classes derived from Base found in class libraries (*.dll) in the application path and stores them in a list. using System.IO;using System.Reflection;List<Base> objects = new List<Base>();DirectoryInfo dir = new DirectoryInfo(Application.StartupPath);foreach (FileInfo file in dir.GetFiles("*.dll")){ Assembly assembly = Assembly.LoadFrom(file.FullName); foreach (Type type in assembly.GetTypes()) { if (type.IsSubclassOf(typeof(Base)) && type.IsAbstract == false) { Base b = type.InvokeMember(null, BindingFlags.CreateInstance, null, null, null) as Base; objects.Add(b); } }} Edit: The classes referred to by Matt are probably a better option in .NET 3.5. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/14278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1534/"
]
} |
14,287 | In my C/C++ program, I'm using OpenCV to capture images from my webcam. The camera ( Logitech QuickCam IM ) can capture at resolutions 320x240 , 640x480 and 1280x960 . But, for some strange reason, OpenCV gives me images of resolution 320x240 only. Calls to change the resolution using cvSetCaptureProperty() with other resolution values just don't work. How do I capture images with the other resolutions possible with my webcam? | There doesn't seem to be a solution. The resolution can be increased to 640x480 using this hack shared by lifebelt77 . Here are the details reproduced: Add to highgui.h : #define CV_CAP_PROP_DIALOG_DISPLAY 8#define CV_CAP_PROP_DIALOG_FORMAT 9#define CV_CAP_PROP_DIALOG_SOURCE 10#define CV_CAP_PROP_DIALOG_COMPRESSION 11#define CV_CAP_PROP_FRAME_WIDTH_HEIGHT 12 Add the function icvSetPropertyCAM_VFW to cvcap.cpp : static int icvSetPropertyCAM_VFW( CvCaptureCAM_VFW* capture, int property_id, double value ){ int result = -1; CAPSTATUS capstat; CAPTUREPARMS capparam; BITMAPINFO btmp; switch( property_id ) { case CV_CAP_PROP_DIALOG_DISPLAY: result = capDlgVideoDisplay(capture->capWnd); //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEODISPLAY,0,0); break; case CV_CAP_PROP_DIALOG_FORMAT: result = capDlgVideoFormat(capture->capWnd); //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOFORMAT,0,0); break; case CV_CAP_PROP_DIALOG_SOURCE: result = capDlgVideoSource(capture->capWnd); //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOSOURCE,0,0); break; case CV_CAP_PROP_DIALOG_COMPRESSION: result = capDlgVideoCompression(capture->capWnd); break; case CV_CAP_PROP_FRAME_WIDTH_HEIGHT: capGetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO)); btmp.bmiHeader.biWidth = floor(value/1000); btmp.bmiHeader.biHeight = value-floor(value/1000)*1000; btmp.bmiHeader.biSizeImage = btmp.bmiHeader.biHeight * btmp.bmiHeader.biWidth * btmp.bmiHeader.biPlanes * btmp.bmiHeader.biBitCount / 8; capSetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO)); break; default: break; } return result;} and edit captureCAM_VFW_vtable as following: static CvCaptureVTable captureCAM_VFW_vtable ={6,(CvCaptureCloseFunc)icvCloseCAM_VFW,(CvCaptureGrabFrameFunc)icvGrabFrameCAM_VFW,(CvCaptureRetrieveFrameFunc)icvRetrieveFrameCAM_VFW,(CvCaptureGetPropertyFunc)icvGetPropertyCAM_VFW,(CvCaptureSetPropertyFunc)icvSetPropertyCAM_VFW, // was NULL(CvCaptureGetDescriptionFunc)0}; Now rebuilt highgui.dll . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/14287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1630/"
]
} |
14,300 | For example; with the old command prompt it would be: cmd.exe /k mybatchfile.bat | Drop into a cmd instance (or indeed PowerShell itself) and type this: powershell -? You'll see that powershell.exe has a "-noexit" parameter which tells it not to exit after executing a "startup command". | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/14300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/887/"
]
} |
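Putting that together, a concrete command line that mirrors the cmd.exe /k example from the question (the script path is a placeholder):

powershell -NoExit -Command "& 'C:\scripts\mybatchfile.ps1'"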
14,330 | How do I convert the RGB values of a pixel to a single monochrome value? | I found one possible solution in the Color FAQ . The luminance component Y (from the CIE XYZ system ) captures what is most perceived by humans as color in one channel. So, use those coefficients: mono = (0.2125 * color.r) + (0.7154 * color.g) + (0.0721 * color.b); | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/14330",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1630/"
]
} |
14,344 | Is it possible to compile and run OpenGL programs from under Cygwin? If yes, how? | It is possible to compile and run OpenGL programs under Cygwin. I illustrate the basic steps here: I assume you know OpenGL programming. If not, get the Red Book ( The OpenGL Programming Guide ). It is mandatory reading for OpenGL anyway. I assume you have Cygwin installed. If not, visit cygwin.com and install it. To compile and run OpenGL programs, you need the Cygwin package named opengl . In the Cygwin installer, it can be found under the Graphics section . Please install this package. Write a simple OpenGL program, say ogl.c . Compile the program using the flags -lglut32 -lglu32 -lopengl32 . (This links your program with the GLUT, GLU and OpenGL libraries. An OpenGL program might typically use functions from all the 3 of them.) For example: $ gcc ogl.c -lglut32 -lglu32 -lopengl32 Run the program. It's as simple as that! | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/14344",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1630/"
]
} |
14,378 | I want to use the mouse scrollwheel in my OpenGL GLUT program to zoom in and out of a scene? How do I do that? | Note that venerable Nate Robin's GLUT library doesn't support the scrollwheel. But, later implementations of GLUT like FreeGLUT do. Using the scroll wheel in FreeGLUT is dead simple. Here is how: Declare a callback function that shall be called whenever the scroll wheel is scrolled. This is the prototype: void mouseWheel(int, int, int, int); Register the callback with the (Free)GLUT function glutMouseWheelFunc() . glutMouseWheelFunc(mouseWheel); Define the callback function. The second parameter gives the direction of the scroll. Values of +1 is forward, -1 is backward. void mouseWheel(int button, int dir, int x, int y){ if (dir > 0) { // Zoom in } else { // Zoom out } return;} That's it! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/14378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1630/"
]
} |
14,386 | With the Visual Studio 2005 C++ compiler, I get the following warning when my code uses the fopen() and such calls:
1>foo.cpp(5) : warning C4996: 'fopen' was declared deprecated
1>    c:\program files\microsoft visual studio 8\vc\include\stdio.h(234) : see declaration of 'fopen'
1>    Message: 'This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_DEPRECATE. See online help for details.'
How do I prevent this? | It looks like Microsoft has deprecated lots of calls which use buffers to improve code security. However, the solutions they're providing aren't portable. Anyway, if you aren't interested in using the secure version of their calls (like fopen_s), you need to place a definition of _CRT_SECURE_NO_DEPRECATE before your included header files. For example:
#define _CRT_SECURE_NO_DEPRECATE
#include <stdio.h>
The preprocessor directive can also be added to your project settings to affect all the files under the project. To do this add _CRT_SECURE_NO_DEPRECATE to Project Properties -> Configuration Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/14386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1630/"
]
} |
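If you would rather switch to the secure variant than suppress the warning, a minimal fopen_s sketch (MSVC-specific; the file name is a placeholder):

#include <stdio.h>

int main(void)
{
    FILE *fp = NULL;
    errno_t err = fopen_s(&fp, "data.txt", "r");

    if (err != 0 || fp == NULL)
    {
        printf("fopen_s failed with error %d\n", (int)err);
        return 1;
    }

    /* ... use fp as usual ... */
    fclose(fp);
    return 0;
}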
14,389 | I have a script that parses the filenames of TV episodes (show.name.s01e02.avi for example), grabs the episode name (from the www.thetvdb.com API) and automatically renames them into something nicer (Show Name - [01x02].avi) The script works fine, that is until you try and use it on files that have Unicode show-names (something I never really thought about, since all the files I have are English, so mostly pretty-much all fall within [a-zA-Z0-9'\-] ) How can I allow the regular expressions to match accented characters and the likes? Currently the regex's config section looks like.. config['valid_filename_chars'] = """0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@£$%^&*()_+=-[]{}"'.,<>`~? """config['valid_filename_chars_regex'] = re.escape(config['valid_filename_chars'])config['name_parse'] = [ # foo_[s01]_[e01] re.compile('''^([%s]+?)[ \._\-]\[[Ss]([0-9]+?)\]_\[[Ee]([0-9]+?)\]?[^\\/]*$'''% (config['valid_filename_chars_regex'])), # foo.1x09* re.compile('''^([%s]+?)[ \._\-]\[?([0-9]+)x([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.s01.e01, foo.s01_e01 re.compile('''^([%s]+?)[ \._\-][Ss]([0-9]+)[\.\- ]?[Ee]([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.103* re.compile('''^([%s]+)[ \._\-]([0-9]{1})([0-9]{2})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.0103* re.compile('''^([%s]+)[ \._\-]([0-9]{2})([0-9]{2,3})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])),] | Use a subrange of [\u0000-\uFFFF] for what you want. You can also use the re.UNICODE compile flag. The docs say that if UNICODE is set, \w will match the characters [0-9_] plus whatever is classified as alphanumeric in the Unicode character properties database. See also http://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-05/2560.html . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/14389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/745/"
]
} |
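A tiny illustration of the re.UNICODE flag mentioned in the answer, under the Python 2 that the question's script targets; the show name is made up, and \xe9 is just an escaped é:

import re

show = u"Am\xe9lie"

print(repr(re.match(r"\w+", show).group()))               # u'Am'
print(repr(re.match(r"\w+", show, re.UNICODE).group()))   # u'Am\xe9lie'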