source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
52,312 | So, I know that try/catch does add some overhead and therefore isn't a good way of controlling process flow, but where does this overhead come from and what is its actual impact? | I'm not an expert in language implementations (so take this with a grain of salt), but I think one of the biggest costs is unwinding the stack and storing it for the stack trace. I suspect this happens only when the exception is thrown (but I don't know), and if so, this would be a decently sized hidden cost every time an exception is thrown... so it's not like you are just jumping from one place in the code to another; there is a lot going on. I don't think it's a problem as long as you are using exceptions for EXCEPTIONAL behavior (so not your typical, expected path through the program). | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/52312",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4541/"
]
} |
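The throw-time cost described in the answer above is easy to measure directly. The following C# sketch is illustrative only (the parse target, iteration count, and class name are arbitrary); it times the same failing operation handled with a return value versus with throw/catch. On typical machines the throw/catch loop runs orders of magnitude slower, which supports reserving exceptions for genuinely exceptional paths.

```csharp
using System;
using System.Diagnostics;

class ExceptionOverheadDemo
{
    static void Main()
    {
        const int iterations = 100000;

        // Error handled by a return value: no stack unwinding, no trace capture.
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            int value;
            int.TryParse("not a number", out value);
        }
        sw.Stop();
        Console.WriteLine("TryParse (no exceptions): " + sw.ElapsedMilliseconds + " ms");

        // Same failure reported by throwing: each iteration pays for the throw.
        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            try { int.Parse("not a number"); }
            catch (FormatException) { /* expected for this benchmark */ }
        }
        sw.Stop();
        Console.WriteLine("Parse + catch (exceptions): " + sw.ElapsedMilliseconds + " ms");
    }
}
```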
52,313 | This question is kind of an add-on to this question In C#, a switch case cannot fall through to other cases, this causes a compilation error. In this case I am just adding some number to the month total for the selected month and each subsequent month thereafter. (simple example, not meant to be real) switch (month){ case 0: add something to month totals case 1: add something to month totals case 2: add something to month totals default: break;} Is there a logical alternative to this in C# without having to write out a ton of if statements? if (month <= 0) add something to monthif (month <= 1) add something to monthif (month <= 2) add something to month.... etc | Often times when you see the noise from a huge switch statement or many if statements that might fall into more than one block, you're trying to suppress a bad design. Instead, what if you implemented the Specification pattern to see if something matched, and then act on it? foreach(MonthSpecification spec in this.MonthSpecifications){ if(spec.IsSatisfiedBy(month)) spec.Perform(month);} then you can just add up different specs that match what you're trying to do. It's hard to tell what your domain is, so my example might be a little contrived. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4144/"
]
} |
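The snippet in the answer above assumes a MonthSpecification type that is never shown. The sketch below is one hypothetical way it could look in C#: the interface name, the month <= threshold rule (mirroring the if (month <= n) chain in the question), and the use of Action<int> delegates are all assumptions made for illustration.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical shape of the specification the answer refers to.
public interface IMonthSpecification
{
    bool IsSatisfiedBy(int month);
    void Perform(int month);
}

// Satisfied for the given month and every earlier one, mirroring the
// fall-through switch / "if (month <= n)" chain from the question.
public class CumulativeMonthSpecification : IMonthSpecification
{
    private readonly int _month;
    private readonly Action<int> _addToTotals;

    public CumulativeMonthSpecification(int month, Action<int> addToTotals)
    {
        _month = month;
        _addToTotals = addToTotals;
    }

    public bool IsSatisfiedBy(int month) { return month <= _month; }
    public void Perform(int month) { _addToTotals(month); }
}

public class MonthTotals
{
    private readonly List<IMonthSpecification> _monthSpecifications =
        new List<IMonthSpecification>
        {
            new CumulativeMonthSpecification(0, m => Console.WriteLine("add to January total")),
            new CumulativeMonthSpecification(1, m => Console.WriteLine("add to February total")),
            new CumulativeMonthSpecification(2, m => Console.WriteLine("add to March total"))
        };

    public void Apply(int month)
    {
        foreach (IMonthSpecification spec in _monthSpecifications)
        {
            if (spec.IsSatisfiedBy(month))
                spec.Perform(month);
        }
    }
}
```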
52,353 | I have an application that reads a CSV file with piles of data rows. I give the user a summary of the number of rows based on types of data, but I want to make sure that I don't read in too many rows of data and cause OutOfMemoryError s. Each row translates into an object. Is there an easy way to find out the size of that object programmatically? Is there a reference that defines how large primitive types and object references are for a VM ? Right now, I have code that says read up to 32,000 rows , but I'd also like to have code that says read as many rows as possible until I've used 32MB of memory. Maybe that is a different question, but I'd still like to know. | You can use the java.lang.instrument package . Compile and put this class in a JAR: import java.lang.instrument.Instrumentation;public class ObjectSizeFetcher { private static Instrumentation instrumentation; public static void premain(String args, Instrumentation inst) { instrumentation = inst; } public static long getObjectSize(Object o) { return instrumentation.getObjectSize(o); }} Add the following to your MANIFEST.MF : Premain-Class: ObjectSizeFetcher Use the getObjectSize() method: public class C { private int x; private int y; public static void main(String [] args) { System.out.println(ObjectSizeFetcher.getObjectSize(new C())); }} Invoke with: java -javaagent:ObjectSizeFetcherAgent.jar C | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/52353",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5074/"
]
} |
52,357 | I've been wondering, what is the point of clog? As near as I can tell, clog is the same as cerr but with buffering so it is more efficient. Usually stderr is the same as stdout, so clog is the same as cout. This seems pretty lame to me, so I figure I must be misunderstanding it. If I have log messages going out to the same place I have error messages going out to (perhaps something in /var/log/messages), then I probably am not writing too much out (so there isn't much lost by using non-buffered cerr). In my experience, I want my log messages up to date (not buffered) so I can help find a crash (so I don't want to be using the buffered clog). Apparently I should always be using cerr. I'd like to be able to redirect clog inside my program. It would be useful to redirect cerr so that when I call a library routine I can control where cerr and clog go to. Can some compilers support this? I just checked DJGPP and stdout is defined as the address of a FILE struct, so it is illegal to do something like "stdout = freopen(...)". Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr? Is the only difference between clog and cerr the buffering? How should I implement (or find) a more robust logging facility (links please)? | Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr? Yes. You want the rdbuf function. ofstream ofs("logfile");cout.rdbuf(ofs.rdbuf());cout << "Goes to file." << endl; Is the only difference between clog and cerr the buffering? As far as I know, yes. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/52357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4662/"
]
} |
52,438 | I've been doing c# for a long time, and have never come across an easy way to just new up a hash. I've recently become acquainted with the ruby syntax of hashes and wonder, does anyone know of a simple way to declare a hash as a literal, without doing all the add calls. { "whatever" => {i => 1}; "and then something else" => {j => 2}}; | If you're using C# 3.0 (.NET 3.5) then you can use collection initializers. They're not quite as terse as in Ruby but still an improvement. This example is based on the MSDN Example var students = new Dictionary<int, StudentName>(){ { 111, new StudentName {FirstName="Sachin", LastName="Karnik", ID=211}}, { 112, new StudentName {FirstName="Dina", LastName="Salimzianova", ID=317, }}, { 113, new StudentName {FirstName="Andy", LastName="Ruth", ID=198, }}}; | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/52438",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1220/"
]
} |
52,492 | I just started using GNU Emacs as my text editor and I am concerned about getting afflicted with " Emacs Pinky " by having to constantly press the control key with my pinky finger as is required when using Emacs. How can I avoid potentially getting this type of repetitive strain injury? | Making caps lock another control key is a good place to start. Invest in an ergonomic keyboard. Some emacs users even go as far as to get foot pedal things for control and meta... | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/52492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4872/"
]
} |
52,550 | What does the , operator do in C? | The expression: (expression1, expression2) First expression1 is evaluated, then expression2 is evaluated, and the value of expression2 is returned for the whole expression. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/52550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2064/"
]
} |
52,557 | Anyone know this compiler feature? It seems GCC support that. How does it work? What is the potential gain? In which case it's good? Inner loops? (this question is specific, not about optimization in general, thanks) | It works by placing extra code to count the number of times each codepath is taken. When you compile a second time the compiler uses the knowledge gained about execution of your program that it could only guess at before. There are a couple things PGO can work toward: Deciding which functions should be inlined or not depending on how often they are called. Deciding how to place hints about which branch of an "if" statement should be predicted on based on the percentage of calls going one way or the other. Deciding how to optimize loops based on how many iterations get taken each time that loop is called. You never really know how much these things can help until you test it. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277510/"
]
} |
52,563 | I'm trying to let an <input type="text"> (henceforth referred to as “textbox”) fill a parent container by settings its width to 100% . This works until I give the textbox a padding. This is then added to the content width and the input field overflows. Notice that in Firefox this only happens when rendering the content as standards compliant. In quirks mode, another box model seems to apply. Here's a minimal code to reproduce the behaviour in all modern browsers. #x { background: salmon; padding: 1em;}#y, input { background: red; padding: 0 20px; width: 100%;} <div id="x"> <div id="y">x</div> <input type="text"/></div> My question: How do I get the textbox to fit the container? Notice : for the <div id="y"> , this is straightforward: simply set width: auto . However, if I try to do this for the textbox, the effect is different and the textbox takes its default row count as width (even if I set display: block for the textbox). EDIT: David's solution would of course work. However, I do not want to modify the HTML – I do especially not want to add dummy elements with no semantic functionality. This is a typical case of divitis that I want to avoid at all cost. This can only be a last-resort hack. | With CSS3 you can use the box-sizing property on your inputs to standardise their box models.Something like this would enable you to add padding and have 100% width: input[type="text"] { -webkit-box-sizing: border-box; // Safari/Chrome, other WebKit -moz-box-sizing: border-box; // Firefox, other Gecko box-sizing: border-box; // Opera/IE 8+} Unfortunately this won't work for IE6/7 but the rest are fine ( Compatibility List ), so if you need to support these browsers your best bet would be Davids solution. If you'd like to read more check out this brilliant article by Chris Coyier . Hope this helps! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/52563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1968/"
]
} |
52,646 | While cross-site scripting is generally regarded as negative, I've run into several situations where it's necessary. I was recently working within the confines of a very limiting content management system. I needed to include database code within the page, but the hosting server didn't have anything usable available. I set up a couple bare-bones scripts on my own server, originally thinking that I could use AJAX to import the contents of my scripts directly into the template of the CMS (thus retaining dynamic images, menu items, CSS, etc.). I was wrong. Due to the limitations of XMLHttpRequest objects, it's not possible to grab content from a different domain. So I thought iFrame - even though I'm not a fan of frames, I thought that I could create a frame that matched the width and height of the content so that it would appear native. Again, I was blocked by cross-site scripting "protections." While I could indeed load a remote file into the iFrame , I couldn't execute JavaScript to modify its size on either the host page or inside the loaded page. In this particular scenario, I wasn't able to point a subdomain to my server. I also couldn't create a script on the CMS server that could proxy content from my server, so my last thought was to use a remote JavaScript. A remote JavaScript works. It breaks when the user has JavaScript disabled, which is a downside; but it works. The "problem" I was having with using a remote JavaScript was that I had to use the JS function document.write() to output any content. Any output that isn't JS causes script errors. In addition to using document.write() for every line, you also have to ensure that the content is escaped - or else you end up with more script errors. My solution was as follows: My script received a GET parameter ("page") and then looked for the file ( {$page}.php ), and read the contents into a variable. However, I had to use awkward buffering techniques in order to actually execute the included scripts (for things like database interaction) then strip the final content of all line break characters ( \n ) followed by escaping all required characters. The end result is that my original script (which outputs JavaScript) accesses seemingly "standard" scripts on my server and converts their standard output to JavaScript for displaying within the CMS template. While this solution works, it seems like there may be a better way to accomplish the same thing. What is the best way to make cross-site scripting work specifically for the purpose of including content from a completely different domain? | You've got three choices: Create a server side proxy script . Create a remote script to read in remote dynamic HTML. Use a library like jQuery to make this easier. You can use the load function to inject HTML where needed. EDIT What I originally meant for example # 2 was utilizing JSONP , which requires the server side script to recognize the "callback=?" param. Use a client side Flash proxy and setup a crossdomain.xml file on your server's web root. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52646",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1581/"
]
} |
52,652 | What are the available libraries for creating pretty charts and graphs in a Python application? | I'm the one supporting CairoPlot and I'm very proud it came up here. Surely matplotlib is great, but I believe CairoPlot is better looking. So, for presentations and websites, it's a very good choice. Today I released version 1.1. If interested, check it out at CairoPlot v1.1 EDIT: After a long and cold winter, CairoPlot is being developed again. Check out the new version on GitHub . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/52652",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/473/"
]
} |
52,657 | How are people unit testing code that uses Linq to SQL? | Update: Fredrik has put an example solution on how to unit test linq2sql applications over at his blog. You can download it at: http://web.archive.org/web/20120415022448/http://iridescence.no/post/DataContext-Repository-Pattern-Example-Code.aspx Not only do I think it's great that he posted an example solution, he also managed to extract interfaces for all classes, which makes the design more decoupled. My old post: *I found these blogs that I think are a good start for making the DataContext wrapper: Link1 Link2 They cover almost the same topic except that the first one implements means for extracting interfaces for the tables as well. The second one is more extensive though, so I included it as well.* | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5189/"
]
} |
52,668 | There are many ASP.NET MVC blog post bits and pieces scattered over different web sites, as well as a couple of resource questions here - ASP.NET Model-view-controller (MVC) - where do I start from? and MVC Learning Resources I wonder if there was a one-stop tutorial posted yet on getting started with ASP.NET MVC? Thank you! Edit: I probably need to clarify - a one-stop tutorial that'd help to get started within an hour or two and learn more as I go... Reading books is a non-starter for me personally - it takes more time than I can afford and starts with the basics... | Have you looked at MVC Samples on CodePlex? Rob Conery has some screencasts that go along with the creation of the site at http://blog.wekeroad.com/mvc-storefront/ . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4808/"
]
} |
52,674 | Let's say you have a variable in a makefile fragment like the following: MY_LIST=a b c d How do I then reverse the order of that list? I need: $(warning MY_LIST=${MY_LIST}) to show MY_LIST=d c b a Edit: the real problem is that ld -r some_object.o ${MY_LIST} produces an a.out with undefined symbols because the items in MY_LIST are actually archives, but in the wrong order. If the order of MY_LIST is reversed, it will link correctly (I think). If you know a smarter way to get the link order right, clue me in. | A solution in pure GNU make: default: all foo = please reverse me reverse = $(if $(1),$(call reverse,$(wordlist 2,$(words $(1)),$(1)))) $(firstword $(1)) all : @echo $(call reverse,$(foo)) Gives: $ make me reverse please | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/52674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3279/"
]
} |
52,676 | I'm a keyboard junkie. I love having a key sequence to do everything. What are your favorite keyboard shortcuts? I'll start by naming a couple of mine: 1 - Alt - Space to access the windows menu for the current window 2 - F2 to rename a file in Windows Explorer | Win + 1 .. 9 -- Start quick launch shortcut at that index (Windows Vista). Ctrl + Scroll Lock , Scroll Lock -- Crash your computer: Windows feature lets you generate a memory dump file by using the keyboard @gabr -- Win + D is show desktop , Win + M minimizes all windows. Hitting Win + D twice brings everything back as it has only shown the desktop window in front of the other windows. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/52676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/672/"
]
} |
52,704 | How do I discard changes in my working copy that are not in the index? | For all unstaged files in current working directory use: git restore . For a specific file use: git restore path/to/file/to/revert That together with git switch replaces the overloaded git checkout ( see here ), and thus removes the argument disambiguation. If a file has both staged and unstaged changes, only the unstaged changes shown in git diff are reverted. Changes shown in git diff --staged stay intact. Before Git 2.23 For all unstaged files in current working directory: git checkout -- . For a specific file: git checkout -- path/to/file/to/revert -- here to remove ambiguity (this is known as argument disambiguation ). | {
"score": 13,
"source": [
"https://Stackoverflow.com/questions/52704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4883/"
]
} |
52,714 | In the STL almost all containers have an erase function. The question I have is: in a vector, the erase function returns an iterator pointing to the next element in the vector. The map container does not do this. Instead it returns void. Anyone know why there is this inconsistency? | See http://www.sgi.com/tech/stl/Map.html Map has the important property that inserting a new element into a map does not invalidate iterators that point to existing elements. Erasing an element from a map also does not invalidate any iterators, except, of course, for iterators that actually point to the element that is being erased. The reason for returning an iterator on erase is so that you can iterate over the list erasing elements as you go. If erasing an item doesn't invalidate existing iterators there is no need to do this. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/52714",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2328/"
]
} |
52,739 | Is there a good way to see what format an image is, without having to read the entire file into memory? Obviously this would vary from format to format (I'm particularly interested in TIFF files) but what sort of procedure would be useful to determine what kind of image format a file is without having to read through the entire file? BONUS : What if the image is a Base64-encoded string? Any reliable way to infer it before decoding it? | Most image file formats have unique bytes at the start. The unix file command looks at the start of the file to see what type of data it contains. See the Wikipedia article on Magic numbers in files and magicdb.org . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52739",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2577/"
]
} |
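For the TIFF case mentioned in the question, the magic-number check described above needs only the first four bytes. Below is an illustrative C# sketch (class and method names are made up); the 'II' + 42 and 'MM' + 42 signatures are the standard little-endian and big-endian TIFF headers.

```csharp
using System;
using System.IO;

static class ImageSniffer
{
    // Checks only the first few bytes, so the whole file never has to be read.
    public static bool LooksLikeTiff(string path)
    {
        byte[] header = new byte[4];
        using (FileStream fs = File.OpenRead(path))
        {
            if (fs.Read(header, 0, 4) < 4)
                return false;
        }

        // Little-endian TIFF: 'I' 'I' 42 0    Big-endian TIFF: 'M' 'M' 0 42
        bool littleEndian = header[0] == 0x49 && header[1] == 0x49 && header[2] == 0x2A && header[3] == 0x00;
        bool bigEndian    = header[0] == 0x4D && header[1] == 0x4D && header[2] == 0x00 && header[3] == 0x2A;
        return littleEndian || bigEndian;
    }
}
```

For the Base64 bonus question, the same idea works without decoding the whole string: Base64-encoding those four-byte signatures produces prefixes starting with "SUkq" (little-endian) or "TU0A" (big-endian), so checking the first few characters of the encoded string is usually enough.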
52,753 | What is best practice when creating your exception classes in a .NET solution: To derive from System.Exception or from System.ApplicationException ? | According to Jeffery Richter in the Framework Design Guidelines book: System.ApplicationException is a class that should not be part of the .NET framework. It was intended to have some meaning in that you could potentially catch "all" the application exceptions, but the pattern was not followed and so it has no value. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/52753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5392/"
]
} |
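Following that guidance, a custom exception type derives directly from System.Exception. A minimal sketch (the exception name is made up; the three constructors and the Serializable attribute follow the conventional pattern):

```csharp
using System;

// Hypothetical exception type; derive from Exception, not ApplicationException.
[Serializable]
public class OrderProcessingException : Exception
{
    public OrderProcessingException() { }

    public OrderProcessingException(string message)
        : base(message) { }

    public OrderProcessingException(string message, Exception innerException)
        : base(message, innerException) { }
}
```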
52,755 | I am using Windows, and I have two monitors. Some applications will always start on my primary monitor, no matter where they were when I closed them. Others will always start on the secondary monitor, no matter where they were when I closed them. Is there a registry setting buried somewhere, which I can manipulate to control which monitor applications launch into by default? @rp: I have Ultramon, and I agree that it is indispensable, to the point that Microsoft should buy it and incorporate it into their OS. But as you said, it doesn't let you control the default monitor a program launches into. | Correctly written Windows apps that want to save their location from run to run will save the results of GetWindowPlacement() before shutting down, then use SetWindowPlacement() on startup to restore their position. Frequently, apps will store the results of GetWindowPlacement() in the registry as a REG_BINARY for easy use. The WINDOWPLACEMENT route has many advantages over other methods: Handles the case where the screen resolution changed since the last run: SetWindowPlacement() will automatically ensure that the window is not entirely offscreen Saves the state (minimized/maximized) but also saves the restored (normal) size and position Handles desktop metrics correctly, compensating for the taskbar position, etc. (i.e. uses "workspace coordinates" instead of "screen coordinates" -- techniques that rely on saving screen coordinates may suffer from the "walking windows" problem where a window will always appear a little lower each time if the user has a toolbar at the top of the screen). Finally, programs that handle window restoration properly will take into account the nCmdShow parameter passed in from the shell. This parameter is set in the shortcut that launches the application (Normal, Minimized, Maximize): if(nCmdShow != SW_SHOWNORMAL) placement.showCmd = nCmdShow; //allow shortcut to override For non-Win32 applications, it's important to be sure that the method you're using to save/restore window position eventually uses the same underlying call, otherwise (like Java Swing's setBounds() / getBounds() problem) you'll end up writing a lot of extra code to re-implement functionality that's already there in the WINDOWPLACEMENT functions. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/52755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/672/"
]
} |
52,794 | How do I create a branch in subversion that is 'deeper' than just the 'branches' directory? I have the standard trunk , tags and branches structure and I want to create a branch that is several directories deeper than the 'branches' tag. Using the standard svn move method, it gives me a folder not found error. I also tried copying it into the branches folder, checking it out, and then using 'svn move' to put it into the tree structure I wanted, but I got a 'working copy admin area is missing' error. What do I need to do to create this? For the sake of illustration, let us suppose I want to create a branch to go directly into 'branches/version_1/project/subproject' (which does not exist yet)? | svn copy --parents http://url/to/subproject http://url/to/repository/branches/version_1/project/subproject That should create the directory you want to put the subproject in ( --parents means "create the intermediate directories for me"). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/277/"
]
} |
52,797 | Is there a way to get the path for the assembly in which the current code resides? I do not want the path of the calling assembly, just the one containing the code. Basically my unit test needs to read some xml test files which are located relative to the dll. I want the path to always resolve correctly regardless of whether the testing dll is run from TestDriven.NET, the MbUnit GUI or something else. Edit : People seem to be misunderstanding what I'm asking. My test library is located in say C:\projects\myapplication\daotests\bin\Debug\daotests.dll and I would like to get this path: C:\projects\myapplication\daotests\bin\Debug\ The three suggestions so far fail me when I run from the MbUnit Gui: Environment.CurrentDirectory gives c:\Program Files\MbUnit System.Reflection.Assembly.GetAssembly(typeof(DaoTests)).Location gives C:\Documents andSettings\george\LocalSettings\Temp\ ....\DaoTests.dll System.Reflection.Assembly.GetExecutingAssembly().Location gives the same as the previous. | Note : Assembly.CodeBase is deprecated in .NET Core/.NET 5+: https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assembly.codebase?view=net-5.0 Original answer: I've defined the following property as we use this often in unit testing. public static string AssemblyDirectory{ get { string codeBase = Assembly.GetExecutingAssembly().CodeBase; UriBuilder uri = new UriBuilder(codeBase); string path = Uri.UnescapeDataString(uri.Path); return Path.GetDirectoryName(path); }} The Assembly.Location property sometimes gives you some funny results when using NUnit (where assemblies run from a temporary folder), so I prefer to use CodeBase which gives you the path in URI format, then UriBuild.UnescapeDataString removes the File:// at the beginning, and GetDirectoryName changes it to the normal windows format. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/52797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5056/"
]
} |
52,822 | How can you import a FoxPro DBF file in SQL Server? | Use a linked server or use OPENROWSET, for example: SELECT * INTO SomeTable FROM OPENROWSET('MSDASQL', 'Driver=Microsoft Visual FoxPro Driver;SourceDB=\\SomeServer\SomePath\;SourceType=DBF','SELECT * FROM SomeDBF') | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4685/"
]
} |
52,824 | Is it possible to merge to a branch that is not a direct parent or child in TFS? I suspect that the answer is no as this is what I've experienced while using it. However, it seems that at certain times it would be really useful when there are different features being worked on that may have different approval cycles (ie. feature one might be approved before feature two). This becomes exceedingly difficult when we have production branches where we have to merge some feature into a previous branch so we can release before the next full version. Our current branching strategy is to develop in the trunk (or mainline as we call it), and create a branch to stabilize and release to production. This branch can then be used to create hotfixes and other things while mainline can diverge for upcoming features. What techniques can be used otherwise to mitigate a scenario such as the one(s) described above? | I agree with Harpreet that you may want to revisit how you have set up your branching structure. However, if you really want to perform this type of merge, you can do it through something called a baseless merge. It runs from the TFS command prompt: Tf merge /baseless <<source path>> <<target path>> /recursive Additional info about baseless merges can be found here Also I found this document to be invaluable when constructing our TFS branching structure Microsoft Team Foundation Server Branching Guidance | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5416/"
]
} |
52,842 | System.IO.Directory.GetFiles() returns a string[] . What is the default sort order for the returned values? I'm assuming by name, but if so how much does the current culture effect it? Can you change it to something like creation date? Update: MSDN points out that the sort order is not guaranteed for .Net 3.5, but the 2.0 version of the page doesn't say anything at all and neither page will help you sort by things like creation or modification time. That information is lost once you have the array (it contains only strings). I could build a comparer that would check for each file it gets, but that means accessing the file system repeatedly when presumably the .GetFiles() method already does this. Seems very inefficient. | If you're interested in properties of the files such as CreationTime, then it would make more sense to use System.IO.DirectoryInfo.GetFileSystemInfos(). You can then sort these using one of the extension methods in System.Linq, e.g.: DirectoryInfo di = new DirectoryInfo("C:\\");FileSystemInfo[] files = di.GetFileSystemInfos();var orderedFiles = files.OrderBy(f => f.CreationTime); Edit - sorry, I didn't notice the .NET2.0 tag so ignore the LINQ sorting. The suggestion to use System.IO.DirectoryInfo.GetFileSystemInfos() still holds though. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/52842",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3043/"
]
} |
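Since the answer withdraws the LINQ sorting for the .NET 2.0 tag, here is an illustrative .NET 2.0-compatible sketch that keeps the suggested GetFileSystemInfos() call and sorts by creation time with Array.Sort and a Comparison<T> delegate:

```csharp
using System;
using System.IO;

class ListByCreationTime
{
    static void Main()
    {
        DirectoryInfo di = new DirectoryInfo(@"C:\");
        FileSystemInfo[] files = di.GetFileSystemInfos();

        // Sort in place by creation time -- works on .NET 2.0, no LINQ required.
        Array.Sort(files, delegate(FileSystemInfo a, FileSystemInfo b)
        {
            return a.CreationTime.CompareTo(b.CreationTime);
        });

        foreach (FileSystemInfo info in files)
            Console.WriteLine(info.CreationTime + "  " + info.Name);
    }
}
```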
52,874 | I have a piece of server-ish software written in Java to run on Windows and OS X. (It is not running on a server, but just a normal user's PC - something like a torrent client.) I would like the software to signal to the OS to keep the machine awake (prevent it from going into sleep mode) while it is active. Of course I don't expect there to be a cross platform solution, but I would love to have some very minimal C programs/scripts that my app can spawn to inform the OS to stay awake. Any ideas? | I use this code to keep my workstation from locking. It's currently only set to move the mouse once every minute, you could easily adjust it though. It's a hack, not an elegant solution. import java.awt.*;import java.util.*;public class Hal{ public static void main(String[] args) throws Exception{ Robot hal = new Robot(); Random random = new Random(); while(true){ hal.delay(1000 * 60); int x = random.nextInt() % 640; int y = random.nextInt() % 480; hal.mouseMove(x,y); } }} | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/52874",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/338/"
]
} |
52,880 | Does Google Reader have an API and if so, how can I get the count of the number of unread posts for a specific user knowing their username and password? | This URL will give you a count of unread posts per feed. You can then iterate over the feeds and sum up the counts. http://www.google.com/reader/api/0/unread-count?all=true Here is a minimalist example in Python...parsing the xml/json and summing the counts is left as an exercise for the reader: import urllibimport urllib2username = '[email protected]'password = '******'# Authenticate to obtain SIDauth_url = 'https://www.google.com/accounts/ClientLogin'auth_req_data = urllib.urlencode({'Email': username, 'Passwd': password, 'service': 'reader'})auth_req = urllib2.Request(auth_url, data=auth_req_data)auth_resp = urllib2.urlopen(auth_req)auth_resp_content = auth_resp.read()auth_resp_dict = dict(x.split('=') for x in auth_resp_content.split('\n') if x)auth_token = auth_resp_dict["Auth"]# Create a cookie in the header using the SID header = {}header['Authorization'] = 'GoogleLogin auth=%s' % auth_tokenreader_base_url = 'http://www.google.com/reader/api/0/unread-count?%s'reader_req_data = urllib.urlencode({'all': 'true', 'output': 'xml'})reader_url = reader_base_url % (reader_req_data)reader_req = urllib2.Request(reader_url, None, header)reader_resp = urllib2.urlopen(reader_req)reader_resp_content = reader_resp.read()print reader_resp_content And some additional links on the topic: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI How do you access an authenticated Google App Engine service from a (non-web) python client? http://blog.gpowered.net/2007/08/google-reader-api-functions.html | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/52880",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383/"
]
} |
52,883 | I'm looking for a graph algorithm with some unusual properties. Each edge in the graph is either an "up" edge or a "down" edge. A valid path can go an indefinite number of "up"'s followed by an indefinite number of "down"'s, or vice versa. However it cannot change direction more than once. E.g., a valid path might be A "up" B "up" C "down" E "down" Fan invalid path might be A "up" B "down" C "up" D What is a good algorithm for finding the shortest valid path between two nodes? What about finding all of the equal length shortest paths? | Assuming you don't have any heuristics, a variation of dijkstra's algorithm should suffice pretty well. Every time you consider a new edge, store information about its "ancestors". Then, check for the invariant (only one direction change), and backtrack if it is violated. The ancestors here are all the edges that were traversed to get to the current node, along the shortest path. One good way to store the ancestor information would be as a pair of numbers. If U is up, and D is down, a particular edge's ancestors could be UUUDDDD , which would be the pair 3, 4 . You will not need a third number, because of the invariant. Since we have used dijkstra's algorithm, finding multiple shortest paths is already taken care of. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3146/"
]
} |
52,898 | I've noticed that Visual Studio 2008 is placing square brackets around column names in sql. Do the brackets offer any advantage? When I hand code T-SQL I've never bothered with them. Example: Visual Studio: SELECT [column1], [column2] etc... My own way: SELECT column1, column2 etc... | The brackets are required if you use keywords or special chars in the column names or identifiers. You could name a column [First Name] (with a space) – but then you'd need to use brackets every time you referred to that column. The newer tools add them everywhere just in case or for consistency. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/52898",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5433/"
]
} |
52,927 | I frequently find myself writing code like this: List<int> list = new List<int> { 1, 3, 5 };foreach (int i in list) { Console.Write("{0}\t", i.ToString()); }Console.WriteLine(); Better would be something like this: List<int> list = new List<int> { 1, 3, 5 };Console.WriteLine("{0}\t", list); I suspect there's some clever way of doing this, but I don't see it. Does anybody have a better solution than the first block? | Do this: list.ForEach(i => Console.Write("{0}\t", i)); EDIT: To others that have responded - he wants them all on the same line, with tabs between them. :) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/52927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4593/"
]
} |
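On .NET 4 or later, string.Join offers another one-liner; it accepts any IEnumerable<T> and places the separator only between items (on older frameworks you would convert the list to a string array first). A minimal sketch:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        List<int> list = new List<int> { 1, 3, 5 };

        // One line, tab-separated, no trailing tab (unlike the "{0}\t" format).
        Console.WriteLine(string.Join("\t", list));
    }
}
```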
52,950 | I'm not too sure what is going on here, but sometimes a particular file in my repository will change the case of its name. e.g.,: before: File.h after: file.h I don't really care why this is happening, but this causes git to think it is a new file, and then I have to go and change the file name back. Can you just make git ignore case changes? [edit] I suspect it is Visual Studio doing something weird with that particular file, because it seems to happen most often when I open and save it after changes. I don't have any way to fix bugs in VS however, but git should be a bit more capable I hope. | Since version 1.5.6 there is an ignorecase option available in the [core] section of .git/config e.g. add ignorecase = true To change it for just one repo, from that folder run: git config core.ignorecase true To change it globally: git config --global core.ignorecase true | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/52950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3146/"
]
} |
52,964 | What is the best way to sort the results of a sql query into a random order within a stored procedure? | This is a duplicate of SO# 19412 . Here's the answer I gave there: select top 1 * from mytable order by newid() In SQL Server 2005 and up, you can use TABLESAMPLE to get a random sample that's repeatable: SELECT FirstName, LastName FROM Contact TABLESAMPLE (1 ROWS) ; | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/52964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5466/"
]
} |
52,984 | How do I setup an Ant task to generate Emma code coverage reports? | To answer questions about where the source and instrumented directories are (these can be switched to whatever your standard directory structure is): <property file="build.properties" /><property name="source" location="src/main/java" /><property name="test.source" location="src/test/java" /><property name="target.dir" location="target" /><property name="target" location="${target.dir}/classes" /><property name="test.target" location="${target.dir}/test-classes" /><property name="instr.target" location="${target.dir}/instr-classes" /> Classpaths: <path id="compile.classpath"> <fileset dir="lib/main"> <include name="*.jar" /> </fileset></path><path id="test.compile.classpath"> <path refid="compile.classpath" /> <pathelement location="lib/test/junit-4.6.jar" /> <pathelement location="${target}" /></path><path id="junit.classpath"> <path refid="test.compile.classpath" /> <pathelement location="${test.target}" /></path> First you need to setup where Ant can find the Emma libraries: <path id="emma.lib" > <pathelement location="${emma.dir}/emma.jar" /> <pathelement location="${emma.dir}/emma_ant.jar" /></path> Then import the task: <taskdef resource="emma_ant.properties" classpathref="emma.lib" /> Then instrument the code: <target name="coverage.instrumentation"> <mkdir dir="${instr.target}"/> <mkdir dir="${coverage}"/> <emma> <instr instrpath="${target}" destdir="${instr.target}" metadatafile="${coverage}/metadata.emma" mode="copy"> <filter excludes="*Test*"/> </instr> </emma> <!-- Update the that will run the instrumented code --> <path id="test.classpath"> <pathelement location="${instr.target}"/> <path refid="junit.classpath"/> <pathelement location="${emma.dir}/emma.jar"/> </path></target> Then run a target with the proper VM arguments like: <jvmarg value="-Demma.coverage.out.file=${coverage}/coverage.emma" /><jvmarg value="-Demma.coverage.out.merge=true" /> Finally generate your report: <target name="coverage.report" depends="coverage.instrumentation"> <emma> <report sourcepath="${source}" depth="method"> <fileset dir="${coverage}" > <include name="*.emma" /> </fileset> <html outfile="${coverage}/coverage.html" /> </report> </emma></target> | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5118/"
]
} |
52,989 | I have a generic Repository<T> class I want to use with an ObjectDataSource. Repository<T> lives in a separate project called DataAccess. According to this post from the MS newsgroups (relevant part copied below): Internally, the ObjectDataSource is calling Type.GetType(string) to get the type, so we need to follow the guideline documented in Type.GetType on how to get type using generics. You can refer to MSDN Library on Type.GetType: http://msdn2.microsoft.com/en-us/library/w3f99sx1.aspx From the document, you will learn that you need to use backtick (`) to denotes the type name which is using generics. Also, here we must specify the assembly name in the type name string. So, for your question, the answer is to use type name like follows: TypeName="TestObjectDataSourceAssembly.MyDataHandler`1[System.String],TestObjectDataSourceAssembly" Okay, makes sense. When I try it, however, the page throws an exception: <asp:ObjectDataSource ID="MyDataSource" TypeName="MyProject.Repository`1[MyProject.MessageCategory],DataAccess" /> [InvalidOperationException: The type specified in the TypeName property of ObjectDataSource 'MyDataSource' could not be found.] The curious thing is that this only happens when I'm viewing the page. When I open the "Configure Data Source" dialog from the VS2008 designer, it properly shows me the methods on my generic Repository class. Passing the TypeName string to Type.GetType() while debugging also returns a valid type. So what gives? | Do something like this. Type type = typeof(Repository<MessageCategory);string assemblyQualifiedName = type.AssemblyQualifiedName; get the value of assemblyQualifiedName and paste it into the TypeName field. Note that Type.GetType(string), the value passed in must be The assembly-qualified name of the type to get. See AssemblyQualifiedName . If the type is in the currently executing assembly or in Mscorlib.dll, it is sufficient to supply the type name qualified by its namespace. So, it may work by passing in that string in your code, because that class is in the currently executing assembly (where you are calling it), where as the ObjectDataSource is not. Most likely the type you are looking for is MyProject.Repository`1[MyProject.MessageCategory, DataAccess, Version=1.0.0.0, Culture=neutral, PublicKey=null], DataAccess, Version=1.0.0.0, Culture=neutral, PublicKey=null | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/52989",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4160/"
]
} |
53,019 | I'm looking into sending regular automated text-messages to a list of subscribed users. Having played with Windows Mobile devices, I could easily implement this using the compact .Net framework + a device hooked up to usb and send the messages through this. I would like to explore other solutions like having a server or something similar to do this. I just have no idea what is involved in such a system. | It really all depends on how many text messages you intend to send and how critical it is that the message arrives on time (and, actually arrives). SMS Aggregators For larger volume and good reliability, you will want to go with an SMS aggregator. These aggregators have web service API's (or SMPP) that you can use to send your message and find out whether your message was delivered over time. Some examples of aggregators with whom I have experience are Air2Web, mBlox, etc. The nice thing about working with an aggregator is that they can guide you through what it takes to send effective messages. For example, if you want your own, distinct, shortcode they can navigate the process with the carriers to secure that shortcode. They can also make sure that you are in compliance with any rules regarding using SMS. Carriers will flat shut you off if you don't respect the use of SMS and only use SMS within the bounds of what you agreed to when you started to use the aggregator. If you overstep your bounds, they have the aggregator relationships to prevent any service interruptions. You'll pay per message and may have a baseline service fee. All if this is determined by your volume. SMTP to SMS If you want an unreliable, low-rent solution to a low number of known addresses, you can use an SMTP to SMS solution. In this case you simply find out the mobile provider for the recipient and use their mobile provider's e-mail scheme to send the message. An example of this is [email protected]. In this scenario, you send the message and it is gone and you hope that it gets there. You really don't know if it is making it. Also, some providers limit how messages come in via their SMTP to SMS gateway to limit SMS spam. But, that scenario is the very easiest to use from virtually any programming language. There are a million C# examples of how to send e-mail and this way would be no different. This is the most cost-effective solution (i.e. free) until you get a large volume of messages. When you start doing too much of this, the carriers might step in when they find that you are sending a ton of messages through their SMTP to SMS gateway. Effective Texting In many cases you have to make sure that recipients have properly opted-in to your service. This is only a big deal if your texts are going to a really large population. You'll want to remember that text messages are short (keep it to less than 140 to 160 characters). When you program things you'll want to bake that in or you might accidentally send multipart messages. Don't forget that you will want to make sure that your recipients realize they might have to pay for the incoming text messages. In a world of unlimited text plans this is less and less of a concern. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4298/"
]
} |
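For the low-volume SMTP-to-SMS route described above, the sending side is ordinary .NET mail code. The sketch below is illustrative only: the gateway domain, SMTP host, sender address, and credentials are placeholders, and the actual gateway address depends on the recipient's carrier.

```csharp
using System.Net;
using System.Net.Mail;

class SmsViaEmailGateway
{
    // The gateway domain, SMTP host and credentials are placeholders --
    // the real gateway address depends on the recipient's mobile carrier.
    public static void Send(string tenDigitNumber, string message)
    {
        MailMessage mail = new MailMessage();
        mail.From = new MailAddress("alerts@example.com");
        mail.To.Add(tenDigitNumber + "@example-carrier-gateway.com");
        mail.Body = message;   // remember the ~160 character SMS limit

        SmtpClient smtp = new SmtpClient("smtp.example.com");
        smtp.Credentials = new NetworkCredential("user", "password");
        smtp.Send(mail);
    }
}
```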
53,026 | I have a table with an XML column. This column is storing some values I keep for configuring my application. I created it to have a more flexible schema.I can't find a way to update this column directly from the table view in SQL Management Studio. Other (INT or Varchar for example) columns are editable. I know I can write an UPDATE statement or create some code to update it. But I'm looking for something more flexible that will let power users edit the XML directly. Any ideas? Reiterating again: Please don't answer I can write an application. I know that, And that is exactly what I'm trying to avoid. | This is an old question, but I needed to do this today. The best I can come up with is to write a query that generates SQL code that can be edited in the query editor - it's sort of lame but it saves you copy/pasting stuff. Note: you may need to go into Tools > Options > Query Results > Results to Text and set the maximum number of characters displayed to a large enough number to fit your XML fields. e.g. select 'update [table name] set [xml field name] = ''' + convert(varchar(max), [xml field name]) +''' where [primary key name] = ' + convert(varchar(max), [primary key name]) from [table name] which produces a lot of queries that look like this (with some sample table/field names): update thetable set thedata = '<root><name>Bob</name></root>' where thekey = 1 You then copy these queries from the results window back up to the query window, edit the xml strings, and then run the queries. (Edit: changed 10 to max to avoid error) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/53026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1363/"
]
} |
53,041 | Visual Studio Solution files contain two GUID's per project entry. I figure one of them is from the AssemblyInfo.cs Does anyone know for sure where these come from, and what they are used for? | Neither GUID is the same GUID as from AssemblyInfo.cs (that is the GUID for the assembly itself, not tied to Visual Studio but the end product of the build). So, for a typical line in the sln file (open the .sln in notepad or editor-of-choice if you wish to see this): Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleSandbox", "ConsoleSandbox\ConsoleSandbox.csproj", "{55A1FD06-FB00-4F8A-9153-C432357F5CAC}" The second GUID is a unique GUID for the project itself. The solution file uses this to map other settings to that project: GlobalSection(ProjectConfigurationPlatforms) = postSolution {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.ActiveCfg = Debug|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.Build.0 = Debug|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.ActiveCfg = Release|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.Build.0 = Release|Any CPUEndGlobalSection The first GUID is actually a GUID that is the unique GUID for the solution itself (I believe). If you have a solution with more than one project, you'll actually see something like the following: Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleSandbox", "ConsoleSandbox\ConsoleSandbox.csproj", "{55A1FD06-FB00-4F8A-9153-C432357F5CAC}"EndProjectProject("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Composite", "..\CompositeWPF\Source\CAL\Composite\Composite.csproj", "{77138947-1D13-4E22-AEE0-5D0DD046CA34}"EndProject | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53041",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1965/"
]
} |
53,046 | In python, there are some special variables and filenames that are surrounded by double-underscores. For example, there is the __file__ variable. I am only able to get them to show up correctly inside of a code block. What do I need to enter to get double underscores in regular text without having them interpreted as an emphasis? | __file__ Put a backslash before the first underscore. Like this: \__file__ | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/53046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4883/"
]
} |
53,086 | Is the return value of GetHashCode() guaranteed to be consistent assuming the same string value is being used? (C#/ASP.NET) I uploaded my code to a server today and to my surprise I had to reindex some data because my server (win2008 64-bit) was returning different values compared to my desktop computer. | If I'm not mistaken, GetHashCode is consistent given the same value, but it is NOT guaranteed to be consistent across different versions of the framework. From the MSDN docs on String.GetHashCode(): The behavior of GetHashCode is dependent on its implementation, which might change from one version of the common language runtime to another. A reason why this might happen is to improve the performance of GetHashCode. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/53086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1368/"
]
} |
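Given that caveat, any hash that is persisted (as in the reindexing problem described in the question) should be computed by your own code rather than taken from String.GetHashCode(). One illustrative option is an FNV-1a-style hash over the string's UTF-16 code units, sketched below with the standard 32-bit FNV constants; a cryptographic hash such as SHA-256 over the encoded bytes is another common choice.

```csharp
using System;

static class StableHash
{
    // Deterministic across runs, machines and framework versions --
    // unlike String.GetHashCode(), whose output may change.
    public static int Fnv1a(string text)
    {
        unchecked
        {
            uint hash = 2166136261;          // FNV-1a 32-bit offset basis
            foreach (char c in text)
            {
                hash ^= c;                   // hash each UTF-16 code unit
                hash *= 16777619;            // FNV-1a 32-bit prime
            }
            return (int)hash;
        }
    }
}
```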
53,102 | From the Immediate Window in Visual Studio: > Path.Combine(@"C:\x", "y")"C:\\x\\y"> Path.Combine(@"C:\x", @"\y")"\\y" It seems that they should both be the same. The old FileSystemObject.BuildPath() didn't work this way... | This is kind of a philosophical question (which perhaps only Microsoft can truly answer), since it's doing exactly what the documentation says. System.IO.Path.Combine "If path2 contains an absolute path, this method returns path2." Here's the actual Combine method from the .NET source. You can see that it calls CombineNoChecks , which then calls IsPathRooted on path2 and returns that path if so: public static String Combine(String path1, String path2) { if (path1==null || path2==null) throw new ArgumentNullException((path1==null) ? "path1" : "path2"); Contract.EndContractBlock(); CheckInvalidPathChars(path1); CheckInvalidPathChars(path2); return CombineNoChecks(path1, path2);}internal static string CombineNoChecks(string path1, string path2){ if (path2.Length == 0) return path1; if (path1.Length == 0) return path2; if (IsPathRooted(path2)) return path2; char ch = path1[path1.Length - 1]; if (ch != DirectorySeparatorChar && ch != AltDirectorySeparatorChar && ch != VolumeSeparatorChar) return path1 + DirectorySeparatorCharAsString + path2; return path1 + path2;} I don't know what the rationale is. I guess the solution is to strip off (or Trim) DirectorySeparatorChar from the beginning of the second path; maybe write your own Combine method that does that and then calls Path.Combine(). | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/53102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3798/"
]
} |
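Along the lines of the suggestion at the end of that answer, here is a hedged sketch of such a helper (the name CombinePaths is made up): it trims leading directory separators from the second argument so the call behaves the way the questioner expected. It does not attempt to handle a second argument rooted with a drive letter, which still wins just as in Path.Combine.

```csharp
using System;
using System.IO;

static class PathUtil
{
    // Combines two paths, treating a leading separator on the second part as
    // "relative to the first part" rather than as an absolute path.
    public static string CombinePaths(string path1, string path2)
    {
        if (path1 == null) throw new ArgumentNullException("path1");
        if (path2 == null) throw new ArgumentNullException("path2");

        string relative = path2.TrimStart(
            Path.DirectorySeparatorChar, Path.AltDirectorySeparatorChar);
        return Path.Combine(path1, relative);
    }
}

// PathUtil.CombinePaths(@"C:\x", @"\y")  ->  "C:\x\y"
```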
53,108 | I have a table similar to this: CREATE TABLE example ( id integer primary key, name char(200), parentid integer, value integer); I can use the parentid field to arrange data into a tree structure. Now here's the bit I can't work out. Given a parentid, is it possible to write an SQL statement to add up all the value fields under that parentid and recurse down the branch of the tree ? UPDATE: I'm using posgreSQL so the fancy MS-SQL features are not available to me. In any case, I'd like this to be treated as a generic SQL question. BTW, I'm very impressed to have 6 answers within 15 minutes of asking the question! Go stack overflow! | There are a few ways to do what you need in PostgreSQL. If you can install modules, look at the tablefunc contrib. It has a connectby() function that handles traversing trees. http://www.postgresql.org/docs/8.3/interactive/tablefunc.html Also check out the ltree contrib, which you could adapt your table to use: http://www.postgresql.org/docs/8.3/interactive/ltree.html Or you can traverse the tree yourself with a PL/PGSQL function. Something like this: create or replace function example_subtree (integer)returns setof example as'declare results record; child record; begin select into results * from example where parent_id = $1; if found then return next results; for child in select id from example where parent_id = $1 loop for temp in select * from example_subtree(child.id) loop return next temp; end loop; end loop; end if; return null;end;' language 'plpgsql';select sum(value) as value_sum from example_subtree(1234); | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5324/"
]
} |
53,112 | What are good ways of dealing with the issues surrounding plugin code that interacts with outside system? To give a concrete and representative example, suppose I would like to use Subversion and Eclipse to develop plugins for WordPress. The main code body of WordPress is installed on the webserver, and the plugin code needs to be available in a subdirectory of that server. I could see how you could simply checkout a copy of your code directly under the web directory on a development machine, but how would you also then integrate this with the IDE? I am making the assumption here that all the code for the plugin is located under a single directory. Do most people just add the plugin as a project in an IDE and then place the working folder for the project wherever the 'main' software system wants it to be? Or do people use some kind of symlinks to their home directory? | There are a few ways to do what you need in PostgreSQL. If you can install modules, look at the tablefunc contrib. It has a connectby() function that handles traversing trees. http://www.postgresql.org/docs/8.3/interactive/tablefunc.html Also check out the ltree contrib, which you could adapt your table to use: http://www.postgresql.org/docs/8.3/interactive/ltree.html Or you can traverse the tree yourself with a PL/PGSQL function. Something like this: create or replace function example_subtree (integer)returns setof example as'declare results record; child record; begin select into results * from example where parent_id = $1; if found then return next results; for child in select id from example where parent_id = $1 loop for temp in select * from example_subtree(child.id) loop return next temp; end loop; end loop; end if; return null;end;' language 'plpgsql';select sum(value) as value_sum from example_subtree(1234); | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/277/"
]
} |
53,135 | I know that we shouldn't being using the registry to store Application Data anymore, but in updating a Legacy application (and wanting to do the fewest changes), what Registry Hives are non-administrators allowed to use? Can I access all of HKEY_CURRENT_USER (the application currently access HKEY_LOCAL_MACHINE ) without Administrator privileges? | In general, a non-administrator user has this access to the registry: Read/Write to: HKEY_CURRENT_USER Read Only: HKEY_LOCAL_MACHINE HKEY_CLASSES_ROOT (which is just a link to HKEY_LOCAL_MACHINE\Software\Classes ) It is possible to change some of these permissions on a key-by-key basis, but it's extremely rare. You should not have to worry about that. For your purposes, your application should be writing settings and configuration to HKEY_CURRENT_USER . The canonical place is anywhere within HKEY_CURRENT_USER\Software\YourCompany\YourProduct\ You could potentially hold settings that are global (for all users) in HKEY_LOCAL_MACHINE . It is very rare to need to do this, and you should avoid it. The problem is that any user can "read" those, but only an administrator (or by extension, your setup/install program) can "set" them. Other common source of trouble: your application should not write to anything in the Program files or the Windows directories. If you need to write to files, there are several options at hand; describing all of them would be a longer discussion. All of the options end up writing to a subfolder or another under %USERPROFILE% for the user in question. Finally, your application should stay out of HKEY_CURRENT_CONFIG . This hive holds hardware configuration, services configurations and other items that 99.9999% of applications should not need to look at (for example, it holds the current plug-and-play device list). If you need anything from there, most of the information is available through supported APIs elsewhere. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/53135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3798/"
]
} |
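A minimal C# sketch of the per-user storage the answer above recommends; the YourCompany/YourProduct path and the value name are placeholders, not anything prescribed by Windows or by the original answer.
using System;
using Microsoft.Win32;

class UserSettingsExample
{
    // Placeholder path under HKEY_CURRENT_USER; substitute real company/product names.
    const string SettingsKeyPath = @"Software\YourCompany\YourProduct";

    static void Main()
    {
        // Writing works without administrator rights because it targets HKCU.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(SettingsKeyPath))
        {
            key.SetValue("LastRunUtc", DateTime.UtcNow.ToString("o"));
        }

        // Read it back; the second argument is the default if the value is missing.
        using (RegistryKey key = Registry.CurrentUser.OpenSubKey(SettingsKeyPath))
        {
            object lastRun = (key != null) ? key.GetValue("LastRunUtc", "never") : "never";
            Console.WriteLine("Last run: " + lastRun);
        }
    }
}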
53,161 | what I'm after is something I can feed a number into and it will return the highest order bit. I'm sure there's a simple way. Below is an example output (left is the input) 1 -> 12 -> 23 -> 24 -> 45 -> 46 -> 47 -> 48 -> 89 -> 8...63 -> 32 | From Hacker's Delight: int hibit(unsigned int n) { n |= (n >> 1); n |= (n >> 2); n |= (n >> 4); n |= (n >> 8); n |= (n >> 16); return n - (n >> 1);} This version is for 32-bit ints, but the logic can be extended for 64-bits or higher. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/53161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1057/"
]
} |
53,162 | Given: e = 'a' + 'b' + 'c' + 'd' How do I write the above in two lines? e = 'a' + 'b' + 'c' + 'd' | What is the line? You can just have arguments on the next line without any problems: a = dostuff(blahblah1, blahblah2, blahblah3, blahblah4, blahblah5, blahblah6, blahblah7) Otherwise you can do something like this: if (a == True and b == False): or with explicit line break: if a == True and \ b == False: Check the style guide for more information. Using parentheses, your example can be written over multiple lines: a = ('1' + '2' + '3' + '4' + '5') The same effect can be obtained using explicit line break: a = '1' + '2' + '3' + \ '4' + '5' Note that the style guide says that using the implicit continuation with parentheses is preferred, but in this particular case just adding parentheses around your expression is probably the wrong way to go. | {
"score": 12,
"source": [
"https://Stackoverflow.com/questions/53162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4872/"
]
} |
53,178 | I would like to offer a database connection prompt to the user. I can build my own, but it would be nice if I can use something that somebody else has already built (maybe something built into Windows or a free library available on the Internet). Anybody know how to do this in .Net? EDIT: I found this and thought it was interesting: Showing a Connection String prompt in a WinForm application . This only works for SQL Server connections though. | You might want to try using SQL Server Management Objects . This MSDN article has a good sample for prompting and connecting to a SQL server. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/320/"
]
} |
53,208 | In C++ Windows app, I launch several long running child processes (currently I use CreateProcess(...) to do this. I want the child processes to be automatically closed if my main processes crashes or is closed. Because of the requirement that this needs to work for a crash of the "parent", I believe this would need to be done using some API/feature of the operating system. So that all the "child" processes are cleaned up. How do I do this? | The Windows API supports objects called "Job Objects". The following code will create a "job" that is configured to shut down all processes when the main application ends (when its handles are cleaned up). This code should only be run once.: HANDLE ghJob = CreateJobObject( NULL, NULL); // GLOBALif( ghJob == NULL){ ::MessageBox( 0, "Could not create job object", "TEST", MB_OK);}else{ JOBOBJECT_EXTENDED_LIMIT_INFORMATION jeli = { 0 }; // Configure all child processes associated with the job to terminate when the jeli.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE; if( 0 == SetInformationJobObject( ghJob, JobObjectExtendedLimitInformation, &jeli, sizeof(jeli))) { ::MessageBox( 0, "Could not SetInformationJobObject", "TEST", MB_OK); }} Then when each child process is created, execute the following code to launch each child each process and add it to the job object: STARTUPINFO info={sizeof(info)};PROCESS_INFORMATION processInfo;// Launch child process - example is notepad.exeif (::CreateProcess( NULL, "notepad.exe", NULL, NULL, TRUE, 0, NULL, NULL, &info, &processInfo)){ ::MessageBox( 0, "CreateProcess succeeded.", "TEST", MB_OK); if(ghJob) { if(0 == AssignProcessToJobObject( ghJob, processInfo.hProcess)) { ::MessageBox( 0, "Could not AssignProcessToObject", "TEST", MB_OK); } } // Can we free handles now? Not sure about this. //CloseHandle(processInfo.hProcess); CloseHandle(processInfo.hThread);} VISTA NOTE: See AssignProcessToJobObject always return "access denied" on Vista if you encounter access-denied issues with AssignProcessToObject() on vista. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/53208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/814/"
]
} |
53,225 | Given a reference to a method, is there a way to check whether the method is bound to an object or not? Can you also access the instance that it's bound to? | def isbound(method): return method.im_self is not None def instance(bounded_method): return bounded_method.im_self User-defined methods: When a user-defined method object iscreated by retrieving a user-definedfunction object from a class, its im_self attribute is None and themethod object is said to be unbound.When one is created by retrieving auser-defined function object from aclass via one of its instances, its im_self attribute is the instance, andthe method object is said to be bound.In either case, the new method's im_class attribute is the class fromwhich the retrieval takes place, andits im_func attribute is the originalfunction object. In Python 2.6 and 3.0 : Instance method objects have newattributes for the object and functioncomprising the method; the new synonymfor im_self is __self__ , and im_func is also available as __func__ . The oldnames are still supported in Python2.6, but are gone in 3.0. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/53225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4883/"
]
} |
53,370 | I keep running across this loading image http://georgia.ubuntuforums.com/images/misc/lightbox_progress.gif which seems to have entered into existence in the last 18 months. All of a sudden it is in every application and is on every web site. Not wanting to be left out is there somewhere I can get this logo, perhaps with a transparent background? Also where did it come from? | You can get many different AJAX loading animations in any colour you want here: ajaxload.info | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/361/"
]
} |
53,417 | If starting a new project, what would you use for your ORM, NHibernate or LINQ, and why? What are the pros and cons of each. edit: LINQ to SQL not just LINQ (thanks @Jon Limjap) | I have asked myself a very similar question, except that instead of NHibernate I was thinking about WilsonORM, which I consider pretty nice. It seems to me that there are many important differences. LINQ: is not a complete ORM tool (you can get there with some additional libraries like the latest Entity Framework - I personally consider the architecture of this latest technology from MS to be about 10 years old when compared with other ORM frameworks) is primarily a querying "language" supporting intellisense (the compiler will check the syntax of your query) is primarily used with Microsoft SQL Server is closed source NHibernate: is an ORM tool has a pretty limited querying language without intellisense can be used with almost any DBMS for which you have a DB provider is open source It really depends. If you develop a rich (Windows) desktop application where you need to construct objects, work with them and at the end persist their changes, then I would recommend an ORM framework like NHibernate. If you develop a Web application that usually just queries data and only occasionally writes some data back to the DB, then I would recommend a good querying language like LINQ. So as always, it depends. :-) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2847/"
]
} |
53,426 | What memory leak detectors have people had a good experience with? Here is a summary of the answers so far: Valgrind - Instrumentation framework for building dynamic analysis tools. Electric Fence - A tool that works with GDB Splint - Annotation-Assisted Lightweight Static Checking Glow Code - This is a complete real-time performance and memory profiler for Windows and .NET programmers who develop applications with C++, C#, or any .NET Framework Also see this stackoverflow post . | second the valgrind ... and I'll add electric fence . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2064/"
]
} |
53,428 | I'm evaluating and looking at using CherryPy for a project that's basically a JavaScript front-end from the client-side (browser) that talks to a Python web service on the back-end. So, I really need something fast and lightweight on the back-end that I can implement using Python that then speaks to the PostgreSQL DB via an ORM (JSON to the browser). I'm also looking at Django, which I like, since its ORM is built-in. However, I think Django might be a little more than I really need (i.e. more features than I really need == slower?). Anyone have any experience with different Python ORM solutions that can compare and contrast their features and functionality, speed, efficiency, etc.? | SQLAlchemy is more full-featured and powerful (uses the DataMapper pattern). Django ORM has a cleaner syntax and is easier to write for (ActiveRecord pattern). I don't know about performance differences. SQLAlchemy also has a declarative layer that hides some complexity and gives it a ActiveRecord-style syntax more similar to the Django ORM. I wouldn't worry about Django being "too heavy." It's decoupled enough that you can use the ORM if you want without having to import the rest . That said, if I were already using CherryPy for the web layer and just needed an ORM, I'd probably opt for SQLAlchemy. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/53428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5499/"
]
} |
53,435 | I'm doing something bad in my ASP.NET app. It could be the any number of CTP libraries I'm using or I'm just not disposing something properly. But when I redeploy my ASP.NET to my Vista IIS7 install or my server's IIS6 install I crash an IIS worker process. I've narrowed the problem down to my HTTP crawler, which is a multithreaded beast that crawls sites for useful information when asked to. After I start a crawler and redeploy the app over the top, rather than gracefully unloading the appDomain and reloading, an IIS worker process will crash (popping up a crash message) and continue reloading the app domain. When this crash happens, where can I find the crash dump for analysis? | Download Debugging tools for Windows: http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx Debugging Tools for Windows has has a script (ADPLUS) that allows you to create dumps when a process CRASHES: http://support.microsoft.com/kb/286350 The command should be something like (if you are using IIS6): cscript adplus.vbs -crash -pn w3wp.exe This command will attach the debugger to the worker process. When the crash occurs it will generate a dump (a *.DMP file). You can open it in WinDBG (also included in the Debugging Tools for Windows). File > Open Crash dump... By default, WinDBG will show you (next to the command line) the thread were the process crashed. The first thing you need to do in WinDBG is to load the .NET Framework extensions: .loadby sos mscorwks then, you will display the managed callstack: !clrstack if the thread was not running managed code, then you'll need to check the native stack: kpn 200 This should give you some ideas. To continue troubleshooting I recommend you read the following article: http://msdn.microsoft.com/en-us/library/ee817663.aspx | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/209/"
]
} |
53,439 | I want to wrap a piece of code that uses the Windows Impersonation API into a neat little helper class, and as usual, I'm looking for a way to go test-first. However, while WindowsIdentity is a managed class, the LogonUser call that is required to actually perform the logging in as another user is an unmanaged function in advapi32.dll. I think I can work around this by introducing an interface for my helper class to use and hiding the P/Invoke calls in an implementation, but testing that implementation will still be a problem. And you can imagine actually performing the impersonation in the test can be a bit problematic, given that the user would actually need to exist on the system. | Guideline: Don't test code that you haven't written. You shouldn't be concerned with WinAPI implementation not working (most probably it works as expected). Your concern should be testing the 'Wiring' i.e. if your code makes the right WinAPI call. In which case, all you need is to mock out the interface and let the mock framework tell if you the call was made with the right params. If yes, you're done. Create IWinAPIFacade (with relevant WinAPI methods) and implementation CWinAPIFacade. Write a test which plugs in a mock of IWinAPIFacade and verify that the appropriate call is made Write a test to ensure that CWinAPIFacade is created and plugged in as a default (in normal functioning) Implement CWinAPIFacade which simply blind-delegates to Platform Invoke calls - no need to auto-test this layer. Just do a manual verification. Hopefully this won't change that often and nothing breaks. If you find that it does in the future, barricade it with some tests. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53439",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/266/"
]
} |
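The facade described in the answer above could look roughly like the following C# sketch; the interface shape, method names, and parameter choices are illustrative assumptions rather than part of the original answer, and only the DllImport reflects the real advapi32 LogonUser entry point.
using System;
using System.Runtime.InteropServices;

// The seam the unit tests mock: only the calls the impersonation helper needs.
public interface IWinAPIFacade
{
    bool LogonUser(string userName, string domain, string password,
                   int logonType, int logonProvider, out IntPtr token);
}

// Blind pass-through to advapi32; verified manually rather than unit tested.
public class CWinAPIFacade : IWinAPIFacade
{
    private static class NativeMethods
    {
        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern bool LogonUser(string lpszUsername, string lpszDomain,
            string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken);
    }

    public bool LogonUser(string userName, string domain, string password,
                          int logonType, int logonProvider, out IntPtr token)
    {
        return NativeMethods.LogonUser(userName, domain, password,
                                       logonType, logonProvider, out token);
    }
}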
53,464 | I have IIS 5.1 installed on Windows XP Pro SP2. Besides I have installed VS 2008 Express with .NET 3.5. So obviously IIS is configured for ASP.NET automatically for .NET 3.5 The problem is whenever I access http://localhost IE & Firefox both presents authentication box. Even if I enter Administrator user and its password, the authentication fails. I have already checked the anonymous user access (with IUSR_ user and password is controlled by IIS) in Directory Security options of default website. However other deployed web apps work fine (does not ask for any authentication). In IE this authentication process stops if I add http://localhost in Intranet sites option. Please note that the file system is FAT32 when IIS is installed. Regards,Jatan | This is most likely a NT file permissions problem. IUSR_ needs to have file system permissions to read whatever file you're requesting (like /inetpub/wwwroot/index.htm). If you still have trouble, check the IIS logs, typically at \windows\system32\logfiles\W3SVC*. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/959/"
]
} |
53,472 | I have some Ruby code which takes dates on the command line in the format: -d 20080101,20080201..20080229,20080301 I want to run for all dates between 20080201 and 20080229 inclusive and the other dates present in the list. I can get the string 20080201..20080229 , so is the best way to convert this to a Range instance? Currently, I am using eval , but it feels like there should be a better way. @Purfideas I was kind of looking for a more general answer for converting any string of type int..int to a Range I guess. | But then just do ends = '20080201..20080229'.split('..').map{|d| Integer(d)}ends[0]..ends[1] anyway I don't recommend eval, for security reasons | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4121/"
]
} |
53,479 | I have recently been doing a bit of investigation into the different types of Model View architectures, and need to decide which one to pursue for future in-house development. As I'm currently working in a Microsoft shop that has ASP.NET skills, it seems my options are between ASP.NET MVC and WCSF (Monorail is probably out of the question as it wouldn't be supported by Microsoft). After reading the ASP.NET MVC framework, using the WCSF as a yardstick, I picked up the following points: ASP.NET MVC cannot use web controls that rely on postbacks, whereas WCSF can. You have more control over the urls in an ASP.NET MVC site as opposed to a WCSF site. An ASP.NET MVC site will probably be easier to test than an equivalent WCSF version. It seems that the WCSF still uses the code behind to control UI events under some circumstances, but ASP.NET MVC doesn't allow this. What are some of the other considerations? What have I misunderstood? Is there anybody out there who has used both frameworks and has advice either way? | ASP.NET MVC cannot use web controls that rely on postbacks, whereas WCSF can. You should think of WCSF as guidance about how to use the existing WebForms infrastructure, especially introducing Model-View-Presenter to help enforce separation of concerns. It also increases the testability of the resulting code. You have more control over the urls in an ASP.NET MVC site as opposed to a WCSF site. If you can target 3.5 SP1, you can use the new Routing system with a traditional WebForms site. Routing is not limited to MVC. For example, take a look at Dynamic Data (which also ships in 3.5 SP1). An ASP.NET MVC site will probably be easier to test than an equivalent WCSF version. This is true because it uses the new abstraction classes for HttpContext, HttpRequest, HttpResponse, etc. There's nothing inherently more testable about the MVC pattern than the MVP pattern. They're both instances of "Separated Presentation", and both increase testability. It seems that the WCSF still uses the code behind to control UI events under some circumstances, but ASP.NET MVC doesn't allow this. In Model-View-Presenter, since the outside world interacts with views (i.e., the URL points to the view), the views will naturally be responding to these events. They should be as simple as possible, either by calling the presenter or by offering events that the presenter can subscribe to. Model-View-Controller overcomes this limitation by having the outside world interact with controllers. This means your views can be a lot "dumber" about non-presentation things. As for which you should use, I think the answer comes down to which one best suits your project goals. Sometimes WebForms and the rich third party control vendor availability will be preferable, and in some cases, raw simplicity and fine-grained HTML control will favor MVC. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53479",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5110/"
]
} |
53,491 | How do I enable external access to MySQL Server? I can connect locally but I cannot connect from another box on the network. I just tried grant all privileges on *.* to root@'%' identified by '*****' with grant option; And restarted MySQL Server with no success. | You probably have to edit the configuration file (usually my.cnf) to listen in the external interface instead of on localhost only. Change the bind-address parameter to your machine's IP address. If this is an old MySQL installation, you should comment out the skip-networking parameter. Afterwards, restart MySQL and you'll be set | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/53491",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4960/"
]
} |
53,497 | I'm having trouble writing a regular expression that matches valid IPv6 addresses, including those in their compressed form (with :: or leading zeros omitted from each byte pair). Can someone suggest a regular expression that would fulfill the requirement? I'm considering expanding each byte pair and matching the result with a simpler regex. | I was unable to get @Factor Mystic's answer to work with POSIX regular expressions, so I wrote one that works with POSIX regular expressions and PERL regular expressions. It should match: IPv6 addresses zero compressed IPv6 addresses ( section 2.2 of rfc5952 ) link-local IPv6 addresses with zone index ( section 11 of rfc4007 ) IPv4-Embedded IPv6 Address ( section 2 of rfc6052 ) IPv4-mapped IPv6 addresses ( section 2.1 of rfc2765 ) IPv4-translated addresses ( section 2.1 of rfc2765 ) IPv6 Regular Expression: (([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])) For ease of reading, the following is the above regular expression split at major OR points into separate lines: # IPv6 RegEx(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}| # 1:2:3:4:5:6:7:8([0-9a-fA-F]{1,4}:){1,7}:| # 1:: 1:2:3:4:5:6:7::([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}| # 1::8 1:2:3:4:5:6::8 1:2:3:4:5:6::8([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}| # 1::7:8 1:2:3:4:5::7:8 1:2:3:4:5::8([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}| # 1::6:7:8 1:2:3:4::6:7:8 1:2:3:4::8([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}| # 1::5:6:7:8 1:2:3::5:6:7:8 1:2:3::8([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}| # 1::4:5:6:7:8 1:2::4:5:6:7:8 1:2::8[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})| # 1::3:4:5:6:7:8 1::3:4:5:6:7:8 1::8 :((:[0-9a-fA-F]{1,4}){1,7}|:)| # ::2:3:4:5:6:7:8 ::2:3:4:5:6:7:8 ::8 :: fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}| # fe80::7:8%eth0 fe80::7:8%1 (link-local IPv6 addresses with zone index)::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])| # ::255.255.255.255 ::ffff:255.255.255.255 ::ffff:0:255.255.255.255 (IPv4-mapped IPv6 addresses and IPv4-translated addresses)([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]) # 2001:db8:3:4::192.0.2.33 64:ff9b::192.0.2.33 (IPv4-Embedded IPv6 Address))# IPv4 RegEx((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]) To make the above easier to understand, the following "pseudo" code replicates the above: IPV4SEG = (25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])IPV4ADDR = (IPV4SEG\.){3,3}IPV4SEGIPV6SEG = [0-9a-fA-F]{1,4}IPV6ADDR = ( (IPV6SEG:){7,7}IPV6SEG| # 1:2:3:4:5:6:7:8 (IPV6SEG:){1,7}:| # 1:: 1:2:3:4:5:6:7:: (IPV6SEG:){1,6}:IPV6SEG| # 1::8 1:2:3:4:5:6::8 1:2:3:4:5:6::8 (IPV6SEG:){1,5}(:IPV6SEG){1,2}| # 1::7:8 1:2:3:4:5::7:8 1:2:3:4:5::8 (IPV6SEG:){1,4}(:IPV6SEG){1,3}| # 1::6:7:8 1:2:3:4::6:7:8 1:2:3:4::8 (IPV6SEG:){1,3}(:IPV6SEG){1,4}| # 1::5:6:7:8 1:2:3::5:6:7:8 
1:2:3::8 (IPV6SEG:){1,2}(:IPV6SEG){1,5}| # 1::4:5:6:7:8 1:2::4:5:6:7:8 1:2::8 IPV6SEG:((:IPV6SEG){1,6})| # 1::3:4:5:6:7:8 1::3:4:5:6:7:8 1::8 :((:IPV6SEG){1,7}|:)| # ::2:3:4:5:6:7:8 ::2:3:4:5:6:7:8 ::8 :: fe80:(:IPV6SEG){0,4}%[0-9a-zA-Z]{1,}| # fe80::7:8%eth0 fe80::7:8%1 (link-local IPv6 addresses with zone index) ::(ffff(:0{1,4}){0,1}:){0,1}IPV4ADDR| # ::255.255.255.255 ::ffff:255.255.255.255 ::ffff:0:255.255.255.255 (IPv4-mapped IPv6 addresses and IPv4-translated addresses) (IPV6SEG:){1,4}:IPV4ADDR # 2001:db8:3:4::192.0.2.33 64:ff9b::192.0.2.33 (IPv4-Embedded IPv6 Address) ) I posted a script on GitHub which tests the regular expression: https://gist.github.com/syzdek/6086792 | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/53497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4883/"
]
} |
53,513 | For example, if passed the following: a = [] How do I check to see if a is empty? | if not a: print("List is empty") Using the implicit booleanness of the empty list is quite Pythonic. | {
"score": 14,
"source": [
"https://Stackoverflow.com/questions/53513",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4872/"
]
} |
53,545 | I have an exe with an App.Config file. Now I want to create a wrapper dll around the exe in order to consume some of the functionalities. The question is how can I access the app.config property in the exe from the wrapper dll? Maybe I should be a little bit more specific in my question. I have the following app.config content with the exe: <?xml version="1.0" encoding="utf-8" ?><configuration> <appSettings> <add key="myKey" value="myValue"/> </appSettings></configuration> The question is how to get "myValue" out from the wrapper dll? Thanks for your solution. Actually my initial concept was to avoid XML file reading methods or LINQ or whatever. My preferred solution was to use the configuration manager libraries and the like. I'll appreciate any help that uses the classes that are normally associated with accessing app.config properties. | The ConfigurationManager.OpenMappedExeConfiguration method will allow you to do this. Sample from the MSDN page: static void GetMappedExeConfigurationSections(){ // Get the machine.config file. ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); // You may want to map to your own exe.config file here. fileMap.ExeConfigFilename = @"C:\test\ConfigurationManager.exe.config"; System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); // Loop to get the sections. Display basic information. Console.WriteLine("Name, Allow Definition"); int i = 0; foreach (ConfigurationSection section in config.Sections) { Console.WriteLine( section.SectionInformation.Name + "\t" + section.SectionInformation.AllowExeDefinition); i += 1; } Console.WriteLine("[Total number of sections: {0}]", i); // Display machine.config path. Console.WriteLine("[File path: {0}]", config.FilePath);} EDIT: This should output the "myKey" value: ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();fileMap.ExeConfigFilename = @"C:\test\ConfigurationManager.exe.config";System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);Console.WriteLine(config.AppSettings.Settings["myKey"].Value); | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53545",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3834/"
]
} |
53,569 | What is the best way to get a log of commits on a branch since the time it was branched from the current branch? My solution so far is: git log $(git merge-base HEAD branch)..branch The documentation for git-diff indicates that git diff A...B is equivalent to git diff $(git-merge-base A B) B . On the other hand, the documentation for git-rev-parse indicates that r1...r2 is defined as r1 r2 --not $(git merge-base --all r1 r2) . Why are these different? Note that git diff HEAD...branch gives me the diffs I want, but the corresponding git log command gives me more than what I want. In pictures, suppose this: x---y---z---branch /---a---b---c---d---e---HEAD I would like to get a log containing commits x, y, z. git diff HEAD...branch gives these commits however, git log HEAD...branch gives x, y, z, c, d, e. | In the context of a revision list, A...B is how git-rev-parse defines it. git-log takes a revision list. git-diff does not take a list of revisions - it takes one or two revisions, and has defined the A...B syntax to mean how it's defined in the git-diff manpage. If git-diff did not explicitly define A...B , then that syntax would be invalid. Note that the git-rev-parse manpage describes A...B in the "Specifying Ranges" section, and everything in that section is only valid in situations where a revision range is valid (i.e. when a revision list is desired). To get a log containing just x, y, and z, try git log HEAD..branch (two dots, not three). This is identical to git log branch --not HEAD , and means all commits on branch that aren't on HEAD. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/53569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/893/"
]
} |
53,609 | I hope this qualifies as a programming question, as in any programming tutorial, you eventually come across 'foo' in the code examples. (yeah, right?) what does 'foo' really mean? If it is meant to mean nothing , when did it begin to be used so? | See: RFC 3092: Etymology of "Foo", D. Eastlake 3rd et al. Quoting only the relevant definitions from that RFC for brevity: Used very generally as a sample name for absolutely anything, esp.programs and files (esp. scratch files). First on the standard list of metasyntactic variables used insyntax examples (bar, baz, qux, quux, corge, grault, garply,waldo, fred, plugh, xyzzy, thud). [JARGON] | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/53609",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/123/"
]
} |
53,629 | Is it possible to see the history of changes to a particular line of code in a Subversion repository? I'd like, for instance, to be able to see when a particular statement was added or when that statement was changed, even if its line number is not the same any more. | I don't know a method for tracking statements through time in Subversion. It is simple however to see when any particular line in a file was last changed using svn blame . Check the SVNBook: svn blame reference : Synopsis svn blame TARGET[@REV]... Description Show author and revision information in-line for the specified files or URLs. Each line of text is annotated at the beginning with the author (username) and the revision number for the last change to that line. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/53629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1428/"
]
} |
53,649 | Using reflection, I need to investigate a user DLL and create an object of a class in it. What is the simple way of doing it? | Try Activator.CreateInstance . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/195/"
]
} |
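Expanding the one-line answer above into a minimal C# sketch: load the user's DLL, list its public types, and instantiate one of them. The file path and type name are placeholders for whatever the inspected DLL actually contains, and the target class is assumed to have a public parameterless constructor.
using System;
using System.Reflection;

class PluginLoader
{
    static void Main()
    {
        // Placeholder path to the user-supplied assembly.
        Assembly assembly = Assembly.LoadFrom(@"C:\plugins\UserLibrary.dll");

        // "Investigate" the DLL by listing its public types.
        foreach (Type type in assembly.GetExportedTypes())
        {
            Console.WriteLine(type.FullName);
        }

        // Create an instance of one of them (placeholder type name);
        // true = throw if the type is not found.
        Type target = assembly.GetType("UserLibrary.SomeClass", true);
        object instance = Activator.CreateInstance(target);
        Console.WriteLine("Created: " + instance.GetType().FullName);
    }
}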
53,664 | I've started using Vim to develop Perl scripts and am starting to find it very powerful. One thing I like is to be able to open multiple files at once with: vi main.pl maintenance.pl and then hop between them with: :n:prev and see which file are open with: :args And to add a file, I can say: :n test.pl which I expect would then be added to my list of files, but instead it wipes out my current file list and when I type :args I only have test.pl open. So how can I add and remove files in my args list? | Why not use tabs (introduced in Vim 7)?You can switch between tabs with :tabn and :tabp ,With :tabe <filepath> you can add a new tab; and with a regular :q or :wq you close a tab.If you map :tabn and :tabp to your F7 / F8 keys you can easily switch between files. If there are not that many files or you don't have Vim 7 you can also split your screen in multiple files: :sp <filepath> . Then you can switch between splitscreens with Ctrl + W and then an arrow key in the direction you want to move (or instead of arrow keys, w for next and W for previous splitscreen) | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/53664",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4639/"
]
} |
53,676 | When trying to connect to an ORACLE user via TOAD (Quest Software) or any other means ( Oracle Enterprise Manager ) I get this error: ORA-011033: ORACLE initialization or shutdown in progress | After some googling, I found the advice to do the following, and it worked: SQL> startup mountORACLE Instance startedSQL> recover database Media recovery completeSQL> alter database open;Database altered | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/53676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5351/"
]
} |
53,693 | Has anyone considered using something along the lines of the Amazon SimpleDB data store as their backend database? SQL Server hosting (at least in the UK) is expensive so could something like this along with cloud file storage (S3) be used for building apps that could grow with your application. Great in theory but would anyone consider using it. In fact is anyone actually using it now for real production software as I would love to read your comments. | This is a good analysis of Amazon services from Dare . S3 handled what I've typically heard described as "blob storage". A typical Web application typically has media files and other resources (images, CSS stylesheets, scripts, video files, etc) that is simply accessed by name/path. However a lot of these resources also have metadata (e.g. a video file on YouTube has metadata about it's rating, who uploaded it, number of views, etc) which need to be stored as well. This need for queryable, schematized storage is where SimpleDB comes in. EC2 provides a virtual server that can be used for computation complete with a local file system instance which isn't persistent if the virtual server goes down for any reason. With SimpleDB and S3 you have the building blocks to build a large class of "Web 2.0" style applications when you throw in the computational capabilities provided by EC2. However neither S3 nor SimpleDB provides a solution for a developer who simply wants the typical LAMP or WISC developer experience of building a database driven Web application or for applications that may have custom storage needs that don't fit neatly into the buckets of blob storage or schematized storage. Without access to a persistent filesystem, developers on Amazon's cloud computing platform have had to come up with sophisticated solutions involving backing data up manually from EC2 to S3 to get the desired experience. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2041/"
]
} |
53,715 | Does Delphi call inherited on overridden procedures if there is no explicit call in the code ie (inherited;), I have the following structure (from super to sub class) TForm >> TBaseForm >> TAnyOtherForm All the forms in the project will be derived from TBaseForm, as this will have all the standard set-up and destructive parts that are used for every form (security, validation ect). TBaseForm has onCreate and onDestroy procedures with the code to do this, but if someone (ie me) forgot to add inherited to the onCreate on TAnyOtherForm would Delphi call it for me? I have found references on the web that say it is not required, but nowhere says if it gets called if it is omitted from the code. Also if it does call inherited for me, when will it call it? | No, if you leave the call to inherited away, it will not be called. Otherwise it would not be possible to override a method and totally ommit the parent version of it. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2098/"
]
} |
53,728 | I am not concerned about other kinds of attacks. Just want to know whether HTML Encode can prevent all kinds of XSS attacks. Is there some way to do an XSS attack even if HTML Encode is used? | No. Putting aside the subject of allowing some tags (not really the point of the question), HtmlEncode simply does NOT cover all XSS attacks. For instance, consider server-generated client-side javascript - the server dynamically outputs htmlencoded values directly into the client-side javascript, htmlencode will not stop injected script from executing. Next, consider the following pseudocode: <input value=<%= HtmlEncode(somevar) %> id=textbox> Now, in case its not immediately obvious, if somevar (sent by the user, of course) is set for example to a onclick=alert(document.cookie) the resulting output is <input value=a onclick=alert(document.cookie) id=textbox> which would clearly work. Obviously, this can be (almost) any other script... and HtmlEncode would not help much. There are a few additional vectors to be considered... including the third flavor of XSS, called DOM-based XSS (wherein the malicious script is generated dynamically on the client, e.g. based on # values). Also don't forget about UTF-7 type attacks - where the attack looks like +ADw-script+AD4-alert(document.cookie)+ADw-/script+AD4- Nothing much to encode there... The solution, of course (in addition to proper and restrictive white-list input validation), is to perform context-sensitive encoding: HtmlEncoding is great IF you're output context IS HTML, or maybe you need JavaScriptEncoding, or VBScriptEncoding, or AttributeValueEncoding, or... etc. If you're using MS ASP.NET, you can use their Anti-XSS Library, which provides all of the necessary context-encoding methods. Note that all encoding should not be restricted to user input, but also stored values from the database, text files, etc. Oh, and don't forget to explicitly set the charset, both in the HTTP header AND the META tag, otherwise you'll still have UTF-7 vulnerabilities... Some more information, and a pretty definitive list (constantly updated), check out RSnake's Cheat Sheet: http://ha.ckers.org/xss.html | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/53728",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184/"
]
} |
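To make the unquoted-attribute example above concrete, here is a small C# illustration (assuming the classic .NET Framework System.Web assembly is available for HttpUtility): the same HtmlEncode call that is safe between tags does nothing useful inside an unquoted attribute, so the attribute must be quoted and encoded for the attribute context.
using System;
using System.Web; // HttpUtility lives in System.Web.dll

class EncodingContextDemo
{
    static void Main()
    {
        // The malicious input from the answer above.
        string somevar = "a onclick=alert(document.cookie)";

        // Unquoted attribute: HtmlEncode leaves spaces alone, so the payload
        // still breaks out of the value and becomes an onclick attribute.
        string stillVulnerable = "<input value=" + HttpUtility.HtmlEncode(somevar) + " id=textbox>";

        // Quoting the attribute and encoding for the attribute context keeps
        // the whole payload inside the value, where it is inert.
        string safer = "<input value=\"" + HttpUtility.HtmlAttributeEncode(somevar) + "\" id=\"textbox\">";

        Console.WriteLine(stillVulnerable);
        Console.WriteLine(safer);
    }
}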
53,744 | I would like to know how can i escape a # in velocity. Backslash seems to escape it but it prints itself as well This: \#\# prints: \#\# I would like: ## | If you don't want to bother with the EscapeTool, you can do this: #set( $H = '#' )$H$H | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/53744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2138/"
]
} |
53,757 | Which compiles to faster code: "ans = n * 3" or "ans = n+(n*2)"? Assuming that n is either an int or a long, and it is is running on a modern Win32 Intel box. Would this be different if there was some dereferencing involved, that is, which of these would be faster? long a;long *pn;long ans;...*pn = some_number;ans = *pn * 3; Or ans = *pn+(*pn*2); Or, is it something one need not worry about as optimizing compilers are likely to account for this in any case? | IMO such micro-optimization is not necessary unless you work with some exotic compiler. I would put readability on the first place. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/53757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3137/"
]
} |
53,796 | A GUI driven application needs to host some prebuilt WinForms based components.These components provide high performance interactive views using a mixture of GDI+ and DirectX.The views handle control input and display custom graphical renderings.The components are tested in a WinForms harness by the supplier. Can a commericial application use WPF for its GUI and rely on WindowsFormsHost to host the WinForms components or have you experience of technical glitches e.g. input lags, update issues that would make you cautious? | We're currently using WindowsFormsHost in our software to host the WinForms DataGridView control, and we've not had any real problems with it. A few things to watch out for though: The first is the air-space restrictions . Practically speaking, this means that WinForms content always appears on top of WPF content. So if you are using WPF adorners they will appear to be "trimmed" if they hit up against a WinForms region in your app. The second is that, because they use Windows resources, you have to manage the lifetimes of WinForms components more carefully. Unlike WPF components, WinForms controls expect to be Disposed when they're finished with. This makes it tricky to include them in a pure XAML view. The last thing is that WinForms controls don't seem to resize as smoothly as the rest of the WPF display: they tend to snap to their new size once you've finished making an adjustment. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53796",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5427/"
]
} |
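A minimal C# sketch of the hosting arrangement discussed above, including the explicit disposal the answer recommends; the choice of window and of DataGridView as the hosted control is illustrative only.
using System;
using System.Windows;                   // PresentationFramework
using System.Windows.Forms.Integration; // WindowsFormsIntegration

public class HostWindow : Window
{
    private readonly System.Windows.Forms.DataGridView _grid =
        new System.Windows.Forms.DataGridView(); // System.Windows.Forms

    public HostWindow()
    {
        Title = "WinForms inside WPF";

        // The WinForms control becomes the child of a WindowsFormsHost,
        // which in turn is the WPF content. Airspace rule: this region
        // always draws on top of sibling WPF content.
        var host = new WindowsFormsHost { Child = _grid };
        Content = host;

        // WinForms controls hold native resources, so dispose them
        // explicitly when the hosting window goes away.
        Closed += (sender, args) => _grid.Dispose();
    }

    [STAThread]
    static void Main()
    {
        new Application().Run(new HostWindow());
    }
}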
53,806 | What was the motivation for having the reintroduce keyword in Delphi? If you have a child class that contains a function with the same name as a virtual function in the parent class and it is not declared with the override modifier then it is a compile error. Adding the reintroduce modifier in such situations fixes the error, but I have never grasped the reasoning for the compile error. | If you declare a method in a descendant class that has the same name as a method in an ancestor class then you are hiding that ancestor method — meaning if you have an instance of that descendant class (that is referenced as that class) then you will not get the behavior of the ancestor. When the ancestor's method is virtual or dynamic, the compiler will give you a warning. Now you have one of two choices to suppress that warning message: Adding the keyword reintroduce just tells the compiler you know you are hiding that method and it suppresses the warning. You can still use the inherited keyword within your implementation of that descended method to call the ancestor method. If the ancestor's method was virtual or dynamic then you can use override . It has the added behavior that if this descendant object is accessed through an expression of the ancestor type, then the call to that method will still be to the descendant method (which then may optionally call the ancestor through inherited ). So difference between override and reintroduce is in polymorphism. With reintroduce , if you cast the descendant object as the parent type, then call that method you will get the ancestor method, but if you access it the descendant type then you will get the behavior of the descendant. With override you always get the descendant. If the ancestor method was neither virtual nor dynamic , then reintroduce does not apply because that behavior is implicit. (Actually you could use a class helper, but we won't go there now.) In spite of what Malach said, you can still call inherited in a reintroduced method, even if the parent was neither virtual nor dynamic . Essentially reintroduce is just like override , but it works with non- dynamic and non- virtual methods, and it does not replace the behavior if the object instance is accessed via an expression of the ancestor type. Further Explanation: Reintroduce is a way of communicating intent to the compiler that you did not make an error. We override a method in an ancestor with the override keyword, but it requires that the ancestor method be virtual or dynamic , and that you want the behavior to change when the object is accessed as the ancestor class. Now enter reintroduce . It lets you tell the compiler that you did not accidentally create a method with the same name as a virtual or dynamic ancestor method (which would be annoying if the compiler didn't warn you about). | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/53806",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4474/"
]
} |
53,827 | I'm using MinGW with GCC 3.4.5 (mingw-special vista r3). My C application uses a lot of stack so I was wondering is there any way I can tell programatically how much stack is remaining so I can cleanly handle the situation if I find that I'm about to run out. If not what other ways would you work around the problem of potentially running out of stack space? I've no idea what size of stack I'll start with so would need to identify that programatically also. | The getrusage function gets you the current usage . (see man getrusage ). The getrlimit in Linux would help fetching the stack size with the RLIMIT_STACK parameter. #include <sys/resource.h>int main (void){ struct rlimit limit; getrlimit (RLIMIT_STACK, &limit); printf ("\nStack Limit = %ld and %ld max\n", limit.rlim_cur, limit.rlim_max);} Please give a look at man getrlimit .The same information could be fetched by ulimit -s or ulimit -a stack size row.Also have a look at setrlimit function which would allow to set the limits.But as the mentioned in the other answers if you need to adjust stack then probably you should re consider your design. If you want a big array why not take the memory from the heap ? | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5330/"
]
} |
53,844 | I would like to do the equivalent of: object result = Eval("1 + 3");string now = Eval("System.DateTime.Now().ToString()") as string Following Biri s link , I got this snippet (modified to remove obsolete method ICodeCompiler.CreateCompiler() : private object Eval(string sExpression){ CSharpCodeProvider c = new CSharpCodeProvider(); CompilerParameters cp = new CompilerParameters(); cp.ReferencedAssemblies.Add("system.dll"); cp.CompilerOptions = "/t:library"; cp.GenerateInMemory = true; StringBuilder sb = new StringBuilder(""); sb.Append("using System;\n"); sb.Append("namespace CSCodeEvaler{ \n"); sb.Append("public class CSCodeEvaler{ \n"); sb.Append("public object EvalCode(){\n"); sb.Append("return " + sExpression + "; \n"); sb.Append("} \n"); sb.Append("} \n"); sb.Append("}\n"); CompilerResults cr = c.CompileAssemblyFromSource(cp, sb.ToString()); if (cr.Errors.Count > 0) { throw new InvalidExpressionException( string.Format("Error ({0}) evaluating: {1}", cr.Errors[0].ErrorText, sExpression)); } System.Reflection.Assembly a = cr.CompiledAssembly; object o = a.CreateInstance("CSCodeEvaler.CSCodeEvaler"); Type t = o.GetType(); MethodInfo mi = t.GetMethod("EvalCode"); object s = mi.Invoke(o, null); return s;} | I have written an open source project, Dynamic Expresso , that can convert text expression written using a C# syntax into delegates (or expression tree). Text expressions are parsed and transformed into Expression Trees without using compilation or reflection. You can write something like: var interpreter = new Interpreter();var result = interpreter.Eval("8 / 2 + 2"); or var interpreter = new Interpreter() .SetVariable("service", new ServiceExample());string expression = "x > 4 ? service.aMethod() : service.AnotherMethod()";Lambda parsedExpression = interpreter.Parse(expression, new Parameter("x", typeof(int)));parsedExpression.Invoke(5); My work is based on Scott Gu article http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53844",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2260/"
]
} |
53,849 | Java has a convenient split method: String str = "The quick brown fox";String[] results = str.split(" "); Is there an easy way to do this in C++? | C++ standard library algorithms are pretty universally based around iterators rather than concrete containers. Unfortunately this makes it hard to provide a Java-like split function in the C++ standard library, even though nobody argues that this would be convenient. But what would its return type be? std::vector<std::basic_string<…>> ? Maybe, but then we’re forced to perform (potentially redundant and costly) allocations. Instead, C++ offers a plethora of ways to split strings based on arbitrarily complex delimiters, but none of them is encapsulated as nicely as in other languages. The numerous ways fill whole blog posts . At its simplest, you could iterate using std::string::find until you hit std::string::npos , and extract the contents using std::string::substr . A more fluid (and idiomatic, but basic) version for splitting on whitespace would use a std::istringstream : auto iss = std::istringstream{"The quick brown fox"};auto str = std::string{};while (iss >> str) { process(str);} Using std::istream_iterator s , the contents of the string stream could also be copied into a vector using its iterator range constructor. Multiple libraries (such as Boost.Tokenizer ) offer specific tokenisers. More advanced splitting require regular expressions. C++ provides the std::regex_token_iterator for this purpose in particular: auto const str = "The quick brown fox"s;auto const re = std::regex{R"(\s+)"};auto const vec = std::vector<std::string>( std::sregex_token_iterator{begin(str), end(str), re, -1}, std::sregex_token_iterator{}); | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/53849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1288/"
]
} |
53,911 | Programming in vim I often go search for something, yank it, then go back to where I was, insert it, modify it. The problem is that after I search and find, I need to MANUALLY find my way back to where I was. Is there an automatic way to go back to where I was when I initiated my last search? | Ctrl + O takes me to the previous location. Don't know about location before the search. Edit: Also, ` . will take you to the last change you made. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/53911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4639/"
]
} |
53,945 | I am currently loading a lightbox style popup that loads it's HTML from an XHR call. This content is then displayed in a 'modal' popup using element.innerHTML = content This works like a charm. In another section of this website I use a Flickr 'badge' ( http://www.elliotswan.com/2006/08/06/custom-flickr-badge-api-documentation/ ) to load flickr images dynamically. This is done including a script tag that loads a flickr javascript, which in turn does some document.write statments. Both of them work perfectly when included in the HTML. Only when loading the flickr badge code inside the lightbox, no content is rendered at all. It seems that using innerHTML to write document.write statements is taking it a step too far, but I cannot find any clue in the javascript implementations (FF2&3, IE6&7) of this behavior. Can anyone clarify if this should or shouldn't work? Thanks. | In general, script tags aren't executed when using innerHTML. In your case, this is good, because the document.write call would wipe out everything that's already in the page. However, that leaves you without whatever HTML document.write was supposed to add. jQuery's HTML manipulation methods will execute scripts in HTML for you, the trick is then capturing the calls to document.write and getting the HTML in the proper place. If it's simple enough, then something like this will do: var content = '';document.write = function(s) { content += s;};// execute the script$('#foo').html(markupWithScriptInIt);$('#foo .whereverTheDocumentWriteContentGoes').html(content); It gets complicated though. If the script is on another domain, it will be loaded asynchronously, so you'll have to wait until it's done to get the content. Also, what if it just writes the HTML into the middle of the fragment without a wrapper element that you can easily select? writeCapture.js (full disclosure: I wrote it) handles all of these problems. I'd recommend just using it, but at the very least you can look at the code to see how it handles everything. EDIT: Here is a page demonstrating what sounds like the effect you want. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53945",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4174/"
]
} |
53,967 | I have a class that map objects to objects, but unlike dictionary it maps them both ways. I am now trying to implement a custom IEnumerator interface that iterates through the values. public class Mapper<K,T> : IEnumerable<T>, IEnumerator<T>{ C5.TreeDictionary<K,T> KToTMap = new TreeDictionary<K,T>(); C5.HashDictionary<T,K> TToKMap = new HashDictionary<T,K>(); public void Add(K key, T value) { KToTMap.Add(key, value); TToKMap.Add(value, key); } public int Count { get { return KToTMap.Count; } } public K this[T obj] { get { return TToKMap[obj]; } } public T this[K obj] { get { return KToTMap[obj]; } } public IEnumerator<T> GetEnumerator() { return KToTMap.Values.GetEnumerator(); } public T Current { get { throw new NotImplementedException(); } } public void Dispose() { throw new NotImplementedException(); } object System.Collections.IEnumerator.Current { get { throw new NotImplementedException(); } } public bool MoveNext() { ; } public void Reset() { throw new NotImplementedException(); }} | First, don't make your collection object implement IEnumerator<>. This leads to bugs. (Consider the situation where two threads are iterating over the same collection). Implementing an enumerator correctly turns out to be non-trivial, so C# 2.0 added special language support for doing it, based on the 'yield return' statement. Raymond Chen's recent series of blog posts ("The implementation of iterators in C# and its consequences") is a good place to get up to speed. Part 1: https://web.archive.org/web/20081216071723/http://blogs.msdn.com/oldnewthing/archive/2008/08/12/8849519.aspx Part 2: https://web.archive.org/web/20080907004812/http://blogs.msdn.com/oldnewthing/archive/2008/08/13/8854601.aspx Part 3: https://web.archive.org/web/20080824210655/http://blogs.msdn.com/oldnewthing/archive/2008/08/14/8862242.aspx Part 4: https://web.archive.org/web/20090207130506/http://blogs.msdn.com/oldnewthing/archive/2008/08/15/8868267.aspx | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53967",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4694/"
]
} |
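Following the answer above, a stripped-down C# sketch of the two-way map using 'yield return' for the enumerator; it substitutes plain Dictionary<,> for the C5 collections so the example stays self-contained, which is an assumption, not what the question's code used.
using System;
using System.Collections;
using System.Collections.Generic;

// Two-way map that only implements IEnumerable<T>; the compiler-generated
// iterator from "yield return" replaces the hand-written IEnumerator<T>.
public class Mapper<K, T> : IEnumerable<T>
{
    private readonly Dictionary<K, T> _keyToValue = new Dictionary<K, T>();
    private readonly Dictionary<T, K> _valueToKey = new Dictionary<T, K>();

    public void Add(K key, T value)
    {
        _keyToValue.Add(key, value);
        _valueToKey.Add(value, key);
    }

    public int Count { get { return _keyToValue.Count; } }
    public K this[T value] { get { return _valueToKey[value]; } }
    public T this[K key] { get { return _keyToValue[key]; } }

    public IEnumerator<T> GetEnumerator()
    {
        // Each caller gets its own enumerator state, so two threads iterating
        // the same Mapper no longer share a Current/MoveNext position.
        foreach (T value in _keyToValue.Values)
            yield return value;
    }

    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

class MapperDemo
{
    static void Main()
    {
        var map = new Mapper<int, string>();
        map.Add(1, "one");
        map.Add(2, "two");
        foreach (string s in map) Console.WriteLine(s);
    }
}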
53,997 | I am trying to implement AJAX in my Google App Engine application, and so I am looking for a good AJAX framework that will help me. Anyone has any idea? I am thinking about Google Web Toolkit, how good it is in terms of creating AJAX for Google App Engine? | As Google Web Toolkit is a subset of Java it works best when you Java at the backend too. Since Google App Engine is currently Python only I think you'd have to do a lot of messing about to get your server and client to talk nicely to each other. jQuery seems to be the most popular JavaScript library option in the AJAX Tag at DjangoSnippets.com . Edit: The above is only true of Google App Engine applications written in Python. As Google App Engine now supports Java, GWT could now be a good choice for writing an AJAX front end. Google even have a tutorial showing you how to do it. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/53997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3834/"
]
} |
54,001 | Migrating a project from ASP.NET 1.1 to ASP.NET 2.0 and I keep hitting this error. I don't actually need Global because I am not adding anything to it, but after I remove it I get more errors. | There are a few things you can try with this, seems to happen alot and the solution varies for everyone it seems. If you are still using the IIS virtual directory make sure its pointed to the correct directory and also check the ASP.NET version it is set to, make sure it is set to ASP.NET 2.0. Clear out your bin/debug/obj all of them. Do a Clean solution and then a Build Solution . Check your project file in a text editor and make sure where its looking for the global file is correct, sometimes it doesnt change the directory. Remove the global from the solution and add it back after saving and closing. make sure all the script tags in the ASPX file point to the correct one after. You can try running the Convert to Web Application tool, that redoes all of the code and project files. IIS Express is using the wrong root directory (see answer in VS 2012 launching app based on wrong path ) Make sure you close VS after you try them. Those are some things I know to try. Hope one of them works for you. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/54001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3208/"
]
} |
54,037 | Say you've got a credit card number with an expiration date of 05/08 - i.e. May 2008. Does that mean the card expires on the morning of the 1st of May 2008, or the night of the 31st of May 2008? | It took me a couple of minutes to find a site that I could source for this. The card is valid until the last day of the month indicated, after the last [sic] [1] day of the next month; the card cannot be used to make a purchase if the merchant attempts to obtain an authorization. - Source Also, while looking this up, I found an interesting article on Microsoft's website using an example like this, exec summary: Access 2000 for a month/year defaults to the first day of the month, here's how to override that to calculate the end of the month like you'd want for a credit card. Additionally, this page has everything you ever wanted to know about credit cards. [1] The quoted wording is assumed to be a typo; it should read "..., after the first day of the next month; ..." | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/54037",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797/"
]
} |
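For illustration only (this code is not from the answer above): a small java.time sketch that applies the rule described there, treating a card stamped 05/08 as valid through 31 May 2008, the last day of the printed month. The class and method names are made up.

```java
import java.time.LocalDate;
import java.time.YearMonth;

public class CardExpiryCheck {
    // The card is usable through the last day of the expiry month.
    static boolean isExpired(int expiryMonth, int expiryYear, LocalDate today) {
        LocalDate lastValidDay = YearMonth.of(expiryYear, expiryMonth).atEndOfMonth();
        return today.isAfter(lastValidDay);
    }

    public static void main(String[] args) {
        System.out.println(isExpired(5, 2008, LocalDate.of(2008, 5, 31))); // false: still valid
        System.out.println(isExpired(5, 2008, LocalDate.of(2008, 6, 1)));  // true: expired
    }
}
```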
54,052 | Are there any free tools available to view the contents of the solution user options file (the .suo file that accompanies solution files)? I know it's basically formatted as a file system within the file, but I'd like to be able to view the contents so that I can figure out which aspects of my solution and customizations are causing it grow very large over time. | The .SUO file is effectively disposable. If it's getting too large, just delete it. Visual Studio will create a fresh one. If you do want to go poking around in it, it looks like an OLE Compound Document File. You should be able to use the StgOpenStorage function to get hold of an IStorage pointer. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/54052",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/507/"
]
} |
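As a rough illustration of how you might inspect a .suo yourself: it is an OLE structured-storage (compound document) file, so any library that reads that format can list its streams and their sizes, which is one way to see what is making it grow. The sketch below assumes a recent version of Apache POI's POIFS component; the file path is made up.

```java
import java.io.File;
import java.io.IOException;
import org.apache.poi.poifs.filesystem.DirectoryNode;
import org.apache.poi.poifs.filesystem.DocumentEntry;
import org.apache.poi.poifs.filesystem.Entry;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;

public class SuoStreamSizes {
    public static void main(String[] args) throws IOException {
        try (POIFSFileSystem fs = new POIFSFileSystem(new File("MySolution.suo"))) {
            listEntries(fs.getRoot(), "");
        }
    }

    // Recursively print every storage and stream with its size in bytes.
    private static void listEntries(DirectoryNode dir, String indent) {
        for (Entry entry : dir) {
            if (entry instanceof DirectoryNode) {
                System.out.println(indent + "[storage] " + entry.getName());
                listEntries((DirectoryNode) entry, indent + "  ");
            } else if (entry instanceof DocumentEntry) {
                DocumentEntry doc = (DocumentEntry) entry;
                System.out.println(indent + doc.getName() + " : " + doc.getSize() + " bytes");
            }
        }
    }
}
```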
54,059 | Say I have a linked list of numbers of length N . N is very large and I don’t know in advance the exact value of N . How can I most efficiently write a function that will return k completely random numbers from the list? | There's a very nice and efficient algorithm for this using a method called reservoir sampling . Let me start by giving you its history : Knuth calls this Algorithm R on p. 144 of his 1997 edition of Seminumerical Algorithms (volume 2 of The Art of Computer Programming), and provides some code for it there. Knuth attributes the algorithm to Alan G. Waterman. Despite a lengthy search, I haven't been able to find Waterman's original document, if it exists, which may be why you'll most often see Knuth quoted as the source of this algorithm. McLeod and Bellhouse, 1983 (1) provide a more thorough discussion than Knuth as well as the first published proof (that I'm aware of) that the algorithm works. Vitter 1985 (2) reviews Algorithm R and then presents an additional three algorithms which provide the same output, but with a twist. Rather than making a choice to include or skip each incoming element, his algorithm predetermines the number of incoming elements to be skipped. In his tests (which, admittedly, are out of date now) this decreased execution time dramatically by avoiding random number generation and comparisons on each in-coming number. In pseudocode the algorithm is: Let R be the result array of size sLet I be an input queue> Fill the reservoir arrayfor j in the range [1,s]: R[j]=I.pop()elements_seen=swhile I is not empty: elements_seen+=1 j=random(1,elements_seen) > This is inclusive if j<=s: R[j]=I.pop() else: I.pop() Note that I've specifically written the code to avoid specifying the size of the input. That's one of the cool properties of this algorithm: you can run it without needing to know the size of the input beforehand and it still assures you that each element you encounter has an equal probability of ending up in R (that is, there is no bias). Furthermore, R contains a fair and representative sample of the elements the algorithm has considered at all times. This means you can use this as an online algorithm . Why does this work? McLeod and Bellhouse (1983) provide a proof using the mathematics of combinations. It's pretty, but it would be a bit difficult to reconstruct it here. Therefore, I've generated an alternative proof which is easier to explain. We proceed via proof by induction. Say we want to generate a set of s elements and that we have already seen n>s elements. Let's assume that our current s elements have already each been chosen with probability s/n . By the definition of the algorithm, we choose element n+1 with probability s/(n+1) . Each element already part of our result set has a probability 1/s of being replaced. The probability that an element from the n -seen result set is replaced in the n+1 -seen result set is therefore (1/s)*s/(n+1)=1/(n+1) . Conversely, the probability that an element is not replaced is 1-1/(n+1)=n/(n+1) . Thus, the n+1 -seen result set contains an element either if it was part of the n -seen result set and was not replaced---this probability is (s/n)*n/(n+1)=s/(n+1) ---or if the element was chosen---with probability s/(n+1) . The definition of the algorithm tells us that the first s elements are automatically included as the first n=s members of the result set. Therefore, the n-seen result set includes each element with s/n (=1) probability giving us the necessary base case for the induction. 
References McLeod, A. Ian, and David R. Bellhouse. "A convenient algorithm for drawing a simple random sample." Journal of the Royal Statistical Society. Series C (Applied Statistics) 32.2 (1983): 182-184. ( Link ) Vitter, Jeffrey S. "Random sampling with a reservoir." ACM Transactions on Mathematical Software (TOMS) 11.1 (1985): 37-57. ( Link ) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/54059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797/"
]
} |
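A minimal Java sketch of Algorithm R as described in the answer above; it reads from any Iterator (for example a linked list's), so N never has to be known in advance. The class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Random;

public class ReservoirSampler {
    // Returns k elements drawn uniformly at random from the iterator's stream.
    public static <T> List<T> sample(Iterator<T> input, int k, Random rng) {
        List<T> reservoir = new ArrayList<>(k);
        long seen = 0;
        while (input.hasNext()) {
            T item = input.next();
            seen++;
            if (reservoir.size() < k) {
                reservoir.add(item);              // fill the reservoir with the first k elements
            } else {
                long j = (long) (rng.nextDouble() * seen);  // uniform in [0, seen)
                if (j < k) {
                    reservoir.set((int) j, item); // keep this item with probability k / seen
                }
            }
        }
        return reservoir;
    }
}
```

For the linked-list case you would simply pass list.iterator(); the reservoir is a fair sample at every point, so it also works as an online algorithm, exactly as the answer notes.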
54,068 | I'm looking at a new computer which will probably have vista on it. But there are so many editions of vista; are there any weird restrictions on what you can run on the various editions? For instance you couldn't run IIS on Windows ME. Can you still run IIS on the home editions of vista? | | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/54068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/361/"
]
} |
54,104 | Surprisingly as you get good at vim, you can code even faster than standard IDEs such as Eclipse. But one thing I really miss is code completion, especially for long variable names and functions. Is there any way to enable code completion for Perl in vim? | Ctrl - P (Get Previous Match) and Ctrl - N (Get Next Match) are kind of pseudo code completion. They basically search the file (Backwards for Ctrl - P , Forwards for Ctrl - N ) you are editing (and any open buffers, and if you are using TAGS anything in your TAG file) for words that start with what you are typing and add a drop down list. It works surprisingly well for variables and function names, even if it isn't intellisense. Generally I use Ctrl - P as the variable or function I am looking for is usually behind in the code. Also if you keep the same copy of Vim open, it will search the files you have previously opened. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/54104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4639/"
]
} |
54,142 | How does the comma operator work in C++? For instance, if I do: a = b, c; Does a end up equaling b or c? (Yes, I know this is easy to test - just documenting on here for someone to find the answer quickly.) Update: This question has exposed a nuance when using the comma operator. Just to document this: a = b, c; // a is set to the value of b!a = (b, c); // a is set to the value of c! This question was actually inspired by a typo in code. What was intended to be a = b;c = d; Turned into a = b, // <- Note comma typo!c = d; | It would be equal to b . The comma operator has a lower precedence than assignment. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/54142",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1541/"
]
} |
54,188 | I have two threads, one updating an int and one reading it. This is a statistic value where the order of the reads and writes is irrelevant. My question is, do I need to synchronize access to this multi-byte value anyway? Or, put another way, can part of the write be complete and get interrupted, and then the read happen? For example, think of a value of 0x0000FFFF that gets incremented to a value of 0x00010000. Is there a time where the value looks like 0x0001FFFF that I should be worried about? Certainly the larger the type, the more likely something like this is to happen. I've always synchronized these types of accesses, but was curious what the community thinks. | At first one might think that reads and writes of the native machine size are atomic, but there are a number of issues to deal with, including cache coherency between processors/cores. Use atomic operations like Interlocked* on Windows and the equivalent on Linux. C++0x will have an "atomic" template to wrap these in a nice and cross-platform interface. For now, if you are using a platform abstraction layer, it may provide these functions. ACE does, see the class template ACE_Atomic_Op . | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/54188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2167252/"
]
} |
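The answer above is about C++ and Win32 (Interlocked*, ACE_Atomic_Op, the then-upcoming C++0x atomic template). As a cross-language illustration of the same idea, here is a hypothetical Java counter using java.util.concurrent.atomic: each increment is a single atomic read-modify-write and each read sees a complete value, so a torn result like 0x0001FFFF can never be observed.

```java
import java.util.concurrent.atomic.AtomicLong;

public class StatCounter {
    private final AtomicLong value = new AtomicLong();

    // Called from the writer thread: one atomic read-modify-write, no locks needed.
    public void increment() {
        value.incrementAndGet();
    }

    // Called from the reader thread: get() also guarantees visibility of the latest write.
    public long read() {
        return value.get();
    }
}
```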
54,199 | How do I implement the Repository pattern with LINQ to Entities, and how do I implement the interface? | | {
"source": [
"https://Stackoverflow.com/questions/54199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2442689/"
]
} |
54,200 | I am developing a web app which requires a username and password to be stored in the web.Config; it also refers to some URLs which will be requested by the web app itself and never the client. I know the .Net framework will not allow a web.config file to be served, however I still think it's bad practice to leave this sort of information in plain text. Everything I have read so far requires me to use a command line switch or to store values in the registry of the server. I have access to neither of these as the host is online and I have only FTP and Control Panel (helm) access. Can anyone recommend any good, free encryption DLLs or methods which I can use? I'd rather not develop my own! Thanks for the feedback so far guys, but I am not able to issue commands and not able to edit the registry. It's going to have to be an encryption util/helper, but just wondering which one! | Encrypting and Decrypting Configuration Sections (ASP.NET) on MSDN Encrypting Web.Config Values in ASP.NET 2.0 on ScottGu's blog Encrypting Custom Configuration Sections on K. Scott Allen's blog EDIT: If you can't use the aspnet_regiis utility, you can encrypt a configuration section programmatically using the SectionInformation.ProtectSection method. Sample on codeproject: Encryption of Connection Strings inside the Web.config in ASP.Net 2.0 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/54200",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2208/"
]
} |
54,222 | Right now I have an SSIS package that runs every morning and gives me a report on the number of packages that failed or succeeded from the day before. The information for these packages is contained partly within the sysjobs table (a system table) within the msdb database (a system database) in SQL Server 2005 . When trying to move the package to a C# executable (mostly to gain better formatting over the email that gets sent out), I wasn't able to find a way to create a dbml file that allowed me to access these tables through LINQ . I tried to look for any properties that would make these tables visible, but I haven't had much luck. Is this possible with LINQ to SQL ? | If you're in Server Explorer, you can make them visible this way: Create a connection to the server you want. Right-click the server and choose Change View > Object Type. You should now see System Tables and User Tables. You should see sysjobs there, and you can easily drag it onto a .dbml surface. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/54222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1108/"
]
} |
54,255 | Using Vim I often want to replace a block of code with a block that I just yanked. But when I delete the block of code that is to be replaced, that block itself goes into the register which erases the block I just yanked. So I've got in the habit of yanking, then inserting, then deleting what I didn't want, but with large blocks of code this gets messy trying to keep the inserted block and the block to delete separate. So what is the slickest and quickest way to replace text in Vim? is there a way to delete text without putting it into the register? is there a way to say e.g. "replace next word" or "replace up to next paragraph" or is the best way to somehow use the multi-register feature? | To delete something without saving it in a register, you can use the "black hole register": "_d Of course you could also use any of the other registers that don't hold anything you are interested in. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/54255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4639/"
]
} |
54,295 | I'd like to store a properties file as XML. Is there a way to sort the keys when doing this so that the generated XML file will be in alphabetical order? String propFile = "/path/to/file";Properties props = new Properties();/*set some properties here*/try { FileOutputStream xmlStream = new FileOutputStream(propFile); /*this comes out unsorted*/ props.storeToXML(xmlStream,"");} catch (IOException e) { e.printStackTrace();} | Here's a quick and dirty way to do it: String propFile = "/path/to/file";Properties props = new Properties();/* Set some properties here */Properties tmp = new Properties() { @Override public Set<Object> keySet() { return Collections.unmodifiableSet(new TreeSet<Object>(super.keySet())); }};tmp.putAll(props);try { FileOutputStream xmlStream = new FileOutputStream(propFile); /* This comes out SORTED! */ tmp.storeToXML(xmlStream,"");} catch (IOException e) { e.printStackTrace();} Here are the caveats: The tmp Properties (an anonymous subclass) doesn't fulfill the contract of Properties. For example, if you got its keySet and tried to remove an element from it, an exception would be raised. So, don't allow instances of this subclass to escape! In the snippet above, you are never passing it to another object or returning it to a caller who has a legitimate expectation that it fulfills the contract of Properties, so it is safe. The implementation of Properties.storeToXML could change, causing it to ignore the keySet method. For example, a future release, or OpenJDK, could use the keys() method of Hashtable instead of keySet . This is one of the reasons why classes should always document their "self-use" (Effective Java Item 15). However, in this case, the worst that would happen is that your output would revert to unsorted. Remember that the Properties storage methods ignore any "default" entries. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/54295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5084/"
]
} |
54,299 | My question is simple; is it possible to over object-orient your code? How much is too much? At what point are you giving up readability and maintainability for the sake of OO? I am a huge OO person but sometimes I wonder if I am over-complicating my code.... Thoughts? | is it possible to over object-orient your code Yes | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/54299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4144/"
]
} |
54,334 | The following SQL: SELECT notes + 'SomeText'FROM NotesTable a Give the error: The data types nvarchar and text are incompatible in the add operator. | The only way would be to convert your text field into an nvarchar field. Select Cast(notes as nvarchar(4000)) + 'SomeText'From NotesTable a Otherwise, I suggest doing the concatenation in your application. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/54334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2017/"
]
} |
54,365 | This is probably a complex solution . I am looking for a simple operator like ">>", but for prepending. I am afraid it does not exist. I'll have to do something like mv myfile tmp cat myheader tmp > myfile Anything smarter? | This still uses a temp file, but at least it is on one line: echo "text" | cat - yourfile > /tmp/out && mv /tmp/out yourfile Credit: BASH: Prepend A Text / Lines To a File | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/54365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277510/"
]
} |
54,380 | I am adding a ADO.Net Data Service lookup feature to an existing web page. Everything works great when running from visual studio, but when I roll it out to IIS, I get the following error: Request Error The server encountered an error processing the request. See server logs for more details. I get this even when trying to display the default page, i.e.: http://server/FFLookup.svc I have 3.5 SP1 installed on the server. What am I missing, and which "Server Logs" is it refering to? I can't find any further error messages. There is nothing in the Event Viewer logs (System or Application), and nothing in the IIS logs other than the GET: 2008-09-10 15:20:19 10.7.131.71 GET /FFLookup.svc - 8082 - 10.7.131.86 Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US)+AppleWebKit/525.13+(KHTML,+like+Gecko)+Chrome/0.2.149.29+Safari/525.13 401 2 2148074254 There is no stack trace returned. The only response I get is the "Request Error" as noted above. Thanks Patrick | In order to verbosely display the errors resulting from your data service you can place the following tag above your dataservice definition: [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] This will then display the error in your browser window as well as a stack trace. In addition to this dataservices throws all exceptions to the HandleException method so if you implement this method on your dataservice class you can put a break point on it and see the exception: protected override void HandleException(HandleExceptionArgs e){ try { e.UseVerboseErrors = true; } catch (Exception ex) { Console.WriteLine(ex.Message); }} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/54380",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5431/"
]
} |
54,418 | I need to retrieve all rows from a table where 2 columns combined are all different. So I want all the sales that do not have any other sales that happened on the same day for the same price. The sales that are unique based on day and price will get updated to an active status. So I'm thinking: UPDATE salesSET status = 'ACTIVE'WHERE id IN (SELECT DISTINCT (saleprice, saledate), id, count(id) FROM sales HAVING count = 1) But my brain hurts going any farther than that. | SELECT DISTINCT a,b,c FROM t is roughly equivalent to: SELECT a,b,c FROM t GROUP BY a,b,c It's a good idea to get used to the GROUP BY syntax, as it's more powerful. For your query, I'd do it like this: UPDATE salesSET status='ACTIVE'WHERE id IN( SELECT id FROM sales S INNER JOIN ( SELECT saleprice, saledate FROM sales GROUP BY saleprice, saledate HAVING COUNT(*) = 1 ) T ON S.saleprice=T.saleprice AND s.saledate=T.saledate ) | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/54418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4915/"
]
} |
54,419 | I have a WCF application that has two Services that I am trying to host in a single Windows Service using net.tcp. I can run either of the services just fine, but as soon as I try to put them both in the Windows Service only the first one loads up. I have determined that the second service's ctor is being called but the OnStart never fires. This tells me that WCF is finding something wrong with loading up that second service. Using net.tcp I know I need to turn on port sharing and start the port sharing service on the server. This all seems to be working properly. I have tried putting the services on different tcp ports and still no success. My service installer class looks like this: [RunInstaller(true)] public class ProjectInstaller : Installer { private ServiceProcessInstaller _process; private ServiceInstaller _serviceAdmin; private ServiceInstaller _servicePrint; public ProjectInstaller() { _process = new ServiceProcessInstaller(); _process.Account = ServiceAccount.LocalSystem; _servicePrint = new ServiceInstaller(); _servicePrint.ServiceName = "PrintingService"; _servicePrint.StartType = ServiceStartMode.Automatic; _serviceAdmin = new ServiceInstaller(); _serviceAdmin.ServiceName = "PrintingAdminService"; _serviceAdmin.StartType = ServiceStartMode.Automatic; Installers.AddRange(new Installer[] { _process, _servicePrint, _serviceAdmin }); } } and both services look very similar: class PrintService : ServiceBase { public ServiceHost _host = null; public PrintService() { ServiceName = "PCTSPrintingService"; CanStop = true; AutoLog = true; } protected override void OnStart(string[] args) { if (_host != null) _host.Close(); _host = new ServiceHost(typeof(Printing.ServiceImplementation.PrintingService)); _host.Faulted += host_Faulted; _host.Open(); }} | Base your service on this MSDN article and create two service hosts. But instead of actually calling each service host directly, you can break it out to as many classes as you want, which define each service you want to run: internal class MyWCFService1{ internal static System.ServiceModel.ServiceHost serviceHost = null; internal static void StartService() { if (serviceHost != null) { serviceHost.Close(); } // Instantiate new ServiceHost. serviceHost = new System.ServiceModel.ServiceHost(typeof(MyService1)); // Open myServiceHost. serviceHost.Open(); } internal static void StopService() { if (serviceHost != null) { serviceHost.Close(); serviceHost = null; } }}; In the body of the windows service host, call the different classes: // Start the Windows service. protected override void OnStart( string[] args ) { // Call all the set up WCF services... MyWCFService1.StartService(); //MyWCFService2.StartService(); //MyWCFService3.StartService(); } Then you can add as many WCF services as you like to one windows service host. REMEMBER to call the stop methods as well.... | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/54419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5408/"
]
} |