Dataset schema: source_id (int64, values 1 to 74.7M), question (string, lengths 0 to 40.2k), response (string, lengths 0 to 111k), metadata (dict).
14,395
For my own project at home, I'm using the rather excellent managed Subversion hosting from CVSDude. As it's only me working on the code right now, I'm not using CruiseControl.NET; however, I expect this will change in the next couple of months, and I'll want a full build process to kick off upon check-in. Has anyone managed to get CruiseControl.NET working with CVSDude? My colleague Mike has this blog post where someone from CVSDude said: "You can use our post-commit callback facility to call a URL on your server, which passes variables relating to the last check-in (variables detailed in our specification). Your CGI script will use these variables and perform whatever tasks are required, i.e. updating CruiseControl, etc." Sounds lovely. But has anyone actually done it with CruiseControl?
Use a subrange of [\u0000-\uFFFF] for what you want. You can also use the re.UNICODE compile flag. The docs say that if UNICODE is set, \w will match the characters [0-9_] plus whatever is classified as alphanumeric in the Unicode character properties database. See also http://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-05/2560.html.
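A minimal, self-contained sketch of the re.UNICODE flag in action (the sample string is just an illustration):

```python
import re

# With re.UNICODE, \w matches [0-9_] plus anything the Unicode character
# properties database classifies as alphanumeric.
pattern = re.compile(r'\w+', re.UNICODE)

print(pattern.findall(u'caf\u00e9 123 _tag'))  # -> [u'caf\xe9', u'123', u'_tag']
```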
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14395", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1122/" ] }
14,398
Does anybody use the Class Designer much in Visual Studio? I have downloaded the Modeling Power Toys for 2005 and have been impressed with what I've seen so far. The MSDN Class Designer Blog doesn't seem to have been updated for a while, but it still looks quite useful. Is the Class Designer a quick way to build the basic application, or should I just work it out on paper and then start coding? Thanks
As a visualization tool, or for exploratory purposes (drawing up multiple options to see what they look like), it's not bad, but generally I find the object browser does fine for most stuff I care about. As a code generation tool, it's a terrible idea. The whole idea that we will design all our code structure first, then fill in the blanks with small bits of implementation, is fundamentally broken. The only time you actually know what the code structure should look like is if you've done the exact same thing before - but then you can just use your previous code, and you don't need to draw up any new code in any kind of designer. If you decide ahead of time to use a particular class structure before you've actually tried to solve the problem, there is a 100% chance that you will pick the wrong design and shoot yourself in the foot.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/982/" ] }
14,402
In my simple OpenGL program I get the following error about exit redefinition:

```
1>c:\program files\microsoft visual studio 8\vc\include\stdlib.h(406) : error C2381: 'exit' : redefinition; __declspec(noreturn) differs
1>        c:\program files\microsoft visual studio 8\vc\platformsdk\include\gl\glut.h(146) : see declaration of 'exit'
```

I'm using Nate Robins' GLUT for Win32 and get this error with Visual Studio 2005 or Visual C++ 2005 (Express Edition). What is the cause of this error and how do I fix it?
Cause: The stdlib.h which ships with recent versions of Visual Studio has a different (and conflicting) definition of the exit() function. It clashes with the definition in glut.h.

Solution: Override the definition in glut.h with the one in stdlib.h. Place the stdlib.h line above the glut.h line in your code:

```c
#include <stdlib.h>
#include <GL/glut.h>
```
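An alternative I've seen used with Nate Robins' GLUT is to suppress GLUT's exit prototype entirely; treat this as an assumption to verify against your copy of glut.h, since only some releases honor the macro:

```c
/* Assumption: this glut.h checks for GLUT_DISABLE_ATEXIT_HACK before
   declaring its own exit() prototype. */
#define GLUT_DISABLE_ATEXIT_HACK
#include <GL/glut.h>
```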
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/14402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1630/" ] }
14,410
Pretty much every other editor that isn't a vi descendant (vim, cream, vi-emu) seems to use the Emacs shortcuts (Ctrl+W to delete back a word, and so on).
Early software was often modal, but usability took a turn at some point, away from this style. Vi-based editors are total enigmas -- they're the only real surviving members of that order of software. Modes are a no-no in usability and interaction design because we humans are fickle mammals who cannot be trusted to remember what mode the application is in. If you think you are in one "mode" when you are actually in another, then all sorts of badness can ensue. What you believe to be a series of harmless keystrokes can (in the wrong mode) cause unlimited catastrophe. This is known as a "mode error". To learn more, search for the terms "modeless" and "usability". As mentioned in the comments below, a modal interface in the hands of an experienced and non-fickle person can be extremely efficient.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/14410", "https://Stackoverflow.com", "https://Stackoverflow.com/users/745/" ] }
14,413
I want to use the functions exposed under the OpenGL extensions. I'm on Windows, how do I do this?
Easy solution: Use GLEW. See how here.

Hard solution: If you have a really strong reason not to use GLEW, here's how to achieve the same without it:

1. Identify the OpenGL extension and the extension APIs you wish to use. OpenGL extensions are listed in the OpenGL Extension Registry.

   Example: I wish to use the capabilities of the EXT_framebuffer_object extension. The APIs I wish to use from this extension are:

   ```c
   glGenFramebuffersEXT()
   glBindFramebufferEXT()
   glFramebufferTexture2DEXT()
   glCheckFramebufferStatusEXT()
   glDeleteFramebuffersEXT()
   ```

2. Check if your graphics card supports the extension you wish to use. If it does, then your work is almost done! Download and install the latest drivers and SDKs for your graphics card.

   Example: The graphics card in my PC is an NVIDIA 6600 GT. So, I visit the NVIDIA OpenGL Extension Specifications webpage and find that the EXT_framebuffer_object extension is supported. I then download the latest NVIDIA OpenGL SDK and install it.

3. Your graphics card manufacturer provides a glext.h header file (or a similarly named header file) with all the declarations needed to use the supported OpenGL extensions. (Note that not all extensions might be supported.) Either place this header file somewhere your compiler can pick it up or include its directory in your compiler's include directories list.

4. Add a #include <glext.h> line in your code to include the header file.

5. Open glext.h, find the API you wish to use and grab its corresponding ugly-looking declaration.

   Example: I search for the above framebuffer APIs and find their corresponding ugly-looking declarations:

   ```c
   typedef void (APIENTRYP PFNGLGENFRAMEBUFFERSEXTPROC) (GLsizei n, GLuint *framebuffers);
   GLAPI void APIENTRY glGenFramebuffersEXT (GLsizei, GLuint *);
   ```

   All this means is that your header file has the API declaration in two forms. One is a wgl-like ugly function pointer declaration. The other is a sane-looking function declaration.

6. For each extension API you wish to use, add to your code a declaration of the function name as a pointer of the corresponding ugly-looking type.

   Example:

   ```c
   PFNGLGENFRAMEBUFFERSEXTPROC        glGenFramebuffersEXT;
   PFNGLBINDFRAMEBUFFEREXTPROC        glBindFramebufferEXT;
   PFNGLFRAMEBUFFERTEXTURE2DEXTPROC   glFramebufferTexture2DEXT;
   PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC glCheckFramebufferStatusEXT;
   PFNGLDELETEFRAMEBUFFERSEXTPROC     glDeleteFramebuffersEXT;
   ```

   Though it looks ugly, all we're doing is declaring function pointers of the type corresponding to the extension API.

7. Initialize these function pointers with their rightful functions. These functions are exposed by the library or driver. We need to use the wglGetProcAddress() function to do this.

   Example:

   ```c
   glGenFramebuffersEXT        = (PFNGLGENFRAMEBUFFERSEXTPROC)        wglGetProcAddress("glGenFramebuffersEXT");
   glBindFramebufferEXT        = (PFNGLBINDFRAMEBUFFEREXTPROC)        wglGetProcAddress("glBindFramebufferEXT");
   glFramebufferTexture2DEXT   = (PFNGLFRAMEBUFFERTEXTURE2DEXTPROC)   wglGetProcAddress("glFramebufferTexture2DEXT");
   glCheckFramebufferStatusEXT = (PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC) wglGetProcAddress("glCheckFramebufferStatusEXT");
   glDeleteFramebuffersEXT     = (PFNGLDELETEFRAMEBUFFERSEXTPROC)     wglGetProcAddress("glDeleteFramebuffersEXT");
   ```

   Don't forget to check the function pointers for NULL. If by chance wglGetProcAddress() couldn't find the extension function, it would've initialized the pointer with NULL.

   Example:

   ```c
   if (NULL == glGenFramebuffersEXT || NULL == glBindFramebufferEXT ||
       NULL == glFramebufferTexture2DEXT || NULL == glCheckFramebufferStatusEXT ||
       NULL == glDeleteFramebuffersEXT)
   {
       // Extension functions not loaded!
       exit(1);
   }
   ```

8. That's it, we're done! You can now use these function pointers just as if the function calls existed.

   Example:

   ```c
   glGenFramebuffersEXT(1, &fbo);
   glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
   glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorTex[0], 0);
   ```

Reference: Moving Beyond OpenGL 1.1 for Windows by Dave Astle. The article is a bit dated, but has all the information you need to understand why this pathetic situation exists on Windows and how to get around it.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1630/" ] }
14,422
For example, http://developer.apple.com/cocoa/pyobjc.html is still for OS X 10.4 Tiger, not 10.5 Leopard. And that's the official Apple documentation for it. The official PyObjC page is equally bad: http://pyobjc.sourceforge.net/. It's so bad it's baffling. I'm considering learning Ruby primarily because the RubyCocoa stuff is so much better documented, and there are lots of decent tutorials (http://www.rubycocoa.com/ for example), and because of the Shoes GUI toolkit. Even this badly-auto-translated Japanese tutorial is more useful than the rest of the documentation I could find. All I want to do is create fairly simple Python applications with Cocoa GUIs. Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None) does?
I agree that that tutorial is flawed, throwing random, unexplained code right in front of your eyes. It introduces concepts such as the autorelease pool and user defaults without explaining why you would want them ("Autorelease pool for memory management" is hardly an explanation). That said... "basically all I want to do is write Cocoa applications without having to learn ObjC." I'm afraid that for the time being, you will need a basic grasp of ObjC in order to benefit from any language that uses Cocoa. PyObjC, RubyCocoa, Nu and others are niches at best, and all of them were developed by people intimately familiar with the ins and outs of ObjC and Cocoa. For now, you will benefit the most if you realistically see those bridges as useful where scripting languages truly shine, rather than trying to build a whole application with them. While this has been done (with LimeChat, I'm using a RubyCocoa-written app right now), it is rare and likely will be for a while.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14422", "https://Stackoverflow.com", "https://Stackoverflow.com/users/745/" ] }
14,432
I know that .NET is JIT compiled to the architecture you are running on just before the app runs, but does the JIT compiler optimize for 64bit architecture at all? Is there anything that needs to be done or considered when programming an app that will run on a 64bit system ? (i.e. Will using Int64 improve performance and will the JIT compiler automatically make Int64 work on 32bit systems?)
The 64-bit JIT is different from the one for 32-bit, so I would expect some differences in the output - but I wouldn't switch to 64-bit just for that, and I wouldn't expect to gain much speed (if any) in CPU time by switching to 64-bit. You will notice a big performance improvement if your app uses a lot of memory and the PC has enough RAM to keep up with it. I've found that 32-bit .NET apps tend to start throwing out-of-memory exceptions when you get to around 1.6 GB in use, but they start to thrash the disk due to paging long before that - so you end up being I/O bound. Basically, if your bottleneck is CPU, then 64-bit is unlikely to help. If your bottleneck is memory, then you should see a big improvement. "Will using Int64 improve performance and will the JIT compiler automatically make Int64 work on 32bit systems?" Int64 already works on both 32-bit and 64-bit systems, but it'll be faster running on 64-bit. So if you're mostly number crunching with Int64, running on a 64-bit system should help. The most important thing is to measure your performance.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14432", "https://Stackoverflow.com", "https://Stackoverflow.com/users/194/" ] }
14,451
What is the best way to make a Delphi application (Delphi 2007 for Win32 here) go completely full screen, removing the application border and covering the Windows taskbar? I am looking for something similar to what IE does when you hit F11. I wish this to be a run-time option for the user, not a design-time decision by my good self. As mentioned in the accepted answer, BorderStyle := bsNone; was part of the way to do it. Strangely, I kept getting an E2010 Incompatible types: 'TFormBorderStyle' and 'TBackGroundSymbol' error when using that line (another type had bsNone defined). To overcome this I had to use: BorderStyle := Forms.bsNone;
Well, this has always worked for me. Seems a bit simpler...

```delphi
procedure TForm52.Button1Click(Sender: TObject);
begin
  BorderStyle := bsNone;
  WindowState := wsMaximized;
end;
```
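Since the question asks for a run-time toggle like IE's F11, here is a hedged sketch of one way to flip back and forth; the FSavedBorderStyle and FSavedWindowState form fields are hypothetical names you would declare yourself (as TFormBorderStyle and TWindowState):

```delphi
procedure TForm52.ToggleFullScreen;
begin
  if BorderStyle = Forms.bsNone then
  begin
    // Leave full screen: restore whatever we saved on the way in
    BorderStyle := FSavedBorderStyle;
    WindowState := FSavedWindowState;
  end
  else
  begin
    // Enter full screen: remember the current state first
    FSavedBorderStyle := BorderStyle;
    FSavedWindowState := WindowState;
    BorderStyle := Forms.bsNone;
    WindowState := wsMaximized;
  end;
end;
```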
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/14451", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1724/" ] }
14,464
I have a structure which I need to populate and write to disk (several actually). An example is:

```
byte-6
bit0      - original_or_copy
bit1      - copyright
bit2      - data_alignment_indicator
bit3      - PES_priority
bit4-bit5 - PES_scrambling_control
bit6-bit7 - reserved
```

In C I might do something like the following:

```c
struct PESHeader
{
    unsigned reserved:2;
    unsigned scrambling_control:2;
    unsigned priority:1;
    unsigned data_alignment_indicator:1;
    unsigned copyright:1;
    unsigned original_or_copy:1;
};
```

Is there any way to do this in C# that would enable me to access the bits using the struct dereferencing dot operator? For a couple of structures, I can just do bit shifting wrapped in an accessor function. I have loads of structures to handle in this way, so I'm looking for something that's easier to read and quicker to write.
I'd probably knock together something using attributes, then a conversion class to convert suitably attributed structures to the bitfield primitives. Something like...

```csharp
using System;

namespace BitfieldTest
{
    [global::System.AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
    sealed class BitfieldLengthAttribute : Attribute
    {
        uint length;

        public BitfieldLengthAttribute(uint length)
        {
            this.length = length;
        }

        public uint Length { get { return length; } }
    }

    static class PrimitiveConversion
    {
        public static long ToLong<T>(T t) where T : struct
        {
            long r = 0;
            int offset = 0;

            // For every field suitably attributed with a BitfieldLength
            foreach (System.Reflection.FieldInfo f in t.GetType().GetFields())
            {
                object[] attrs = f.GetCustomAttributes(typeof(BitfieldLengthAttribute), false);
                if (attrs.Length == 1)
                {
                    uint fieldLength = ((BitfieldLengthAttribute)attrs[0]).Length;

                    // Calculate a bitmask of the desired length
                    long mask = 0;
                    for (int i = 0; i < fieldLength; i++)
                        mask |= 1 << i;

                    r |= ((UInt32)f.GetValue(t) & mask) << offset;

                    offset += (int)fieldLength;
                }
            }

            return r;
        }
    }

    struct PESHeader
    {
        [BitfieldLength(2)]
        public uint reserved;
        [BitfieldLength(2)]
        public uint scrambling_control;
        [BitfieldLength(1)]
        public uint priority;
        [BitfieldLength(1)]
        public uint data_alignment_indicator;
        [BitfieldLength(1)]
        public uint copyright;
        [BitfieldLength(1)]
        public uint original_or_copy;
    };

    public class MainClass
    {
        public static void Main(string[] args)
        {
            PESHeader p = new PESHeader();

            p.reserved = 3;
            p.scrambling_control = 2;
            p.data_alignment_indicator = 1;

            long l = PrimitiveConversion.ToLong(p);

            for (int i = 63; i >= 0; i--)
            {
                Console.Write(((l & (1l << i)) > 0) ? "1" : "0");
            }

            Console.WriteLine();
            return;
        }
    }
}
```

Which produces the expected ...000101011. Of course, it needs more error checking and slightly saner typing, but the concept is (I think) sound, reusable, and lets you knock out easily maintained structures by the dozen.

adamw
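Going the other way (parsing a packed long read from disk back into an attributed struct) follows the same pattern in reverse. This is a hedged sketch of mine, not part of the original answer; like ToLong, it assumes GetFields() returns the fields in declaration order, which reflection does not strictly guarantee:

```csharp
public static T FromLong<T>(long value) where T : struct
{
    // Box the struct so reflection can write to its fields
    object boxed = new T();
    int offset = 0;

    foreach (System.Reflection.FieldInfo f in typeof(T).GetFields())
    {
        object[] attrs = f.GetCustomAttributes(typeof(BitfieldLengthAttribute), false);
        if (attrs.Length == 1)
        {
            uint fieldLength = ((BitfieldLengthAttribute)attrs[0]).Length;
            long mask = (1L << (int)fieldLength) - 1;

            // Pull this field's bits out of the packed value
            f.SetValue(boxed, (uint)((value >> offset) & mask));
            offset += (int)fieldLength;
        }
    }

    return (T)boxed;
}
```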
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/14464", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1006/" ] }
14,491
What is the proper way for an MFC application to cleanly close itself?
Programmatically terminate an MFC application:

```cpp
void ExitMFCApp()
{
    // same as double-clicking on main window close box
    ASSERT(AfxGetMainWnd() != NULL);
    AfxGetMainWnd()->SendMessage(WM_CLOSE);
}
```

http://support.microsoft.com/kb/117320
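If the close is being triggered from somewhere other than the main window's own thread, a posted message avoids blocking the caller. This variant is my addition rather than a verbatim quote from the KB article:

```cpp
// Queue the close and return immediately instead of waiting for it
AfxGetMainWnd()->PostMessage(WM_CLOSE);
```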
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1743/" ] }
14,504
I would really like to see a proportional font IDE, even if I have to build it myself (perhaps as an extension to Visual Studio). What I basically mean is MS Word style editing of code that sort of looks like the typographical style in The C++ Programming Language book. I want to set tab stops for my indents and lining up function signatures and rows of assignment statements, which could be specified in points instead of fixed character positions. I would also like bold and italics. Various font sizes and even style sheets would be cool. Has anyone seen anything like this out there or know the best way to start building one?
I'd still like to see a popular editor or IDE implement elastic tabstops.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1659/" ] }
14,527
I need to be able to find the last occurrence of a character within an element. For example: <mediaurl>http://www.blah.com/path/to/file/media.jpg</mediaurl> If I try to locate it through using substring-before(mediaurl, '.') and substring-after(mediaurl, '.') then it will, of course, match on the first dot. How would I get the file extension? Essentially, I need to get the file name and the extension from a path like this, but I am quite stumped as to how to do it using XSLT.
The following is an example of a template that would produce the required output in XSLT 1.0:

```xml
<xsl:template name="getExtension">
  <xsl:param name="filename"/>
  <xsl:choose>
    <xsl:when test="contains($filename, '.')">
      <xsl:call-template name="getExtension">
        <xsl:with-param name="filename" select="substring-after($filename, '.')"/>
      </xsl:call-template>
    </xsl:when>
    <xsl:otherwise>
      <xsl:value-of select="$filename"/>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>

<xsl:template match="/">
  <xsl:call-template name="getExtension">
    <xsl:with-param name="filename" select="'http://www.blah.com/path/to/file/media.jpg'"/>
  </xsl:call-template>
</xsl:template>
```
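The question also asks for the file name itself. A companion template (my addition, applying the same recursive idea to '/' instead of '.') would strip the path portion first:

```xml
<xsl:template name="getFilename">
  <xsl:param name="path"/>
  <xsl:choose>
    <xsl:when test="contains($path, '/')">
      <xsl:call-template name="getFilename">
        <xsl:with-param name="path" select="substring-after($path, '/')"/>
      </xsl:call-template>
    </xsl:when>
    <xsl:otherwise>
      <!-- e.g. media.jpg for the sample URL -->
      <xsl:value-of select="$path"/>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```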
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14527", "https://Stackoverflow.com", "https://Stackoverflow.com/users/274/" ] }
14,530
I took a look at the "Beginner's Guide to LINQ" post here on StackOverflow (Beginners Guide to LINQ), but had a follow-up question: We're about to ramp up a new project where nearly all of our database ops will be fairly simple data retrievals (there's another segment of the project which already writes the data). Most of our other projects up to this point make use of stored procedures for such things. However, I'd like to leverage LINQ-to-SQL if it makes more sense. So, the question is this: For simple data retrievals, which approach is better, LINQ-to-SQL or stored procs? Any specific pros or cons? Thanks.
Some advantages of LINQ over sprocs:

1. Type safety: I think we all understand this.
2. Abstraction: This is especially true with LINQ-to-Entities. This abstraction also allows the framework to add additional improvements that you can easily take advantage of. PLINQ is an example of adding multi-threading support to LINQ. Code changes are minimal to add this support. It would be MUCH harder to do this with data access code that simply calls sprocs.
3. Debugging support: I can use any .NET debugger to debug the queries. With sprocs, you cannot easily debug the SQL, and that experience is largely tied to your database vendor (MS SQL Server provides a query analyzer, but often that isn't enough).
4. Vendor agnostic: LINQ works with lots of databases, and the number of supported databases will only increase. Sprocs are not always portable between databases, either because of varying syntax or feature support (if the database supports sprocs at all).
5. Deployment: Others have mentioned this already, but it's easier to deploy a single assembly than to deploy a set of sprocs. This also ties in with #4.
6. Easier: You don't have to learn T-SQL to do data access, nor do you have to learn the data access API (e.g. ADO.NET) necessary for calling the sprocs. This is related to #3 and #4.

Some disadvantages of LINQ vs sprocs:

1. Network traffic: Sprocs need only serialize the sproc name and argument data over the wire, while LINQ sends the entire query. This can get really bad if the queries are very complex. However, LINQ's abstraction allows Microsoft to improve this over time.
2. Less flexible: Sprocs can take full advantage of a database's feature set. LINQ tends to be more generic in its support. This is common in any kind of language abstraction (e.g. C# vs assembler).
3. Recompiling: If you need to make changes to the way you do data access, you need to recompile, version, and redeploy your assembly. Sprocs can sometimes allow a DBA to tune the data access routine without a need to redeploy anything.

Security and manageability are something that people argue about too.

1. Security: For example, you can protect your sensitive data by restricting access to the tables directly, and put ACLs on the sprocs. With LINQ, however, you can still restrict direct access to tables and instead put ACLs on updatable table views to achieve a similar end (assuming your database supports updatable views).
2. Manageability: Using views also gives you the advantage of shielding your application (non-breaking) from schema changes (like table normalization). You can update the view without requiring your data access code to change.

I used to be a big sproc guy, but I'm starting to lean towards LINQ as a better alternative in general. If there are some areas where sprocs are clearly better, then I'll probably still write a sproc but access it using LINQ. :)
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/14530", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1683/" ] }
14,582
I've been using Subversion for code control with TortoiseSVN to interface with the server for the past few months, and in general it's been going great! However, occasionally my FoxPro IDE will change the case of a file extension without warning, where "program.prg" becomes "program.PRG". TortoiseSVN apparently takes this to mean the first file was removed, so it becomes flagged as "missing" and the second name comes up as "non-versioned", wreaking havoc on my ability to track changes to the file. I understand that Subversion has its origins in the case-sensitive world of *nix, but is there any way to control this behavior in either Subversion or TortoiseSVN to be file-name case-insensitive when used with Windows?
Unfortunately, Subversion is case-sensitive. This is due to the fact that files from Subversion can be checked out on both case-sensitive file systems (e.g., *nix) and case-insensitive file systems (e.g., Windows, Mac). This pre-commit hook script may help you avoid problems when you check in files. If it doesn't solve your problem, my best suggestion is to write a little script to make sure that all extensions are lowercase and run it every time before you check in/check out. It'll be a PITA, but maybe your best bet.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1339/" ] }
14,617
I'm trying to retrieve a file from a server using SFTP (as opposed to FTPS) using Java. How can I do this?
Another option is to consider looking at the JSch library. JSch seems to be the preferred library for a few large open source projects, including Eclipse, Ant and Apache Commons HttpClient, amongst others. It supports both user/pass and certificate-based logins nicely, as well as a whole host of other yummy SSH2 features. Here's a simple remote file retrieve over SFTP. Error handling is left as an exercise for the reader :-)

```java
JSch jsch = new JSch();

String knownHostsFilename = "/home/username/.ssh/known_hosts";
jsch.setKnownHosts(knownHostsFilename);

Session session = jsch.getSession("remote-username", "remote-host");

{
    // "interactive" version
    // can selectively update specified known_hosts file
    // need to implement UserInfo interface
    // MyUserInfo is a swing implementation provided in
    // examples/Sftp.java in the JSch dist
    UserInfo ui = new MyUserInfo();
    session.setUserInfo(ui);

    // OR non-interactive version. Relies on host key being in known-hosts file
    session.setPassword("remote-password");
}

session.connect();

Channel channel = session.openChannel("sftp");
channel.connect();
ChannelSftp sftpChannel = (ChannelSftp) channel;

sftpChannel.get("remote-file", "local-file");
// OR
InputStream in = sftpChannel.get("remote-file");
// process inputstream as needed

sftpChannel.exit();
session.disconnect();
```
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/14617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1769/" ] }
14,646
When I create a new project (or even when I edit the Sample Project) there is no way to add Description to the project. Or am I blind to the obvious?
There's no such thing as a project description, really. There's a column in the Projects page which is used so you can see which project is the default, built-in inbox, and we couldn't think of anything better to put as the column header for that column.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/123/" ] }
14,656
Can a (or any) proxy server cache content that is requested by a client over HTTPS? As the proxy server can't see the query string or the HTTP headers, I reckon they can't. I'm considering a desktop application, run by a number of people behind their company's proxy. This application may access services across the internet and I'd like to take advantage of the built-in internet caching infrastructure for 'reads'. If the caching proxy servers can't cache SSL-delivered content, would simply encrypting the content of a response be a viable option? I am considering that all GET requests we wish to be cacheable be requested over HTTP with the body encrypted using asymmetric encryption, where each client has the decryption key. Any time we wish to perform a GET that is not cacheable, or a POST operation, it will be performed over SSL.
No, it's not possible to cache HTTPS directly. The whole communication between the client and the server is encrypted, and a proxy sits between the server and the client; in order to cache the traffic, the proxy needs to be able to read it, i.e. decrypt the encryption. You can do something to cache it, though. You basically terminate the SSL on your proxy, intercepting the SSL sent to the client. The data is encrypted between the client and your proxy, where it's decrypted, read and cached, and then re-encrypted and sent on to the server. The reply from the server is likewise decrypted, read and re-encrypted. I'm not sure how you do this on major proxy software (like Squid), but it is possible. The only problem with this approach is that the proxy will have to use a self-signed cert to encrypt the traffic to the client. The client will be able to tell that a proxy in the middle has read the data, since the certificate will not be from the original site.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/14656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
14,708
What's the DOS FINDSTR equivalent for PowerShell? I need to search a bunch of log files for "ERROR".
Here's the quick answer:

```powershell
Get-ChildItem -Recurse -Include *.log | Select-String ERROR
```

I found it here, which has a great in-depth answer!
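A variation on the same pipeline, assuming PowerShell 2.0 or later for the -Context parameter:

```powershell
# Match case-sensitively and show two lines of context around each hit
Get-ChildItem -Recurse -Include *.log |
    Select-String -Pattern "ERROR" -CaseSensitive -Context 2
```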
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14708", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1351/" ] }
14,717
We're having sporadic, random query timeouts on our SQL Server 2005 cluster. I own a few apps that use it, so I'm helping out in the investigation. When watching the % CPU time in regular ol' Perfmon, you can certainly see it pegging out. However, SQL activity monitor only gives cumulative CPU and IO time used by a process, not what it's using right then, or over a specific timeframe. Perhaps I could use the profiler and run a trace, but this cluster is very heavily used and I'm afraid I'd be looking for a needle in a haystack. Am I barking up the wrong tree? Does anyone have some good methods for tracking down expensive queries/processes in this environment?
This will give you the top 50 statements by average CPU time; check here for other scripts: http://www.microsoft.com/technet/scriptcenter/scripts/sql/sql2005/default.mspx?mfr=true

```sql
SELECT TOP 50
    qs.total_worker_time / qs.execution_count AS [Avg CPU Time],
    SUBSTRING(qt.text, qs.statement_start_offset / 2,
        (CASE WHEN qs.statement_end_offset = -1
              THEN LEN(CONVERT(nvarchar(max), qt.text)) * 2
              ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2) AS query_text,
    qt.dbid,
    dbname = DB_NAME(qt.dbid),
    qt.objectid
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY [Avg CPU Time] DESC
```
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1212/" ] }
14,731
Normally I would just use: HttpContext.Current.Server.UrlEncode("url"); But since this is a console application, HttpContext.Current is always going to be null. Is there another method that does the same thing that I could use?
Try this!

```csharp
Uri.EscapeUriString(url);
```

Or:

```csharp
Uri.EscapeDataString(data);
```

No need to reference System.Web. Edit: Please see another SO answer for more...
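A quick illustration of how the two differ (output shown in the comments; the URL is just an example):

```csharp
// Escapes only what can't appear in a URI at all; keeps ?, =, & intact
Console.WriteLine(Uri.EscapeUriString("http://example.com/a b?x=1&y=2"));
// http://example.com/a%20b?x=1&y=2

// Escapes everything reserved; use it for individual query-string values
Console.WriteLine(Uri.EscapeDataString("a b?x=1&y=2"));
// a%20b%3Fx%3D1%26y%3D2
```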
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/14731", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1469/" ] }
14,770
In .NET there are two version numbers available when building a project, File Version and Assembly Version. How are you using these numbers? Keeping them the same? Auto-incrementing one, but manually changing the other? Also, what about the AssemblyInformationalVersion attribute? I found this Microsoft Knowledge Base (KB) article that provided some help: How to use Assembly Version and Assembly File Version.
In solutions with multiple projects, one thing I've found very helpful is to have all the AssemblyInfo files point to a single project that governs the versioning. So my AssemblyInfos have a line:

```csharp
[assembly: AssemblyVersion(Foo.StaticVersion.Bar)]
```

I have a project with a single file that declares the string:

```csharp
namespace Foo
{
    public static class StaticVersion
    {
        public const string Bar = "3.0.216.0"; // 08/01/2008 17:28:35
    }
}
```

My automated build process then just changes that string by pulling the most recent version from the database and incrementing the second-to-last number. I only change the major build number when the feature set changes dramatically. I don't change the file version at all.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/14770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1752/" ] }
14,843
On a quest to migrate some new UI into Managed/C# land, I have recently turned on Common Language Runtime Support (/clr) on a large legacy project, which uses MFC in a Shared DLL and relies on about a dozen other projects within our overall solution. This project is the core of our application, and would drive any managed UI code that is produced (hence the need to turn on clr support for interop). After fixing a ton of little niggly errors and warnings, I finally managed to get the application to compile.. However, running the application causes an EETypeLoadException and leaves me unable to debug... Doing some digging, I found the cause to be "System.TypeLoadException: Internal limitation: too many fields." which occurs right at the end of compilation. I then found this link which suggests to break the assembly down into two or more dlls. However, this is not possible in my case, as a limitation I have is that the legacy code basically remains untouched. Can anyone suggest any other possible solutions? I'm really at a dead end here.
Make sure the Enable String Pooling option under C/C++ Code Generation is turned on. That usually fixes this issue, which is one of those "huh?" MS limitations like the 64k limit on Excel spreadsheets. Only this one affects the number of symbols that may appear in an assembly.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14843", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1169/" ] }
14,873
I've noticed that a few Wordpress blogs have query statistics present in their footer that simply state the number of queries and the total time required to process them for the particular page, reading something like: 23 queries. 0.448 seconds I was wondering how this is accomplished. Is it through the use of a particular Wordpress plug-in or perhaps from using some particular php function in the page's code?
Try adding this to the bottom of the footer in your template:

```php
<?php echo $wpdb->num_queries; ?> <?php _e('queries'); ?>. <?php timer_stop(1); ?> <?php _e('seconds'); ?>
```
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14873", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1339/" ] }
14,893
Or, actually establishing a build process when there isn't much of one in place to begin with. Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever). What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.

Relevant tools:
- Visual Build
- SourceSafe 6.0 (I know, but I can't do anything about whether or not we use SourceSafe at this time. That might be the next battle I fight.)

Tentatively, I've got a Visual Build project that does this:

1. Get source and place in local directory, including necessary DLLs needed for project.
2. Get config files and rename as needed (we're storing them in a special subdirectory that isn't part of the actual application, and they are named according to use).
3. Build using Visual Studio.
4. Precompile using command line, copying into what will be a "build" directory.
5. Copy to destination.
6. Get any necessary additional resources - mostly things like documents, images, and reports that are associated with the project (and put into directory from step 5). There's a lot of this stuff, and I didn't want to include it previously. However, I'm going to only copy changed items, so maybe it's irrelevant. I wasn't sure whether I really wanted to include this stuff in earlier steps.

I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet. Does anyone have any advice or suggestions to make? We're not currently using a Deployment Project, I'll note. It would remove some of the steps necessary in this build I presume (like web.config swapping).
When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time, otherwise it can feel overwhelming.

1. First get your code compiling with one step using an automated build program (i.e. NAnt/MSBuild; see the sketch after this answer). I am not going to debate which one is better. Find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
2. Figure out how you want your automated build to be triggered, whether it is hooking it up to CruiseControl or running a nightly build task using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools you can use to make this step easier. CruiseControl is free, and TeamCity is free to a point, where you might have to pay for it depending on how big the project is.
3. OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, and so on.

Hope this helps.
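To make step 1 concrete, here is a minimal sketch of a NAnt build file that shells out to MSBuild; the solution name MyApp.sln is hypothetical and this assumes msbuild.exe is on the PATH:

```xml
<?xml version="1.0"?>
<project name="MyApp" default="build">
  <!-- One step: compile the whole solution in Release configuration -->
  <target name="build">
    <exec program="msbuild.exe">
      <arg value="MyApp.sln" />
      <arg value="/p:Configuration=Release" />
    </exec>
  </target>
</project>
```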
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/14893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1734/" ] }
14,943
What is the best way to disable Alt + F4 in a c# win form to prevent the user from closing the form? I am using a form as a popup dialog to display a progress bar and I do not want the user to be able to close it.
This does the job:

```csharp
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    e.Cancel = true;
}
```

Edit: In response to pix0r's concern - yes, you are correct that you will not be able to programmatically close the app. However, you can simply remove the event handler for the FormClosing event before closing the form:

```csharp
this.FormClosing -= new System.Windows.Forms.FormClosingEventHandler(this.Form1_FormClosing);
this.Close();
```
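An alternative sketch (my variation, not from the original answer): keep the handler wired up and gate it with a flag, so the progress dialog can still close itself when the work finishes:

```csharp
private bool allowClose = false;

private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    // Swallow Alt+F4 / close-box clicks until we decide otherwise
    e.Cancel = !allowClose;
}

public void ForceClose()
{
    allowClose = true;
    Close();
}
```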
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/14943", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1375/" ] }
14,967
I'm a recent AI graduate (circa 2 years) working for a modest operation. It has fallen to me (primarily as I'm the first 'adopter' in the department) to create a basic (read: useful?) C# coding standards document. I think I should explain that I'm probably the most junior software engineer going, but I'm looking forward to this task as hopefully I might actually be able to produce something half usable. I've done a pretty extensive search of the Internet and read articles on what a coding standards document should/should not contain. This seems like as good a place as any to ask for some suggestions. I realise that I am potentially opening a door to a whole world of disagreement about 'the best way to do things'. I both understand and respect the undeniable fact that each programmer has a preferred method of solving each individual task; as a result I'm not looking to write anything so draconianly proscriptive as to stifle personal flair, but to try and get a general methodology and agreed standards (e.g. naming conventions) to help make individuals' code more readable. So here goes... any suggestions? Any at all?
We start with Microsoft's .NET guidelines: http://msdn.microsoft.com/en-us/library/ms229042.aspx (link updated for .NET 4.5) Microsoft's C# guidelines: http://blogs.msdn.com/brada/articles/361363.aspx . and then document the differences from and additions to that baseline.
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/14967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1816/" ] }
15,024
Questions #1 through #4 on the Joel Test in my opinion are all about the development tools being used and the support system in place for developers: Do you use source control? Can you make a build in one step? Do you make daily builds? Do you have a bug database? I'm just curious what free/cheap (but good) tools exist for the small development shops that don't have large bank accounts, to use to achieve a positive answer on these questions. For source control I know Subversion is a great solution, and if you are a one-man shop you could even use SourceGear's Vault. I use NAnt for my larger projects, but have yet to set up a script to build my installers as well as running the obfuscation tools, all as a single step. Any other suggestions? If you can answer yes to the building in a single step, I think creating daily builds would be easy, but what tools would you recommend for automating those daily builds? For a one- or two-man team, it's already been discussed on SO that you can use FogBugz On Demand, but what other bug tracking solutions exist for small teams?
- Source control: Subversion, Mercurial, or Git
- Build automation: NAnt, MSBuild, Rake, or Maven
- Continuous integration: CruiseControl.NET, Continuum, or Jenkins
- Issue tracking: Trac, Bugzilla, or Gemini (if it must be .NET and free-ish)

Don't forget automated testing with NUnit, Fit, and WatiN.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1795/" ] }
15,047
I'm working on a WinForm .Net application with the basic UI that includes toolbar buttons, menu items and keystrokes that all initiate the same underlying code. Right now the event handlers for each of these call a common method to perform the function. From what I've read this type of action could be handled by the Command design pattern with the additional benefit of automatically enabling/disabling or checking/unchecking the UI elements. I've been searching the net for a good example project, but really haven't found one. Does anyone have a good example that can be shared?
Let's first make sure we know what the Command pattern is: the Command pattern encapsulates a request as an object and gives it a known public interface. It ensures that every object receives its own commands and provides a decoupling between sender and receiver. A sender is an object that invokes an operation, and a receiver is an object that receives the request and acts on it.

Here's an example for you. There are many ways you can do this, but I am going to take an interface-based approach to make the code more testable for you. I am not sure what language you prefer, but I am writing this in C#.

First, create an interface that describes a Command:

```csharp
public interface ICommand
{
    void Execute();
}
```

Second, create command objects that will implement the command interface:

```csharp
public class CutCommand : ICommand
{
    public void Execute()
    {
        // Put the code you'd like to execute when CutCommand.Execute is called.
    }
}
```

Third, we need to set up our invoker or sender object:

```csharp
public class TextOperations
{
    public void Invoke(ICommand command)
    {
        command.Execute();
    }
}
```

Fourth, create the client object that will use the invoker/sender object:

```csharp
public class Client
{
    static void Main()
    {
        TextOperations textOperations = new TextOperations();
        textOperations.Invoke(new CutCommand());
    }
}
```

I hope you can take this example and put it into use for the application you are working on. If you would like more clarification, just let me know.
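Since the question also asks about automatically enabling/disabling the UI elements, here is a hedged sketch of one way to extend the pattern for that; the CanExecute member and the CommandBinder helper are my own hypothetical additions, not part of the canonical pattern:

```csharp
using System;
using System.Windows.Forms;

public interface ICommand
{
    void Execute();
    bool CanExecute { get; }  // e.g. a CutCommand returns true only when text is selected
}

public static class CommandBinder
{
    // Wire a menu/toolbar item to a command; refresh Enabled while the app is idle
    public static void Bind(ToolStripItem item, ICommand command)
    {
        item.Click += delegate { command.Execute(); };
        Application.Idle += delegate { item.Enabled = command.CanExecute; };
    }
}
```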
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15047", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1752/" ] }
15,062
How do I convert function input parameters to the right type? I want to return a string that has part of the URL passed into it removed. This works, but it uses a hard-coded string:

```powershell
function CleanUrl($input)
{
    $x = "http://google.com".Replace("http://", "")
    return $x
}

$SiteName = CleanUrl($HostHeader)
echo $SiteName
```

This fails:

```powershell
function CleanUrl($input)
{
    $x = $input.Replace("http://", "")
    return $x
}
```

```
Method invocation failed because [System.Array+SZArrayEnumerator] doesn't contain a method named 'Replace'.
At M:\PowerShell\test.ps1:13 char:21
+     $x = $input.Replace( <<<< "http://", "")
```
The concept here is correct. The problem is with the variable name you have chosen. $input is a reserved variable used by PowerShell to represent an array of pipeline input. If you change your variable name, you should not have any problem. PowerShell does have a replace operator, so you could make your function into:

```powershell
function CleanUrl($url)
{
    return $url -replace 'http://'
}
```
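For what it's worth, -replace is a regex operator and is case-insensitive by default, so this strips "HTTP://" as well:

```powershell
PS> CleanUrl 'HTTP://www.blah.com/path/to/file/media.jpg'
www.blah.com/path/to/file/media.jpg
```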
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636/" ] }
15,133
Does anyone have any recommendations of tools that can assist with moving literal values into resource files for localization? I've used a ReSharper plugin called RGreatX but was wondering if there is anything else out there. It's one heck of a long manual process to move the strings across, and I think there must be a better way! RGreatX is OK but could be a bit slicker, I feel.
Here's one: http://www.codeplex.com/ResourceRefactoring. It's actually a Microsoft "open source" Visual Studio (2005 and up) tool that integrates with the IDE. You can easily replace every occurrence of a string with a resource reference with a few clicks.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/942/" ] }
15,139
With the increased power of JavaScript frameworks like YUI, JQuery, and Prototype, and debugging tools like Firebug, doing an application entirely in browser-side JavaScript looks like a great way to make simple applications like puzzle games and specialized calculators. Is there any downside to this other than exposing your source code? How should you handle data storage for this kind of program? Edit: yes, Gears and cookies can be used for local storage, but you can't easily get access to files and other objects the user already has around. You also can't save data to a file for a user without having them invoke some browser feature like printing to PDF or saving page as a file.
I've written several applications in JS, including a spreadsheet.

Upside:
- great language
- short code-run-review cycle
- DOM manipulation is great for UI design
- clients on every computer (and phone)

Downside:
- differences between browsers (especially IE)
- code base scalability (with no intrinsic support for namespaces and classes)
- no good debuggers (especially, again, for IE)
- performance (even though great progress has been made with FireFox and Safari)
- you need to write some server code as well

Bottom line: Go for it. I did.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1323/" ] }
15,142
What are the advantages/disadvantages of keeping SQL in your C# source code or in stored procs? I've been discussing this with a friend on an open source project that we're working on (a C# ASP.NET forum). At the moment, most of the database access is done by building the SQL inline in C# and calling to the SQL Server DB. So I'm trying to establish which, for this particular project, would be best. So far I have:

Advantages for in code:
- Easier to maintain - don't need to run a SQL script to update queries
- Easier to port to another DB - no procs to port

Advantages for stored procs:
- Performance
- Security
I am not a fan of stored procedures.

"Stored procedures are MORE maintainable because you don't have to recompile your C# app whenever you want to change some SQL."

You'll end up recompiling it anyway when datatypes change, or you want to return an extra column, or whatever. The number of times you can 'transparently' change the SQL out from underneath your app is pretty small on the whole.

"You end up reusing SQL code."

Programming languages, C# included, have this amazing thing, called a function. It means you can invoke the same block of code from multiple places! Amazing! You can then put the re-usable SQL code inside one of these, or if you want to get really high tech, you can use a library which does it for you. I believe they're called Object Relational Mappers, and are pretty common these days.

"Code repetition is the worst thing you can do when you're trying to build a maintainable application!"

Agreed, which is why storedprocs are a bad thing. It's much easier to refactor and decompose (break into smaller parts) code into functions than SQL into... blocks of SQL?

"You have 4 webservers and a bunch of windows apps which use the same SQL code. Now you realize there is a small problem with the SQL code. So would you rather... change the proc in 1 place, or push the code to all the webservers and reinstall all the desktop apps (ClickOnce might help) on all the windows boxes?"

Why are your windows apps connecting directly to a central database? That seems like a HUGE security hole right there, and a bottleneck as it rules out server-side caching. Shouldn't they be connecting via a web service or similar to your web servers? So, push 1 new sproc, or 4 new webservers? In this case it is easier to push one new sproc, but in my experience, 95% of 'pushed changes' affect the code and not the database. If you're pushing 20 things to the webservers that month, and 1 to the database, you hardly lose much if you instead push 21 things to the webservers, and zero to the database.

"More easily code reviewed."

Can you explain how? I don't get this. Particularly seeing as the sprocs probably aren't in source control, and therefore can't be accessed via web-based SCM browsers and so on.

More cons: Storedprocs live in the database, which appears to the outside world as a black box. Simple things like wanting to put them in source control become a nightmare. There's also the issue of sheer effort. It might make sense to break everything down into a million tiers if you're trying to justify to your CEO why it just cost them 7 million dollars to build some forums, but otherwise creating a storedproc for every little thing is just extra donkeywork for no benefit.
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/15142", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1463/" ] }
15,171
In the linux file system, where should user scripts be placed? I'm thinking specifically python scripts to be called by cron.
The information I got:

```
/usr/local/sbin   custom scripts meant for root
/usr/local/bin    custom scripts meant for all users including non-root
```

Chat log snips from irc.debian.org #debian:

```
(02:48:49) c33s: question: where is the _correct_ location, to put custom scripts
                 for the root user (like a script on a webserver for createing everything
                 needed for a new webuser)? is it /bin, /usr/local/bin,...?
                 /usr/local/scripts is mentioned in (*link to this page*)
(02:49:15) Hydroxide: c33s: typically /usr/local/sbin
(02:49:27) Hydroxide: c33s: no idea what /usr/local/scripts would be
(02:49:32) Hydroxide: it's nonstandard
(02:49:53) Hydroxide: if it's a custom script meant for all users including non-root, then /usr/local/bin
(02:52:43) Hydroxide: c33s: Debian follows the Filesystem Hierarchy Standard, with a very small number of exceptions, which is online in several formats at http://www.pathname.com/fhs/ (also linked from http://www.debian.org/devel/ and separately online at http://www.debian.org/doc/packaging-manuals/fhs/fhs-2.3.html)
(02:53:03) Hydroxide: c33s: if you have the debian-policy package installed, it's also in several formats at /usr/share/doc/debian-policy/fhs/ on your system
(02:53:37) Hydroxide: c33s: most linux distributions follow that standard, though usually less strictly and with more deviations than Debian.
```

Thanks go out to Hydroxide.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/15171", "https://Stackoverflow.com", "https://Stackoverflow.com/users/479/" ] }
15,204
What is the best way to iterate through a strongly-typed generic List in C#.NET and VB.NET?
For C#:

```csharp
foreach (ObjectType objectItem in objectTypeList)
{
    // ...do some stuff
}
```

Answer for VB.NET from Purple Ant:

```vbnet
For Each objectItem As ObjectType In objectTypeList
    'Do some stuff
Next
```
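If you just need to run a short operation on each element, List<T> also exposes a ForEach method (available since .NET 2.0); the C# form is shown here as an illustration:

```csharp
objectTypeList.ForEach(delegate(ObjectType objectItem)
{
    // ...do some stuff
});
```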
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/15204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1224/" ] }
15,240
I'd like to make a debug logging function with the same parameters as printf . But one that can be removed by the pre-processor during optimized builds. For example: Debug_Print("Warning: value %d > 3!\n", value); I've looked at variadic macros but those aren't available on all platforms. gcc supports them, msvc does not.
I still do it the old way, by defining a macro (XTRACE, below) which correlates to either a no-op or a function call with a variable argument list. Internally, call vsnprintf so you can keep the printf syntax:

```cpp
#include <stdio.h>
#include <stdarg.h>   // va_list / va_start / va_end
#include <windows.h>  // LPCTSTR, TCHAR, OutputDebugString

void XTrace0(LPCTSTR lpszText)
{
    ::OutputDebugString(lpszText);
}

void XTrace(LPCTSTR lpszFormat, ...)
{
    va_list args;
    va_start(args, lpszFormat);
    int nBuf;
    TCHAR szBuffer[512]; // get rid of this hard-coded buffer
    nBuf = _vsnprintf(szBuffer, 511, lpszFormat, args);
    ::OutputDebugString(szBuffer);
    va_end(args);
}
```

Then a typical #ifdef switch:

```cpp
#ifdef _DEBUG
#define XTRACE XTrace
#else
// Expands to nothing, so XTRACE("x %d", 1); becomes ("x %d", 1); - a harmless
// comma expression. Note the arguments are still evaluated in release builds.
#define XTRACE
#endif
```

Well, that can be cleaned up quite a bit, but it's the basic idea.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/15240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1841/" ] }
15,241
The main web application of my company is crying out for a nifty set of libraries to make it in some way maintainable and scalable, and one of my colleagues has suggested CSLA. So I've bought the book, but as "programmers don't read books anymore", I wanted to gauge the SOFlow community's opinion of it. So here are my questions:

1. How many people are using CSLA?
2. What are the pros and cons?
3. Does CSLA really not fit in with TDD?
4. What are my alternatives?
5. If you have stopped using it or decided against it, why?
Before I specifically answer your question, I'd like to put a few thoughts down. Is CSLA right for your project? It depends. I would personally consider CSLA for desktop based applications that does not value unit testing as a high priority. CSLA is great if you want to easily scale to an n-tier application. CSLA tends to get some flack because it does not allow pure unit testing. This is true, however like anything in technology, I believe that there is No One True Way . Unit testing may not be something you are undertaking for a specific project. What works for one team and one project may not work for another team or other project. There are also many misconceptions in regards to CSLA. It is not an ORM. it is not a competitor to NHibernate (in fact using CLSA Business Objects & NHibernate as data access fit really well together). It formalises the concept of a Mobile Object . 1. How many people are using CSLA? Based on the CSLA Forums , I would say there are quite a number of CSLA based projects out there. Honestly though, I have no idea how many people are actually using it. I have used it in the past on two projects. 2. What are the pros and cons? While it is difficult to summarise in a short list, here is some of the pro/con's that come to mind. Pros: It's easy to get new developers upto speed. The CSLA book and sampleapp are great resources to get up to speed. The Validation framework is truly world class - and has been "borrowed" for many many other non-CSLA projects and technologies. n-Level Undo within your business objects Config line change for n-Tier scalability (Note: not even arecompile is necessary) Key technologies are abstracted from the "real" code. When WCF wasintroduced, it had minimal impact onCSLA code. It is possible to share your business objects between windows and web projects. CSLA promotes the normalization of behaviour rather than the normalization of data (leaving the database for data normalization). Cons: Difficulty in unit testing Lack of Separation of Concern (generally your business objects have data access code inside them). As CSLA promotes the normalization of behavior , rather than the normalization of data , and this can result in business objects that are named similarly, but have different purposes. This can cause some confusion and a feeling like you are not reusing objects appropriately. That said, once the physiological leap is taken, it more than makes sense - it seems inappropriate to structure objects the "old" way. It's not "in fashion" to build applications this way. You may struggle to get developers who are passionate about the technology. 3. After reading this does CSLA really not fit in with TDD? I haven't found an effective way to do TDD with CSLA. That said, I am sure there are many smarter people out there than me that may have tried this with greater success. 4. What are my alternatives? Domain-Driven-Design is getting big push at the moment (and rightfully so - it's fantastic for some applications). There are also a number of interesting patterns developing from the introduction of LINQ (and LINQ to SQL, Entity Framework, etc). Fowlers book PoEAA , details many patterns that may be suitable for your application. Note that some patterns are competing (i.e. Active Record and Repository), and thus are meant to be used for specific scenarios. 
While CSLA doesn't exactly match any of the patterns described in that book, it most closely resembles Active Record (although I feel it is short-sighted to claim an exact match for this pattern). 5. If you have stopped using it or decided against it, why? I didn't fully recommend CSLA for my last project, because I believe the scope of the application is too large for the benefits CSLA provides. I would not use CSLA on a web project. I feel there are other technologies better suited to building applications in that environment. In summary, while CSLA is anything but a silver bullet , it is appropriate for some scenarios. Hope this helps!
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/15241", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1116/" ] }
15,247
Given a list of locations such as <td>El Cerrito, CA</td> <td>Corvallis, OR</td> <td>Morganton, NC</td> <td>New York, NY</td> <td>San Diego, CA</td> What's the easiest way to generate a Google Map with pushpins for each location?
I'm assuming you have the basics for Maps in your code already with your API key:

<head>
  <script type="text/javascript" src="http://maps.google.com/maps?file=api&v=2&key=xxxxx"></script>
  <script type="text/javascript">
    function createMap() {
      var map = new GMap2(document.getElementById("map"));
      map.setCenter(new GLatLng(37.44, -122.14), 14);
    }
  </script>
</head>
<body onload="createMap()" onunload="GUnload()">

(Note the API loader takes a src attribute, and your own code goes in a separate script element.) Everything in Google Maps is based off of latitude (lat) and longitude (lng). So to create a simple marker you will just create a GMarker with the lat and lng:

var where = new GLatLng(37.925243, -122.307358); // lat and lng for El Cerrito, CA
var marker = new GMarker(where); // create marker (pinhead thingy)
map.setCenter(where);            // center map on marker
map.addOverlay(marker);          // add marker to map

However, if you don't want to look up the lat and lng for each city, you can use Google's geocoder. Here's an example:

var address = "El Cerrito, CA";
var geocoder = new GClientGeocoder();
geocoder.getLatLng(address, function(point) {
  if (point) {
    map.clearOverlays();                 // clear all markers
    map.addOverlay(new GMarker(point));  // add marker to map
    map.setCenter(point, 10);            // center and zoom map on marker
  }
});

So I would just create an array of GLatLngs for every city from the geocoder and then draw them on the map.
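To finish that thought, here is a minimal sketch of the last step - looping the city strings from the question through the geocoder. It reuses the GClientGeocoder/GMarker calls above, and assumes map is the variable created in createMap:

var cities = ["El Cerrito, CA", "Corvallis, OR", "Morganton, NC", "New York, NY", "San Diego, CA"];
var geocoder = new GClientGeocoder();
for (var i = 0; i < cities.length; i++) {
  geocoder.getLatLng(cities[i], function(point) {
    if (point) {
      map.addOverlay(new GMarker(point)); // one pushpin per geocoded city
    }
  });
}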
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15247", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ] }
15,254
Is it possible to actually make use of placement new in portable code when using it for arrays? It appears that the pointer you get back from new[] is not always the same as the address you pass in (5.3.4, note 12 in the standard seems to confirm that this is correct), but I don't see how you can allocate a buffer for the array to go in if this is the case. The following example shows the problem. Compiled with Visual Studio, this example results in memory corruption:

#include <new>
#include <stdio.h>

class A
{
public:
    A() : data(0) {}
    virtual ~A() {}
    int data;
};

int main()
{
    const int NUMELEMENTS = 20;

    char *pBuffer = new char[NUMELEMENTS * sizeof(A)];
    A *pA = new(pBuffer) A[NUMELEMENTS];

    // With VC++, pA will be four bytes higher than pBuffer
    printf("Buffer address: %x, Array address: %x\n", pBuffer, pA);

    // Debug runtime will assert here due to heap corruption
    delete[] pBuffer;

    return 0;
}

Looking at the memory, the compiler seems to be using the first four bytes of the buffer to store a count of the number of items in it. This means that because the buffer is only sizeof(A)*NUMELEMENTS big, the last element in the array is written into unallocated heap. So the question is can you find out how much additional overhead your implementation wants in order to use placement new[] safely? Ideally, I need a technique that's portable between different compilers. Note that, at least in VC's case, the overhead seems to differ for different classes. For instance, if I remove the virtual destructor in the example, the address returned from new[] is the same as the address I pass in.
Personally I'd go with the option of not using placement new on the array and instead use placement new on each item in the array individually. For example:

int main(int argc, char* argv[])
{
    const int NUMELEMENTS = 20;

    char *pBuffer = new char[NUMELEMENTS * sizeof(A)];
    A *pA = (A*)pBuffer;

    for (int i = 0; i < NUMELEMENTS; ++i)
    {
        new (pA + i) A(); // placement new returns the pointer; no assignment needed
    }

    printf("Buffer address: %x, Array address: %x\n", pBuffer, pA);

    // don't forget to destroy!
    for (int i = 0; i < NUMELEMENTS; ++i)
    {
        pA[i].~A();
    }

    delete[] pBuffer;

    return 0;
}

Regardless of the method you use, make sure you manually destroy each of those items in the array before you delete pBuffer, as you could end up with leaks ;) Note: I haven't compiled this, but I think it should work (I'm on a machine that doesn't have a C++ compiler installed). It still illustrates the point :) Hope it helps in some way! Edit: The reason it needs to keep track of the number of elements is so that it can iterate through them when you call delete on the array and make sure the destructors are called on each of the objects. If it doesn't know how many there are it wouldn't be able to do this.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/15254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1739/" ] }
15,302
What does it mean when you get or create a date in UTC format in JavaScript?
A date represents a specific point in time. This point in time will be called differently in different places. As I write this, it's 00:27 on Tuesday in Germany, 23:27 on Monday in the UK and 18:27 on Monday in New York. To take an example method: getDay returns the day of the week in the local timezone. Right now, for a user in Germany, it would return 2. For a user in the UK or US, it would return 1. In an hour's time, it will return 2 for the user in the UK (because it will then be 00:27 on Tuesday there). The ..UTC.. methods deal with the representation of the time in UTC (also known as GMT). In winter, this is the same timezone as the UK, in summer it's an hour behind the time in the UK. It's summer as I write this. getUTCDay will return 1 (Monday), getUTCHours will return 22, getUTCMinutes will return 27. So it's 22:27 on Monday in the UTC timezone. Whereas the plain get... functions will return different values depending on where the user is, the getUTC.. functions will return those same values no matter where the user is.
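A quick illustration of the difference (the exact output depends on when and where you run it):

var d = new Date();
// local-timezone view of this instant - varies by user location:
alert(d.getDay() + " " + d.getHours());
// UTC view of the same instant - identical for every user worldwide:
alert(d.getUTCDay() + " " + d.getUTCHours());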
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15302", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1538/" ] }
15,334
I have recently started using Vim as my text editor and am currently working on my own customizations. I suppose keyboard mappings can do pretty much anything, but for the time being I'm using them as a sort of snippets facility almost exclusively. So, for example, if I type def{TAB} ( :imap def{TAB} def ():<ESC>3ha ), it expands to: def |(): # '|' represents the caret This works as expected, but I find it annoying when Vim waits for a full command while I'm typing a word containing "def" and am not interested in expanding it. Is there a way to avoid this or use this function more effectively to this end? Is any other Vim feature better suited for this? After taking a quick look at SnippetsEmu , it looks like it's the best option and much easier to customize than I first thought. To continue with the previous example: :Snippet def <{}>(): Once defined, you can expand your snippet by typing def{TAB} .
Snipmate - like TextMate :) http://www.vim.org/scripts/script.php?script_id=2540 video: http://vimeo.com/3535418

snippet def
    """ ${1:docstring} """
    def ${2:name}:
        return ${3:value}
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15334", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1670/" ] }
15,376
I'm trying to choose a tool for creating UML diagrams of all flavours. Usability is a major criteria for me, but I'd still take more power with a steeper learning curve and be happy. Free (as in beer) would be nice, but I'd be willing to pay if the tool's worth it. What should I be using?
Some context: Recently for graduate school I researched UML tools for usability and UML comprehension in general for an independent project. I also model/architect for a living. The previous posts have too many answers and not enough questions. A common misunderstanding is that UML is about creating diagrams. Sure, diagrams are important, but really you are creating a model. Here are the questions that should be answered, as each vendor product/solution does some things better than others. Note: The listed answers are my view of the best even if other products support a given feature or need.

- Are you modeling or drawing? (Drawing - ArgoUML , free implementations, and Visio )
- Will you be modeling in the future? (For basic modeling - community editions of pay products)
- Do you want to formalize your modeling through profiles or meta-models? OCL? ( Sparx , RSM, Visual Paradigm )
- Are you concerned about model portability, XMI support? ( GenMyModel , Sparx , Visual Paradigm , Altova )
- Do you have an existing set of documents that you need to work with? (Depends on the documents)
- Would you want to generate code stubs or fully functioning code? ( GenMyModel , Visual Paradigm , Sparx , Altova )
- Do you need more mature processes such as use case management, pattern creation, asset creation, RUP integration, etc? (RSA/RSM/IBM Rational Products)

Detailed Examples: IBM Rational Software Architect did not implement UML 2.0 all the way when it comes to realizes-type relationships when creating a UML profile, but Visual Paradigm and Sparx got it right. Ok, that was way too detailed, so a simpler example would be ArgoUML , which has no code generation features and focuses on drawing more than the modeling aspect of UML. Sparx and Visual Paradigm do UML really well and generate code well; however, hooking into project lifecycles and other processes is where RSM/RSA is strong. Watch out for closed or product-specific code generation processes or frameworks, as you could end up stuck with that product. This is a straight brain dump so a couple of details may not be perfect; however, this should provide a general map to the questions and the solutions worth looking into. NEW - Found a good list of many UML tools with descriptions: Wiki UML Tool List
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/15376", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1370/" ] }
15,390
What are some standard practices for managing a medium-large JavaScript application? My concerns are both speed for browser download and ease and maintainability of development. Our JavaScript code is roughly "namespaced" as: var Client = { var1: '', var2: '', accounts: { /* 100's of functions and variables */ }, orders: { /* 100's of functions and variables and subsections */ } /* etc, etc for a couple hundred kb */} At the moment, we have one (unpacked, unstripped, highly readable) JavaScript file to handle all the business logic on the web application. In addition, there is jQuery and several jQuery extensions. The problem we face is that it takes forever to find anything in the JavaScript code and the browser still has a dozen files to download. Is it common to have a handful of "source" JavaScript files that gets "compiled" into one final, compressed JavaScript file? Any other handy hints or best practices?
The approach that I've found works for me is having separate JS files for each class (just as you would in Java, C# and others). Alternatively you can group your JS into application functional areas if that's easier for you to navigate. If you put all your JS files into one directory, you can have your server-side environment (PHP for instance) loop through each file in that directory and output a <script src='/path/to/js/$file.js' type='text/javascript'></script> in some header file that is included by all your UI pages. You'll find this auto-loading especially handy if you're regularly creating and removing JS files. When deploying to production, you should have a script that combines them all into one JS file and "minifies" it to keep the size down.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15390", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1848/" ] }
15,412
An answer to a Stack Overflow question stated that a particular framework violated a plain and simple OOP rule: Single Responsibility Principle (SRP). Is the Single Responsibility Principle really a rule of OOP? My understanding of the definition of Object Orientated Programming is "a paradigm where objects and their behaviour are used to create software". This includes the following techniques: Encapsulation, Polymorphism & Inheritance. Now don't get me wrong - I believe SRP to be the key to most good OO designs, but I feel there are cases where this principle can and should be broken (just like database normalization rules). I aggressively push the benefits of SRP, and the great majority of my code follows this principle. But, is it a rule, and thus implies that it shouldn't be broken?
Very few rules, if any, in software development are without exception. Some people think there is no place for goto, but they're wrong. As far as OOP goes, there isn't a single definition of object-orientedness, so depending on who you ask you'll get a different set of hard and soft principles, patterns, and practices. The classic idea of OOP is that messages are sent to otherwise opaque objects and the objects interpret the message with knowledge of their own innards and then perform a function of some sort. SRP is a software engineering principle that can apply to the role of a class, or a function, or a module. It contributes to the cohesion of something so that it behaves as a well-put-together unit, without unrelated bits hanging off of it or multiple roles that intertwine and complicate things. Even with just one responsibility, that can still range from a single function to a group of loosely related functions that are part of a common theme. As long as you're avoiding jury-rigging an element to take on the responsibility of something it wasn't primarily designed for, or doing some other ad-hoc thing that dilutes the simplicity of an object, then violate whatever principle you want. But I find that it's easier to get SRP correct than to do something more elaborate that is just as robust.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/15412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/708/" ] }
15,423
I'd like to know what's the way to actually set the icon of a .bat file to an arbitrary icon.How would I go about doing that programmatically, independently of the language I may be using.
Assuming you're referring to MS-DOS batch files: as it is simply a text file with a special extension, a .bat file doesn't store an icon of its own. You can, however, create a shortcut in the .lnk format that stores an icon.
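If you want to create such a shortcut programmatically, one option is scripting the WScript.Shell COM object. A rough sketch in Python via pywin32 - all the paths here are made-up placeholders:

import win32com.client  # pywin2/pywin32 package

shell = win32com.client.Dispatch("WScript.Shell")
shortcut = shell.CreateShortcut(r"C:\example\MyScript.lnk")
shortcut.TargetPath = r"C:\example\MyScript.bat"    # the batch file to launch
shortcut.IconLocation = r"C:\example\MyIcon.ico,0"  # icon file, resource index 0
shortcut.Save()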
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/15423", "https://Stackoverflow.com", "https://Stackoverflow.com/users/883/" ] }
15,470
I know this site is written using ASP.Net MVC and I do not see "/Home" in the url. This proves to me that it can be done. What special route do I need?
Just change "Home" to an empty string. routes.MapRoute( "Home", "", new { action = Index, controller = Home });
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/692/" ] }
15,481
Sometimes a labeled break or continue can make code a lot more readable.

OUTERLOOP: for ( ; /*stuff*/ ; ) {
    // ...lots of code

    if ( isEnough() ) break OUTERLOOP;
    // ...more code
}

I was wondering what the common convention for the labels was. All caps? first cap?
If you have to use them, use capitals; this draws attention to them and singles them out from being mistakenly interpreted as "Class" names. Drawing attention to them has the additional benefit of catching the eye of someone who will come along and refactor your code and remove them. ;)
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1870/" ] }
15,486
So I came across an interesting problem today. We have a WCF web service that returns an IList. Not really a big deal until I wanted to sort it. Turns out the IList interface doesn't have a sort method built in. I ended up using the ArrayList.Adapter(list).Sort(new MyComparer()) method to solve the problem but it just seemed a bit "ghetto" to me. I toyed with writing an extension method, also with inheriting from IList and implementing my own Sort() method, as well as casting to a List, but none of these seemed overly elegant. So my question is, does anyone have an elegant solution to sorting an IList?
How about using LINQ To Objects to sort for you? Say you have an IList<Car> , and the car had an Engine property; I believe you could sort as follows:

var sorted = from c in list
             orderby c.Engine
             select c;

Edit: You do need to be quick to get answers in here. As I presented a slightly different syntax to the other answers, I will leave my answer - however, the other answers presented are equally valid.
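The same sort in extension-method syntax, if you prefer it (using the same hypothetical Car/Engine types):

var sorted = list.OrderBy(c => c.Engine).ToList();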
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/15486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/493/" ] }
15,496
After reading Hidden Features of C# I wondered: what are some of the hidden features of Java?
Double Brace Initialization took me by surprise a few months ago when I first discovered it, never heard of it before. ThreadLocals are typically not so widely known as a way to store per-thread state. Since JDK 1.5 Java has had extremely well implemented and robust concurrency tools beyond just locks, they live in java.util.concurrent and a specifically interesting example is the java.util.concurrent.atomic subpackage that contains thread-safe primitives that implement the compare-and-swap operation and can map to actual native hardware-supported versions of these operations.
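For anyone who hasn't seen it, double brace initialization is just an anonymous subclass whose instance initializer populates the object - handy for collections, though it does create an extra class per use. A small sketch:

List<String> names = new ArrayList<String>() {{
    add("Alice");  // runs in the instance initializer of the anonymous subclass
    add("Bob");
}};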
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/15496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/486/" ] }
15,593
I understand what System.WeakReference does, but what I can't seem to grasp is a practical example of what it might be useful for. The class itself seems to me to be, well, a hack. It seems to me that there are other, better means of solving a problem where a WeakReference is used in examples I've seen. What's the canonical example of where you've really got to use a WeakReference? Aren't we trying to get farther away from this type of behavior and use of this class?
One useful example is the guys who run DB4O object oriented database. There, WeakReferences are used as a kind of light cache: it will keep your objects in memory only as long as your application does, allowing you to put a real cache on top. Another use would be in the implementation of weak event handlers. Currently, one big source of memory leaks in .NET applications is forgetting to remove event handlers. E.g. public MyForm(){ MyApplication.Foo += someHandler;} See the problem? In the above snippet, MyForm will be kept alive in memory forever as long as MyApplication is alive in memory. Create 10 MyForms, close them all, your 10 MyForms will still be in memory, kept alive by the event handler. Enter WeakReference. You can build a weak event handler using WeakReferences so that someHandler is a weak event handler to MyApplication.Foo, thus fixing your memory leaks! This isn't just theory. Dustin Campbell from the DidItWith.NET blog posted an implementation of weak event handlers using System.WeakReference.
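The basic mechanics look like this - a minimal sketch of the cache idea, not the full weak-event pattern (the ExpensiveObject type and the Build/Rebuild methods are hypothetical):

WeakReference cache = new WeakReference(BuildExpensiveObject());
// ... later, possibly after a garbage collection ...
ExpensiveObject obj = cache.Target as ExpensiveObject; // null if it was collected
if (obj == null)
{
    obj = RebuildExpensiveObject(); // recreate on demand
    cache.Target = obj;
}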
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/15593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1875/" ] }
15,656
Another SSRS question here: We have a development, a QA, a Prod-Backup and a Production SSRS set of servers. On our production and prod-backup, SSRS will go to sleep if not used for a period of time. This does not occur on our development or QA server. In the corporate environment we're in, we don't have physical (or even remote login) access to these machines, and have to work with a team of remote administrators to configure our SSRS application. We have asked that they fix, if possible, this issue. So far, they haven't been able to identify the issue, and I would like to know if any of my peers know the answer to this question. Thanks.
For anybody using the integrated webserver that is built into SQL Reporting Services (and hence IIS may not even be installed on the box), the setting to control this actually lives in: C:\Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config Your directory may be different; version 10_50 maps to SQL 2008 R2. You'll be looking for the setting called RecycleTime . Default is 720 (12 hours). Setting it to 0 will disable it.
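Inside that file the element looks something like this - from memory it sits under the Service section, so double-check against your copy; the value is in minutes and 0 disables recycling:

<Service>
    ...
    <RecycleTime>0</RecycleTime>
    ...
</Service>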
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1580/" ] }
15,681
I'd like to start experimenting with Cocoa and programming for Mac OSX. I'm not terribly concerned with Objective C syntax/constructs/behaviors at this point, but more curious as to an efficient setup in terms of an editor and/or IDE that will get me going quickly. Is there any IDE even remotely similar to Visual Studio (since that's where I've spent most of my time over the last 7 years) in terms of its solution/project concept? Any other tools, tips, suggestions and/or resources to get up and experimenting quickly? I'd like to avoid a lot of the intro stuff and get into things like "If you want to create a Mac desktop application, you can use Acme IDE and set up your project like this." I was afraid Xcode was going to be the answer! :P I tried playing around with that -- literally just getting it off the DVD and just diving in with no idea what to expect (before I even knew that you used Objective C as the language). I figured, the wise guy that I am, that I could just sort of fumble around and get a simple app working ... wrong. @Andrew - Thanks for the insight on those config settings. Based on my Xcode first impression, I think those may help.
I'd suggest you pick a fun little product and dive in. If you're looking for a book I'd suggest Cocoa Programming for Mac OS X which is a very good introduction both to Objective-C and Cocoa. Xcode is pretty much the de facto IDE and free with OS X. It should be on your original install DVD. It's good but not as good as Visual Studio (sorry, it's really not). As a long-time VS user I found the default Xcode config a little odd and hard to adjust to, particularly the way a new floating window would open for every source file. Some tweaks I found particularly helpful: Settings/General -> All-In-One (unifies editor/debugger window) Settings/General -> Open counterparts in same editor (single-window edit) Settings/Debugging - "In Editor Debugger Controls" Settings/Debugging - "Auto Clear Debug Console" Settings/Key-binding - lots of bindings to match VS (Ctrl+F5/Shift+F5, Shift+Home, Shift+End etc) I find the debugger has some annoying issues such as breakpoints not correctly mapping to lines and exceptions not being immediately trapped by the debugger. Nothing deal-breaking but a bit cumbersome. I would recommend that you make use of the new property syntax that was introduced for Objective-C 2.0. It makes for a heck of a lot less typing in many many places. It's limited to OS X 10.5 only though (yeah, language features are tied to OS versions which is a bit odd). Also don't be fooled into downplaying the differences between C/C++ and Objective-C. They're very much related but ARE different languages. Try and start Objective-C without thinking about how you'd do X,Y,Z in C/C++. It'll make it a lot easier.
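To give a flavour of the Objective-C 2.0 property syntax mentioned above, a minimal sketch:

@interface Person : NSObject {
    NSString *name;
}
@property (nonatomic, retain) NSString *name;
@end

@implementation Person
@synthesize name; // generates the getter/setter pair for you
@end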
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1875/" ] }
15,687
So, you are all ready to do a big SVN Commit and it bombs because you have inconsistent line endings in some of your files. Fun part is, you're looking at 1,000s of files spanning dozens of folders of different depths. What do you do?
I don't think the pre-commit hook can actually change the data that is being committed - it can disallow a commit, but I don't think it can do the conversion for you. It sounds like you want the property 'svn:eol-style' set to 'native' - this will automatically convert newlines to whatever is used on your platform (use 'CRLF', 'CR' or 'LF' to get those regardless of what the OS wants). You can use auto-properties so that all future files you create will have this property set (auto props are handled client-side, so you'd have to set this up for each user).
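Concretely, the property can be set on files already in the working copy, and auto-props configured in each client's ~/.subversion/config - a sketch:

svn propset svn:eol-style native path/to/file.c

# ~/.subversion/config (per client)
[miscellany]
enable-auto-props = yes

[auto-props]
*.c = svn:eol-style=native
*.h = svn:eol-style=native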
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/15687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/307/" ] }
15,690
It's been mentioned to me that I'll be the sole developer behind a large new system. Among other things I'll be designing a UI and database schema. I'm sure I'll receive some guidance, but I'd like to be able to knock their socks off. What can I do in the meantime to prepare, and what will I need to keep in mind when I sit down at my computer with the spec? A few things to keep in mind: I'm a college student at my first real programming job. I'll be using Java. We already have SCM set up with automated testing, etc...so tools are not an issue.
Do you know much about OOP? If so, look into Spring and Hibernate to keep your implementation clean and orthogonal . If you get that, you should find TDD a good way to keep your design compact and lean, especially since you have "automated testing" up and running. UPDATE: Looking at the first slew of answers, I couldn't disagree more. Particularly in the Java space, you should find plenty of mentors/resources on working out your application with Objects, not a database-centric approach . Database design is typically the first step for Microsoft folks (which I do daily, but am in a recovery program, er, Alt.Net). If you keep the focus on what you need to deliver to a customer and let your ORM figure out how to persist your objects, your design should be better.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1650/" ] }
15,708
One of my favourite tools for linux is lsof - a real swiss army knife! Today I found myself wondering which programs on a WinXP system had a specific file open. Is there any equivalent utility to lsof? Additionally, the file in question was over a network share so I'm not sure if that complicates matters.
Use Process Explorer from the Sysinternals Suite; the Find Handle or DLL function will let you search for the process with that file open.
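If you prefer a command line, the same Sysinternals suite includes handle.exe, which does roughly the same search - it lists each process holding an open handle whose name matches the argument:

handle.exe somefile.txt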
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/15708", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1848/" ] }
15,732
I'm generating some xml files that needs to conform to an xsd file that was given to me. How should I verify they conform?
The Java runtime library supports validation. Last time I checked this was the Apache Xerces parser under the covers. You should probably use a javax.xml.validation.Validator .

import javax.xml.XMLConstants;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.*;
import java.net.URL;
import org.xml.sax.SAXException;
import java.io.File; // used by the StreamSource below
import java.io.IOException;

...

URL schemaFile = new URL("http://host:port/filename.xsd");
// webapp example xsd:
// URL schemaFile = new URL("http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd");
// local file example:
// File schemaFile = new File("/location/to/localfile.xsd"); // etc.
Source xmlFile = new StreamSource(new File("web.xml"));
SchemaFactory schemaFactory = SchemaFactory
    .newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
try {
    Schema schema = schemaFactory.newSchema(schemaFile);
    Validator validator = schema.newValidator();
    validator.validate(xmlFile);
    System.out.println(xmlFile.getSystemId() + " is valid");
} catch (SAXException e) {
    System.out.println(xmlFile.getSystemId() + " is NOT valid reason:" + e);
} catch (IOException e) {
    // I/O failure reading the document or schema
}

The schema factory constant is the string http://www.w3.org/2001/XMLSchema which defines XSDs. The above code validates a WAR deployment descriptor against the URL http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd but you could just as easily validate against a local file. You should not use the DOMParser to validate a document (unless your goal is to create a document object model anyway). This will start creating DOM objects as it parses the document - wasteful if you aren't going to use them.
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/15732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1650/" ] }
15,734
I know that there is no official API for Google Analytics but is there a way to access Google Analytics Reports with C#?
Update : Google launched a Google Analytics API today. Google Analytics Blog - API Launched
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1890/" ] }
15,744
I'm wondering about instances when it makes sense to use #define and #if statements. I've known about it for a while, but never incorporated it into my way of coding. How exactly does this affect the compilation? Is #define the only thing that determines if the code is included when compiled? If I have #define DEBUGme as a custom symbol, is the only way to exclude it from compilation to remove this #define statement?
In C#, #define macros like some of Bernard's examples are not allowed. The only common use of #define / #if s in C# is for adding optional debug-only code. For example:

static void Main(string[] args)
{
#if DEBUG
    // this only compiles if in DEBUG
    Console.WriteLine("DEBUG");
#endif
#if !DEBUG
    // this only compiles if not in DEBUG
    Console.WriteLine("RELEASE");
#endif
    // this always compiles
    Console.ReadLine();
}
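For a custom symbol like DEBUGme from the question, the #define must appear at the very top of the file, before any code (or be supplied to the compiler with /define:DEBUGme); removing the #define or the compiler switch is indeed how you exclude the guarded code from compilation:

#define DEBUGme

using System;

class Program
{
    static void Main()
    {
#if DEBUGme
        Console.WriteLine("DEBUGme is defined");
#endif
    }
}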
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15744", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1888/" ] }
15,798
I need to validate an XML string (and not a file) against a DTD description file. How can that be done in Python?
Another good option is lxml's validation which I find quite pleasant to use. A simple example taken from the lxml site:

from StringIO import StringIO
from lxml import etree

dtd = etree.DTD(StringIO("""<!ELEMENT foo EMPTY>"""))
root = etree.XML("<foo/>")
print(dtd.validate(root))
# True

root = etree.XML("<foo>bar</foo>")
print(dtd.validate(root))
# False
print(dtd.error_log.filter_from_errors())
# <string>:1:0:ERROR:VALID:DTD_NOT_EMPTY: Element foo was declared EMPTY this one has content
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/15798", "https://Stackoverflow.com", "https://Stackoverflow.com/users/446497/" ] }
15,816
I use VNC to connect to a Linux workstation at work. At work I have a 20" monitor that runs at 1600x1200, while at home I use my laptop with its resolution of 1440x900.If I set the vncserver to run at 1440x900 I miss out on a lot of space on my monitor, whereas if I set it to run at 1600x1200 it doesn't fit on the laptop's screen, and I have to scroll it all the time. Is there any good way to resize a VNC session on the fly? My VNC server is RealVNC E4.x (I don't remember the exact version) running on SuSE64.
Real VNC server 4.4 includes support for Xrandr, which allows resizing the VNC desktop on the fly. Start the server with:

vncserver -geometry 1600x1200 -randr 1600x1200,1440x900,1024x768

Then resize with:

xrandr -s 1600x1200
xrandr -s 1440x900
xrandr -s 1024x768
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/15816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1084/" ] }
15,828
Is there a free or open source library to read Excel files (.xls) directly from a C# program? It does not need to be too fancy, just to select a worksheet and read the data as strings. So far, I've been using the Export to Unicode text function of Excel, and parsing the resulting (tab-delimited) file, but I'd like to eliminate the manual step.
var fileName = string.Format("{0}\\fileNameHere", Directory.GetCurrentDirectory());
var connectionString = string.Format("Provider=Microsoft.Jet.OLEDB.4.0; data source={0}; Extended Properties=Excel 8.0;", fileName);
var adapter = new OleDbDataAdapter("SELECT * FROM [workSheetNameHere$]", connectionString);
var ds = new DataSet();
adapter.Fill(ds, "anyNameHere");
DataTable data = ds.Tables["anyNameHere"];

This is what I usually use. It is a little different because I usually stick an AsEnumerable() at the end of the tables:

var data = ds.Tables["anyNameHere"].AsEnumerable();

as this lets me use LINQ to search and build structs from the fields.

var query = data.Where(x => x.Field<string>("phoneNumber") != string.Empty)
    .Select(x => new MyContact
    {
        firstName = x.Field<string>("First Name"),
        lastName = x.Field<string>("Last Name"),
        phoneNumber = x.Field<string>("Phone Number"),
    });
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/15828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/838/" ] }
15,838
Can the performance of this sequential search algorithm (taken from The Practice of Programming ) be improved using any of C's native utilities, e.g. if I set the i variable to be a register variable?

int lookup(char *word, char *array[])
{
    int i;

    for (i = 0; array[i] != NULL; i++)
        if (strcmp(word, array[i]) == 0)
            return i;
    return -1;
}
Yes, but only very slightly. A much bigger performance improvement can be achieved by using better algorithms (for example keeping the list sorted and doing a binary search). In general optimizing a given algorithm only gets you so far. Choosing a better algorithm (even if it's not completely optimized) can give you a considerable (order of magnitude) performance improvement.
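For example, if the array is kept sorted (and you know its length, since a binary search can't rely on the NULL terminator), the standard library's bsearch gives the binary search version - a sketch:

#include <stdlib.h>
#include <string.h>

/* each element is a char*, so the void* arguments point at char* */
static int cmp(const void *a, const void *b)
{
    return strcmp(*(const char **)a, *(const char **)b);
}

int lookup_sorted(char *word, char *array[], size_t n)
{
    char **hit = bsearch(&word, array, n, sizeof *array, cmp);
    return hit ? (int)(hit - array) : -1;
}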
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/15838", "https://Stackoverflow.com", "https://Stackoverflow.com/users/381/" ] }
15,851
After I was convinced that labeled breaks/continues are a total "nono" over here , I need help to remove the label out of my code. I have a square matrix and a vector that has the same length. The vector already has some values in it and, depending on the values in the matrix, the vector is changed in the loop. I hope the code fragment is basically understandable…

vectorLoop:
for( int idx = 0; idx < vectorLength; idx++) {
    if( conditionAtVectorPosition( v, idx ) ) continue vectorLoop;

    matrixLoop:
    for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
        if( anotherConditionAtVector( v, rowIdx ) ) continue matrixLoop;
        if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) continue vectorLoop;
    }
    setValueInVector( v, idx );
}

Please convince me that there is a more readable/better version without the labels.
Looking at the solutions presented so far: They all look less readable than the original, in that they involve spending more code on the mechanism of the code rather than on the algorithm itself. Some of them are broken, or were before they were edited. Most damning is the fact that people are having to think quite hard about how to write the code without labels and not break anything. Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly. Refactoring the relevant part of the code into a method is effectively a no-op: it rearranges how the code is laid out in the file, but has no effect on how it's executed. All of which makes me believe that, at least in the case of this question as phrased, the label is the correct solution and doesn't need to be refactored away. Certainly there are cases where labels are used incorrectly and should be refactored away. I just don't think it should be treated as some unbreakable rule.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/15851", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1870/" ] }
15,871
I work on a complex application where different teams work on their own modules with a degree of overlap. A while back we got a Mediawiki instance set up, partly at my prompting. I have a hard job getting people to actually use it, let alone contribute. I can see a lot of benefit in sharing information. It may at least reduce the times we reinvent the wheel. The wiki is not very structured, but I'm not sure that is a problem as long as you can search for what you need. Any hints?
Some tips: Any time someone sends information by email that really should be in a wiki, make a page for that topic and add what they put in the email. Then reply "Thanks for that info, I've put it into the wiki here so that it's easier to find in the future." Likewise, if you have information you need to share that should be in the wiki, put it there and just send an email with a link to it, rather than email people. When you ask people for information, phrase it so that putting such documentation in the wiki should be considered the default or standard: "I searched in the wiki but I couldn't find it. Have you put that info up there yet?" If you are the "wiki champion", make sure other people know how to use it, e.g. "Did I go through how to create a new page with you yet?" Edit the sidebar to make sure it is relevant to your work. Use "nav box" style templates on related pages for easier navigation. Put something like {{Special:NewPages/5}} on the front page, or recent changes, so that people can see the activity. Take a peek at Recent changes every few days or week, and if you notice someone adding information without being prodded, send them an email or drop by and give them a little compliment.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15871", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1895/" ] }
15,917
I'm using NHibernate on a project and I need to do data auditing. I found this article on codeproject which discusses the IInterceptor interface. What is your preferred way of auditing data? Do you use database triggers? Do you use something similar to what's dicussed in the article?
For NHibernate 2.0, you should also look at Event Listeners . These are the evolution of the IInterceptor interface and we use them successfully for auditing.
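As a rough sketch of the shape (interface and property names quoted from memory of the NHibernate 2.0 event API - double-check against the docs):

public class AuditUpdateListener : IPostUpdateEventListener
{
    public void OnPostUpdate(PostUpdateEvent @event)
    {
        // write @event.Entity, @event.OldState and @event.State to your audit log
    }
}

// registration at configuration time:
cfg.EventListeners.PostUpdateEventListeners =
    new IPostUpdateEventListener[] { new AuditUpdateListener() };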
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1122/" ] }
15,949
I have a tomcat instance setup but the database connection I have configured in context.xml keeps dying after periods of inactivity. When I check the logs I get the following error: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was68051 seconds ago. The last packet sent successfully to the server was 68051 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. Here is the configuration in context.xml: <Resource name="dataSourceName" auth="Container" type="javax.sql.DataSource" maxActive="100" maxIdle="30" maxWait="10000" username="username" password="********" removeAbandoned = "true" logAbandoned = "true" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://127.0.0.1:3306/databasename?autoReconnect=true&amp;useEncoding=true&amp;characterEncoding=UTF-8" /> I am using autoReconnect=true like the error says to do, but the connection keeps dying. I have never seen this happen before. I have also verified that all database connections are being closed properly.
Tomcat Documentation DBCP uses the Jakarta-Commons Database Connection Pool. It relies on a number of Jakarta-Commons components:

* Jakarta-Commons DBCP
* Jakarta-Commons Collections
* Jakarta-Commons Pool

This attribute may help you out:

removeAbandonedTimeout="60"

I'm using the same connection pooling stuff and I'm setting these properties to prevent the same thing; it's just not configured through Tomcat. But if the first thing doesn't work, try these:

testWhileIdle=true
timeBetweenEvictionRunsMillis=300000
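Folded into the Resource element from the question, that would look something like this (the validationQuery is what testWhileIdle runs against MySQL; the attribute names are standard Commons DBCP):

<Resource name="dataSourceName" auth="Container" type="javax.sql.DataSource"
          maxActive="100" maxIdle="30" maxWait="10000"
          username="username" password="********"
          removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
          testWhileIdle="true" timeBetweenEvictionRunsMillis="300000"
          validationQuery="SELECT 1"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://127.0.0.1:3306/databasename?autoReconnect=true" />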
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/15949", "https://Stackoverflow.com", "https://Stackoverflow.com/users/22/" ] }
15,995
Does anyone here have any useful code which uses the reduce() function in Python? Is there any code other than the usual + and * that we see in the examples? Refer to Fate of reduce() in Python 3000 by GvR
The other uses I've found for it besides + and * were with and and or, but now we have any and all to replace those cases. foldl and foldr do come up in Scheme a lot... Here's some cute usages: Flatten a list Goal: turn [[1, 2, 3], [4, 5], [6, 7, 8]] into [1, 2, 3, 4, 5, 6, 7, 8] . reduce(list.__add__, [[1, 2, 3], [4, 5], [6, 7, 8]], []) List of digits to a number Goal: turn [1, 2, 3, 4, 5, 6, 7, 8] into 12345678 . Ugly, slow way: int("".join(map(str, [1,2,3,4,5,6,7,8]))) Pretty reduce way: reduce(lambda a,d: 10*a+d, [1,2,3,4,5,6,7,8], 0)
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/15995", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1448/" ] }
16,007
Basically I have some code to check a specific directory to see if an image is there, and if so I want to assign the URL of the image to an ImageControl.

if (System.IO.Directory.Exists(photosLocation))
{
    string[] files = System.IO.Directory.GetFiles(photosLocation, "*.jpg");
    if (files.Length > 0)
    {
        // TODO: return the url of the first file found
    }
}
As far as I know, there's no method to do what you want; at least not directly. I'd store the photosLocation as a path relative to the application; for example: "~/Images/" . This way, you could use MapPath to get the physical location, and ResolveUrl to get the URL (with a bit of help from System.IO.Path ):

string photosLocationPath = HttpContext.Current.Server.MapPath(photosLocation);
if (Directory.Exists(photosLocationPath))
{
    string[] files = Directory.GetFiles(photosLocationPath, "*.jpg");
    if (files.Length > 0)
    {
        string filenameRelative = photosLocation + Path.GetFileName(files[0]);
        return Page.ResolveUrl(filenameRelative);
    }
}
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16007", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1762/" ] }
16,067
I have been mulling over writing a peak-fitting library for a while. I know Python fairly well and plan on implementing everything in Python to begin with but envisage that I may have to re-implement some core routines in a compiled language eventually. IIRC, one of Python's original remits was as a prototyping language, however Python is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran. What should I know about designing functions/classes which I envisage will have to interface into the compiled language? And how much of these potential problems are dealt with by libraries such as cTypes, bgen, SWIG , Boost.Python , Cython or Python SIP ? For this particular use case (a fitting library), I imagine allowing users to define mathematical functions (Guassian, Lorentzian etc.) as Python functions which can then to be passed an interpreted by the compiled code fitting library. Passing and returning arrays is also essential.
Finally a question that I can really put a valuable answer to :). I have investigated f2py, boost.python, swig, cython and pyrex for my work (PhD in optical measurement techniques). I used swig extensively, boost.python some and pyrex and cython a lot. I also used ctypes. This is my breakdown: Disclaimer : This is my personal experience. I am not involved with any of these projects. swig: does not play well with c++. It should, but name mangling problems in the linking step were a major headache for me on linux & Mac OS X. If you have C code and want it interfaced to python, it is a good solution. I wrapped the GTS for my needs and needed to write basically a C shared library which I could connect to. I would not recommend it. Ctypes: I wrote a libdc1394 (IEEE Camera library) wrapper using ctypes and it was a very straightforward experience. You can find the code on https://launchpad.net/pydc1394 . It is a lot of work to convert headers to python code, but then everything works reliably. This is a good way if you want to interface to an external library. Ctypes is also in the stdlib of python, so everyone can use your code right away. This is also a good way to play around with a new lib in python quickly. I can recommend it to interface to external libs. Boost.Python : Very enjoyable. If you already have C++ code of your own that you want to use in python, go for this. It is very easy to translate c++ class structures into python class structures this way. I recommend it if you have c++ code that you need in python. Pyrex/Cython: Use Cython, not Pyrex. Period. Cython is more advanced and more enjoyable to use. Nowadays, I do everything with cython that I used to do with SWIG or Ctypes. It is also the best way if you have python code that runs too slowly. The process is absolutely fantastic: you convert your python modules into cython modules, build them and keep profiling and optimizing as if it still was python (no change of tools needed). You can then mix in as much (or as little) C code with your python code. This is by far faster than having to rewrite whole parts of your application in C; you only rewrite the inner loop. Timings : ctypes has the highest call overhead (~700ns), followed by boost.python (322ns), then directly by swig (290ns). Cython has the lowest call overhead (124ns) and the best feedback on where it spends time (cProfile support!). The numbers are from my box calling a trivial function that returns an integer from an interactive shell; module import overhead is therefore not timed, only function call overhead is. It is therefore easiest and most productive to get python code fast by profiling and using cython. Summary : For your problem, use Cython ;). I hope this rundown will be useful for some people. I'll gladly answer any remaining questions. Edit : I forgot to mention: for numerical purposes (that is, connection to NumPy) use Cython; they have support for it (because they basically develop cython for this purpose). So this should be another +1 for your decision.
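As a tiny taste of the ctypes style (the library name below is Linux-specific; declaring argtypes/restype is the part that trips people up):

from ctypes import CDLL, c_double

libm = CDLL("libm.so.6")        # platform-specific shared library
libm.cos.argtypes = [c_double]  # declare the C signature
libm.cos.restype = c_double
print(libm.cos(0.0))            # 1.0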
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16067", "https://Stackoverflow.com", "https://Stackoverflow.com/users/199/" ] }
16,074
Is it possible to open a project in Visual Studio 2008 without opening all the files that were previously opened last time I had the project open. I have a habit of keeping many files open as I am working on them, so next time I open the project, it (very slowly) loads up a bunch of files into the editor that I may not even need open. I have searched through the settings and cannot find anything to stop this behavior.
Simply delete the .suo file. It contains the list of open files.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16074", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1749/" ] }
16,096
In WPF, how would I apply multiple styles to a FrameworkElement ? For instance, I have a control which already has a style. I also have a separate style which I would like to add to it without blowing away the first one. The styles have different TargetTypes, so I can't just extend one with the other.
I think the simple answer is that you can't do (at least in this version of WPF) what you are trying to do. That is, for any particular element only one Style can be applied. However, as others have stated above, maybe you can use BasedOn to help you out. Check out the following piece of loose xaml. In it you will see that I have a base style that is setting a property that exists on the base class of the element that I want to apply two styles to. And, in the second style which is based on the base style, I set another property. So, the idea here ... is if you can somehow separate the properties that you want to set ... according to the inheritance hierarchy of the element you want to set multiple styles on ... you might have a workaround.

<Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Page.Resources>
        <Style x:Key="baseStyle" TargetType="FrameworkElement">
            <Setter Property="HorizontalAlignment" Value="Left"/>
        </Style>
        <Style TargetType="Button" BasedOn="{StaticResource baseStyle}">
            <Setter Property="Content" Value="Hello World"/>
        </Style>
    </Page.Resources>
    <Grid>
        <Button Width="200" Height="50"/>
    </Grid>
</Page>

Hope this helps. Note: One thing in particular to note. If you change the TargetType in the second style (in the first set of xaml above) to ButtonBase , the two Styles do not get applied. However, check out the following xaml below to get around that restriction. Basically, it means you need to give the Style a key and reference it with that key.

<Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Page.Resources>
        <Style x:Key="baseStyle" TargetType="FrameworkElement">
            <Setter Property="HorizontalAlignment" Value="Left"/>
        </Style>
        <Style x:Key="derivedStyle" TargetType="ButtonBase" BasedOn="{StaticResource baseStyle}">
            <Setter Property="Content" Value="Hello World"/>
        </Style>
    </Page.Resources>
    <Grid>
        <Button Width="200" Height="50" Style="{StaticResource derivedStyle}"/>
    </Grid>
</Page>
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/16096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/93/" ] }
16,100
What's the best way to convert a string to an enumeration value in C#? I have an HTML select tag containing the values of an enumeration. When the page is posted, I want to pick up the value (which will be in the form of a string) and convert it to the corresponding enumeration value. In an ideal world, I could do something like this: StatusEnum MyStatus = StatusEnum.Parse("Active"); but that isn't valid code.
In .NET Core and .NET Framework ≥4.0 there is a generic parse method :

Enum.TryParse("Active", out StatusEnum myStatus);

This also includes C#7's new inline out variables, so this does the try-parse, conversion to the explicit enum type and initialises+populates the myStatus variable. If you have access to C#7 and the latest .NET this is the best way. Original Answer In .NET it's rather ugly (until 4 or above):

StatusEnum MyStatus = (StatusEnum) Enum.Parse(typeof(StatusEnum), "Active", true);

I tend to simplify this with:

public static T ParseEnum<T>(string value)
{
    return (T) Enum.Parse(typeof(T), value, true);
}

Then I can do:

StatusEnum MyStatus = EnumUtil.ParseEnum<StatusEnum>("Active");

One option suggested in the comments is to add an extension, which is simple enough:

public static T ToEnum<T>(this string value)
{
    return (T) Enum.Parse(typeof(T), value, true);
}

StatusEnum MyStatus = "Active".ToEnum<StatusEnum>();

Finally, you may want to have a default enum to use if the string cannot be parsed:

public static T ToEnum<T>(this string value, T defaultValue)
{
    if (string.IsNullOrEmpty(value))
    {
        return defaultValue;
    }

    T result;
    return Enum.TryParse<T>(value, true, out result) ? result : defaultValue;
}

Which makes this the call:

StatusEnum MyStatus = "Active".ToEnum(StatusEnum.None);

However, I would be careful adding an extension method like this to string as (without namespace control) it will appear on all instances of string whether they hold an enum or not (so 1234.ToString().ToEnum(StatusEnum.None) would be valid but nonsensical). It's often best to avoid cluttering Microsoft's core classes with extra methods that only apply in very specific contexts unless your entire development team has a very good understanding of what those extensions do.
{ "score": 12, "source": [ "https://Stackoverflow.com/questions/16100", "https://Stackoverflow.com", "https://Stackoverflow.com/users/203/" ] }
16,113
Can I get some recommendations (preferably with some reasons) for good log analysis software for Apache 2.2 access log files? I have heard of Webalizer and AWStats , but have never really used any of them, and would like to know: What they can do Why they are useful Interesting uses for them Any and all comments and thoughts are welcome.
AWStats and Webalizer are both good and free (I think both free speech as well as free beer). I generally prefer the look of AWStats - it has a nice modern look whereas Webalizer looks like something created in about 1992. They both give roughly the same information, which includes: Most frequently accessed pages Which hosts (IPs and Domain Names) visitors come from Proportion of users using different browsers Proportion of downloads of different file types All of this information is usually viewable on an hour by hour, day by day, month by month and year by year basis. Normally the raw data is available but also with bar charts and pie charts. Both AWStats and Webalizer will (I think) try and work out where your visitors come from by using services such as GeoIP, although I never bothered to set this up. Some also try to work out what order people have visited pages in and things like that - but that is very difficult to do, so the results are guesses at best. I generally find them both useful - even if just to get an overview of what is going on with my server and who is accessing it. They are both relatively easy to install - although I seem to remember Webalizer being a little easier than AWStats, and they both have varied configuration options to let you decide exactly what you want to get out of them. For more information see their sites at awstats.sourceforge.net/ and http://www.webalizer.org/ . Hope that helps. Robin
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16113", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277/" ] }
16,140
What makes a module/service/bit of application functionality a particularly good candidate for an OSGi module? I'm interested in using OSGi in my applications. We're a Java shop and we use Spring pretty extensively, so I'm leaning toward using Spring Dynamic Modules for OSGi(tm) Service Platforms . I'm looking for a good way to incorporate a little bit of OSGi into an application as a trial. Has anyone here used this or a similar OSGi technology? Are there any pitfalls? @Nicolas - Thanks, I've seen that one. It's a good tutorial, but I'm looking more for ideas on how to do my first "real" OSGi bundle, as opposed to a Hello World example. @david - Thanks for the link! Ideally, with a greenfield app, I'd design the whole thing to be dynamic. What I'm looking for right now, though, is to introduce it in a small piece of an existing application. Assuming I can pick any piece of the app, what are some factors to consider that would make that piece better or worse as an OSGi guinea pig?
Well, since you cannot have one part OSGi and one part non-OSGi, you'll need to make your entire app OSGi. In its simplest form you make a single OSGi bundle out of your entire application. Clearly this is not a best practice but it can be useful to get a feel for deploying a bundle in an OSGi container (Equinox, Felix, Knopflerfish, etc). To take it to the next level you'll want to start splitting your app into components; components should typically have a set of responsibilities that can be isolated from the rest of your application through a set of interfaces and class dependencies. Identifying these purely by hand can range from rather straightforward for a well designed, highly cohesive but loosely coupled application to a nightmare for interlocked source code that you are not familiar with. Some help can come from tools like JDepend which can show you the coupling of Java packages against other packages/classes in your system. A package with low efferent coupling should be easier to extract into an OSGi bundle than one with high efferent coupling. Even more architectural insight can be had with pro tools like Structure 101 . Purely on a technical level, working daily with an application that consists of 160 OSGi bundles and using Spring DM, I can confirm that the transition from "normal" Spring to Spring DM is largely pain free. The extra namespace and the fact that you can (and should) isolate your OSGi-specific Spring configuration in separate files makes it even easier to have both with and without OSGi deployment scenarios. OSGi is a deep and wide component model; documentation I recommend: OSGi R4 Specification : Get the PDFs of the Core and Compendium specification, they are canonical, authoritative and very readable. Have a shortcut to them handy at all times, you will consult them. Read up on OSGi best practices, there is a large set of things you can do but a somewhat smaller set of things you should do and there are some things you should never do (DynamicImport: * for example). Some links: OSGi best practices and using Apache Felix Peter Kriens and BJ Hargrave in a Sun presentation on OSGi best practices one key OSGi concept is Services, learn why and how they supplant the Listener pattern with the Whiteboard pattern The Spring DM Google Group is very responsive and friendly in my experience The Spring DM Google Group is no longer active and has moved to Eclipse.org as the Gemini Blueprint project which has a forum here .
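On the mechanics: a bundle is just a jar with extra manifest headers, e.g. a minimal MANIFEST.MF along these lines (the names are placeholders):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.mybundle
Bundle-Version: 1.0.0
Export-Package: com.example.mybundle.api
Import-Package: org.osgi.framework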
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/765/" ] }
16,142
I've seen these words a lot around Subversion (and I guess general repository) discussions. I have been using SVN for my projects for the last few years, but I've never grasped the complete concept of these directories. What do they mean?
Hmm, not sure I agree with Nick re a tag being similar to a branch. A tag is just a marker.

Trunk would be the main body of development, originating from the start of the project until the present.

Branch will be a copy of code derived from a certain point in the trunk that is used for applying major changes to the code while preserving the integrity of the code in the trunk. If the major changes work according to plan, they are usually merged back into the trunk.

Tag will be a point in time on the trunk or a branch that you wish to preserve. The two main reasons for preservation would be that either this is a major release of the software, whether alpha, beta, RC or RTM, or this is the most stable point of the software before major revisions on the trunk were applied.

In open source projects, major branches that are not accepted into the trunk by the project stakeholders can become the bases for forks -- e.g., totally separate projects that share a common origin with other source code.

The branch and tag subtrees are distinguished from the trunk in the following ways:

Subversion allows sysadmins to create hook scripts which are triggered for execution when certain events occur; for instance, committing a change to the repository. It is very common for a typical Subversion repository implementation to treat any path containing "/tag/" as write-protected after creation; the net result is that tags, once created, are immutable (at least to "ordinary" users). This is done via the hook scripts, which enforce the immutability by preventing further changes if a tag is a parent node of the changed object.

Subversion also has added features, since version 1.5, relating to "branch merge tracking", so that changes committed to a branch can be merged back into the trunk with support for incremental, "smart" merging.
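To illustrate the hook mechanism, here is a minimal pre-commit hook sketch in Python. It is only an assumption of how such a policy might look, not a standard script; the "tags/" path test is deliberately naive (it still lets additions through) and would need to match your actual repository layout:

#!/usr/bin/env python
# Pre-commit hook sketch: reject modifications to existing paths under tags/.
import subprocess
import sys

repos, txn = sys.argv[1], sys.argv[2]
changed = subprocess.check_output(["svnlook", "changed", "-t", txn, repos]).decode()
for line in changed.splitlines():
    action, path = line.split(None, 1)   # e.g. "U   tags/1.0/foo.c"
    if "tags/" in path and action != "A":
        sys.stderr.write("Tags are read-only once created: %s\n" % path)
        sys.exit(1)
sys.exit(0)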
{ "score": 11, "source": [ "https://Stackoverflow.com/questions/16142", "https://Stackoverflow.com", "https://Stackoverflow.com/users/914/" ] }
16,167
The Visual Studio refactoring support for C# is quite good nowadays (though not half as good as some Java IDEs I've seen already), but I'm really missing C++ support. I have seen Refactor! and am currently trying it out, but maybe one of you guys knows a better tool or plugin? I've been working with Visual Assist X now for a week or two and got totally addicted. Thanks for the tip, I'll try to convince my boss to get me a license at work too. I've been bughunting for a few days since Visual Assist X kept messing up my Visual Studio after a few specific refactorings. It took me (and customer support) a week to hunt down, but let's say for now that Visual Assist X is not a good combination with ClipX.
Visual Assist X by Whole Tomato software is not free, but it's absolutely worth the money if you use Visual Studio for C++. http://www.wholetomato.com/
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1830/" ] }
16,233
The collection of fonts available to a web developer is depressingly limited. I remember reading long ago about TrueDoc, as a way of shipping fonts alongside a website - but it seems to have languished. Has anybody used this, or something similar? Is it supported by enough browsers? Am I missing a good solution? Note that a responsible web developer does not use fonts that are only available on Windows (and especially ones that are only available on Vista), nor do they use a technology that isn't supported by at least the majority of browsers. Update: As several people have pointed out, there's nothing wrong with providing a list of fallback fonts for people who don't have the specific font you use. I do in fact always do this, and didn't mean to suggest that this was wrong. While my question was badly phrased, what I meant was that a designer should not make too many assumptions about what the client will have available. You should plan for how all users will see your site, not just for people using your own preferred setup.
Safari, and to a lesser extent, Firefox 3 have support for @font-face in CSS, which lets you use custom fonts. You need to have the appropriate licence to distribute the font files though. These articles explain it in more detail: http://www.css3.info/preview/web-fonts-with-font-face/ http://www.alistapart.com/articles/cssatten http://www.sitepoint.com/blogs/2008/07/30/custom-web-fonts-pick-your-poison/
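For reference, the basic pattern looks something like this; the font name and file path are placeholders, and browsers without @font-face support simply fall through to the rest of the font stack:

@font-face {
  font-family: "MyCustomFont";           /* placeholder name */
  src: url("/fonts/mycustomfont.ttf");   /* placeholder path */
}

body {
  font-family: "MyCustomFont", Georgia, serif;
}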
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16233", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1000/" ] }
16,248
I'd like to implement a way of recording the version of a project within code, so that it can be used when testing and to help track bugs. It seems the best version number to use would just be the current revision number from Subversion. Is there an easy way to hook this number into a (C++ in my case) header file or something, which I can then get at in code? I guess this is a post-commit hook or something? Does anyone have any experience of implementing this (with code to share, please?), or can you suggest a better alternative? Thanks.
Two ways:

Embed $Id$ or $Revision$ within the code. Then set the svn:keywords="Id Revision" property on the file. This will give you the last modified revision of that source file. Good for smaller projects and scripts.

Alternatively, use a Makefile-driven process and the command line tool svnversion. (Language specific - this should work for C/C++)

echo -n '#define VERSION "1.0.1-' > version.h
svnversion -n . >> version.h
echo '"' >> version.h

(Note the embedded double quotes: without them the macro would not expand to a valid C string literal.)

Or some more complex build script with sed and version.h.in. Then just #include "version.h".

That will give you the repository version number, which will change with every commit / update, and is probably a more appropriate version number for most projects.

Note: I also used a human readable version string that I manually update. The example would give:

Version: 1.0.1-r13445

~J
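To round it out, a tiny consumer of the generated header might look like this (a sketch; the macro name matches the snippet above):

#include <stdio.h>
#include "version.h"

int main(void)
{
    /* VERSION comes from the generated version.h, e.g. "1.0.1-r13445" */
    printf("Build version: %s\n", VERSION);
    return 0;
}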
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1169/" ] }
16,298
I have 2 hosts and I would like to point a subdomain on host one to a subdomain on host two:

subdomain.hostone.com --> subdomain.hosttwo.com

I added a CNAME record to host one that points to subdomain.hosttwo.com, but all I get is a '400 Bad Request' error. Can anyone see what I'm doing wrong?
Try changing it to "subdomain -> subdomain.hosttwo.com". The CNAME is an alias for a certain domain, so when you go to the control panel for hostone.com, you shouldn't have to enter the whole name into the CNAME alias. As far as the error you are getting, can you log onto subdomain.hosttwo.com and check the logs?
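In BIND-style zone file terms, the record in hostone.com's zone would look something like this; note the trailing dot, without which the target would be expanded relative to the zone:

subdomain    IN    CNAME    subdomain.hosttwo.com.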
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16298", "https://Stackoverflow.com", "https://Stackoverflow.com/users/117/" ] }
16,306
What would be the easiest way to separate the directory name from the file name when dealing with SaveFileDialog.FileName in C#?
Use: System.IO.Path.GetDirectoryName(saveDialog.FileName) (and the corresponding System.IO.Path.GetFileName ). The Path class is really rather useful.
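For example (the file names here are made up):

string full = saveDialog.FileName;                    // e.g. C:\docs\report.txt
string dir  = System.IO.Path.GetDirectoryName(full);  // C:\docs
string file = System.IO.Path.GetFileName(full);       // report.txt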
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16306", "https://Stackoverflow.com", "https://Stackoverflow.com/users/41/" ] }
16,340
Say I have an object that stores a byte array and I want to be able to efficiently generate a hashcode for it. I've used the cryptographic hash functions for this in the past because they are easy to implement, but they are doing a lot more work than they should to be cryptographically oneway, and I don't care about that (I'm just using the hashcode as a key into a hashtable). Here's what I have today:

struct SomeData : IEquatable<SomeData>
{
    private readonly byte[] data;

    public SomeData(byte[] data)
    {
        if (null == data || data.Length <= 0)
        {
            throw new ArgumentException("data");
        }
        this.data = new byte[data.Length];
        Array.Copy(data, this.data, data.Length);
    }

    public override bool Equals(object obj)
    {
        return obj is SomeData && Equals((SomeData)obj);
    }

    public bool Equals(SomeData other)
    {
        if (other.data.Length != data.Length)
        {
            return false;
        }
        for (int i = 0; i < data.Length; ++i)
        {
            if (data[i] != other.data[i])
            {
                return false;
            }
        }
        return true;
    }

    public override int GetHashCode()
    {
        return BitConverter.ToInt32(new MD5CryptoServiceProvider().ComputeHash(data), 0);
    }
}

Any thoughts?

dp: You are right that I missed a check in Equals; I have updated it. Using the existing hashcode from the byte array will result in reference equality (or at least that same concept translated to hashcodes). For example:

byte[] b1 = new byte[] { 1 };
byte[] b2 = new byte[] { 1 };
int h1 = b1.GetHashCode();
int h2 = b2.GetHashCode();

With that code, despite the two byte arrays having the same values within them, they are referring to different parts of memory and will result in (probably) different hash codes. I need the hash codes for two byte arrays with the same contents to be equal.
The hash code of an object does not need to be unique. The checking rule is: Are the hash codes equal? Then call the full (slow) Equals method. Are the hash codes not equal? Then the two items are definitely not equal. All you want is a GetHashCode algorithm that splits up your collection into roughly even groups - it shouldn't form the key as the HashTable or Dictionary<> will need to use the hash to optimise retrieval. How long do you expect the data to be? How random? If lengths vary greatly (say for files) then just return the length. If lengths are likely to be similar look at a subset of the bytes that varies. GetHashCode should be a lot quicker than Equals , but doesn't need to be unique. Two identical things must never have different hash codes. Two different objects should not have the same hash code, but some collisions are to be expected (after all, there are more permutations than possible 32 bit integers).
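Following that advice, one possible non-cryptographic GetHashCode for the struct in the question could look like the sketch below; it assumes the classic multiply-by-prime accumulation is good enough for your data distribution:

public override int GetHashCode()
{
    unchecked // overflow is fine for hash codes
    {
        int hash = 17;
        // For very long arrays, hashing a sampled subset of the bytes
        // (plus the length) may be a better speed trade-off.
        foreach (byte b in data)
        {
            hash = hash * 31 + b;
        }
        return hash;
    }
}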
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1948/" ] }
16,363
Starting with the 2005 version, Visual Studio spawns a web server for every web project in a solution when you start a debugging session. I have a solution with 15 projects, so this takes a while and is a waste of resources. Is there a way to configure it differently, besides just using IIS?
Some details here on why it does it and how you can overcome it: http://vishaljoshi.blogspot.com/2007/12/tips-tricks-start-up-options-and.html There are instances when you might have many web applications or web sites in the same solution and you may be actually debugging only one of them... In such scenario it might not be desirable to have multiple instances of ASP.NET Development Server running... VS provides an explicit setting in the property grid of web application/site called Development Web Server - "Always Start When Debugging" which is set to True by default... If you set this Property to be False only one web server instance will be created for the start up web project...
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1940/" ] }
16,396
Is there a way to easily convert Crystal Reports reports to Reporting Services RDL format? We have quite a few reports that will be needing conversion soon. I know about the manual process (which is basically rebuilding all your reports from scratch in SSRS), but my searches pointed to a few possibilities with automatic conversion "acceleration" with several consulting firms. (As described on .... - link broken). Do any of you have any valid experiences or recomendations regarding this particular issue?Are there any tools around that I do not know about?
I have searched previously for this, with no luck. There do not seem to be any tools available for this conversion, so the manual method becomes the only one. And yes, there are consulting firms who will do the manual work for you, but they still do it manually. Crystal Reports and Reporting Services have different architectural styles, making it a difficult task for a conversion tool, so I view it as unlikely that someone will build one anytime soon.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1165587/" ] }
16,413
Problem: I have an address field from an Access database which has been converted to SQL Server 2005. This field has everything all in one field. I need to parse out the address's individual sections into their appropriate fields in a normalized table. I need to do this for approximately 4,000 records, and it needs to be repeatable.

Assumptions:

Assume an address in the US (for now)

assume that the input string will sometimes contain an addressee (the person being addressed) and/or a second street address (i.e. Suite B)

states may be abbreviated

zip code could be standard 5 digits or zip+4

there are typos in some instances

UPDATE: In response to the questions posed: standards were not universally followed; I need to store the individual values, not just geocode; and "errors" means typos (corrected above).

Sample Data:

A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947
11522 Shawnee Road, Greenwood DE 19950
144 Kings Highway, S.W. Dover, DE 19901
Intergrated Const. Services 2 Penns Way Suite 405 New Castle, DE 19720
Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958
Nichols Excavation 2742 Pulaski Hwy Newark, DE 19711
2284 Bryn Zion Road, Smyrna, DE 19904
VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21
580 North Dupont Highway Dover, DE 19901
P.O. Box 778 Dover, DE 19903
I've done a lot of work on this kind of parsing. Because there are errors you won't get 100% accuracy, but there are a few things you can do to get most of the way there, and then do a visual BS test. Here's the general way to go about it. It's not code, because it's pretty academic to write it, there's no weirdness, just lots of string handling. (Now that you've posted some sample data, I've made some minor changes) Work backward. Start from the zip code, which will be near the end, and in one of two known formats: XXXXX or XXXXX-XXXX. If this doesn't appear, you can assume you're in the city, state portion, below. The next thing, before the zip, is going to be the state, and it'll be either in a two-letter format, or as words. You know what these will be, too -- there's only 50 of them. Also, you could soundex the words to help compensate for spelling errors. before that is the city, and it's probably on the same line as the state. You could use a zip-code database to check the city and state based on the zip, or at least use it as a BS detector. The street address will generally be one or two lines. The second line will generally be the suite number if there is one, but it could also be a PO box. It's going to be near-impossible to detect a name on the first or second line, though if it's not prefixed with a number (or if it's prefixed with an "attn:" or "attention to:" it could give you a hint as to whether it's a name or an address line. I hope this helps somewhat.
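To make the shape of that concrete, here is a deliberately naive C# sketch of the backward pass. The state table is truncated and every heuristic is an assumption to tune against your own data; it ignores addressees, suites, and typos entirely:

using System;
using System.Text.RegularExpressions;

static class AddressSketch
{
    static readonly Regex Zip = new Regex(@"\d{5}(-\d{4})?\s*$");
    // Truncated on purpose; a real table holds all 50 codes plus full names.
    static readonly string[] States = { "DE", "MD", "PA", "NJ" };

    public static void Parse(string raw)
    {
        string zip = "", state = "", city = "", street = raw.Trim();

        // 1. Zip code: one of two known formats, anchored at the end.
        Match m = Zip.Match(street);
        if (m.Success)
        {
            zip = m.Value.Trim();
            street = street.Substring(0, m.Index).TrimEnd(' ', ',');
        }

        // 2. State: the last token, checked against the known list.
        foreach (string s in States)
        {
            if (street.EndsWith(" " + s, StringComparison.OrdinalIgnoreCase))
            {
                state = s;
                street = street.Substring(0, street.Length - s.Length - 1).TrimEnd(' ', ',');
                break;
            }
        }

        // 3. City: whatever follows the last comma (crude; a zip-code
        //    database lookup makes a far better BS detector).
        int comma = street.LastIndexOf(',');
        if (comma >= 0)
        {
            city = street.Substring(comma + 1).Trim();
            street = street.Substring(0, comma).Trim();
        }

        Console.WriteLine("street: {0} | city: {1} | state: {2} | zip: {3}",
                          street, city, state, zip);
    }
}

Run over the sample rows, this gets most of them right; the ones it misses (no comma before the city, the truncated zip) are exactly where the visual BS test earns its keep.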
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/16413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/149/" ] }
16,432
Let's say that you want to output or concat strings. Which of the following styles do you prefer?

var p = new { FirstName = "Bill", LastName = "Gates" };
Console.WriteLine("{0} {1}", p.FirstName, p.LastName);
Console.WriteLine(p.FirstName + " " + p.LastName);

Would you rather use format, or do you simply concat strings? What is your favorite? Is one of these hurting your eyes? Do you have any rational arguments to use one and not the other? I'd go for the second one.
Try this code. It's a slightly modified version of your code.

I removed Console.WriteLine as it's probably a few orders of magnitude slower than what I'm trying to measure.

I'm starting the Stopwatch before the loop and stopping it right after; this way I'm not losing precision if the function takes for example 26.4 ticks to execute.

The way you divided the result by some iterations was wrong. See what happens if you have 1,000 milliseconds and 100 milliseconds. In both situations, you will get 0 ms after dividing it by 1,000,000.

Code:

using System;
using System.Diagnostics;
using System.Threading;

Stopwatch s = new Stopwatch();
var p = new { FirstName = "Bill", LastName = "Gates" };
int n = 1000000;
long fElapsedMilliseconds = 0, fElapsedTicks = 0, cElapsedMilliseconds = 0, cElapsedTicks = 0;
string result;

s.Start();
for (var i = 0; i < n; i++)
    result = (p.FirstName + " " + p.LastName);
s.Stop();
cElapsedMilliseconds = s.ElapsedMilliseconds;
cElapsedTicks = s.ElapsedTicks;
s.Reset();

s.Start();
for (var i = 0; i < n; i++)
    result = string.Format("{0} {1}", p.FirstName, p.LastName);
s.Stop();
fElapsedMilliseconds = s.ElapsedMilliseconds;
fElapsedTicks = s.ElapsedTicks;
s.Reset();

Console.Clear();
Console.WriteLine(n.ToString() + " x result = string.Format(\"{0} {1}\", p.FirstName, p.LastName); took: " + (fElapsedMilliseconds) + "ms - " + (fElapsedTicks) + " ticks");
Console.WriteLine(n.ToString() + " x result = (p.FirstName + \" \" + p.LastName); took: " + (cElapsedMilliseconds) + "ms - " + (cElapsedTicks) + " ticks");
Thread.Sleep(4000);

Those are my results:

1000000 x result = string.Format("{0} {1}", p.FirstName, p.LastName); took: 618ms - 2213706 ticks
1000000 x result = (p.FirstName + " " + p.LastName); took: 166ms - 595610 ticks
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/16432", "https://Stackoverflow.com", "https://Stackoverflow.com/users/920/" ] }
16,473
Say we have the following method:

private MyObject foo = new MyObject();

// and later in the class
public void PotentialMemoryLeaker()
{
    int firedCount = 0;
    foo.AnEvent += (o, e) => { firedCount++; Console.Write(firedCount); };
    foo.MethodThatFiresAnEvent();
}

If the class with this method is instantiated and the PotentialMemoryLeaker method is called multiple times, do we leak memory? Is there any way to unhook that lambda event handler after we're done calling MethodThatFiresAnEvent?
Yes, save it to a variable and unhook it.

DelegateType evt = (o, e) => { firedCount++; Console.Write(firedCount); };
foo.AnEvent += evt;
foo.MethodThatFiresAnEvent();
foo.AnEvent -= evt;

And yes, if you don't, you'll leak memory, as you'll hook up a new delegate object each time. You'll also notice this because each time you call this method, it'll dump to the console an increasing number of lines (not just an increasing number, but for one call to MethodThatFiresAnEvent it'll dump any number of items, once for each hooked-up anonymous method).
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
16,483
How can I convince Firefox (3.0.1, if it matters) to send an If-Modified-Since header in an HTTPS request? It sends the header if the request uses plain HTTP and my server dutifully honors it. But when I request the same resource from the same server using HTTPS instead (i.e., simply changing the http:// in the URL to https://) then Firefox does not send an If-Modified-Since header at all. Is this behavior mandated by the SSL spec or something?

Here are some example HTTP and HTTPS request/response pairs, pulled using the Live HTTP Headers Firefox extension, the interesting differences being the If-Modified-Since / If-None-Match request headers and the 304 vs. 200 response:

HTTP request/response:

http://myserver.com:30000/scripts/site.js

GET /scripts/site.js HTTP/1.1
Host: myserver.com:30000
User-Agent: Mozilla/5.0 (...) Gecko/2008070206 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
If-Modified-Since: Tue, 19 Aug 2008 15:57:30 GMT
If-None-Match: "a0501d1-300a-454d22526ae80"-gzip
Cache-Control: max-age=0

HTTP/1.x 304 Not Modified
Date: Tue, 19 Aug 2008 15:59:23 GMT
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8
Connection: Keep-Alive
Keep-Alive: timeout=5, max=99
Etag: "a0501d1-300a-454d22526ae80"-gzip

HTTPS request/response:

https://myserver.com:30001/scripts/site.js

GET /scripts/site.js HTTP/1.1
Host: myserver.com:30001
User-Agent: Mozilla/5.0 (...) Gecko/2008070206 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive

HTTP/1.x 200 OK
Date: Tue, 19 Aug 2008 16:00:14 GMT
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8
Last-Modified: Tue, 19 Aug 2008 15:57:30 GMT
Etag: "a0501d1-300a-454d22526ae80"-gzip
Accept-Ranges: bytes
Content-Encoding: gzip
Content-Length: 3766
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/javascript

UPDATE: Setting browser.cache.disk_cache_ssl to true did the trick (which is odd because, as Nickolay points out, there's still the memory cache). Adding a "Cache-control: public" header to the response also worked. Thanks!
HTTPS responses are not cached on disk as a security precaution, and it seems that this indeed affects the If-Modified-Since behavior (glancing over the code).

Try setting the Firefox preference (in about:config) browser.cache.disk_cache_ssl to true. If that helps, try sending a Cache-Control: public header in your response.

UPDATE: Firefox behavior was changed for Gecko 2.0 (Firefox 4) -- HTTPS content is now cached.
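If you control the server, and assuming Apache with mod_headers enabled (the Server header in the question suggests Apache 2.2), that response header can be added with a one-liner:

Header set Cache-Control "public"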
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/164/" ] }
16,491
How do you restore a database backup using SQL Server 2005 over the network? I recall doing this before but there was something odd about the way you had to do it.
The database is often running as a service under an account with no network access. If this is the case, then you wouldn't be able to restore directly over the network. Either the backup needs to be copied to the local machine or the database service needs to run as a user with the proper network access.
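Assuming the service account does have network access, restoring straight from a UNC path is plain T-SQL; every name and path below is a placeholder:

RESTORE DATABASE MyDb
FROM DISK = '\\fileserver\backups\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'C:\SQLData\MyDb.mdf',
     MOVE 'MyDb_Log'  TO 'C:\SQLData\MyDb_log.ldf';

Run RESTORE FILELISTONLY FROM DISK = '\\fileserver\backups\MyDb.bak' first if you need to discover the logical file names for the MOVE clauses.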
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1545/" ] }
16,501
For a person without a comp-sci background, what is a lambda in the world of Computer Science?
Lambda comes from the Lambda Calculus and refers to anonymous functions in programming.

Why is this cool? It allows you to write quick throw away functions without naming them. It also provides a nice way to write closures. With that power you can do things like this.

Python

def adder(x):
    return lambda y: x + y

add5 = adder(5)
add5(1)  # => 6

As you can see from the snippet of Python, the function adder takes in an argument x, and returns an anonymous function, or lambda, that takes another argument y. That anonymous function allows you to create functions from functions. This is a simple example, but it should convey the power lambdas and closures have.

Examples in other languages

Perl 5

sub adder {
    my ($x) = @_;
    return sub {
        my ($y) = @_;
        $x + $y
    }
}

my $add5 = adder(5);
print &$add5(1) == 6 ? "ok\n" : "not ok\n";

JavaScript

var adder = function (x) {
    return function (y) {
        return x + y;
    };
};
add5 = adder(5);
add5(1) == 6;

JavaScript (ES6)

const adder = x => y => x + y;
add5 = adder(5);
add5(1) == 6;

Scheme

(define adder
  (lambda (x)
    (lambda (y)
      (+ x y))))
(define add5 (adder 5))
(add5 1) ; => 6

C# 3.5 or higher

Func<int, Func<int, int>> adder = (int x) => (int y) => x + y; // `int` declarations optional
Func<int, int> add5 = adder(5);
var add6 = adder(6); // Using implicit typing
Debug.Assert(add5(1) == 6);
Debug.Assert(add6(-1) == 5);

// Closure example
int yEnclosed = 1;
Func<int, int> addWithClosure = (x) => x + yEnclosed;
Debug.Assert(addWithClosure(2) == 3);

Swift

func adder(x: Int) -> (Int) -> Int {
    return { y in x + y }
}
let add5 = adder(5)
add5(1) // => 6

PHP

$a = 1;
$b = 2;
$lambda = fn () => $a + $b;
echo $lambda();

Haskell

(\x y -> x + y)

Java (see this post)

// The following is an example of Predicate:
// a functional interface that takes an argument
// and returns a boolean primitive type.

Predicate<Integer> pred = x -> x % 2 == 0; // Tests if the parameter is even.
boolean result = pred.test(4); // true

Lua

adder = function(x)
    return function(y)
        return x + y
    end
end
add5 = adder(5)
add5(1) == 6 -- true

Kotlin

val pred = { x: Int -> x % 2 == 0 }
val result = pred(4) // true

Ruby

Ruby is slightly different in that you cannot call a lambda using the exact same syntax as calling a function, but it still has lambdas.

def adder(x)
  lambda { |y| x + y }
end
add5 = adder(5)
add5[1] == 6

Ruby being Ruby, there is a shorthand for lambdas, so you can define adder this way:

def adder(x)
  -> y { x + y }
end

R

adder <- function(x) {
  function(y) x + y
}
add5 <- adder(5)
add5(1)
#> [1] 6
{ "score": 11, "source": [ "https://Stackoverflow.com/questions/16501", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1344/" ] }
16,550
Possible Duplicate: NAnt or MSBuild, which one to choose and when? What is the best build tool for .NET? I currently use NAnt, but only because I have experience with Ant. Is MSBuild preferred?
We actually use a combination of NAnt and MSBuild with CruiseControl. NAnt is used for script flow control and calls MSBuild to compile projects. After the physical build is triggered, NAnt is used to publish the individual project build outputs to a shared location.

I am not sure this is the best process. I think many of us are still looking for a great build tool. One promising thing I heard recently on .NET Rocks, episode 362, is James Kovacs' PSake, a build system he based entirely on PowerShell. It sounds really promising since what you can do with PowerShell is fairly limitless in theory.
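For the curious, the NAnt-to-MSBuild handoff can be as simple as an exec task. This is a hypothetical sketch; the solution name and arguments are made up, and msbuild.exe is assumed to be on the PATH:

<target name="compile">
    <!-- Hand the actual compilation off to MSBuild. -->
    <exec program="msbuild.exe">
        <arg value="MySolution.sln" />
        <arg value="/p:Configuration=Release" />
    </exec>
</target>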
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16550", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1464/" ] }
16,557
I have always thought that the .equals() method in Java should be overridden to be made specific to the class you have created. In other words, to look for equivalence of two different instances rather than two references to the same instance. However I have encountered other programmers who seem to think that the default object behavior should be left alone and a new method created for testing equivalence of two objects of the same class. What are the arguments for and against overriding the equals method?
Overriding the equals method is necessary if you want to test equivalence in standard library classes (for example, ensuring a java.util.Set contains unique elements or using objects as keys in java.util.Map objects). Note, if you override equals, ensure you honour the API contract as described in the documentation. For example, ensure you also override Object.hashCode : If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result. EDIT: I didn't post this as a complete answer on the subject, so I'll echo Fredrik Kalseth's statement that overriding equals works best for immutable objects . To quote the API for Map : Note: great care must be exercised if mutable objects are used as map keys. The behavior of a map is not specified if the value of an object is changed in a manner that affects equals comparisons while the object is a key in the map.
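Honouring that contract usually looks something like the sketch below, inside a hypothetical Point class with int fields x and y:

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Point)) return false;
    Point other = (Point) o;
    return x == other.x && y == other.y;
}

@Override
public int hashCode() {
    return 31 * x + y; // must agree with equals: equal objects, equal hashes
}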
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1104/" ] }
16,568
I'm interested in learning some (ideally) database agnostic ways of selecting the nth row from a database table. It would also be interesting to see how this can be achieved using the native functionality of the following databases:

SQL Server
MySQL
PostgreSQL
SQLite
Oracle

I am currently doing something like the following in SQL Server 2005, but I'd be interested in seeing others' more agnostic approaches:

WITH Ordered AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY OrderID) AS RowNumber, OrderID, OrderDate
    FROM Orders
)
SELECT *
FROM Ordered
WHERE RowNumber = 1000000

Credit for the above SQL: Firoz Ansari's Weblog

Update: See Troels Arvin's answer regarding the SQL standard. Troels, have you got any links we can cite?
There are ways of doing this in optional parts of the standard, but a lot of databases support their own way of doing it.

A really good site that talks about this and other things is http://troels.arvin.dk/db/rdbms/#select-limit.

Basically, PostgreSQL and MySQL support the non-standard:

SELECT ...
LIMIT y OFFSET x

Oracle, DB2 and MSSQL support the standard windowing functions:

SELECT * FROM
(
    SELECT
        ROW_NUMBER() OVER (ORDER BY key ASC) AS rownumber,
        columns
    FROM tablename
) AS foo
WHERE rownumber <= n

(which I just copied from the site linked above since I never use those DBs)

Update: As of PostgreSQL 8.4 the standard windowing functions are supported, so expect the second example to work for PostgreSQL as well.

Update: SQLite added window functions support in version 3.25.0 on 2018-09-15, so both forms also work in SQLite.
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/16568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1944/" ] }
16,615
I'm new to windows programming and I'm trying to get notified of all changes to the file system (similar to the information that FileMon from SysInternals displays, but via an API). Is a FindFirstChangeNotification for each (non-network, non-substed) drive my best bet or are there other more suitable C/C++ APIs?
FindFirstChangeNotification is fine, but for slightly more ultimate power you should be using ReadDirectoryChangesW. (In fact, it's even recommended in the documentation!) It doesn't require a function pointer, it does require you to manually decode a raw buffer, it uses Unicode file names, but it is generally better and more flexible. On the other hand, if you want to do what FileMon does, you should probably do what FileMon does and use IFS to create and install a file system filter .
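A rough synchronous sketch of ReadDirectoryChangesW follows; the watched path is a placeholder, and real code would add error handling and almost certainly OVERLAPPED I/O so the buffer can be re-armed without dropping events:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* FILE_FLAG_BACKUP_SEMANTICS is required to open a directory handle. */
    HANDLE dir = CreateFileW(L"C:\\", FILE_LIST_DIRECTORY,
                             FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                             NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (dir == INVALID_HANDLE_VALUE) return 1;

    BYTE buffer[64 * 1024];
    DWORD bytes;
    /* TRUE = watch the whole subtree. */
    while (ReadDirectoryChangesW(dir, buffer, sizeof(buffer), TRUE,
                                 FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                                 &bytes, NULL, NULL))
    {
        FILE_NOTIFY_INFORMATION *fni = (FILE_NOTIFY_INFORMATION *)buffer;
        for (;;)
        {
            /* FileName is not null-terminated; FileNameLength is in bytes. */
            wprintf(L"action %lu: %.*s\n", fni->Action,
                    (int)(fni->FileNameLength / sizeof(WCHAR)), fni->FileName);
            if (fni->NextEntryOffset == 0) break;
            fni = (FILE_NOTIFY_INFORMATION *)((BYTE *)fni + fni->NextEntryOffset);
        }
    }
    CloseHandle(dir);
    return 0;
}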
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849/" ] }
16,656
I am working on a program that needs to create multiple temporary folders for the application. These will not be seen by the user. The app is written in VB.NET. I can think of a few ways to do it, such as incremental folder names or random numbered folder names, but I was wondering how other people solve this problem?
Update: Added File.Exists check per comment (2012-Jun-19)

Here's what I've used in VB.NET. Essentially the same as presented, except I usually didn't want to create the folder immediately.

The advantage of using GetRandomFileName is that it doesn't create a file, so you don't have to clean up if you're using the name for something other than a file. Like using it for a folder name.

Private Function GetTempFolder() As String
    Dim folder As String = Path.Combine(Path.GetTempPath, Path.GetRandomFileName)
    Do While Directory.Exists(folder) Or File.Exists(folder)
        folder = Path.Combine(Path.GetTempPath, Path.GetRandomFileName)
    Loop
    Return folder
End Function

Random Filename Example:

C:\Documents and Settings\username\Local Settings\Temp\u3z5e0co.tvq

Here's a variation using a Guid to get the temp folder name.

Private Function GetTempFolderGuid() As String
    Dim folder As String = Path.Combine(Path.GetTempPath, Guid.NewGuid.ToString)
    Do While Directory.Exists(folder) Or File.Exists(folder)
        folder = Path.Combine(Path.GetTempPath, Guid.NewGuid.ToString)
    Loop
    Return folder
End Function

guid Example:

C:\Documents and Settings\username\Local Settings\Temp\2dbc6db7-2d45-4b75-b27f-0bd492c60496
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1632/" ] }
16,716
I am starting to build a new web application that will require user accounts. Now that I have an OpenID that I am using for this site I thought it would be cool if I could use OpenID for authentication in my application. Are there any good tutorials on how to integrate OpenID with an ASP.NET site?
See Scott Hanselman's post on using DotNetOpenID in ASP.NET. Andrew Arnott's blog is full of samples on using DotNetOpenID with ASP.NET, including ASP.NET MVC. I recently hooked up DotNetOpenID for the Subtext 2.0 release. It went really smoothly - the code samples included with the DotNetOpenID download are pretty helpful. The one thing I'd recommend is that you just use the library and avoid the ASP.NET control. It uses table based layout (hardcoded) and is pretty difficult to restyle.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1838/" ] }
16,747
I am building a public website using ASP.NET. As part of the deliverable I need to build an admin site for data entry of the content shown on the public site. I was wondering what techniques or procedures people are using to validate entries using ASP.NET MVC.
Take a look at the JQuery Validation plugin. This plugin is amazing: it's clean to implement and has all the features you could ever need, including remote validation via AJAX.

A sample MVC controller method can also be found here, which basically uses the JsonResult action type, like:

public JsonResult CheckUserName(string username)
{
    return Json(CheckValidUsername(username));
}
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1549/" ] }
16,762
I have a couple CSS files with overlapping CSS selectors that I'd like to programmatically merge (as in not just appending one file to the end of the other). Is there any tool to do this online? or a Firefox extension perhaps?
I found Factor CSS - complete with source code, but I think it does way more than I'd need. I really just want to combine CSS blocks that have the same selectors. I'll check out the source code and see if it can be converted to something usable as a TextMate bundle. That is, unless someone else manages to get to it before me. EDIT: Even better - here's a list of web-based tools for checking/formatting/optimizing css .
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2007/" ] }
16,770
I'm trying to fully understand all of Haskell's concepts. In what ways are algebraic data types similar to generic types, e.g., in C# and Java? And how are they different? What's so algebraic about them anyway? I'm familiar with universal algebra and its rings and fields, but I only have a vague idea of how Haskell's types work.
Haskell's algebraic data types are named such since they correspond to an initial algebra in category theory, giving us some laws, some operations and some symbols to manipulate. We may even use algebraic notation for describing regular data structures, where:

+ represents sum types (disjoint unions, e.g. Either)
• represents product types (e.g. structs or tuples)
X for the singleton type (e.g. data X a = X a)
1 for the unit type ()
and μ for the least fixed point (e.g. recursive types), usually implicit

with some additional notation:

X² for X•X

In fact, you might say (following Brent Yorgey) that a Haskell data type is regular if it can be expressed in terms of 1, X, +, •, and a least fixed point.

With this notation, we can concisely describe many regular data structures:

Units: data () = ()
    1

Options: data Maybe a = Nothing | Just a
    1 + X

Lists: data [a] = [] | a : [a]
    L = 1 + X•L

Binary trees: data BTree a = Empty | Node a (BTree a) (BTree a)
    B = 1 + X•B²

Other operations hold (taken from Brent Yorgey's paper, listed in the references):

Expansion: unfolding the fix point can be helpful for thinking about lists.

L = 1 + X + X² + X³ + ... (that is, lists are either empty, or they have one element, or two elements, or three, or ...)

Composition, ◦: given types F and G, the composition F ◦ G is a type which builds "F-structures made out of G-structures" (e.g. R = X • (L ◦ R), where L is lists, is a rose tree).

Differentiation: the derivative of a data type D (given as D′) is the type of D-structures with a single "hole", that is, a distinguished location not containing any data. This amazingly satisfies the same rules as differentiation in calculus:

1′ = 0
X′ = 1
(F + G)′ = F′ + G′
(F • G)′ = F • G′ + F′ • G
(F ◦ G)′ = (F′ ◦ G) • G′

References:

Species and Functors and Types, Oh My!, Brent A. Yorgey, Haskell'10, September 30, 2010, Baltimore, Maryland, USA

Clowns to the left of me, jokers to the right (Dissecting Data Structures), Conor McBride, POPL 2008
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/16770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1659/" ] }
16,795
PHP has a great function called htmlspecialchars() where you pass it a string and it replaces all of HTML's special characters with their safe equivalents; it's almost a one-stop shop for sanitizing input. Very nice, right? Well, is there an equivalent in any of the .NET libraries? If not, can anyone link to any code samples or libraries that do this well?
Try this:

var encodedHtml = HttpContext.Current.Server.HtmlEncode(...);
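For instance (the input string is made up; the output is shown in the comment):

var encodedHtml = HttpContext.Current.Server.HtmlEncode("<b>Fish & Chips</b>");
// encodedHtml is now: &lt;b&gt;Fish &amp; Chips&lt;/b&gt;

Outside of a web request, System.Web.HttpUtility.HtmlEncode does the same job.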
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/16795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1366/" ] }
16,804
What options are there in the industry for enterprise reporting? I'm currently using SSRS 2005, and know that there is another version coming out with the new release of MSSQL. But, it seems like it might also be a good time to investigate the market to see what else is out there. What have you encountered? Do you like it/dislike it? Why? Thank you.
I've used Cognos Series 7, Cognos Series 8, Crystal Reports, Business Objects XI R2 WebIntelligence, Reporting Services 2000, Reporting Services 2005, and Reporting Services 2008. Here's my feedback on what I've learned:

Reporting Services 2008/2005/2000

PROS

Cost: Cheapest enterprise business intelligence solution if you are using MS SQL Server as a back-end. You also have a best-in-class ETL solution at no additional cost if you throw in SSIS.

Most Flexible: Most flexible reporting solution I've ever used. It has always met all my business needs, particularly in its latest incarnation.

Easily Scalable: We initially used this as a departmental solution supporting about 20 users. We eventually expanded it to cover a few thousand users. Despite having a really bad quality virtual server located in a remote data center, we were able to scale to about 50-100 concurrent user requests. On good hardware at a consulting gig, I was able to scale it to a larger set of concurrent users without any issues. I've also seen implementations where multiple SSRS servers were deployed in different countries and SSIS was used to synch the data in the back-ends. This allowed for solid performance in a distributed manner at almost no additional cost.

Source Control Integration: This is CRITICAL to me when developing reports with my business intelligence teams. No other BI suite offers an out-of-box solution for this that I've ever used. Every other platform I used either required purchasing a 3rd party add-in or required you to promote reports between separate development, test, and production environments.

Analysis Services: I like the tight integration with Analysis Services between SSRS and SSIS. I've read about instances where Oracle and DB2 quotes include installing a SQL Server 2005 Analysis Services server for OLAP cubes.

Discoverability: No system has better discoverability than SSRS. There are more books, forums, articles, and code sites on SSRS than any other BI suite that I've ever used. If I needed to figure out how to do something in SSRS, I could almost always find it with a few minutes or hours of work.

CONS

IIS Required for SSRS 2005/2000: Older versions of SSRS required installing IIS on the database server. This was not permissible from an internal controls perspective when I worked at a large bank. We eventually implemented SSRS without authorized approval from IT operations and basically asked for forgiveness later. This is not an issue in SSRS 2008 since IIS is no longer required.

Report Builder: The web-based report builder was non-existent in SSRS 2000. The web-based report builder in SSRS 2005 was difficult to use and did not have enough functionality. The web-based report builder in SSRS 2008 is definitely better, but it is still too difficult to use for most business users.

Database Bias: It works best with Microsoft SQL Server. It isn't great with Oracle, DB2, and other back-ends.

Business Objects XI WebIntelligence

PROS

Ease of Use: Easiest to use for your average non-BI end-user for developing ad hoc reports.

Database Agnostic: Definitely a good solution if you expect to use Oracle, DB2, or another database back-end.

Performant: Very fast performance since most of the page navigations are basically file-system operations instead of database calls.

CONS

Cost: Number one problem. If I want to scale up my implementation of Business Objects from 30 users to 1000 users, then SAP will make certain to charge you a few hundred thousand dollars. And that's just for the Business Objects licenses. Add in the fact that you will also need database server licenses, and you are now talking about a very expensive system. Of course, that could be the personal justification for getting Business Objects: if you can convince management to purchase a very expensive BI system, then you can probably convince management to pay for a large BI department.

No Source Control: Lack of out-of-the-box source control integration leads to errors in accidentally modifying and deploying old report definitions by mistake. The "work-around" for this is to promote reports between environments -- a process that I do NOT like to do since it slows down report development and introduces environmental differences as variables.

No HTML Email Support: You cannot send an HTML email via a schedule. I regularly do this in SSRS. You can buy an expensive 3rd party add-in to do this, but you shouldn't have to spend more money for this functionality.

Model Bias: Report development requires universes -- basically a data model. That's fine for ad hoc report development, but I prefer to use stored procedures to have full control of performance. I also like to build flat tables that are then queried to avoid costly complex joins during report run-time. It is silly to have to build universes that just contain flat tables that are only used by one report. You shouldn't have to build a model just to query a table. Stored procedures are likewise not supported out of the box without hacking the SQL overrides.

Poor Parameter Support: Parameter support is terrible in BOXI WebIntelligence reports. Although I like the meta-data refresh options for general business users, it just isn't robust enough when trying to set up schedules. I almost always have to clone reports and alter the filters slightly, which leads to unnecessary report definition duplication. SSRS beats this hands down, particularly since you can make the value and the label have different values -- unlike BOXI.

Inadequate Report Linking Support: I wanted to store one report definition in a central folder and then create linked reports for other users. However, I quickly found out end-users needed to have full rights on the parent object to use the object in their own folder. This defeated the entire purpose of using a linked report object. Give me SSRS!

Separate CMC: Why do you have to launch another application just to manage your object security? Worse, why isn't the functionality identical between CMC and InfoSys? For example, if you want to set up a scheduled report to retry on failed attempts, then you can specify the number of retries and the retry interval in CMC. However, you can't do this in InfoSys and you can't see the information either. InfoSys allows you to set up event-driven schedules and CMC does not support this feature.

Java Version Dependency: BOXI works great on end-user machines as long as they are running the same version of Java as the server. However, once a newer version of Java is installed on your machine, things start to break. We're running Java 1.5 on our BOXI R2 server (the default Java client) and almost everyone in the company is on Java 1.6. If you use Java 1.6, then prompts can freeze your IE and Firefox sessions or crash your report builder unexpectedly.

Weak Discoverability: Aside from BOB (Business Objects Board), there isn't much out there on the Internet regarding troubleshooting Business Objects problems.

Cognos Series 8

PROS

Ease of Use: Although BOXI is easier to use for writing simple reports for general business users, Cognos is a close 2nd in this area.

Database Agnostic: Like BOXI, this is definitely a good solution if you expect to use Oracle, DB2, or another database back-end.

Framework Manager: This is definitely a best-in-class meta-data repository. BOXI's universe builder wishes it was half as good. This tool is well suited to promoting packages across development, test, and production environments.

CONS

Cost: Same issue as Business Objects. Similar cost structure. Similar database licensing requirements as well.

No Source Control: Same issue as Business Objects. I'm not aware of any 3rd party tools that resolve this issue, but they might exist.

Model Bias: Same issue as Business Objects. Has better support for stored procedures in Framework Manager, though.

Poor Parameter Support: Same issue as Business Objects. Has better support for creating prompt pages if you can code in Java. Buggy behavior, though, when users click the back button to return to the prompt page. SSRS beats this out hands-down.

Inadequate Error Handling: Error messages in Cognos are nearly impossible to decipher. They generally give you a long negative number and a stack dump as part of the error message. I don't know how many times we "resolved" these error messages by rebuilding reports from scratch. For some reason, it is pretty easy to corrupt a report definition.

No Discoverability: It is very hard to track down any answers on how to troubleshoot problems or to implement functionality in Cognos. There just isn't adequate community support in Internet-facing websites for the products.

As you can guess from my answer, I believe Microsoft's BI suite is the best platform on the market. However, I must state that most articles I've read on comparisons of BI suites usually do not rate Microsoft's offering as well as SAP's Business Objects and Cognos's Series 8 products. Also, I've seen Microsoft come out on the bottom in internal reviews of BI suites in two separate companies after they were reviewed by the reigning CIOs. In both instances, though, it seemed like it all boiled down to wanting to be perceived as a major department that justified a large operating budget.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/16804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/733/" ] }
16,828
It seems like the only way to do this is to pass the -i parameter in when you initially run less. Does anyone know of some secret hack to make something like this work /something to search for/i
You can also type the -I command while less is running; it toggles case sensitivity for searches.
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/16828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1797/" ] }