content: string (86 to 88.9k chars)
title: string (0 to 150 chars)
question: string (1 to 35.8k chars)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (30 to 130 chars)
Q: Getting hibernate to log clob parameters (see here for the problem I'm trying to solve) How do you get hibernate to log clob values it's going to insert. It is logging other value types, such as Integer etc. I have the following in my log4j config: log4j.logger.net.sf.hibernate.SQL=DEBUG log4j.logger.org.hibernate.SQL=DEBUG log4j.logger.net.sf.hibernate.type=DEBUG log4j.logger.org.hibernate.type=DEBUG Which produces output such as: (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '0' to parameter: 2 (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '1' to parameter: 2 However you'll note that it never displays parameter: 3 which is our clob. What I would really want is something like: (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '0' to parameter: 2 (org.hibernate.type.ClobType) binding 'something' to parameter: 3 (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '1' to parameter: 2 (org.hibernate.type.ClobType) binding 'something else' to parameter: 3 How do I get it to show this in the log? A: Try using: log4j.logger.net.sf.hibernate=DEBUG log4j.logger.org.hibernate=DEBUG That's the finest level you'll get. If it does not show the information you want, then it's not possible. A: Well, it looks like you can't. (Thanks Marcio for the suggestion, but sadly that didn't add anything useful) A: Try to set log4j.logger.org.hibernate.type=TRACE and see if that helps.
Getting hibernate to log clob parameters
(see here for the problem I'm trying to solve) How do you get hibernate to log clob values it's going to insert. It is logging other value types, such as Integer etc. I have the following in my log4j config: log4j.logger.net.sf.hibernate.SQL=DEBUG log4j.logger.org.hibernate.SQL=DEBUG log4j.logger.net.sf.hibernate.type=DEBUG log4j.logger.org.hibernate.type=DEBUG Which produces output such as: (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '0' to parameter: 2 (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '1' to parameter: 2 However you'll note that it never displays parameter: 3 which is our clob. What I would really want is something like: (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '0' to parameter: 2 (org.hibernate.type.ClobType) binding 'something' to parameter: 3 (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '1' to parameter: 2 (org.hibernate.type.ClobType) binding 'something else' to parameter: 3 How do I get it to show this in the log?
[ "Try using:\nlog4j.logger.net.sf.hibernate=DEBUG\nlog4j.logger.org.hibernate=DEBUG\n\nThat's the finest level you'll get. If it does not show the information you want, then it's not possible.\n", "Well, it looks like you can't. (Thanks Marcio for the suggestion, but sadly that didn't add anything useful)\n", "Try to set log4j.logger.org.hibernate.type=TRACE and see if that helps.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "hibernate", "java", "oracle" ]
stackoverflow_0000053365_hibernate_java_oracle.txt
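A note on the Hibernate record above: the working suggestion is to raise org.hibernate.type from DEBUG to TRACE. Below is a minimal log4j.properties sketch, assuming log4j 1.x and Hibernate 3.x; whether the Clob value is actually printed at TRACE depends on the Hibernate version and type implementation, so treat this as a starting point, not a guarantee.

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=(%c) %m%n

# SQL statements at DEBUG, parameter binding at TRACE
log4j.logger.org.hibernate.SQL=DEBUG
log4j.logger.org.hibernate.type=TRACE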
Q: Is there a way of getting the process id of my C++ application? Is there a way of getting the process id of my C++ application? I am using the Carbon framework, but not Cocoa… A: can you use the getpid() function found in unistd.h ? osx reference A: GetProcessPID is what you need. This takes a ProcessSerialNumber, which you can obtain from GetCurrentProcess. A: Note that you don't actually need to call GetCurrentProcess, you can use the constant kCurrentProcess. (But getpid is a lot less work if you're not trying to access another process's PID, anyway.)
Is there a way of getting the process id of my C++ application?
Is there a way of getting the process id of my C++ application? I am using the Carbon framework, but not Cocoa…
[ "can you use the getpid() function found in unistd.h ?\nosx reference\n", "GetProcessPID is what you need. This takes a ProcessSerialNumber, which you can obtain from GetCurrentProcess.\n", "Note that you don't actually need to call GetCurrentProcess, you can use the constant kCurrentProcess.\n(But getpid is a lot less work if you're not trying to access another process's PID, anyway.)\n" ]
[ 8, 1, 1 ]
[]
[]
[ "c++", "macos", "macos_carbon" ]
stackoverflow_0000041676_c++_macos_macos_carbon.txt
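A runnable sketch combining both answers above, assuming a Carbon-era Mac OS X toolchain with Carbon.framework linked. getpid() comes from <unistd.h>; GetProcessPID and the kCurrentProcess constant come from the Carbon Process Manager.

#include <Carbon/Carbon.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    // POSIX route: simplest when you only need your own process id.
    std::printf("getpid(): %d\n", (int)getpid());

    // Carbon route: also works for other processes' ProcessSerialNumbers.
    ProcessSerialNumber psn = { 0, kCurrentProcess };
    pid_t pid = 0;
    if (GetProcessPID(&psn, &pid) == noErr)
        std::printf("GetProcessPID(): %d\n", (int)pid);
    return 0;
}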
Q: Browser WYSIWYG best practices I am using a rich text editor on a web page. .NET has a feature that prevents one from posting HTML tags, so I added a JavaScript snippet to change the angle brackets to an alias pair of characters before the post. The alias is replaced on the server with the necessary angle bracket and then stored in the database. With XSS aside, what are common ways of fixing this problem? (i.e. Is there a better way?) If you have comments on XSS (cross-site scripting), I'm sure that will help someone. A: There's actually a way to turn that "feature" off. This will allow the user to post whichever characters they want, and there will be no need to convert characters to an alias using Javascript. See this article for disabling request validation. It means that you'll have to do your own validation, but from the sounds of your post, it seems that is what you are looking to do anyway. You can also disable it per page by following the instructions here. A: I think the safest way to go is to NOT allow the user to create tags with your WYSIWYG. Maybe using something like a markdown editor, like the one on this site or the one available here, would be another approach. Also keep the Page directive ValidateRequest=true which should stop markup from being sent in the request; you'll of course need to handle this error when it comes up. People will always be able to inject tags into the request either way using Firefox extensions like Tamper Data, but ValidateRequest=true should at least stop ASP.NET from accepting them. A straightforward post on XSS attacks was recently made by Jeff here. It also speaks to making your cookies HttpOnly, which is a semi-defense against cookie theft. Good luck! A: My first comment would be to avoid using JavaScript to change the angle brackets. Bypassing this is as simple as disabling JavaScript in the browser. Almost all server-side languages have some utility method that converts some HTML characters into their entity counterparts. For instance, PHP uses htmlentities(), and I am sure .NET has an equivalent utility method. In the least, you can do a regex replace for angle brackets, parentheses and double quotes, and that will get you a long way toward a secure solution.
Browser WYSIWYG best practices
I am using a rich text editor on a web page. .NET has a feature that prevents one from posting HTML tags, so I added a JavaScript snippet to change the angle brackets to an alias pair of characters before the post. The alias is replaced on the server with the necessary angle bracket and then stored in the database. With XSS aside, what are common ways of fixing this problem? (i.e. Is there a better way?) If you have comments on XSS (cross-site scripting), I'm sure that will help someone.
[ "There's actually a way to turn that \"feature\" off. This will allow the user to post whichever characters they want, and there will be no need to convert characters to an alias using Javascript. See this article for disabling request validation. It means that you'll have to do your own validation, but from the sounds of your post, it seems that is what you are looking to do anyway. You can also disable it per page by following the instructions here.\n", "I think the safest way to go is to NOT allow the user to create tags with your WISYWIG. Maybe using something like a markdown editor like on this site or available here. would be another approach. \nAlso keep the Page directive ValidateRequest=true which should stop markup from being sent in the request, you'll of course need to handle this error when it comes up. People will always be able to inject tags into the request either way using firefox extensions like Tamper data, but the ValidateRequest=true should at least stop ASP.NET from accepting them.\nA straight forward post on XSS attacks was recently made by Jeff here. It also speaks to making your cookies HttpOnly, which is a semi-defense against cookie theft. Good luck!\n", "My first comment would be to avoid using JavaScript to change the angle brackets. Bypassing this is as simple as disabling JavaScript in the browser. Almost all server-side languages have some utility method that converts some HTML characters into their entity counterparts. For instance, PHP uses htmlentities(), and I am sure .NET has an equivalent utility method. In the least, you can do a regex replace for angle brackets, parenthesis and double quotes, and that will get you a long way toward a secure solution.\n" ]
[ 4, 3, 1 ]
[]
[]
[ "asp.net", "javascript", "wysiwyg" ]
stackoverflow_0000061760_asp.net_javascript_wysiwyg.txt
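To make the third answer concrete: .NET's counterpart to PHP's htmlentities() is HttpUtility.HtmlEncode in System.Web. A hedged sketch; the form field name editorContent is hypothetical, and the point is that encoding on the server cannot be bypassed by disabling JavaScript.

using System.Web;

// Inside a page or handler; Request is the usual ASP.NET request object.
string raw = Request.Form["editorContent"];   // hypothetical form field
string safe = HttpUtility.HtmlEncode(raw);    // <, >, &, " become entities
// Store `safe` in the database; decode selectively later if you
// decide to whitelist a small set of harmless tags.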
Q: How to capture crash logs in Java I'm working on a cross platform application in Java which currently works nicely on Windows, Linux and MacOS X. I'm trying to work out a nice way to do detection (and handling) of 'crashes'. Is there an easy, cross-platform way to detect 'crashes' in Java and to do something in response? I guess by 'crashes' I mean uncaught exceptions. However the code does use some JNI so it'd be nice to be able to catch crashes from bad JNI code, but I have a feeling that's JVM specific. A: For simple catch-all handling, you can use the following static method in Thread. From the Javadoc: static void setDefaultUncaughtExceptionHandler(Thread.UncaughtExceptionHandler eh)           Set the default handler invoked when a thread abruptly terminates due to an uncaught exception, and no other handler has been defined for that thread. This is a very broad way to deal with errors or unchecked exceptions that may not be caught anywhere else. Side-note: It's better if the code can catch, log and/or recover from exceptions closer to the source of the problem. I would reserve this kind of generalized crash handling for totally unrecoverable situations (i.e. subclasses of java.lang.Error). Try to avoid the possibility of a RuntimeException ever going completely uncaught, since it might be possible--and preferable--for the software to survive that. A: For handling uncaught exceptions you can provide a new ThreadGroup which provides an implementation of ThreadGroup.uncaughtException(...). You can then catch any uncaught exceptions and handle them appropriately (e.g. send a crash log home). I can't help you on the JNI front, there's probably a way using a native wrapper executable before calling the JVM, but that executable is going to need to know about all the possible JVMs it could be calling and how they indicate crashes and where crash logs are placed etc. A: Not sure if this is what you're needing, but you can also detect if an exception has occurred from within your native code. See http://java.sun.com/javase/6/docs/technotes/guides/jni/spec/functions.html#wp5234 for more info.
How to capture crash logs in Java
I'm working on a cross platform application in Java which currently works nicely on Windows, Linux and MacOS X. I'm trying to work out a nice way to do detection (and handling) of 'crashes'. Is there an easy, cross-platform way to detect 'crashes' in Java and to do something in response? I guess by 'crashes' I mean uncaught exceptions. However the code does use some JNI so it'd be nice to be able to catch crashes from bad JNI code, but I have a feeling that's JVM specific.
[ "For simple catch-all handling, you can use the following static method in Thread. From the Javadoc:\n\nstatic void setDefaultUncaughtExceptionHandler(Thread.UncaughtExceptionHandler eh)           Set the default handler invoked when a thread abruptly terminates due to an uncaught exception, and no other handler has been defined for that thread.\n\nThis is a very broad way to deal with errors or unchecked exceptions that may not be caught anywhere else.\nSide-note: It's better if the code can catch, log and/or recover from exceptions closer to the source of the problem. I would reserve this kind of generalized crash handling for totally unrecoverable situations (i.e. subclasses of java.lang.Error). Try to avoid the possibility of a RuntimeException ever going completely uncaught, since it might be possible--and preferable--for the software to survive that.\n", "For handling uncaught exceptions you can provide a new ThreadGroup which provides an implementation of ThreadGroup.uncaughtException(...). You can then catch any uncaught exceptions and handle them appropriately (e.g. send a crash log home).\nI can't help you on the JNI front, there's probably a way using a native wrapper executable before calling the JVM, but that executable is going to need to know about all the possible JVMs it could be calling and how the indicate crashes and where crash logs are placed etc.\n", "Not sure if this is what you needing, but you can also detect if an exception has occurred from within your native code. See http://java.sun.com/javase/6/docs/technotes/guides/jni/spec/functions.html#wp5234 for more info.\n" ]
[ 4, 3, 1 ]
[]
[]
[ "crash", "error_handling", "java" ]
stackoverflow_0000061714_crash_error_handling_java.txt
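A minimal, self-contained sketch of the first answer's approach. Thread.setDefaultUncaughtExceptionHandler has been in the standard library since Java 5; the handler body here (printing to stderr) stands in for whatever crash-log writing is needed. Note it will not catch native crashes from bad JNI code, which kill the JVM outright.

public class CrashHandlerDemo {
    public static void main(String[] args) {
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            public void uncaughtException(Thread t, Throwable e) {
                // Write a crash log, send it home, etc.
                System.err.println("Uncaught in " + t.getName() + ": " + e);
            }
        });
        throw new RuntimeException("boom"); // lands in the handler above
    }
}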
Q: Does Tiles for Struts2 support UTF-8 encoded templates? If so, what are the required configuration elements to enable UTF-8 for Tiles? I'm finding my tile results are sent as: Content-Type text/html; A: What if you put this at the top? <%@ page contentType="text/html;charset=UTF-8" pageEncoding="UTF-8" language="java" %>
Does Tiles for Struts2 support UTF-8 encoded templates?
If so, what are the required configuration elements to enable UTF-8 for Tiles? I'm finding my tile results are sent as: Content-Type text/html;
[ "What if you put this at the top?\n<%@ page contentType=\"text/html;charset=UTF-8\" pageEncoding=\"UTF-8\" language=\"java\" %> \n\n" ]
[ 0 ]
[]
[]
[ "internationalization", "java", "tiles" ]
stackoverflow_0000061796_internationalization_java_tiles.txt
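Beyond the page directive, a common companion fix is a servlet filter that forces UTF-8 on every request and response before Tiles renders anything. A hedged sketch using the standard javax.servlet API; the class name and its web.xml wiring are hypothetical.

import java.io.IOException;
import javax.servlet.*;

public class Utf8Filter implements Filter {
    public void init(FilterConfig cfg) {}
    public void destroy() {}
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Decode incoming parameters and advertise the outgoing charset.
        req.setCharacterEncoding("UTF-8");
        res.setContentType("text/html; charset=UTF-8");
        chain.doFilter(req, res);
    }
}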
Q: How do I use my pager (more/less) on error output only I have a program that spits out both standard error and standard out, and I want to run my pager less on the standard error, but ignore standard out. How do I do that? Update: That's it ... I didn't want to lose stdout ... just keep it out of pager program 2>&1 >log | less then later less log A: You could try redirecting standard out to /dev/null, but redirecting standard error to where standard out used to go. Example in ksh/bash: program 2>&1 >/dev/null | less Here the redirection 2>&1, which sets file descriptor 2 (stderr) to point to the same stream as file descriptor 1 (stdout), gets evaluated before the redirection >/dev/null , which sets file descriptor 1 to point to /dev/null. The effect is that what you write to stderr gets sent to stdout, and what you write to stdout gets thrown away.
How do I use my pager (more/less) on error output only
I have a program that spits out both standard error and standard out, and I want to run my pager less on the standard error, but ignore standard out. How do I do that? Update: That's it ... I didn't want to lose stdout ... just keep it out of pager program 2>&1 >log | less then later less log
[ "You could try redirecting standard out to /dev/null, but redirecting standard error to where standard out used to go.\nExample in ksh/bash:\nprogram 2>&1 >/dev/null | less\n\nHere the redirection 2>&1, which sets file descriptor 2 (stderr) to point to the same stream as file descriptor 1 (stdout), gets evaluated before the redirection >/dev/null , which sets file descriptor 1 to point to /dev/null. The effect is that what you write to stderr gets sent to stdout, and what you write to stdout gets thrown away.\n" ]
[ 4 ]
[]
[]
[ "unix" ]
stackoverflow_0000061871_unix.txt
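A self-contained way to see the redirection order at work; `both` is a toy shell function standing in for the real program.

both() { echo "this goes to stdout"; echo "this goes to stderr" >&2; }

both 2>&1 >/dev/null | less   # pager sees only what was stderr
both 2>&1 >out.log   | less   # as in the question's update: stdout kept in a file

The order matters: writing `both >/dev/null 2>&1 | less` instead would send both streams to /dev/null, and the pager would see nothing.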
Q: Sum of items in a collection Using LINQ to SQL, I have an Order class with a collection of OrderDetails. Each OrderDetail has a property called LineTotal, which is Qnty x ItemPrice. I know how to do a new LINQ query of the database to find the order total, but as I already have the collection of OrderDetails from the DB, is there a simple method to return the sum of the LineTotal directly from the collection? I'd like to add the order total as a property of my Order class. I imagine I could loop through the collection and calculate the sum with a for each over Order.OrderDetails, but I'm guessing there is a better way. A: You can do LINQ to Objects and then use LINQ to calculate the totals: decimal sumLineTotal = (from od in orderdetailscollection select od.LineTotal).Sum(); You can also use lambda-expressions to do this, which is a bit "cleaner". decimal sumLineTotal = orderdetailscollection.Sum(od => od.LineTotal); You can then hook this up to your Order-class like this if you want: public partial class Order { ... public decimal LineTotal { get { return orderdetailscollection.Sum(od => od.LineTotal); } } }
Sum of items in a collection
Using LINQ to SQL, I have an Order class with a collection of OrderDetails. Each OrderDetail has a property called LineTotal, which is Qnty x ItemPrice. I know how to do a new LINQ query of the database to find the order total, but as I already have the collection of OrderDetails from the DB, is there a simple method to return the sum of the LineTotal directly from the collection? I'd like to add the order total as a property of my Order class. I imagine I could loop through the collection and calculate the sum with a for each over Order.OrderDetails, but I'm guessing there is a better way.
[ "You can do LINQ to Objects and the use LINQ to calculate the totals:\ndecimal sumLineTotal = (from od in orderdetailscollection\nselect od.LineTotal).Sum();\n\nYou can also use lambda-expressions to do this, which is a bit \"cleaner\".\ndecimal sumLineTotal = orderdetailscollection.Sum(od => od.LineTotal);\n\nYou can then hook this up to your Order-class like this if you want:\nPublic Partial Class Order {\n ...\n Public Decimal LineTotal {\n get {\n return orderdetailscollection.Sum(od => od.LineTotal);\n }\n }\n}\n\n" ]
[ 161 ]
[]
[]
[ "linq", "sum" ]
stackoverflow_0000061870_linq_sum.txt
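A self-contained version of the accepted answer, runnable as a console program; OrderDetail here is a stand-in for the LINQ to SQL generated class, since Enumerable.Sum behaves the same on any in-memory collection.

using System;
using System.Collections.Generic;
using System.Linq;

class OrderDetail { public decimal LineTotal { get; set; } }

class Demo
{
    static void Main()
    {
        var details = new List<OrderDetail> {
            new OrderDetail { LineTotal = 9.99m },
            new OrderDetail { LineTotal = 25.00m }
        };
        // Sum iterates the collection once and adds up the selected values.
        Console.WriteLine(details.Sum(od => od.LineTotal)); // 34.99
    }
}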
Q: Sending messages to objects while debugging Objective-C in gdb, without symbols I'm trying to send messages to Objective-C objects in gdb. (gdb) p $esi $2 = (void *) 0x1268160 (gdb) po $esi <NSArray: 0x1359c0> (gdb) po [$esi count] Target does not respond to this message selector. I can't send any message to it. Am I missing something? Do I really need the symbols, or something else? A: If you must override gdb and send a message to an object when it will not let you, you can use performSelector: (gdb) print (int)[receivedData count] Target does not respond to this message selector. (gdb) print (int)[receivedData performSelector:@selector(count) ] 2008-09-15 00:46:35.854 Executable[1008:20b] *** -[NSConcreteMutableData count]: unrecognized selector sent to instance 0x105f2e0 If you need to pass an argument use withObject: (gdb) print (int)[receivedData performSelector:@selector(count) withObject:myObject ] A: Is it possible that you need to cast $esi? p (NSUInteger)[(NSArray *)$esi count] A: @[John Calsbeek] Then it complains about missing symbols. (gdb) p (NSUInteger)[(NSObject*)$esi retainCount] No symbol table is loaded. Use the "file" command. (gdb) p [(NSArray *)$esi count] No symbol "NSArray" in current context. I tried to load the symbols for Foundation: (gdb) add-symbol-file /System/Library/Frameworks/Foundation.framework/Foundation add symbol table from file "/System/Library/Frameworks/Foundation.framework/Foundation"? (y or n) y Reading symbols from /System/Library/Frameworks/Foundation.framework/Foundation...done. but still no luck: (gdb) p [(NSArray *)$esi count] No symbol "NSArray" in current context. Anyway, I don't think casting is the solution to this problem, you shouldn't have to know what kind of object it is, to be able to send messages to it. The weird thing is that I found an NSCFArray I have no problems sending messages to: (gdb) p $eax $11 = 367589056 (gdb) po $eax <NSCFArray 0x15e8f6c0>( file://localhost/Users/ask/Documents/composing-fractals.pdf ) (gdb) p (int)[$eax retainCount] $12 = 1 so I guess there was a problem with the object I was investigating... or something. Thanks for your help!
Sending messages to objects while debugging Objective-C in gdb, without symbols
I'm trying to send messages to Objective-C objects in gdb. (gdb) p $esi $2 = (void *) 0x1268160 (gdb) po $esi <NSArray: 0x1359c0> (gdb) po [$esi count] Target does not respond to this message selector. I can't send any message to it. Am I missing something? Do I really need the symbols, or something else?
[ "If you must override gdb and send a message to an object when it will not let you, you can use performSelector:\n(gdb) print (int)[receivedData count]\nTarget does not respond to this message selector.\n\n(gdb) print (int)[receivedData performSelector:@selector(count) ]\n2008-09-15 00:46:35.854 Executable[1008:20b] *** -[NSConcreteMutableData count]:\nunrecognized selector sent to instance 0x105f2e0\n\nIf you need to pass an argument use withObject:\n(gdb) print (int)[receivedData performSelector:@selector(count) withObject:myObject ]\n\n", "Is it possible that you need to cast $esi?\np (NSUInteger)[(NSArray *)$esi count]\n\n", "@[John Calsbeek]\nThen it complains about missing symbols.\n(gdb) p (NSUInteger)[(NSObject*)$esi retainCount]\nNo symbol table is loaded. Use the \"file\" command.\n(gdb) p [(NSArray *)$esi count]\nNo symbol \"NSArray\" in current context.\n\nI tried to load the symbols for Foundation:\n(gdb) add-symbol-file /System/Library/Frameworks/Foundation.framework/Foundation \nadd symbol table from file \"/System/Library/Frameworks/Foundation.framework/Foundation\"? (y or n) y\nReading symbols from /System/Library/Frameworks/Foundation.framework/Foundation...done.\n\nbut still no luck:\n(gdb) p [(NSArray *)$esi count]\nNo symbol \"NSArray\" in current context.\n\nAnyway, I don't think casting is the solution to this problem, you shouldn't have to know what kind of object it is, to be able to send messages to it.\nThe weird thing is that I found an NSCFArray I have no problems sending messages to:\n(gdb) p $eax\n$11 = 367589056\n(gdb) po $eax\n<NSCFArray 0x15e8f6c0>(\n file://localhost/Users/ask/Documents/composing-fractals.pdf\n)\n\n(gdb) p (int)[$eax retainCount]\n$12 = 1\n\nso I guess there was a problem with the object I was investigating... or something.\nThanks for your help!\n" ]
[ 10, 1, 0 ]
[]
[]
[ "debugging", "gdb", "macos", "objective_c", "reversing" ]
stackoverflow_0000056472_debugging_gdb_macos_objective_c_reversing.txt
Q: java.lang.IllegalArgumentException: Invalid <url-pattern> in servlet mapping <servlet> <servlet-name>myservlet</servlet-name> <servlet-class>workflow.WDispatcher</servlet-class> <load-on-startup>2</load-on-startup> </servlet> <servlet-mapping> <servlet-name>myservlet</servlet-name> <url-pattern>*NEXTEVENT*</url-pattern> </servlet-mapping> Above is the snippet from Tomcat's web.xml. The URL pattern *NEXTEVENT* on start up throws java.lang.IllegalArgumentException: Invalid <url-pattern> in servlet mapping It will be greatly appreciated if someone can hint at the error. A: <url-pattern>*NEXTEVENT*</url-pattern> The URL pattern is not valid. It can either end in an asterisk or start with one (to denote a file extension mapping). The url-pattern specification: A string beginning with a ‘/’ character and ending with a ‘/*’ suffix is used for path mapping. A string beginning with a ‘*.’ prefix is used as an extension mapping. A string containing only the ‘/’ character indicates the "default" servlet of the application. In this case the servlet path is the request URI minus the context path and the path info is null. All other strings are used for exact matches only. See section 12.2 of the Java Servlet Specification Version 3.1 for more details. A: A workaround that can achieve that is to add a servlet filter to do URL re-writes e.g. re-write NEXTEVENT to /NEXTEVENT/(the one before the NEXTEVENT)/(the one after NEXTEVENT) or something similar.
java.lang.IllegalArgumentException: Invalid <url-pattern> in servlet mapping
<servlet> <servlet-name>myservlet</servlet-name> <servlet-class>workflow.WDispatcher</servlet-class> <load-on-startup>2</load-on-startup> </servlet> <servlet-mapping> <servlet-name>myservlet</servlet-name> <url-pattern>*NEXTEVENT*</url-pattern> </servlet-mapping> Above is the snippet from Tomcat's web.xml. The URL pattern *NEXTEVENT* on start up throws java.lang.IllegalArgumentException: Invalid <url-pattern> in servlet mapping It will be greatly appreciated if someone can hint at the error.
[ "<url-pattern>*NEXTEVENT*</url-pattern>\n\nThe URL pattern is not valid. It can either end in an asterisk or start with one (to denote a file extension mapping).\nThe url-pattern specification:\n\n\nA string beginning with a ‘/’ character and ending with a ‘/*’\n suffix is used for path mapping.\nA string beginning with a ‘*.’ prefix is used as an extension\n mapping.\nA string containing only the ’/’ character indicates the \"default\"\n servlet of the application. In this\n case the servlet path is the request\n URI minus the context path and the\n path info is null.\nAll other strings are used for exact matches only.\n\n\nSee section 12.2 of the Java Servlet Specification Version 3.1 for more details.\n", "A workaround that can achieve that is to add a servlet filter to do URL re-writes e.g.\nre-write NEXTEVENT to /NEXTEVENT/(the one before the NEXTEVENT)/(the one after NEXTEVENT) or something similar.\n" ]
[ 102, 1 ]
[]
[]
[ "illegalargumentexception", "servlet_mapping", "servlets", "tomcat", "web.xml" ]
stackoverflow_0000026732_illegalargumentexception_servlet_mapping_servlets_tomcat_web.xml.txt
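For contrast with the invalid *NEXTEVENT* pattern, here are the two legal shapes closest to what the question seems to want, following the specification rules quoted above; the extension name is made up.

<servlet-mapping>
    <servlet-name>myservlet</servlet-name>
    <url-pattern>/NEXTEVENT/*</url-pattern>   <!-- path mapping -->
</servlet-mapping>
<servlet-mapping>
    <servlet-name>myservlet</servlet-name>
    <url-pattern>*.nextevent</url-pattern>    <!-- extension mapping -->
</servlet-mapping>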
Q: Code to make a DHTMLEd control replace straight quotes with curly quotes I've got an old, legacy VB6 application that uses the DHTML editing control as an HTML editor. The Microsoft DHTML editing control, a.k.a. DHTMLEd, is probably nothing more than an IE control using IE's own native editing capability internally. I'd like to modify the app to implement smart quotes like Word. Specifically, " is replaced with “ or ” and ' is replaced with ‘ or ’ as appropriate as it is typed; and if the user presses Ctrl+Z immediately after the replacement, it goes back to being a straight quote. Does anyone have code that does that? If you don't have code for DHTML/VB6, but do have JavaScript code that works in a browser with contentEditable regions, I could use that, too A: Here's the VB6 version: Private Sub DHTMLEdit1_onkeypress() Dim e As Object Set e = DHTMLEdit1.DOM.parentWindow.event 'Perform smart-quote replacement' Select Case e.keyCode Case 34: 'Double-Quote' e.keyCode = 0 If IsAtWordEnd Then InsertDoubleUndo ChrW$(8221), ChrW$(34) Else InsertDoubleUndo ChrW$(8220), ChrW$(34) End If Case 39: 'Single-Quote' e.keyCode = 0 If IsAtWordEnd Then InsertDoubleUndo ChrW$(8217), ChrW$(39) Else InsertDoubleUndo ChrW$(8216), ChrW$(39) End If End Select End Sub Private Function IsLetter(ByVal character As String) As Boolean IsLetter = UCase$(character) <> LCase$(character) End Function Private Sub InsertDoubleUndo(VisibleText As String, HiddenText As String) Dim selection As Object Set selection = DHTMLEdit1.DOM.selection.createRange() selection.Text = HiddenText selection.moveStart "character", -Len(HiddenText) selection.Text = VisibleText End Sub Private Function IsAtWordEnd() As Boolean Dim ch As String ch = PreviousChar IsAtWordEnd = (ch <> " ") And (ch <> "") End Function Private Function PreviousChar() As String Dim selection As Object Set selection = m_dom.selection.createRange() selection.moveStart "character", -1 PreviousChar = selection.Text End Function Note: this solution inserts an additional level in the undo chain. For example, typing "This is a test" gives a chain of “This is a test” -> “This is a test" -> “This is a test -> “ -> " (extra level in bold). To remove this extra level you'd have to implement some sort of PostMessage+subclassing solution that doesn't involve cancelling the native keypress edit: Don't forget to include the DHTML Editing Control redistributable if you are targeting Windows Vista.
Code to make a DHTMLEd control replace straight quotes with curly quotes
I've got an old, legacy VB6 application that uses the DHTML editing control as an HTML editor. The Microsoft DHTML editing control, a.k.a. DHTMLEd, is probably nothing more than an IE control using IE's own native editing capability internally. I'd like to modify the app to implement smart quotes like Word. Specifically, " is replaced with “ or ” and ' is replaced with ‘ or ’ as appropriate as it is typed; and if the user presses Ctrl+Z immediately after the replacement, it goes back to being a straight quote. Does anyone have code that does that? If you don't have code for DHTML/VB6, but do have JavaScript code that works in a browser with contentEditable regions, I could use that, too
[ "Here's the VB6 version:\nPrivate Sub DHTMLEdit1_onkeypress()\n Dim e As Object\n Set e = DHTMLEdit1.DOM.parentWindow.event\n 'Perform smart-quote replacement'\n Select Case e.keyCode\n Case 34: 'Double-Quote'\n e.keyCode = 0\n If IsAtWordEnd Then\n InsertDoubleUndo ChrW$(8221), ChrW$(34)\n Else\n InsertDoubleUndo ChrW$(8220), ChrW$(34)\n End If\n Case 39: 'Single-Quote'\n e.keyCode = 0\n If IsAtWordEnd Then\n InsertDoubleUndo ChrW$(8217), ChrW$(39)\n Else\n InsertDoubleUndo ChrW$(8216), ChrW$(39)\n End If\n End Select\nEnd Sub\n\nPrivate Function IsLetter(ByVal character As String) As Boolean\n IsLetter = UCase$(character) <> LCase$(character)\nEnd Function\n\nPrivate Sub InsertDoubleUndo(VisibleText As String, HiddenText As String)\n Dim selection As Object\n Set selection = DHTMLEdit1.DOM.selection.createRange()\n selection.Text = HiddenText\n selection.moveStart \"character\", -Len(HiddenText)\n selection.Text = VisibleText\nEnd Sub\n\nPrivate Function IsAtWordEnd() As Boolean\n\n Dim ch As String\n ch = PreviousChar\n IsAtWordEnd = (ch <> \" \") And (ch <> \"\")\n\nEnd Function\n\nPrivate Function PreviousChar() As String\n\n Dim selection As Object\n Set selection = m_dom.selection.createRange()\n selection.moveStart \"character\", -1\n PreviousChar = selection.Text\n\nEnd Function\n\nNote: this solution inserts an additional level in the undo chain. For example, typing \"This is a test\" gives a chain of “This is a test” -> “This is a test\" -> “This is a test -> “ -> \" (extra level in bold). To remove this extra level you'd have to implement some sort of PostMessage+subclassing solution that doesn't involve cancelling the native keypress\nedit: Don't forget to include the DHTML Editing Control redistributable if you are targeting Windows Vista.\n" ]
[ 15 ]
[]
[]
[ "dom", "html", "vb6" ]
stackoverflow_0000061598_dom_html_vb6.txt
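Since the question also asked for a JavaScript/contentEditable version, here is a sketch for a modern browser. It mirrors the VB6 logic (look one character back to choose opening vs. closing quotes) but does not reproduce the Ctrl+Z-back-to-straight-quote trick; execCommand at least keeps the insertion on the browser's native undo stack. Treat the selector and event choice as assumptions, not a drop-in replacement for the old DHTMLEd control.

document.querySelector('[contenteditable]').addEventListener('keypress', function (e) {
    if (e.key !== '"' && e.key !== "'") return;
    e.preventDefault();
    var prev = '';
    var sel = window.getSelection();
    if (sel.rangeCount > 0) {
        var r = sel.getRangeAt(0);
        // Look one character back to decide between opening and closing quotes.
        if (r.startOffset > 0 && r.startContainer.nodeType === Node.TEXT_NODE) {
            prev = r.startContainer.textContent.charAt(r.startOffset - 1);
        }
    }
    var atWordEnd = prev !== '' && prev !== ' ' && prev !== '\u00a0';
    var ch = (e.key === '"')
        ? (atWordEnd ? '\u201d' : '\u201c')   // " becomes ” or “
        : (atWordEnd ? '\u2019' : '\u2018');  // ' becomes ’ or ‘
    document.execCommand('insertText', false, ch);
});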
Q: How do I make a subproject with Qt? I'm about to start on a large Qt application, which is made up of smaller components (groups of classes that work together). For example, there might be a dialog that is used in the project, but should be developed on its own before being integrated into the project. Instead of working on it in another folder somewhere and then copying it into the main project folder, can I create a sub-folder which is dedicated to that dialog, and then somehow incorporate it into the main project? A: Here is what I would do. Let's say I want the following folder hierarchy : /MyWholeApp will contain the files for the whole application. /MyWholeApp/DummyDlg/ will contain the files for the standalone dialogbox which will be eventually part of the whole application. I would develop the standalone dialog box and the related classes. I would create a Qt-project file which is going to be included. It will contain only the forms and files which will eventually be part of the whole application. File DummyDlg.pri, in /MyWholeApp/DummyDlg/ : # Input FORMS += dummydlg.ui HEADERS += dummydlg.h SOURCES += dummydlg.cpp The above example is very simple. You could add other classes if needed. To develop the standalone dialog box, I would then create a Qt project file dedicated to this dialog : File DummyDlg.pro, in /MyWholeApp/DummyDlg/ : TEMPLATE = app DEPENDPATH += . INCLUDEPATH += . include(DummyDlg.pri) # Input SOURCES += main.cpp As you can see, this PRO file is including the PRI file created above, and is adding an additional file (main.cpp) which will contain the basic code for running the dialog box as a standalone : #include <QApplication> #include "dummydlg.h" int main(int argc, char* argv[]) { QApplication MyApp(argc, argv); DummyDlg MyDlg; MyDlg.show(); return MyApp.exec(); } Then, to include this dialog box to the whole application you need to create a Qt-Project file : file WholeApp.pro, in /MyWholeApp/ : TEMPLATE = app DEPENDPATH += . DummyDlg INCLUDEPATH += . DummyDlg include(DummyDlg/DummyDlg.pri) # Input FORMS += OtherDlg.ui HEADERS += OtherDlg.h SOURCES += OtherDlg.cpp WholeApp.cpp Of course, the Qt-Project file above is very simplistic, but shows how I included the stand-alone dialog box. A: Yes, you can edit your main project (.pro) file to include your sub project's project file. See here A: For Qt on Windows you can create DLLs for every subproject you want. No problem with using them from the main project (exe) after that. You'll have to take care of dependencies but it's not very difficult.
How do I make a subproject with Qt?
I'm about to start on a large Qt application, which is made up of smaller components (groups of classes that work together). For example, there might be a dialog that is used in the project, but should be developed on its own before being integrated into the project. Instead of working on it in another folder somewhere and then copying it into the main project folder, can I create a sub-folder which is dedicated to that dialog, and then somehow incorporate it into the main project?
[ "Here is what I would do. Let's say I want the following folder hierarchy :\n/MyWholeApp\n\nwill contain the files for the whole application.\n/MyWholeApp/DummyDlg/\n\nwill contain the files for the standalone dialogbox which will be eventually part of the whole application.\nI would develop the standalone dialog box and the related classes. I would create a Qt-project file which is going to be included. It will contain only the forms and files which will eventually be part of the whole application.\nFile DummyDlg.pri, in /MyWholeApp/DummyDlg/ :\n# Input\nFORMS += dummydlg.ui\nHEADERS += dummydlg.h\nSOURCES += dummydlg.cpp\n\nThe above example is very simple. You could add other classes if needed.\nTo develop the standalone dialog box, I would then create a Qt project file dedicated to this dialog :\nFile DummyDlg.pro, in /MyWholeApp/DummyDlg/ :\nTEMPLATE = app\nDEPENDPATH += .\nINCLUDEPATH += .\n\ninclude(DummyDlg.pri)\n\n# Input\nSOURCES += main.cpp\n\nAs you can see, this PRO file is including the PRI file created above, and is adding an additional file (main.cpp) which will contain the basic code for running the dialog box as a standalone :\n#include <QApplication>\n#include \"dummydlg.h\"\n\nint main(int argc, char* argv[])\n{\n QApplication MyApp(argc, argv);\n\n DummyDlg MyDlg;\n MyDlg.show();\n return MyApp.exec();\n}\n\nThen, to include this dialog box to the whole application you need to create a Qt-Project file :\nfile WholeApp.pro, in /MyWholeApp/ :\nTEMPLATE = app\nDEPENDPATH += . DummyDlg\nINCLUDEPATH += . DummyDlg\n\ninclude(DummyDlg/DummyDlg.pri)\n\n# Input\nFORMS += OtherDlg.ui\nHEADERS += OtherDlg.h\nSOURCES += OtherDlg.cpp WholeApp.cpp\n\nOf course, the Qt-Project file above is very simplistic, but shows how I included the stand-alone dialog box.\n", "Yes, you can edit your main project (.pro) file to include your sub project's project file.\nSee here\n", "For Qt on Windows you can create DLLs for every subproject you want. No problem with using them from the main project (exe) after that. You'll have to take care of dependencies but it's not very difficult.\n" ]
[ 24, 1, 0 ]
[]
[]
[ "qt" ]
stackoverflow_0000061405_qt.txt
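An alternative layout worth knowing about: qmake's subdirs template lets a top-level .pro file simply list sub-projects, each with its own .pro file. A minimal sketch reusing the folder names from the accepted answer, assuming the application is also moved into its own subfolder; the depends line is only needed if WholeApp links against something DummyDlg builds.

# /MyWholeApp/MyWholeApp.pro
TEMPLATE = subdirs
SUBDIRS = DummyDlg WholeApp
WholeApp.depends = DummyDlg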
Q: Is AnkhSVN any good? I asked a couple of coworkers about AnkhSVN and neither one of them was happy with it. One of them went as far as saying that AnkhSVN has messed up his devenv several times. What's your experience with AnkhSVN? I really miss having an IDE integrated source control tool. A: Older AnkhSVN (pre 2.0) was very crappy and I was only using it for shiny icons in the solution explorer. I relied on Tortoise for everything except reverts. The newer Ankh is a complete rewrite (it is now using the Source Control API of the IDE) and looks & works much better. Still, I haven't forced it to any heavy lifting. Icons are enough for me. The only gripe I have with 2.0 is the fact that it slaps its footprint to .sln files. I always revert them lest they cause problems for co-workers who do not have Ankh installed. I don't know if my fears are groundless or not. addendum: I have been using v2.1.7141 a bit more extensively for the last few weeks and here are the new things I have to add: No ugly crashes that plagued v1.x. Yay! For some reason, "Show Changes" (diff) windows are limited to only two. Meh. Diff windows do not allow editing/reverting yet. Boo! Updates, commits and browsing are MUCH faster than Tortoise. Yay! All in all, I would not use it standalone, but once you start using it, it becomes an almost indispensable companion to Tortoise. A: I always had stability issues with AnkhSVN. I couldn't switch everyone to Subversion where I work without an integrated solution. Thank goodness for VisualSVN + TortoiseSVN. VisualSVN isn't free, but it is cheap, and works a treat. A: I tried version 1, and it was unreliable to say the least. I can't say anything about 2.0. If you can afford it, the one I use, VisualSVN, is very good and uses TortoiseSVN for all its GUI, except for the specialized things related to its VS integration. A: @pilif: AnkhSVN maintains an in-memory state of the working copy, which is invalidated/updated by Visual Studio events (ie you edit/change a file) and AnkhSVN events (ie you commit/update/revert/etc) Whenever the working copy is changed from outside Visual Studio (by editing with another tool, or by using another Subversion client), you will have to refresh AnkhSvn using the Refresh command we provide. The other thing that happens when you delete a file in a project with TortoiseSvn for example, is that it remains listed in the project file, and you will have to remove it there separately (and then commit the project file as well). A: Copy/Pasting parts of my own Blogpost, as I switched from Ankh to VisualSVN: Why did I switch? Because I was a bit unhappy with the overall stability of Ankh, since it has some problems actually tracking Solution changes. VisualSVN is “just” a TortoiseSVN Frontend, which means it leaves all the “heavy lifting” to a third-party tool that a) is installed on most Workstations anyway and b) that’s been tested and used by such a wide audience, it’s really rock-solid. Now, AnkhSVN is certainly not a bad product, and the people behind it are serious about what they are doing, but having long-deleted files still in my SVN or getting the “Please Cleanup your solution” message gets annoying after some time, but my biggest gripe is the property window. It’s nice that there is a window with Radio Buttons asking me which property I want to add. Unfortunately, there is no way to manually enter a property. Edit: That was for AnkhSVN 1.x. In the meantime, it was updated to 2.x and much improved. I use it in production on a system where I don't have VisualSVN and it works extremely well now. A: I had no problems with v1, but I was warned not to use it. I've been using v2 for a while, and I've had no problems with it. I still keep a backup of the repository though... A: I started with AnkhSvn and then moved on to VisualSvn. I have my own gripes with VisualSvn but it's far less trouble compared to Ankh. I'm yet to try the new version of Ankh which they say is a complete rewrite and had inputs from the Microsoft dev team as well. A: I've been using both the newest version of Ankh SVN and Tortoise on a project at home. I find them to both be very good with a caveat. I've found that both SVN tools have at times failed to keep up with my file/folder renaming and moving resulting in it thinking that a perfectly good file needs to be deleted on the next commit. This is probably down to me misusing SVN in some way but TFS at work does not have this problem. A: I tried AnkhSVN (1.0.3, just 4 months ago), and it did not work the way I wanted it to (i.e. needed to select things in the browser window instead of based on active file). I ended up making some macros that utilize TortoiseSVN that work much more like what I expected. I've been very happy with using TortoiseSVN via explorer and my macros inside the IDE. A: @mcintyre321 I've found that both SVN tools have at times failed to keep up with my file/folder renaming and moving resulting in it thinking that a perfectly good file needs to be deleted on the next commit. A move or rename operation results in a delete and 'add with history' at subversion level. TortoiseSvn shows this as: originalFile deleted newFile added (+) A: Earlier on (like 2 years ago when I last tried), AnkhSVN and Tortoise used in parallel with the same working copy caused some kind of working copy corruption where Ankh and Tortoise somehow lost track of the state the other tool left the working copy in. It was as if one of the tools stored additional metadata not contained in the working copy and was reliant on that being correct. The problems showed themselves by Ankh (or Tortoise) insisting on files being there which weren't, on files being changed which weren't and on files not being changed which were (and thus unable to commit). Maybe this has been fixed since, but I thought I'd better warn you guys. A: About a year ago me and a buddy used AnkhSVN for a project... several commits later while moving namespaces around, it broke the SVN repository. Broke as in, the last commit we did got corrupted, and we couldn't commit anymore. After that we used TortoiseSVN and did the namespace moving manually, it just... worked. If you're only working on base class libraries you could always try using SharpDevelop instead (that integrates with TortoiseSVN). I do hope they did fix AnkhSVN now though because IDE integrations always rock... when they work.
Is AnkhSVN any good?
I asked a couple of coworkers about AnkhSVN and neither one of them was happy with it. One of them went as far as saying that AnkhSVN has messed up his devenv several times. What's your experience with AnkhSVN? I really miss having an IDE integrated source control tool.
[ "Older AnkhSVN (pre 2.0) was very crappy and I was only using it for shiny icons in the solution explorer. I relied on Tortoise for everything except reverts.\nThe newer Ankh is a complete rewrite (it is now using the Source Control API of the IDE) and looks & works much better. Still, I haven't forced it to any heavy lifting. Icons is enough for me.\nThe only gripe I have with 2.0 is the fact that it slaps its footprint to .sln files. I always revert them lest they cause problems for co-workers who do not have Ankh installed. I don't know if my fears are groundless or not.\n\naddendum:\nI have been using v2.1.7141 a bit more extensively for the last few weeks and here are the new things I have to add:\n\nNo ugly crashes that plagued v1.x. Yay!\nFor some reason, \"Show Changes\" (diff) windows are limited to only two. Meh.\nDiff windows do not allow editing/reverting yet. Boo!\nUpdates, commits and browsing are MUCH faster than Tortoise. Yay!\n\nAll in all, I would not use it standalone, but once you start using it, it becomes an almost indispensable companion to Tortoise.\n", "I always had stability issues with AnkhSVN. I couldn't switch everyone to Subversion where I work without an integrated solution.\nThank goodness for VisualSVN + TortoiseSVN.\nVisualSVN isn't free, but it is cheap, and works a treat. \n", "I tried version 1, and it was unreliable to say the least. I can't say anything about 2.0.\nIf you can afford it, the one I use, VisualSVN, is very good and uses TortoiseSVN for all its gui, except for the specialized things related to its VS integration.\n", "@pilif: AnkhSVN maintains an in-memory state of the working copy, which is invalidated/updated by Visual Studio events (ie you edit/change a file) and AnkhSVN events (ie you commit/update/revert/etc)\nWhenever the working copy is changed from outside Visual Studio (by editing with another tool, or by using another Subversion client), you will have to refresh AnkhSvn using the Refresh command we provide.\nThe other thing that happens when you delete a file in a project with TortoiseSvn for example, is that it remains listed in the project file, and you will have to remove it there seperately (and then commit the project file as well).\n", "Copy/Pasting parts of my own Blogpost, as I switched from Ankh to VisualSVN:\n\nWhy did I switch? Because i was a bit unhappy with the overall stability of Ankh, since it has some problems actually tracking Solution changes. VisualSVN is “just” a TortoiseSVN Frontend, which means it leaves all the “heavy lifting” to a third-party tool that a) is installed on most Workstations anyway and b) that’s been tested and used by such a wide audience, it’s really rock-solid.\nNow, AnkhSVN is certainly not a bad product, and the people behind it are serious about what they are doing, but having long-deleted files still in my SVN or getting the “Please Cleanup your solution” message get’s annoying after some time, but my biggest gripe is the property window. It’s nice that there is a nice window with Radio Buttons asking me which property I want to add. Unfortunately, there is no way to manually enter a property.\n\nEdit: That was for AnkhSVN 1.x. In the meantime, it was updated to 2.x and much improved. I use it in production on a system where I don't have VisualSVN and it works extremely well now.\n", "I had no problems with v1, but I was warned not to use it. I've been using v2 for a while, and I've had no problems with it. 
I still keep a backup of the repository though...\n", "I started with AnkhSvn and then moved on to VisualSvn. I have my own gripes with VisualSvn but its far less trouble compared to Ankh. I'm yet to try the new version of Ankh which they say is a complete rewrite and had inputs from Microsoft dev team as well.\n", "I've been using both the newest version of Ankh SVN and Tortoise on a project at home. I find them to both be very good with a caveat.\nI've found that both SVN tools have at times failed to keep up with my file/folder renaming and moving resulting in it thinking that a perfectly good file needs to be deleted on the next commit. This is probably down to me misusing SVN in some way but TFS at work does not have this problem.\n", "I tried AnkhSVN (1.0.3, just 4 months ago), and it did not work the way I wanted it to (i.e. needed to select things in the browser window instead of based on active file). I ended up making some macros that utilize TortoiseSVN that work much more like what I expected.\nI've been very happy with using TortoiseSVN via explorer and my macros inside the IDE.\n", "@mcintyre321\n\nI've found that both SVN tools have at times failed to keep up with my file/folder renaming and moving resulting in it thinking that a perfectly good file needs to be deleted on the next commit.\n\nA move or rename operation results in an delete and 'add with history' at subversion level.\nTortoiseSvn shows this as:\noriginalFile deleted\nnewFile added (+)\n\n", "Earlier on (like 2 years ago when I last tried), AnkhSVN and Tortoise used in parallel with the same working copy caused some kind of working copy corruption where Ankh and Tortoise somehow lost track of the state the other tool left the working copy in.\nIt was as if one of the tools stored additional metadata not contained in the working copy and was reliant on that being correct.\nThe problems showed themselves by Ankh (or Tortoise) insisting on files being there which weren't, on files being changed which weren't and on files not being changed which were (and thus unable to commit).\nMaybe this has been fixed since, but I thought I'd better warn you guys.\n", "About a year ago me and a buddy used AnkhSVN for a project... several commits later while moving namespaces around, it broke the SVN repository. Broke as in, the last commit we did got corrupted, and we couldn't commit anymore.\nAfter that we used TortoiseSVN and did the namespace moving manually, it just... worked. If you're only working on base class libraries you could always try using SharpDevelop instead (that integrates with TortoiseSVN).\nI do hope they did fix AnkhSVN now though because IDE integrations always rock... when they work.\n" ]
[ 22, 13, 6, 6, 4, 3, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "ankhsvn", "version_control" ]
stackoverflow_0000018010_ankhsvn_version_control.txt
Q: DateTime Utility for ASP.net I was wondering if anyone could suggest a utility library that has useful functions for handling dates in ASP.NET easily taking away some of the leg work you normally have to do when handling dates? Subsonic Sugar has some really nice functions: http://subsonichelp.com/html/1413bafa-b5aa-99aa-0478-10875abe82ec.htm http://subsonicproject.googlecode.com/svn/trunk/SubSonic/Sugar/ Is there anything better out there? I was wanting to work out the start(mon) and end(sun) dates of the last 5 weeks. I was thinking something like this: DateTime Now = DateTime.Now; while(Now.DayOfWeek != DayOfWeek.Monday) { Now.AddDays(-1); } for(int i=0; i<5;i++) { AddToDatesList(Now, Now.AddDays(7)); Now.AddDays(-7); } but this seems crappy? Plus this is not exactly what i want because i need the time of that start date to be 00:00:00 and the time of the end date to be 23:59:59 A: Is there a specific problem you are trying to handle with dates? If the existing date API in .NET can handle your problem cleanly, I see no reason to consider a 3rd party library to do it. When I was in .NET, we had to deal with dates quite a bit, and the standard libraries provided a fair amount of functionality to us. A: What exactly do you want to do that System.DateTime and System.Timespan can't handle? A: CSLA has a useful helper class called SmartDate that addresses quite a lot of the problems when using dates in real applications. As far as I can recall it's coupled to the rest of the framework.
DateTime Utility for ASP.net
I was wondering if anyone could suggest a utility library that has useful functions for handling dates in ASP.NET easily taking away some of the leg work you normally have to do when handling dates? Subsonic Sugar has some really nice functions: http://subsonichelp.com/html/1413bafa-b5aa-99aa-0478-10875abe82ec.htm http://subsonicproject.googlecode.com/svn/trunk/SubSonic/Sugar/ Is there anything better out there? I was wanting to work out the start(mon) and end(sun) dates of the last 5 weeks. I was thinking something like this: DateTime Now = DateTime.Now; while(Now.DayOfWeek != DayOfWeek.Monday) { Now.AddDays(-1); } for(int i=0; i<5;i++) { AddToDatesList(Now, Now.AddDays(7)); Now.AddDays(-7); } but this seems crappy? Plus this is not exactly what i want because i need the time of that start date to be 00:00:00 and the time of the end date to be 23:59:59
[ "Is there a specific problem you are trying to handle with dates? If the existing date API in .NET can handle your problem cleanly, I see no reason to consider a 3rd party library to do it. When I was in .NET, we had to deal with dates quite a bit, and the standard libraries provided a fair amount of functionality to us.\n", "What exactly do you want to do that System.DateTime and System.Timespan can't handle?\n", "CSLA has a useful helper class called SmartDate that addresses quite a lot of the problems when using dates in real applications. As far as I can recall it's coupled to the rest of the framework.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "asp.net", "date" ]
stackoverflow_0000061638_asp.net_date.txt
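Using nothing beyond System.DateTime, here is a sketch of the five Monday-to-Sunday ranges the question asks for. Note the bug in the question's loop: DateTime is immutable, so Now.AddDays(-1) returns a new value that must be assigned back.

DateTime monday = DateTime.Today;                // today at 00:00:00
while (monday.DayOfWeek != DayOfWeek.Monday)
    monday = monday.AddDays(-1);                 // assign the result back!

for (int i = 0; i < 5; i++)
{
    DateTime start = monday.AddDays(-7 * i);             // Monday 00:00:00
    DateTime end   = start.AddDays(7).AddSeconds(-1);    // Sunday 23:59:59
    Console.WriteLine("{0} -> {1}", start, end);
}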
Q: Comparing runtimes I am trying to get some accurate runtime comparisons of PHP vs Python (and potentially any other language that I have to include). Timing within a script is not my problem but timing within a script does not account for everything from the moment the request is made to run the script to output. 1) Is it actually worth taking such things into account? 2) Assuming it is worth taking it into account, how do I do this? I'm using a Mac so I've got access to Linux commands and I'm not afraid to compile/create a command to help me, I just don't know how to write such a command. A: If your idea is to compare the languages, I'd say anything outside them is not relevant for comparison purposes. Nonetheless you can use the time command to measure everything and can compare it with the timing within a script. Like this: $ time script.php HI! real 0m3.218s user 0m0.080s sys 0m0.064s It will give you clock time, user time (php interpreter) and sys time (OS time) If you are thinking web, then it gets a lot harder because you would be mixing webserver overhead and that is not always easy to compare if, say, you are using WSGI v/s mod_php. Then you'd have to hook probes into the webserving parts of the chain as well A: It's worth taking speed into account if you're optimizing code. You should generally know why you're optimizing code (as in: a specific task in your existing codebase is taking too long, not "I heard PHP is slower than Python"). It's not worth taking speed into account if you don't actually plan on switching languages. Just because one tiny module does something slightly faster doesn't mean rewriting your app in another language is a good idea. There are many other factors to choosing a language besides speed. You benchmark, of course. Run the two codebases multiple times and compare the timing. You can use the time command if both scripts are executable from the shell, or use respective benchmarking functionality from each language; the latter case depends heavily on the actual language, naturally. A: Well, you can use the "time" command to help: you@yourmachine:~$ time echo "hello world" hello world real 0m0.000s user 0m0.000s sys 0m0.000s you@yourmachine:~$ And this will get around timing outside of the environment. As for whether you need to actually time that extra work... that entirely depends on what you are doing. I assume this is for some kind of web application of some sort, so it depends on how the framework you use actually works... does it cache some kind of compiled (or parsed) version of the script? If so, then startup time will be totally irrelevant (since the first hit will be the only one that startup time exists in). Also, make sure to run your tests in a loop so you can discount the first run (and include the cost on the first run in your report if you want). I have done some tests in Java, and the first run is always slowest due to the JIT doing its job (and the same sort of hit may exist in PHP, Python and any other languages you try).
Comparing runtimes
I am trying to get some accurate runtime comparisons of PHP vs Python (and potentially any other language that I have to include). Timing within a script is not my problem but timing within a script does not account for everything from the moment the request is made to run the script to output. 1) Is it actually worth taking such things into account? 2) Assuming it is worth taking it into account, how do I do this? I'm using a Mac so I've got access to Linux commands and I'm not afraid to compile/create a command to help me, I just don't know how to write such a command.
[ "If your idea is to compare the languages, I'd say anything outside them is not relevant for comparison purposes. \nNonetheless you can use the time command to measure everything and can compare it with the timing within a script.\nLike this:\n$ time script.php\nHI!\n\nreal 0m3.218s\nuser 0m0.080s\nsys 0m0.064s\n\nIt will give you clock time, user time (php interpreter) and sys time (OS time)\nIf you are thinking web, then it gets a lot harder because you would be mixing webserver overhead and that is not always easy to compare if, say, you are using WSGI v/s mod_php. Then you'd have to hook probes into the webserving parts of the chain as well\n", "\nIt's worth taking speed into account if you're optimizing code. You should generally know why you're optimizing code (as in: a specific task in your existing codebase is taking too long, not \"I heard PHP is slower than Python\"). It's not worth taking speed into account if you don't actually plan on switching languages. Just because one tiny module does something slightly faster doesn't mean rewriting your app in another language is a good idea. There are many other factors to choosing a language besides speed.\nYou benchmark, of course. Run the two codebases multiple times and compare the timing. You can use the time command if both scripts are executable from the shell, or use respective benchmarking functionality from each language; the latter case depends heavily on the actual language, naturally.\n\n", "Well, you can use the \"time\" command to help:\nyou@yourmachine:~$ time echo \"hello world\"\nhello world\n\nreal 0m0.000s\nuser 0m0.000s\nsys 0m0.000s\nyou@yourmachine:~$ \n\nAnd this will get around timing outside of the environment.\nAs for whether you need to actually time that extra work... that entirely depends on what you are doing. I assume this is for some kind of web application of some sort, so it depends on how the framework you use actually works... does it cache some kind of compiled (or parsed) version of the script? If so, then startup time will be totally irrelevant (since the first hit will be the only one that startup time exists in).\nAlso, make sure to run your tests in a loop so you can discount the first run (and include the cost on the first run in your report if you want). I have done some tests in Java, and the first run is always slowest due to the JIT doing its job (and the same sort of hit may exist in PHP, Python and any other languages you try).\n" ]
[ 4, 1, 1 ]
[]
[]
[ "benchmarking", "php", "python" ]
stackoverflow_0000062079_benchmarking_php_python.txt
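For the in-script half of such a comparison on the Python side, a minimal sketch using the standard timeit module (the snippet being timed is only a placeholder):

import timeit

# Time a placeholder snippet; taking the best of several repeats filters
# out scheduler noise and the first-run warm-up cost mentioned above.
snippet = "sorted(range(1000), reverse=True)"
best = min(timeit.repeat(snippet, repeat=5, number=1000))
print("best of 5 repeats: %.4f s per 1000 runs" % best)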
Q: Where to start with Entity Framework Anyone know a good book or post about how to start in EF? I have seen the one on DnrTV; any other place? A: Mike Taulty's Blog: http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/category/1024.aspx A great EF intro deck: http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2008/03/13/10235.aspx And these ADO.NET Data Services screencasts are nice too: http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2008/01/25/10152.aspx ADO.NET Entity Framework MSDN: http://msdn.microsoft.com/en-us/library/bb399572.aspx ADO.NET Entity Framework forums: http://forums.microsoft.com/msdn/ShowForum.aspx?ForumID=533&SiteID=1 ADO.NET team blog: http://blogs.msdn.com/adonet/archive/tags/Entity+Framework/default.aspx Programming LINQ and the ADO.NET Entity Framework Webcast: http://blogs.msdn.com/adonet/archive/2008/01/28/programming-linq-and-the-ado-net-entity-framework-webcast.aspx A: Jason's DotNet Architecture Blog has a tutorial that gets you started with the basics, using the MS SQL Server AdventureWorks sample database. A: Alex James, Program Manager on the ADO.NET team at Microsoft, also has the odd good post on EF, especially around metadata. A: This is a very decent article on EF http://www.codeguru.com/csharp/.net/net_general/netframeworkclasses/article.php/c15489/
Where to start with Entity Framework
Anyone know a good book or post about how to start in EF? I have seen the one on DnrTV; any other place?
[ "Mike Taulty's Blog: http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/category/1024.aspx\nA great EF intro deck: http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2008/03/13/10235.aspx\nAnd these ADO.NET Data Services screencasts are nice too: http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2008/01/25/10152.aspx\nADO.NET Entity Framework MSDN: http://msdn.microsoft.com/en-us/library/bb399572.aspx\nADO.NET Entity Framework forums: http://forums.microsoft.com/msdn/ShowForum.aspx?ForumID=533&SiteID=1\nADO.NET team blog: http://blogs.msdn.com/adonet/archive/tags/Entity+Framework/default.aspx\nProgramming LINQ and the ADO.NET Entity Framework Webcast: http://blogs.msdn.com/adonet/archive/2008/01/28/programming-linq-and-the-ado-net-entity-framework-webcast.aspx\n", "Jason's DotNet Architecture Blog has a tutorial that gets you started with the basics, using the MS SQL Server AdventureWorks sample database.\n", "Alex James, Program Manager on the ADO.Net team at microsoft also has the odd good post on EF especially around Metadata.\n", "This is a very decent article on EF http://www.codeguru.com/csharp/.net/net_general/netframeworkclasses/article.php/c15489/\n" ]
[ 14, 1, 0, 0 ]
[]
[]
[ ".net", ".net_3.5", "entity_framework" ]
stackoverflow_0000042826_.net_.net_3.5_entity_framework.txt
Q: ADO.NET Entity Framework tutorials Does anyone know of any good tutorials on ADO.NET Entity Framework? There are a few useful links here at Stack Overflow, and I've found one tutorial at Jason's DotNet Architecture Blog, but can anyone recommend any other good tutorials? Any tutorials available from Microsoft, either online or as part of any conference/course material? A: Microsoft offers the .NET 3.5 Enhancements Training Kit; it contains documentation and sample code for the ADO.NET EF. A: Here are some that Julie Lerman wrote: http://www.thedatafarm.com/blog/2008/04/04/EightEntityFrameworkTutorialsOnDataDeveloperNET.aspx And here's of course some info from Microsoft: http://msdn.microsoft.com/en-us/library/bb386876.aspx A: Sample application from MSDN And some inside information from ADO.NET Team Blog A: Try these links; you may get some good ideas: http://msdn.microsoft.com/en-us/library/aa697427(VS.80).aspx http://en.wikipedia.org/wiki/ADO.NET_Entity_Framework This one is nice; try this: http://davidhayden.com/blog/dave/archive/2007/03/19/ADONETEntityFrameworkObjectServicesTutorial.aspx http://www.codeguru.com/csharp/.net/net_general/netframeworkclasses/article.php/c15489/
ADO.NET Entity Framework tutorials
Does anyone know of any good tutorials on ADO.NET Entity Framework? There are a few useful links here at Stack Overflow, and I've found one tutorial at Jason's DotNet Architecture Blog, but can anyone recommend any other good tutorials? Any tutorials available from Microsoft, either online or as part of any conference/course material?
[ "Microsoft offers .NET 3.5 Enhancements Training Kit it contains documentation and sample code for ADO.NET EF\n", "Here are some that Julie Lerman wrote:\nhttp://www.thedatafarm.com/blog/2008/04/04/EightEntityFrameworkTutorialsOnDataDeveloperNET.aspx\nAnd here's of course some info from Microsoft:\nhttp://msdn.microsoft.com/en-us/library/bb386876.aspx\n", "Sample application from MSDN\nAnd some inside information from ADO.NET Team Blog\n", "Try this link you may get some best ideas...\nhttp://msdn.microsoft.com/en-us/library/aa697427(VS.80).aspx\nhttp://en.wikipedia.org/wiki/ADO.NET_Entity_Framework\nThis one is nice try this....\nhttp://davidhayden.com/blog/dave/archive/2007/03/19/ADONETEntityFrameworkObjectServicesTutorial.aspx\nhttp://www.codeguru.com/csharp/.net/net_general/netframeworkclasses/article.php/c15489/..\n" ]
[ 13, 5, 3, 3 ]
[]
[]
[ "ado.net", "entity_framework" ]
stackoverflow_0000062110_ado.net_entity_framework.txt
Q: How can I find the current DNS server? I'm using Delphi and need to get the current Windows DNS server IP address so I can do a lookup. What function should I call to find it? The only solution I have right now does an ipconfig/all to get it, which is horrible. A: Found a nice one using the function GetNetworkParams(). It seems to work quite well. You can find it here: http://www.swissdelphicenter.ch/torry/showcode.php?id=2452 A: Do you really need to know what the DNS server is to do a lookup? Here is a solution for how to get an IP address using 2 functions: GetHostName and GetHostByName. I assume the GetHostByName function does the lookup you need for you, or am I wrong? A: See the GetNetworkParams method (Platform SDK: IP Helper)
How can I find the current DNS server?
I'm using Delphi and need to get the current Windows DNS server IP address so I can do a lookup. What function should I call to find it? The only solution I have right now does an ipconfig/all to get it, which is horrible.
[ "Found a nice one using the function GetNetworkParams().Seems to work quite good.\nYou can find it here:\nhttp://www.swissdelphicenter.ch/torry/showcode.php?id=2452\n", "Do you really need to know what is DNS server to do a lookup? \nHere is a solution how to get a IP address using 2 functions: GetHostName and GetHostByName. I assume the GetHostByName function does the lookup you need for you, or am I wrong? \n", "See GetNetowrkParams method (Platform SDK: IP Helper)\n" ]
[ 4, 1, 0 ]
[]
[]
[ "delphi", "dns", "networking", "windows" ]
stackoverflow_0000062127_delphi_dns_networking_windows.txt
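Since the accepted pointer is the Win32 IP Helper API, a small C sketch of the same GetNetworkParams() call shows the pattern the Delphi translation follows; error handling is trimmed to the essentials, and you link against iphlpapi.lib:

#include <winsock2.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    ULONG len = 0;
    FIXED_INFO *info;
    IP_ADDR_STRING *dns;

    /* First call with a NULL buffer only reports the size required. */
    GetNetworkParams(NULL, &len);
    info = (FIXED_INFO *)malloc(len);

    if (info != NULL && GetNetworkParams(info, &len) == ERROR_SUCCESS) {
        /* DnsServerList is a singly linked list of address strings. */
        for (dns = &info->DnsServerList; dns != NULL; dns = dns->Next)
            printf("DNS server: %s\n", dns->IpAddress.String);
    }

    free(info);
    return 0;
}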
Q: Would you bother to mock StreamReader object? I use a stream reader to import some data and at the moment I hardcode a small sample file in the test to do the job. Is it sensible to use Mock Objects with this and how? A: I don't see any point in mocking StreamReader unless you're making a StreamReader-derived class. If you need to provide test input via StreamReader, just read some predefined data from any suitable source. A: StreamReader is a concrete class, so many mocking systems won't allow you to mock it. TypeMock Isolator will, however. You may find you want to mock it if you need to force errors to come from the reader, rather than just having it supply data to your class under test. If you don't need this functionality, you may be just as far ahead constructing a StreamReader from some other Stream, such as a MemoryStream - this way you don't need to go to disk for your data. A: When testing code that depends on streams, streamreaders and streamwriters I usually use the memorystream object for testing. No mocking framework needed here. A: You can use a factory method to return a TextReader that could either be the mock object or an actual StreamReader.
Would you bother to mock StreamReader object?
I use a stream reader to import some data and at the moment I hardcode a small sample file in the test to do the job. Is it sensible to use Mock Objects with this and how?
[ "I don't see any points to mock StreamReader unless you're making StreamReader derived class. If you need to provide test input via StreamReader, just read some predefined data from any suitable source.\n", "StreamReader is a concrete class, so many mocking systems won't allow you to mock it. \nTypeMock Isolator will, however.\nYou may find you want to mock it if you need to force errors to come from the reader, rather than just having it supply data to your class under test. If you don't need this functionality, you may be just as far ahead constructing a StreamReader from some other Stream, such as a MemoryStream - this way you don't need to go to disk for your data.\n", "When testing code that depends on streams, streamreaders and streamwriters I usually use the memorystream object for testing. No mocking framework needed here.\n", "You can use a factory method to return a TextReader that could either be the mock object or an actual StreamReader.\n" ]
[ 3, 2, 1, 1 ]
[]
[]
[ ".net", "mocking" ]
stackoverflow_0000062159_.net_mocking.txt
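To make the MemoryStream suggestion concrete, a minimal C# sketch; in a real test you would hand the reader to the class under test instead of printing:

using System;
using System.IO;
using System.Text;

class StreamReaderTestSketch
{
    static void Main()
    {
        string sample = "row 1\r\nrow 2\r\nrow 3";

        // A StreamReader over an in-memory buffer: no disk access,
        // no mocking framework needed.
        using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(sample)))
        using (StreamReader reader = new StreamReader(stream))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
                Console.WriteLine(line); // pass 'reader' to the class under test here
        }
    }
}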
Q: What do you think will be the level of usage of Silverlight 1 year from now? There is a lot of buzz about Microsoft Silverlight, especially after the Olympics. Also H264 will be supported in a future version. Where do you think Silverlight will be 1 year from now? A: They were saying they were getting 1.5 million downloads per day back in March 2008, and that was before the Olympics and the Democratic National Convention. So, unless my math is off, that's more than 4 people. I'd expect to see it show up as a recommended Windows update, and possibly included with IE8 or something in the future. A: A year from now, the number of people with the runtime installed will still be a fairly small minority! I suspect that choosing Silverlight will still be a barrier to people using your stuff for a long while to come. A: Most .NET developers I work with have been shying away from Silverlight. Right now it seems more like a novelty than a development platform. A: In a year it will still be a minority of content, but the installed base will be large enough that mainstream projects will be considering it as a viable alternative to Flash. Until they survey the pool of available, talented designers familiar with it. A: At best, in the same place as Flash. Now, how many of you do Flash enterprise applications? Does Google do flash applications? or SalesForce.com? Oracle? or any other major on demand application provider? In my opinion, even if it kills off Flash, it will still be largely irrelevant for the types of applications we write every day. A: 100% more. (so about 4 people) A: Considering NBC has already dropped Silverlight and are using Flash again for NFL telecasts, I don't see a healthy future for Microsoft's platform. Do they even have any other partners using it? I know WWE was one of their partners but they barely use it on their own website. EDIT - not sure if it's true or not but this guy says that the decision to go with Flash was the NFL's and not NBC's. Either way still doesn't look good for the MS platform. A: I think that as long as the Moonlight project is successful that we'll see Silverlight become significant competition for Flash. Silverlight is still in its infancy - 1.0 had next to nothing in it. Version 2 is in beta now, and that adds lots of common user controls that developers need to write applications. A: They really got a huge bump with the Olympics as far as getting it installed on machines. It will be interesting to see how much developer buy in they can gather. It's a tough sell for front end web people because it's a complete toolset change. I know the mid-tier/WPF people like it because it's closer to their normal .NET toolset, but they're not usually the ones doing web design. IMHO, things like HTML5 and Gears are where many people are going to go. A: I think that it will grow, but MSFT will need to do more deals like they did with the Olympics. Hooking up with CBS/NCAA on the March Madness broadcasts would be worth whatever millions they could throw at it. A: Silverlight 1 Vs Silverlight 2: Silverlight 2 is expected to be out in the next few months (they used to say in August 2008 until ... August ended. In September they say October.), so MS will probably be promoting Silverlight 2.1 (or whatever upgrade to Silverlight 2) in a year's time, and Silverlight 1.0 will likely have no developer share at all, and no momentum. Silverlight Vs Javascript-based platforms: Google Chrome (and the upcoming Firefox 2.1) promise an order of magnitude better performance in JavaScript. We haven't seen the best from them yet. MS will have to improve IE's JavaScript speeds, though who knows when they'll be able to ship that (in IE 9 maybe?). I think that it will be a few more years yet before the clear winners emerge from the fray. A: The installation barrier will be a problem until it ships with Windows by default. But even then developers will only support the established Flash. Considering certain mobile platforms have neither Flash nor Silverlight, it's best to back the one more likely to be ported to all platforms, and that's the dominant Flash. In the end Javascript + SVG will almost certainly win out over these vendor produced solutions. But within a year I'd be surprised if any significant amount of development is done with Silverlight. Flash has too much momentum and MS is too late to the game with nothing sufficiently compelling.
What do you think will be the level of usage of Silverlight 1 year from now?
There is a lot of buzz about Microsoft Silverlight, especially after the Olympics. Also H264 will be supported in a future version. Where do you think Silverlight will be 1 year from now?
[ "They were saying they were getting 1.5 million downloads per day back in March 2008, and that was before the Olympics and the Democratic National Convention. So, unless my math is off, that's more than 4 people.\nI'd expect to see it show up as a recommended Windows update, and possible included with IE8 or something in the future.\n", "A year from now, the number of people with the runtime installed will still be a fairly small minority! \nI suspect that choosing Silverlight will still be a barrier to people using your stuff for a long while to come.\n", "Most .NET developers I work with have been shying away from Silverlight.\nRight now it seems more like a novelty than a development platform.\n", "In a year it will still be a minority of content, but the installed base will be large enough that mainstream projects will be considering it as a viable alternative to Flash. Until they survey the pool of available, talented designers familiar with it.\n", "At best, in the same place at Flash. Now, how many of you do Flash enterprise applications? Does Google do flash applications? or SalesForce.com? Oracle? or any other major on demand application provider?\nIn my opinion, even if it kills off Flash, it will still be largely irrelevant for the types of applications we write everyday.\n", "100% more.\n(so about 4 people)\n", "Considering NBC has already dropped Silverlight and are using Flash again for NFL telecasts, I don't see a healthy future for Microsoft's platform.\nDo they even have any other partners using it? I know WWE was one of their partners but they barely use it on their own website. \nEDIT - not sure if it's true or not but this guy says that the decision to go with Flash was the NFL's and not NBC's. Either way still doesn't look good for the MS platform.\n", "I think that as long as the Moonlight project is successful that we'll see Silverlight become significant competition for Flash. \nSilverlight is still in its infancy - 1.0 had next to nothing in it. Version 2 is in beta now, and that adds lots of common user controls that developers need to write applications.\n", "They really got a huge bump with the Olympics as far as getting it installed on machines. It will be interesting to see how much developer buy in they can gather. It's a tough sell for front end web people because it's a complete toolset change. I know the midteir/WPF people like it because it's closer to their normal .NET toolset, but they're not usually the ones doing web design.\nIMHO, things like HTML5 and Gears are where many people are going to go.\n", "I think that it will grow, but MSFT will need to do more deals like they did with the Olympics. Hooking up with CBS/NCAA on the March Madness broadcasts would be worth whatever millions they could throw at it.\n", "Silverlight 1 Vs Silverlight 2:\nSilverlight 2 is expected to be out in the next few months (they used to say in August 2008 until ... August ended. In September they say October.), so MS will probably be promoting Silverlight 2.1 (or whatever upgrade to Silerlight 2) in a year's time, and Silverlight 1.0 will likely have no developer share at all, and no momentum.\nSilverlight Vs Javascript-based platforms:\nGoogle Chrome (and the upcoming Firefox 2.1) promise an order of magnitude better performance in JavaScript. We haven't seen the best from them yet. MS will have to improve IE's JavaScript speeds, though who knows when they'll be able to ship that (in IE 9 maybe?). 
\nI think that it will be a few more years yet before the clear winners emerge from the fray.\n", "The installation barrier will be a problem until it ships with Windows by default. But even then developers will only support the established Flash. Considering certain mobile platforms have neither Flash nor Silverlight, it's best to back the one more likely to be ported to all platforms, and that's the dominant Flash.\nIn the end Javascript + SVG will almost certainly win out over these vendor produced solutions. But within a year I'd be surprised if any significant amount of development is done with Silverlight. Flash has too much momentum and MS is too late to the game with nothing sufficiently compelling.\n" ]
[ 5, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "silverlight" ]
stackoverflow_0000054169_silverlight.txt
Q: Response.Clear in ASP.NET 3.5 I have recently upgraded some of my web applications to ASP.NET 3.5 by installing the framework on the server and setting up my web applications accordingly, and all is well. However, on some pages, I want to clear the current contents of the response buffer with code like this: Response.Clear(); // Output some stuff Response.End(); But this now isn't working in 3.5 when it did in 2.0. I have also tried setting the response buffer to false but this didn't work either. Can anyone let me know why it isn't working or if there is a workaround? A: Try setting Buffer="True" in the Page Directive of the page and not in codebehind. I just tried this in VS2008 on a Web Site project: Create new item Choose "Web page" Leave all the html-tags in there, just for fun Fill the page_load like this protected void Page_Load(object sender, EventArgs e) { Response.Write("test1"); Response.Clear(); Response.Write("test2"); Response.End(); } It will then output "test2" without any html-tags.
Response.Clear in ASP.NET 3.5
I have recently upgraded some of my web applications to ASP.NET 3.5 by installing the framework on the server and setting up my web applications accordingly, and all is well. However, on some pages, I want to clear the current contents of the response buffer with code like this: Response.Clear(); // Output some stuff Response.End(); But this now isn't working in 3.5 when it did in 2.0. I have also tried setting the response buffer to false but this didn't work either. Can anyone let me know why it isn't working or if there is a workaround?
[ "Try setting Buffer=\"True\" in the Page Directive of the page and not in codebehind.\nI just tried this in VS2008 on a Web Site project:\n\nCreate new item\nChoose \"Web page\"\nLeave all the html-tags in there, just for fun\nFill the page_load like this\nprotected void Page_Load(object sender, EventArgs e) \n{ \n Response.Write(\"test1\"); \n Response.Clear(); \n Response.Write(\"test2\"); \n Response.End(); \n}\n\n\nIt will then output \"test2\" without any html-tags.\n" ]
[ 12 ]
[]
[]
[ "asp.net" ]
stackoverflow_0000062154_asp.net.txt
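For reference, the page-directive form of that fix looks like this; the file and class names are placeholders:

<%@ Page Language="C#" Buffer="true" AutoEventWireup="true" CodeFile="Export.aspx.cs" Inherits="Export" %>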
Q: GWT context.xml in shell mode I'm trying to get the GWTShell mode to load my context.xml file in which my database is described. The only usable info can be found here, but this doesn't seem to work for the context.xml part. A: I'm using Eclipse with Cypal Studio (previously called Googlipse). If there is a better plugin for Eclipse, please recommend it. As the Shell mode uses a Tomcat instance, which is the same target server we are using in the final deployment, it should be possible to achieve (or fake) a similar behaviour. A: As of version 1.4, I have been running all my server side code, in my container of choice (Glassfish) and hooking up the GWTShell to that. Are you using Netbeans, Eclipse or something else? The Netbeans plugin gwt4nb does this for you out of the box, you just have to start your web project in debug mode. I'm sure the GWT plugin for Eclipse does the same thing. I realise this doesn't directly answer your question -> but my question is, is there a reason you're trying to get GWT to pick up your database settings and not just running your project as normal instead. I find this a much better and more robust way of running the GWTShell. Edit: Sorry I don't really use Eclipse, so I can't help you with plugins for it. I find Netbeans far superior for J2EE/web type projects. It's a bit slower, but far more functional. The plugin for that is called 'GWT4NB', it's free and it will set up your ant script in such a way that you just have to right-click on your web project and choose debug. I can understand if you don't want to switch IDEs though.
GWT context.xml in shell mode
I'm trying to get the GWTShell mode to load my context.xml file in which my database is described. The only usable info can be found here, but this doesn't seem to work for the context.xml part.
[ "I'm using Eclipse with Cypal Studio (previously called Googlipse).\nIf there is any other better plugin for Eclipse please recommend it.\nAs the Shell mode uses a Tomcat instance, which is the same target server we are using in the final deployment, it should be possible to achieve (or fake) a similar behaviour. \n", "As of version 1.4, I have been running all my server side code, in my container of choice (Glassfish) and hooking up the GWTShell to that. Are you using Netbeans, Eclipse or something else? The Netbeans plugin gwt4nb does this for you out of the box, you just have to start your web project in debug mode. I'm sure the GWT plugin for Eclipse does the same thing.\nI realise this doesn't directly answer your question -> but my question is, is there a reason you're trying to get GWT to pick up your database settings and not just running your project as normal instead. I find this much better and robust way of running the GWTShell.\nEdit: Sorry I don't really use Eclipse, so I can't help you with plugins for it. I find Netbeans far superior for J2EE/web type projects. It's a bit slower, but far more functional. The plugin for that is called 'GWT4NB', it's free and it will set up your ant script in such a way that you just have to right-click on your web project and choose debug. I can understand if you don't want to switch IDEs though.\n" ]
[ 1, 0 ]
[]
[]
[ "gwt", "java" ]
stackoverflow_0000059806_gwt_java.txt
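For anyone unsure what the context.xml in question contains, a typical Tomcat JNDI DataSource entry looks roughly like this; the driver, URL and credentials are placeholders:

<Context>
  <!-- DataSource the web app looks up as java:comp/env/jdbc/MyDB -->
  <Resource name="jdbc/MyDB"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/mydb"
            username="dbuser"
            password="dbpass"
            maxActive="10"/>
</Context>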
Q: Google Maps in Flex Component I'm embedding the Google Maps Flash API in Flex and it runs fine locally with the watermark on it, etc. When I upload it to the server (flex.mydomain.com) I get a sandbox security error listed below: SecurityError: Error #2121: Security sandbox violation: Loader.content: http://mydomain.com/main.swf?Fri, 12 Sep 2008 21:46:03 UTC cannot access http://maps.googleapis.com/maps/lib/map_1_6.swf. This may be worked around by calling Security.allowDomain. at flash.display::Loader/get content() at com.google.maps::ClientBootstrap/createFactory() at com.google.maps::ClientBootstrap/executeNextFrameCalls() Does anyone have any experience with embedding the Google Maps Flash API into Flex components and specifically setting security settings to make this work? I did get a new API key that is registered to my domain and am using that when it's published. I've tried doing the following in the main application as well as the component: Security.allowDomain('*') Security.allowDomain('maps.googleapis.com') Security.allowDomain('mydomain.com')
Google Maps in Flex Component
I'm embedding the Google Maps Flash API in Flex and it runs fine locally with the watermark on it, etc. When I upload it to the server (flex.mydomain.com) I get a sandbox security error listed below: SecurityError: Error #2121: Security sandbox violation: Loader.content: http://mydomain.com/main.swf?Fri, 12 Sep 2008 21:46:03 UTC cannot access http://maps.googleapis.com/maps/lib/map_1_6.swf. This may be worked around by calling Security.allowDomain. at flash.display::Loader/get content() at com.google.maps::ClientBootstrap/createFactory() at com.google.maps::ClientBootstrap/executeNextFrameCalls() Does anyone have any experience with embedding the Google Maps Flash API into Flex components and specifically setting security settings to make this work? I did get a new API key that is registered to my domain and am using that when it's published. I've tried doing the following in the main application as well as the component: Security.allowDomain('*') Security.allowDomain('maps.googleapis.com') Security.allowDomain('mydomain.com')
[ "This sounds like a crossdomain.xml related problem. I did a quick search and there seems to be many people with the same issue. Some proxy requests through XMLHttpRequest etc..\nIssue 406: Add crossdomain.xml for Google Accounts\n", "Thanks for the help. Apparently this has something to do with including the Flex app on an ASP.NET page. When I moved it over to a flat HTML file, it worked fine. I don't have time to fully investigate right now, but that seems to have fixed it.\n" ]
[ 2, 1 ]
[]
[]
[ "actionscript_3", "apache_flex", "google_maps" ]
stackoverflow_0000060046_actionscript_3_apache_flex_google_maps.txt
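If it does turn out to be a crossdomain.xml issue, the policy file is served from the root of the domain being accessed, and a minimal sketch looks like this; the domain value is a placeholder:

<?xml version="1.0"?>
<cross-domain-policy>
  <!-- Allow SWFs served from these hosts to load content from this server -->
  <allow-access-from domain="*.mydomain.com" />
</cross-domain-policy>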
Q: How to specify accepted certificates for Client Authentication in .NET SslStream I am attempting to use the .Net System.Security.SslStream class to process the server side of a SSL/TLS stream with client authentication. To perform the handshake, I am using this code: SslStream sslStream = new SslStream(innerStream, false, RemoteCertificateValidation, LocalCertificateSelectionCallback); sslStream.AuthenticateAsServer(serverCertificate, true, SslProtocols.Default, false); Unfortunately, this results in the SslStream transmitting a CertificateRequest containing the subjectnames of all certificates in my CryptoAPI Trusted Root Store. I would like to be able to override this. It is not an option for me to require the user to install or remove certificates from the Trusted Root Store. It looks like the SslStream uses SSPI/SecureChannel underneath, so if anyone knows how to do the equivalent with that API, that would be helpful, too. Any ideas? A: It does not look like this is currently possible using the .NET libraries. I solved it by using the Mono class library implementation of System.Security.SslStream, which gives better access to overriding the server's behavior during the handshake. A: What the certificate validation is doing is validating all certificates in the chain. In order to truly do that it must contact the root store of each of those certificates. If that's not something you want to happen you can deploy your own root store locally. A: It is not the validation part I want to change. The problem is in the initial handshake, the server transmits the message informing the client that client authentication is required (that is the CertificateRequest message). As part of this message, the server sends the names of CAs that it will accept as issuers of the client certificate. It is that list which per default contains all the Trusted Roots in the store. But if it is possible to override the certificate root store for a single application, that would probably fix the problem. Is that what you mean? And if so, how do I do that?
How to specify accepted certificates for Client Authentication in .NET SslStream
I am attempting to use the .Net System.Security.SslStream class to process the server side of a SSL/TLS stream with client authentication. To perform the handshake, I am using this code: SslStream sslStream = new SslStream(innerStream, false, RemoteCertificateValidation, LocalCertificateSelectionCallback); sslStream.AuthenticateAsServer(serverCertificate, true, SslProtocols.Default, false); Unfortunately, this results in the SslStream transmitting a CertificateRequest containing the subjectnames of all certificates in my CryptoAPI Trusted Root Store. I would like to be able to override this. It is not an option for me to require the user to install or remove certificates from the Trusted Root Store. It looks like the SslStream uses SSPI/SecureChannel underneath, so if anyone knows how to do the equivalent with that API, that would be helpful, too. Any ideas?
[ "It does not look like this is currently possible using the .NET libraries. \nI solved it by using the Mono class library implementation of System.Security.SslStream, which gives better access to overriding the servers behavior during the handshake.\n", "What the certificate validation is doing is validating all certificates in the chain. In order to truely do that it just contact the root store of each of those cerficates.\nIf that's not something you want to happen you can deploy your own root store locally.\n", "It is not the validation part I want to change. The problem is in the initial handshake, the server transmits the message informing the client that client authentication is required (that is the CertificateRequest message). As part of this message, the server sends the names of CAs that it will accept as issuers of the client certificate. It is that list which per default contains all the Trusted Roots in the store.\nBut if is possible to override the certificate root store for a single application, that would probably fix the problem. Is that what you mean? And if so, how do I do that?\n" ]
[ 3, 2, 1 ]
[]
[]
[ ".net", "c#", "ssl", "sspi" ]
stackoverflow_0000053824_.net_c#_ssl_sspi.txt
Q: Classic ASP debugging global.asa in VS2005 I was trying to set a breakpoint in global.asa in an old classic ASP project with IIS 6 in Visual Studio 2005. Somehow the context menu for actually setting the breakpoint somewhere in global.asa is disabled (greyed). How can I set a breakpoint then? Breakpoints in .asp pages are no problem though and do work fine. A: Try this: How to: Debug Global.asa files. The short version is to place a VBScript Stop statement or JScript debugger at the beginning of the procedure, before any statements that you will want to step through.
Classic ASP debugging global.asa in VS2005
I was trying to set a breakpoint in global.asa in an old classic ASP project with IIS 6 in Visual Studio 2005. Somehow the context menu for actually setting the breakpoint somewhere in global.asa is disabled (greyed). How can I set a breakpoint then? Breakpoints in .asp pages are no problem though and do work fine.
[ "Try this: How to: Debug Global.asa files. The short version is to place a VBScript Stop statement or JScript debugger at the beginning of the procedure, before any statements that you will want to step through.\n" ]
[ 2 ]
[]
[]
[ "asp_classic", "debugging", "global.asa", "visual_studio", "visual_studio_2005" ]
stackoverflow_0000062225_asp_classic_debugging_global.asa_visual_studio_visual_studio_2005.txt
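In a VBScript global.asa, the hard-coded breakpoint the answer describes looks like this sketch:

Sub Application_OnStart
    Stop ' execution breaks here once a debugger is attached
    ' ...statements you want to step through...
End Sub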
Q: How can I embed Perl inside a C++ application? I would like to call Perl script files from my C++ program. I am not sure that the people I will distribute to will have Perl installed. Basically I'm looking for a .lib file that I can use that has an Apache like distribution license. A: You can embed perl into your app. Perl Embedding by John Quillan C++ wrapper around Perl C API A: I'm currently writing a library for embedding Perl in C++, but it's not finished yet. In any case I would recommend against using the EP library. Not only has it not been maintained for years, but it also has some severe architectural deficiencies and is rather limited in its scope. If you are interested in alpha software you can contact me about it, otherwise I'd advise you to use the raw API. A: To call perl from C++ you need to use the API, as someone else mentioned; the basic tutorial is available in the perlxstut documentation. Note that you will most probably need more than just a ".lib", because you'll need a lot of tiny modules which are located in the "lib" directory of the perl distrib: strict.pm, etc. That's not a big deal though, I guess; the apache example you mentioned has the same constraint of delivering some default configuration files etc. However, to distribute Perl, on Windows (I guess you're on Windows since you mentioned a .lib file), the ActiveState distribution which everyone uses might cause some licensing headache. It's not really clear to me, but it seems like you cannot redistribute ActivePerl in a commercial product. Note that, if you want to embed Perl in a C++ program, you might have to recompile it anyway, to have the same compilation flags on Perl and on your program.
How can I embed Perl inside a C++ application?
I would like to call Perl script files from my C++ program. I am not sure that the people I will distribute to will have Perl installed. Basically I'm looking for a .lib file that I can use that has an Apache like distribution license.
[ "You can embed perl into your app. \n\nPerl Embedding by John Quillan\nC++ wrapper around Perl C API\n\n", "I'm currently writing a library for embedding Perl in C++, but it's not finished yet. In any case I would recommend against using the EP library. Not only has it not been maintained for years, but is also has some severe architectural deficiencies and is rather limited in its scope. If you are interested in alpha software you can contact me about it, otherwise I'd advice you to use the raw API.\n", "To call perl from C++ you need to use the API, as someone else mentioned; the basic tutorial is available in the perlxstut documentation.\nNote that you will most probably need more than just a \".lib\", because you'll need a lot of tiny modules which are located in the \"lib\" directory of the perl distrib: strict.pm, etc. That's a not a big deal though, I guess; the apache example you mentioned has the same constraint of delivering some default configuration files etc.\nHowever, to distribute Perl, on Windows (I guess you're on Windows since you mentionned a .lib file), the ActiveState distribution which everyone uses might cause some licensing headache. It's not really clear to me, but it seems like you cannot redistribute ActivePerl in a commercial product. Note that, if you want to embed Perl in a C++ program, you might have to recompile it anyway, to have the same compilation flags on Perl and on your program.\n" ]
[ 16, 6, 1 ]
[]
[]
[ "c++", "perl" ]
stackoverflow_0000049168_c++_perl.txt
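The raw-API route follows the standard perlembed skeleton; a minimal C++ sketch, with compile and link flags coming from perl -MExtUtils::Embed -e ccopts -e ldopts:

#include <EXTERN.h>
#include <perl.h>

static PerlInterpreter *my_perl;

int main(int argc, char **argv, char **env)
{
    // Boilerplate from the perlembed documentation: allocate, parse,
    // run and tear down one embedded interpreter.
    PERL_SYS_INIT3(&argc, &argv, &env);
    my_perl = perl_alloc();
    perl_construct(my_perl);
    perl_parse(my_perl, NULL, argc, argv, (char **)NULL);
    perl_run(my_perl);            // runs the script named in argv[1]
    perl_destruct(my_perl);
    perl_free(my_perl);
    PERL_SYS_TERM();
    return 0;
}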
Q: What's a liquid layout? My designer keeps throwing out the term "liquid" layout. What does this mean? Thanks for the clarification, I have always just called this a percentage layout, and thought he was saying that the pieces could be moved around, and that was liquid A: A "liquid" layout is a site layout that expands to fill the entire available area as the browser window is resized. Typically this is done using CSS. Liquid layouts can be quite helpful for certain types of sites, but they also tend to be significantly more effort than fixed width layouts, and their usefulness depends on the site content and how well implemented they are. A: From http://www.maxdesign.com.au/presentation/liquid/ : All containers on the page have their widths defined in percents - meaning that they are completely based on the viewport rather than the initial containing block. A liquid layout will move in and out when you resize your browser window. A: Basically, it's a layout of a web page that doesn't rely on specific width specifications for elements in the page. See the discussion over at Wikipedia. A: It means a layout which adjusts dynamically to the browser (or whatever client) width and height, to make efficient use of all available screen space, as opposed to (mostly) fixed width layouts which are made to fit a common denominator resolution at that particular time (e.g. 800x600 used to be the norm for websites for many years). A: See this: http://www.time-tripper.com/uipatterns/Liquid_Layout A: Liquid Layouts refer to the design concept of a website. A liquid layout will move in and out when you resize your browser window, due to it having percentages and relative widths in the CSS. A: It just means that it will contract/expand to fill the browser's window size (usually the width), up to a certain point if things are done well. Otherwise text can get quite hard to read on big (24"+) monitors. A: One of two: The design will scale to the width of the browser (as in, if the browser was 1024px wide, the design will be as well)... although this does get quite fun when designing for 100px wide browsers (sometimes designers will actually set a min-width though). The design has a fixed width, but is set in a measurement using a relative size... for example "em"... so as the font size is increased, the width of the page increases. A: A liquid layout is a method of CSS layout that defines all widths in percentages, so the areas of the page will grow/shrink when the viewport (browser window) is resized. They're very useful if trying to create a site that will fit both large and small screens. They're a little more difficult to work with than fixed layouts, because you're relinquishing some level of control over how everything fits in the page, and you have to pay very close attention to your content, to make sure it doesn't fall apart aesthetically on resize. I would say liquid layouts are most useful for text heavy sites with a fairly basic column layout. You might also find a happy medium with an 'elastic' layout -- one that has both liquid and fixed areas. A: In a true Liquid layout, your content expands and contracts to fit your user's browser window in a meaningful, calculated and intelligent way. So it's more than just setting your column and container widths to percentages. Done well, this can result in an increase of perceived quality. Done poorly, it's a usability nightmare. Going Liquid is a huge pain in the rump. The pain is worth it though if the topic/client/product(s) you are building the site for have a strong visual quality to them (think summer blockbuster film site), require a certain fit and finish, or if it needs to display large chunks of data. Note: I'll update this a bit later with links to good examples and citations for my claims
What's a liquid layout?
My designer keeps throwing out the term "liquid" layout. What does this mean? Thanks for the clarification, I have always just called this a percentage layout, and thought he was saying that the pieces could be moved around, and that was liquid
[ "A \"liquid\" layout is a site layout that expands to fill the entire available area as the browser window is resized. Typically this is done using CSS. Liquid layouts can be quite helpful for certain types of sites, but they also tend to be significantly more effort than fixed width layouts, and their usefulness depends on the site content and how well implemented they are.\n", "From http://www.maxdesign.com.au/presentation/liquid/ :\n\nAll containers on the page have their\n widths defined in percents - meaning\n that they are completely based on the\n viewport rather than the initial\n containing block. A liquid layout will\n move in and out when you resize your\n browser window.\n\n", "Basically, it's a layout of a web page that doesn't rely on a specific width specifications for elements in the page.\nSee the discussion over at Wikipedia.\n", "It means a layout which adjusts dynamically to the browser (or whatever client) width and height, to make efficient use of all available screen space, as opposed to (mostly) fixed width layouts which are made to fit a common denominator resolution at that particular time (e.g. 800x600 used to be the norm for websites for many years).\n", "See this:\nhttp://www.time-tripper.com/uipatterns/Liquid_Layout\n", "Liquid Layouts refer to the design concept of a website. A liquid layout will move in and out when you resize your browser window, due to is having percentages and relative widths in the CSS.\n", "It just means that it will contract/expand to fill the browser's window size (usually the width), up to a certain point if things are done well. Otherwise text can get quite hard to read on big (24\"+) monitors.\n", "One of two:\n\nThe design will scale to the width of the browser (as in, if the browser was 1024px wide, the design will be as well)... although this does get quite fun when designing for 100px wide browsers (sometime designers will actually set a min-width though).\nThe design has a fixed width, but is set in a measurement using a relative size... for example \"em\"... so as the font size is increased, the width of the page increases.\n\n", "A liquid layout is a method of CSS layout that defines all widths in percentages, so the areas of the page will grow/shrink when the viewport (browser window) is resized.\nThey're very useful if trying to create a site that will fit both large and small screens. They're a little more difficult to work it than fixed layouts, because you're relinquishing some level control over how everything fits in the page, and you have to pay very close attention to your content, to make sure it doesn't fall apart aesthetically on resize.\nI would say liquid layouts are most useful for text heavy sites with a fairly basic column layout. You might also find a happy medium with an 'elastic' layout -- one that has both liquid and fixed areas.\n", "In a true Liquid layout, your content expands and contracts to fit your user's browser window in a meaningful, calculated and intelligent way. So it's more than just setting your column and container widths to percentages. \nDone well, this can result in a increase of perceived quality. Done poorly, it's a usability nightmare.\nGoing Liquid is a huge pain the rump. 
The pain is worth it though if the topic/client/product(s) you are building the site for have a strong visual quality to them (think summer blockbuster film site), require a certain fit and finish, or if it needs to display large chunks of data.\nNote: I'll update this a bit later with links to good examples and citations for my claims\n" ]
[ 13, 4, 1, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "asp.net", "css", "html", "layout" ]
stackoverflow_0000062334_asp.net_css_html_layout.txt
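To make the percentage idea concrete, a bare-bones liquid two-column rule set might look like this; the selector names are made up:

/* Everything is sized relative to the viewport, so both columns
   stretch and shrink as the browser window is resized. */
#wrapper { width: 90%; margin: 0 auto; }
#sidebar { width: 25%; float: left; }
#content { width: 75%; float: left; }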
Q: Retrieving the associated shared service provider's name? How do you programmatically retrieve the name of a shared services provider that's associated with a specific Sharepoint web application? I have a custom solution that needs to: Enumerate all web applications that it's deployed to Figure out the Shared Services provider that each of the web applications is associated with Access a Business Data Catalog installed on the SSP to retrieve some data Enumerate through all site collections in those web applications Perform various tasks within the site collections according to the data I got points 1, 3, 4 and 5 figured out, but 2 is somewhat troublesome. I want to avoid hardcoding the SSP name anywhere and not require the farm administrator to manually edit a configuration file. All information I need is in the Sharepoint configuration database, I just need to know how to access it through the object model. A: Unfortunately there is no supported way I know of that this can be done. The relevant class is SharedResourceProvider in the Microsoft.Office.Server.Administration namespace, in the Microsoft.Office.Server DLL. It's marked internal so pre-reflection: SharedResourceProvider sharedResourceProvider = ServerContext.GetContext(SPContext.Current.Site).SharedResourceProvider; string sspName = sharedResourceProvider.Name; Post-reflection: ServerContext sc = ServerContext.GetContext(SPContext.Current.Site); PropertyInfo srpProp = sc.GetType().GetProperty( "SharedResourceProvider", BindingFlags.NonPublic | BindingFlags.Instance); object srp = srpProp.GetValue(sc, null); PropertyInfo srpNameProp = srp.GetType().GetProperty( "Name", BindingFlags.Public | BindingFlags.Instance); string sspName = (string)srpNameProp.GetValue(srp, null); An alternative would be to write a SQL query over the configuration database which isn't recommended.
Retrieving the associated shared service provider's name?
How do you programmatically retrieve the name of a shared services provider that's associated with a specific Sharepoint web application? I have a custom solution that needs to: Enumerate all web applications that it's deployed to Figure out the Shared Services provider that each of the web applications is associated with Access a Business Data Catalog installed on the SSP to retrieve some data Enumerate through all site collections in those web applications Perform various tasks within the site collections according to the data I got points 1, 3, 4 and 5 figured out, but 2 is somewhat troublesome. I want to avoid hardcoding the SSP name anywhere and not require the farm administrator to manually edit a configuration file. All information I need is in the Sharepoint configuration database, I just need to know how to access it through the object model.
[ "Unfortunately there is no supported way I know of that this can be done. The relevant class is SharedResourceProvider in the Microsoft.Office.Server.Administration namespace, in the Microsoft.Office.Server DLL. It's marked internal so pre-reflection:\nSharedResourceProvider sharedResourceProvider = ServerContext.GetContext(SPContext.Current.Site).SharedResourceProvider;\nstring sspName = sharedResourceProvider.Name;\n\nPost-reflection:\nServerContext sc = ServerContext.GetContext(SPContext.Current.Site);\nPropertyInfo srpProp = sc.GetType().GetProperty(\n \"SharedResourceProvider\", BindingFlags.NonPublic | BindingFlags.Instance);\nobject srp = srpProp.GetValue(sc, null);\nPropertyInfo srpNameProp = srp.GetType().GetProperty(\n \"Name\", BindingFlags.Public | BindingFlags.Instance);\nstring sspName = (string)srpNameProp.GetValue(srp, null);\n\nAn alternative would be to write a SQL query over the configuration database which isn't recommended.\n" ]
[ 1 ]
[]
[]
[ "moss", "sharedservicesprovider", "sharepoint" ]
stackoverflow_0000058809_moss_sharedservicesprovider_sharepoint.txt
Q: Do I really need to use transactions in stored procedures? [MSSQL 2005] I'm writing a pretty straightforward e-commerce app in asp.net, do I need to use transactions in my stored procedures? Read/Write ratio is about 9:1 A: Many people ask - do I need transactions? Why do I need them? When to use them? The answer is simple: use them all the time, unless you have a very good reason not to (for instance, don't use atomic transactions for "long running activities" between businesses). The default should always be yes. You are in doubt? - use transactions. Why are transactions beneficial? They help you deal with crashes, failures, data consistency, error handling, they help you write simpler code etc. And the list of benefits will continue to grow with time. Here is some more info from http://blogs.msdn.com/florinlazar/ A: Remember in SQL Server all single statement CRUD operations are in an implicit transaction by default. You just need to turn on explicit transactions (BEGIN TRAN) if you need to make multiple statements act as an atomic unit. A: The answer is, it depends. You do not always need transaction safety. Sometimes it's overkill. Sometimes it's not. I can see that, for example, when you implement a checkout process you only want to finalize it once you gathered all data, etc.. Think about a payment f'up, you can rollback - that's an example when you need a transaction. Or maybe when it's wise to use them. Do you need a transaction when you create a new user account? Maybe, if it's across 10 tables (for whatever reason), if it's just a single table then probably not. It also depends on what you sold your client on and who they are, and if they requested it, etc.. But if making a decision is up to you, then I'd say, choose wisely. My bottom line is, avoid premature optimization. Build your application, keep in mind that you may want to go back and refactor/optimize later when you need it. Look at a couple opensource projects and see how they implemented different parts of their app, learn from that. You'll see that most of them don't use transactions at all, yet there are huge online stores that use them. A: Of course, it depends. It depends upon the work that the particular stored procedure performs and, perhaps, not so much the "read/write ratio" that you suggest. In general, you should consider enclosing a unit of work within a transaction if it is a query that could be impacted by some other, simultaneously running query. If this sounds nondeterministic, it is. It is often difficult to predict under what circumstances a particular unit of work qualifies as a candidate for this. A good place to start is to review the precise CRUD being performed within the unit of work, in this case within your stored procedure, and decide if it a) could be affected by some other, simultaneous operation and b) if that other work matters to the end result of this work being performed (or, even, vice versa). If the answer is "Yes" to both of these then consider wrapping the unit of work within a transaction. What this is suggesting is that you can't always simply decide to either use or not use transactions, rather you should apply them when it makes sense. Use the properties defined by ACID (Atomicity, Consistency, Isolation, and Durability) to help decide when this might be the case. One other thing to consider is that in some circumstances, particularly if the system must perform many operations in quick succession, e.g., a high-volume transaction processing application, you might need to weigh the relative performance cost of the transaction. Depending upon the size of the unit of work, a commit (or rollback) of a transaction can be resource expensive, perhaps negatively impacting the performance of your system unnecessarily or, at least, with limited benefit. Unfortunately, this is not an easy question to precisely answer: "It depends." A: Use them if: There are some errors that you may want to test for and catch which won't be caught except by you going out and doing the work (looking things up, testing values, etc.), usually from within a transaction so that you can roll back the whole operation. There are multi-step operations of any sort, which should, logically, be rolled back as a group if they fail.
Do I really need to use transactions in stored procedures? [MSSQL 2005]
I'm writing a pretty straightforward e-commerce app in asp.net, do I need to use transactions in my stored procedures? Read/Write ratio is about 9:1
[ "Many people ask - do I need transactions? Why do I need them? When to use them?\nThe answer is simple: use them all the time, unless you have a very good reason not to (for instance, don't use atomic transactions for \"long running activities\" between businesses). The default should always be yes. You are in doubt? - use transactions.\nWhy are transactions beneficial? They help you deal with crashes, failures, data consistency, error handling, they help you write simpler code etc. And the list of benefits will continue to grow with time.\nHere is some more info from http://blogs.msdn.com/florinlazar/\n", "Remember in SQL Server all single statement CRUD operations are in an implicit transaction by default. You just need to turn on explict transactions (BEGIN TRAN) if you need to make multiple statements act as an atomic unit.\n", "The answer is, it depends. You do not always need transaction safety. Sometimes it's overkill. Sometimes it's not.\nI can see that, for example, when you implement a checkout process you only want to finalize it once you gathered all data, etc.. Think about a payment f'up, you can rollback - that's an example when you need a transaction. Or maybe when it's wise to use them.\nDo you need a transaction when you create a new user account? Maybe, if it's across 10 tables (for whatever reason), if it's just a single table then probably not.\nIt also depends on what you sold your client on and who they are, and if they requested it, etc.. But if making a decision is up to you, then I'd say, choose wisely. \nMy bottom line is, avoid premature optimization. Build your application, keep in mind that you may want to go back and refactor/optimize later when you need it. Look at a couple opensource projects and see how they implemented different parts of their app, learn from that. You'll see that most of them don't use transactions at all, yet there are huge online stores that use them.\n", "Of course, it depends.\nIt depends upon the work that the particular stored procedure performs and, perhaps, not so much the \"read/write ratio\" that you suggest. In general, you should consider enclosing a unit of work within a transaction if it is query that could be impacted by some other, simultaneously running query. If this sounds nondeterministic, it is. It is often difficult to predict under what circumstances a particular unit of work qualifies as a candidate for this.\nA good place to start is to review the precise CRUD being performed within the unit of work, in this case within your stored procedure, and decide if it a) could be affected by some other, simultaneous operation and b) if that other work matters to the end result of this work being performed (or, even, vice versa). If the answer is \"Yes\" to both of these then consider wrapping the unit of work within a transaction.\nWhat this is suggesting is that you can't always simply decide to either use or not use transactions, rather you should apply them when it makes sense. Use the properties defined by ACID (Atomicity, Consistency, Isolation, and Durability) to help decide when this might be the case.\nOne other thing to consider is that in some circumstances, particularly if the system must perform many operations in quick succession, e.g., a high-volume transaction processing application, you might need to weigh the relative performance cost of the transaction. 
Depending upon the size of the unit of work, a commit (or rollback) of a transaction can be resource expensive, perhaps negatively impacting the performance of your system unnecessarily or, at least, with limited benefit.\nUnfortunately, this is not an easy question to precisely answer: \"It depends.\"\n", "Use them if:\n\nThere are some errors that you may want to test for and catch which won't be caught except by you going out and doing the work (looking things up, testing values, etc.), usually from within a transaction so that you can roll back the whole operation.\nThere are multi-step operations of any sort, which should, logically, be rolled back as a group if they fail.\n\n" ]
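To make the "multiple statements as an atomic unit" advice above concrete, here is a minimal T-SQL sketch for SQL Server 2005; the table and procedure names are invented for illustration:

CREATE PROCEDURE dbo.PlaceOrder @CustomerId int, @ProductId int, @Qty int
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        -- Both statements commit together or roll back together.
        INSERT INTO dbo.Orders (CustomerId, ProductId, Qty)
        VALUES (@CustomerId, @ProductId, @Qty);

        UPDATE dbo.Inventory
        SET QtyOnHand = QtyOnHand - @Qty
        WHERE ProductId = @ProductId;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        RAISERROR('PlaceOrder failed; all changes were rolled back.', 16, 1);
    END CATCH
END

A single INSERT or UPDATE needs none of this (it is already atomic, as noted above); the explicit transaction only earns its keep once two or more statements must succeed or fail as one.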
[ 7, 3, 1, 1, 0 ]
[]
[]
[ "asp.net", "e_commerce", "sql", "sql_server" ]
stackoverflow_0000060419_asp.net_e_commerce_sql_sql_server.txt
Q: Web Control Properties I would like to make my web control more readable in design mode, basically I want the tag declaration to look like: <cc1:Ctrl ID="Value1" runat="server"> <Values>string value 1</Values> <Values>string value 2</Values> </cc1:Ctrl> Let's say I have a private variable in the code behind: List<string> values = new List<string>(); So how can I make my user control fill out the private variable with the values that are declared in the markup? Sorry I should have been more explicit. Basically I like the functionality that the ITemplate provides (http://msdn.microsoft.com/en-us/library/aa719834.aspx) But in this case you need to know at runtime how many templates can be instantiated, i.e. void Page_Init() { if (messageTemplate != null) { for (int i=0; i<5; i++) { MessageContainer container = new MessageContainer(i); messageTemplate.InstantiateIn(container); msgholder.Controls.Add(container); } } } In the given example the markup looks like: <acme:test runat=server> <MessageTemplate> Hello #<%# Container.Index %>.<br> </MessageTemplate> </acme:test> Which is nice and clean, it does not have any tag prefixes etc. I really want the nice clean tags. I'm probably being silly in wanting the markup to be clean, I'm just wondering if there is something simple that I'm missing. A: I think what you are searching for is the attribute: [PersistenceMode(PersistenceMode.InnerProperty)] Persistence Mode Remember that you have to register your namespace and prefix with: <%@ Register Namespace="MyNamespace" TagPrefix="Pref" %> A: I see two options, but both depend on your web control implementing some sort of collection for your values. The first option is to just use the control's collection instead of your private variable. The other option is to copy the control's collection to your private variable at run-time (maybe in the Page_Load event handler, for example). Say you have a web control that implements a collection of items, like a listbox. The tag looks like this in the source view: <asp:ListBox ID="ListBox1" runat="server"> <asp:ListItem>String 1</asp:ListItem> <asp:ListItem>String 2</asp:ListItem> <asp:ListItem>String 3</asp:ListItem> </asp:ListBox><br /> Then you might use code like this to load your private variable: List<String> values = new List<String>(); foreach (ListItem item in ListBox1.Items) { values.Add(item.Value.ToString()); } If you do this in Page_Load you'll probably want to only execute on the initial load (i.e. not on postbacks). On the other hand, depending on how you use it, you could just use the ListBox1.Items collection instead of declaring and initializing the values variable. I can think of no way to do this declaratively (since your list won't be instantiated until run-time anyway).
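A rough sketch of how that attribute gets wired up; the class, property, and item names here are invented, and whether the child tags can appear without a tag prefix depends on how the control builder is configured, so treat this as a starting point rather than a drop-in answer:

[ParseChildren(true, "Values")]
public class Ctrl : System.Web.UI.WebControls.WebControl
{
    private readonly List<StringItem> values = new List<StringItem>();

    // InnerDefaultProperty lets child elements sit directly inside
    // the control tag, without a wrapper element.
    [PersistenceMode(PersistenceMode.InnerDefaultProperty)]
    public List<StringItem> Values
    {
        get { return values; }
    }
}

// The page parser maps each child element to an item type like this one.
public class StringItem
{
    public string Text { get; set; }
}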
Web Control Properties
I would like to make my web control more readable in design mode, basically I want the tag declaration to look like: <cc1:Ctrl ID="Value1" runat="server"> <Values>string value 1</Values> <Values>string value 2</Values> </cc1:Ctrl> Let's say I have a private variable in the code behind: List<string> values = new List<string>(); So how can I make my user control fill out the private variable with the values that are declared in the markup? Sorry I should have been more explicit. Basically I like the functionality that the ITemplate provides (http://msdn.microsoft.com/en-us/library/aa719834.aspx) But in this case you need to know at runtime how many templates can be instantiated, i.e. void Page_Init() { if (messageTemplate != null) { for (int i=0; i<5; i++) { MessageContainer container = new MessageContainer(i); messageTemplate.InstantiateIn(container); msgholder.Controls.Add(container); } } } In the given example the markup looks like: <acme:test runat=server> <MessageTemplate> Hello #<%# Container.Index %>.<br> </MessageTemplate> </acme:test> Which is nice and clean, it does not have any tag prefixes etc. I really want the nice clean tags. I'm probably being silly in wanting the markup to be clean, I'm just wondering if there is something simple that I'm missing.
[ "I think what you are searching for is the attribute:\n[PersistenceMode(PersistenceMode.InnerProperty)]\n\nPersistence Mode\nRemember that you have to register your namespace and prefix with:\n<%@ Register Namespace=\"MyNamespace\" TagPrefix=\"Pref\" %>\n\n", "I see two options, but both depend on your web control implementing some sort of collection for your values. The first option is to just use the control's collection instead of your private variable. The other option is to copy the control's collection to your private variable at run-time (maybe in the Page_Load event handler, for example).\nSay you have web control that implements a collection of items, like a listbox. The tag looks like this in the source view:\n <asp:ListBox ID=\"ListBox1\" runat=\"server\">\n <asp:ListItem>String 1</asp:ListItem>\n <asp:ListItem>String 2</asp:ListItem>\n <asp:ListItem>String 3</asp:ListItem>\n </asp:ListBox><br />\n\nThen you might use code like this to load your private variable:\n List<String> values = new List<String>();\n\n foreach (ListItem item in ListBox1.Items)\n {\n values.Add(item.Value.ToString());\n }\n\nIf you do this in Page_Load you'll probably want to only execute on the initial load (i.e. not on postbacks). On the other hand, depending on how you use it, you could just use the ListBox1.Items collection instead of declaring and initializing the values variable.\nI can think of no way to do this declaratively (since your list won't be instantiated until run-time anyway).\n" ]
[ 2, 0 ]
[]
[]
[ "asp.net", "c#" ]
stackoverflow_0000061861_asp.net_c#.txt
Q: Const Struct& I'm having a little trouble figuring out exactly how const applies in a specific case. Here's the code I have: struct Widget { Widget():x(0), y(0), z(0){} int x, y, z; }; struct WidgetHolder //Just a simple struct to hold four Widgets. { WidgetHolder(Widget a, Widget b, Widget c, Widget d): A(a), B(b), C(c), D(d){} Widget& A; Widget& B; Widget& C; Widget& D; }; class Test //This class uses four widgets internally, and must provide access to them externally. { public: const WidgetHolder AccessWidgets() const { //This should return our four widgets, but I don't want anyone messing with them. return WidgetHolder(A, B, C, D); } WidgetHolder AccessWidgets() { //This should return our four widgets, I don't care if they get changed. return WidgetHolder(A, B, C, D); } private: Widget A, B, C, D; }; int main() { const Test unchangeable; unchangeable.AccessWidgets().A.x = 1; //Why does this compile, shouldn't the Widget& be const? } Basically, I have a class called test. It uses four widgets internally, and I need it to return these, but if test was declared const, I want the widgets returned const also. Can someone explain to me why the code in main() compiles? Thank you very much. A: You need to create a new type specifically for holding const Widget& objects. Ie: struct ConstWidgetHolder { ConstWidgetHolder(const Widget &a, const Widget &b, const Widget &c, const Widget &d): A(a), B(b), C(c), D(d){} const Widget& A; const Widget& B; const Widget& C; const Widget& D; }; class Test { public: ConstWidgetHolder AccessWidgets() const { return ConstWidgetHolder(A, B, C, D); } You will now get the following error (in gcc 4.3): widget.cc: In function 'int main()': widget.cc:51: error: assignment of data-member 'Widget::x' in read-only structure A similar idiom is used in the standard library with iterators ie: class vector { iterator begin(); const_iterator begin() const; A: unchangeable.AccessWidgets(): At this point, you are creating a new object of type WidgetHolder. This object is not protected by const. You are also creating new widgets in the WidgetHolder and not references to the Wdiget. A: Your WidgetHolder is going to hold invalid references (pointers). You are passing objects on the stack to the constructor and then holding references to their (temporary) addresses. This is guaranteed to break. You should only assign references to objects with the same (or greater) lifetime as the reference itself. Pass references to the constructor if you must hold references. Even better, don't hold the references at all and just make the copies. A: This compiles because although the WidgetHolder is a const object, this const-ness does not automatically apply to objects pointed to (referenced by) the WidgetHolder. Think of it at a machine level - if the WidgetHolder object itself were held in read-only memory, you could still write to things that were pointed to by the WidgetHolder. The problem appears to lie in this line: WidgetHolder(Widget a, Widget b, Widget c, Widget d): A(a), B(b), C(c), D(d){} As Frank mentioned, your references inside the WidgetHolder class are going to hold invalid references after the constructor returns. Therefore, you should change this to: WidgetHolder(Widget &a, Widget &b, Widget &c, Widget &d): A(a), B(b), C(c), D(d){} After you do that, it won't compile, and I leave it as an exercise for the reader to work out the rest of the solution. A: EDIT: he deleted his answer, making me look a bit foolish :) The answer by Flame is dangerously wrong. 
His WidgetHolder takes a reference to a value object in the constructor. As soon as the constructor returns, that passed-by-value object will be destroyed and so you'll hold a reference to a destroyed object. A very simple sample app using his code clearly shows this: #include <iostream> class Widget { int x; public: Widget(int inX) : x(inX){} ~Widget() { std::cout << "widget " << static_cast< void*>(this) << " destroyed" << std::endl; } }; struct WidgetHolder { Widget& A; public: WidgetHolder(Widget a): A(a) {} const Widget& a() const { std::cout << "widget " << static_cast< void*>(&A) << " used" << std::endl; return A; } }; int main(int argc, char** argv) { Widget test(7); WidgetHolder holder(test); Widget const & test2 = holder.a(); return 0; } The output would be something like widget 0xbffff7f8 destroyed widget 0xbffff7f8 used widget 0xbffff7f4 destroyed To avoid this the WidgetHolder constructor should take references to the variables it wants to store as references. struct WidgetHolder { Widget& A; public: WidgetHolder(Widget & a): A(a) {} /* ... */ }; A: The original query was how to return the WidgetHolder as const if the containing class was const. C++ uses const as part of the function signature and therefore you can have const and non-const versions of the same function. The non-const one is called when the instance is non-const, and the const one is called when the instance is const. Therefore a solution is to access the widgets in the widget holder by functions, rather than directly. I have created a simpler example below which I believe answers the original question. #include <stdio.h> class Test { public: Test(int v){m_v = v;} ~Test(){printf("Destruct value = %d\n",m_v);} int& GetV(){printf ("Non-const returning %d\n",m_v); return m_v; } const int& GetV() const { printf("Const returning %d\n",m_v); return m_v;} private: int m_v; }; int main() { // A non-const object (or reference) calls the non-const functions // in preference to the const Test one(10); int& x = one.GetV(); // We can change the member variable via the reference x = 12; const Test two(20); // This will call the const version two.GetV(); // So the below line will not compile // int& xx = two.GetV(); // Whereas this will compile const int& xx = two.GetV(); // And then the below line will not compile // xx = 3; } In terms of the original code, I think it would be easier to have a WidgetHolder as a member of the class Test and then return either a const or non-const reference to it, and make the Widgets private members of the holder, and provide a const and non-const accessor for each Widget. class WidgetHolder { ... Widget& GetA(); const Widget& GetA() const; ... }; And then on the main class class Test { ... WidgetHolder& AccessWidgets() { return m_Widgets;} const WidgetHolder& AccessWidgets() const { return m_Widgets;} private: WidgetHolder m_Widgets; ... };
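The surprise at the heart of this thread fits in a few lines. A minimal sketch (not code from the thread itself) showing that const on the holder object does not propagate through a reference member:

#include <iostream>

struct Widget { int x; };

struct Holder { Widget& a; };   // holds a reference, not a copy

int main()
{
    Widget w = { 0 };
    const Holder h = { w };     // h itself is const...
    h.a.x = 1;                  // ...but the Widget it refers to is not
    std::cout << w.x << '\n';   // prints 1
    return 0;
}

The const applies to the reference member itself (which is already immutable), never to the referent, which is why the code in the question compiles.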
Const Struct&
I'm having a little trouble figuring out exactly how const applies in a specific case. Here's the code I have: struct Widget { Widget():x(0), y(0), z(0){} int x, y, z; }; struct WidgetHolder //Just a simple struct to hold four Widgets. { WidgetHolder(Widget a, Widget b, Widget c, Widget d): A(a), B(b), C(c), D(d){} Widget& A; Widget& B; Widget& C; Widget& D; }; class Test //This class uses four widgets internally, and must provide access to them externally. { public: const WidgetHolder AccessWidgets() const { //This should return our four widgets, but I don't want anyone messing with them. return WidgetHolder(A, B, C, D); } WidgetHolder AccessWidgets() { //This should return our four widgets, I don't care if they get changed. return WidgetHolder(A, B, C, D); } private: Widget A, B, C, D; }; int main() { const Test unchangeable; unchangeable.AccessWidgets().A.x = 1; //Why does this compile, shouldn't the Widget& be const? } Basically, I have a class called test. It uses four widgets internally, and I need it to return these, but if test was declared const, I want the widgets returned const also. Can someone explain to me why the code in main() compiles? Thank you very much.
[ "You need to create a new type specifically for holding const Widget& objects. Ie:\n\n\nstruct ConstWidgetHolder\n{\n ConstWidgetHolder(const Widget &a, const Widget &b, const Widget &c, const Widget &d): A(a), B(b), C(c), D(d){}\n\n const Widget& A;\n const Widget& B;\n const Widget& C;\n const Widget& D;\n};\n\nclass Test\n{\npublic:\n ConstWidgetHolder AccessWidgets() const\n {\n return ConstWidgetHolder(A, B, C, D);\n }\n\n\nYou will now get the following error (in gcc 4.3):\n\nwidget.cc: In function 'int main()':\nwidget.cc:51: error: assignment of data-member 'Widget::x' in read-only structure\n\nA similar idiom is used in the standard library with iterators ie:\n\n\nclass vector {\n iterator begin();\n const_iterator begin() const;\n\n\n", "unchangeable.AccessWidgets():\nAt this point, you are creating a new object of type WidgetHolder. \nThis object is not protected by const. \nYou are also creating new widgets in the WidgetHolder and not references to the Wdiget. \n", "Your WidgetHolder is going to hold invalid references (pointers). You are passing objects on the stack to the constructor and then holding references to their (temporary) addresses. This is guaranteed to break.\nYou should only assign references to objects with the same (or greater) lifetime as the reference itself.\nPass references to the constructor if you must hold references. Even better, don't hold the references at all and just make the copies.\n", "This compiles because although the WidgetHolder is a const object, this const-ness does not automatically apply to objects pointed to (referenced by) the WidgetHolder. Think of it at a machine level - if the WidgetHolder object itself were held in read-only memory, you could still write to things that were pointed to by the WidgetHolder.\nThe problem appears to lie in this line:\nWidgetHolder(Widget a, Widget b, Widget c, Widget d): A(a), B(b), C(c), D(d){}\n\nAs Frank mentioned, your references inside the WidgetHolder class are going to hold invalid references after the constructor returns. Therefore, you should change this to:\nWidgetHolder(Widget &a, Widget &b, Widget &c, Widget &d): A(a), B(b), C(c), D(d){}\n\nAfter you do that, it won't compile, and I leave it as an exercise for the reader to work out the rest of the solution.\n", "EDIT: he deleted his answer, making me look a bit foolish :)\nThe answer by Flame is dangerously wrong. His WidgetHolder takes a reference to a value object in the constructor. 
As soon as the constructor returns, that passed-by-value object will be destroyed and so you'll hold a reference to a destroyed object.\nA very simple sample app using his code clearly shows this:\n#include <iostream>\n\nclass Widget\n{\n int x;\npublic:\n Widget(int inX) : x(inX){}\n ~Widget() {\n std::cout << \"widget \" << static_cast< void*>(this) << \" destroyed\" << std::endl;\n }\n};\n\nstruct WidgetHolder\n{\n Widget& A;\n\npublic:\n WidgetHolder(Widget a): A(a) {}\n\n const Widget& a() const {\n std::cout << \"widget \" << static_cast< void*>(&A) << \" used\" << std::endl;\n return A;\n}\n\n};\n\nint main(int argc, char** argv)\n{\nWidget test(7);\nWidgetHolder holder(test);\nWidget const & test2 = holder.a();\n\nreturn 0;\n} \n\nThe output would be something like \n\nwidget 0xbffff7f8 destroyed\nwidget 0xbffff7f8 used\nwidget 0xbffff7f4 destroyed\n\nTo avoid this the WidgetHolder constructor should take references to the variables it wants to store as references.\n\nstruct WidgetHolder\n{\n Widget& A;\n\npublic:\n WidgetHolder(Widget & a): A(a) {}\n\n /* ... */\n\n};\n\n", "The original query was how to return the WidgetHolder as const if the containing class was const. C++ uses const as part of the function signature and therefore you can have const and non-const versions of the same function. The non-const one is called when the instance is non-const, and the const one is called when the instance is const. Therefore a solution is to access the widgets in the widget holder by functions, rather than directly. I have created a simpler example below which I believe answers the original question. \n#include <stdio.h>\n\nclass Test\n{\npublic:\n Test(int v){m_v = v;}\n ~Test(){printf(\"Destruct value = %d\\n\",m_v);}\n\n int& GetV(){printf (\"Non-const returning %d\\n\",m_v); return m_v; }\n\n const int& GetV() const { printf(\"Const returning %d\\n\",m_v); return m_v;}\nprivate:\n int m_v;\n};\n\nint main()\n{\n // A non-const object (or reference) calls the non-const functions\n // in preference to the const\n Test one(10);\n int& x = one.GetV();\n // We can change the member variable via the reference\n x = 12;\n\n const Test two(20);\n // This will call the const version \n two.GetV();\n\n // So the below line will not compile\n // int& xx = two.GetV();\n\n // Whereas this will compile\n const int& xx = two.GetV();\n\n // And then the below line will not compile\n // xx = 3;\n\n}\n\nIn terms of the original code, I think it would be easier to have a WidgetHolder as a member of the class Test and then return either a const or non-const reference to it, and make the Widgets private members of the holder, and provide a const and non-const accessor for each Widget.\nclass WidgetHolder {\n...\n\nWidget& GetA();\nconst Widget& GetA() const;\n...\n};\n\nAnd then on the main class\nclass Test {\n...\nWidgetHolder& AccessWidgets() { return m_Widgets;}\nconst WidgetHolder& AccessWidgets() const { return m_Widgets;}\n\nprivate:\n WidgetHolder m_Widgets;\n...\n};\n\n" ]
[ 7, 3, 3, 2, 0, 0 ]
[]
[]
[ "c++", "constants" ]
stackoverflow_0000057355_c++_constants.txt
Q: How to work around a very large 2d array in C++ I need to create a 2D int array of size 800x800. But doing so creates a stack overflow (ha ha). I'm new to C++, so should I do something like a vector of vectors? And just encapsulate the 2d array into a class? Specifically, this array is my zbuffer in a graphics program. I need to store a z value for every pixel on the screen (hence the large size of 800x800). Thanks! A: You need about 2.5 megs, so just using the heap should be fine. You don't need a vector unless you need to resize it. See C++ FAQ Lite for an example of using a "2D" heap array. int *array = new int[800*800]; (Don't forget to delete[] it when you're done.) A: Every post so far leaves the memory management for the programmer. This can and should be avoided. ReaperUnreal is darn close to what I'd do, except I'd use a vector rather than an array and also make the dimensions template parameters and change the access functions -- and oh just IMNSHO clean things up a bit: template <class T, size_t W, size_t H> class Array2D { public: const int width = W; const int height = H; typedef typename T type; Array2D() : buffer(width*height) { } inline type& at(unsigned int x, unsigned int y) { return buffer[y*width + x]; } inline const type& at(unsigned int x, unsigned int y) const { return buffer[y*width + x]; } private: std::vector<T> buffer; }; Now you can allocate this 2-D array on the stack just fine: void foo() { Array2D<int, 800, 800> zbuffer; // Do something with zbuffer... } I hope this helps! EDIT: Removed array specification from Array2D::buffer. Thanks to Andreas for catching that! A: Kevin's example is good, however: std::vector<T> buffer[width * height]; Should be std::vector<T> buffer; Expanding it a bit you could of course add operator-overloads instead of the at()-functions: const T &operator()(int x, int y) const { return buffer[y * width + x]; } and T &operator()(int x, int y) { return buffer[y * width + x]; } Example: int main() { Array2D<int, 800, 800> a; a(10, 10) = 50; std::cout << "A(10, 10)=" << a(10, 10) << std::endl; return 0; } A: You could do a vector of vectors, but that would have some overhead. For a z-buffer the more typical method would be to create an array of size 800*800=640000. const int width = 800; const int height = 800; unsigned int* z_buffer = new unsigned int[width*height]; Then access the pixels as follows: unsigned int z = z_buffer[y*width+x]; A: I might create a single dimension array of 800*800. It is probably more efficient to use a single allocation like this, rather than allocating 800 separate vectors. int *ary=new int[800*800]; Then, probably encapsulate that in a class that acted like a 2D array. class _2DArray { public: int *operator[](const size_t &idx) { return &ary[idx*800]; } const int *operator[](const size_t &idx) const { return &ary[idx*800]; } }; The abstraction shown here has a lot of holes, e.g, what happens if you access out past the end of a "row"? The book "Effective C++" has a pretty good discussion of writing good multi dimensional arrays in C++. A: There's the C like way of doing: const int xwidth = 800; const int ywidth = 800; int* array = (int*) new int[xwidth * ywidth]; // Check array is not NULL here and handle the allocation error if it is // Then do stuff with the array, such as zero initialize it for(int x = 0; x < xwidth; ++x) { for(int y = 0; y < ywidth; ++y) { array[y * xwidth + x] = 0; } } // Just use array[y * xwidth + x] when you want to access your class. 
// When you're done with it, free the memory you allocated with delete[] array; You could encapsulate the y * xwidth + x inside a class with an easy get and set method (possibly with overloading the [] operator if you want to start getting into more advanced C++). I'd recommend getting to this slowly though if you're just starting with C++ and not start creating re-usable fully class templates for n-dimension arrays which will just confuse you when you're starting off. As soon as you get into graphics work you might find that the overhead of having extra class calls might slow down your code. However don't worry about this until your application isn't fast enough and you can profile it to show where the time is lost, rather than making it more difficult to use at the start with possible unnecessary complexity. I found that the C++ lite FAQ was great for information such as this. In particular your question is answered by: http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.16 A: One thing you can do is change the stack size (if you really want the array on the stack); with VC, the flag to do this is [/F](http://msdn.microsoft.com/en-us/library/tdkhxaks(VS.80).aspx). But the solution you probably want is to put the memory in the heap rather than on the stack, for that you should use a vector of vectors. The following line declares a vector of 800 elements, each element is a vector of 800 ints and saves you from managing the memory manually. std::vector<std::vector<int> > arr(800, std::vector<int>(800)); Note the space between the two closing angle brackets (> >) which is required in order to disambiguate it from the shift right operator (which will no longer be needed in C++0x). A: Or you could try something like: boost::shared_array<int> zbuffer(new int[width*height]); You should still be able to do this too: ++zbuffer[0]; No more worries about managing the memory, no custom classes to take care of, and it's easy to throw around. A: You can allocate the array on static storage (in file's scope, or add static qualifier in function scope), if you need only one instance. int array[800][800]; void fn() { static int array[800][800]; } This way it will not go to the stack, and you do not have to deal with dynamic memory.
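Pulling the answers together: for a fixed 800x800 z-buffer, probably the least code is a single heap-backed std::vector plus an index helper — a sketch, with made-up names:

#include <vector>

const int width = 800;
const int height = 800;

// The vector owns the heap allocation, so there is no stack overflow
// and no delete[] to forget.
std::vector<int> zbuffer(width * height, 0);

inline int& depthAt(std::vector<int>& buf, int x, int y)
{
    return buf[y * width + x];  // row-major indexing, as in the answers
}

// Usage: depthAt(zbuffer, 10, 20) = 42;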
How to work around a very large 2d array in C++
I need to create a 2D int array of size 800x800. But doing so creates a stack overflow (ha ha). I'm new to C++, so should I do something like a vector of vectors? And just encapsulate the 2d array into a class? Specifically, this array is my zbuffer in a graphics program. I need to store a z value for every pixel on the screen (hence the large size of 800x800). Thanks!
[ "You need about 2.5 megs, so just using the heap should be fine. You don't need a vector unless you need to resize it. See C++ FAQ Lite for an example of using a \"2D\" heap array.\nint *array = new int[800*800];\n\n(Don't forget to delete[] it when you're done.)\n", "Every post so far leaves the memory management for the programmer. This can and should be avoided. ReaperUnreal is darn close to what I'd do, except I'd use a vector rather than an array and also make the dimensions template parameters and change the access functions -- and oh just IMNSHO clean things up a bit:\ntemplate <class T, size_t W, size_t H>\nclass Array2D\n{\npublic:\n const int width = W;\n const int height = H;\n typedef typename T type;\n\n Array2D()\n : buffer(width*height)\n {\n }\n\n inline type& at(unsigned int x, unsigned int y)\n {\n return buffer[y*width + x];\n }\n\n inline const type& at(unsigned int x, unsigned int y) const\n {\n return buffer[y*width + x];\n }\n\nprivate:\n std::vector<T> buffer;\n};\n\nNow you can allocate this 2-D array on the stack just fine:\nvoid foo()\n{\n Array2D<int, 800, 800> zbuffer;\n\n // Do something with zbuffer...\n}\n\nI hope this helps!\nEDIT: Removed array specification from Array2D::buffer. Thanks to Andreas for catching that!\n", "Kevin's example is good, however:\n\nstd::vector<T> buffer[width * height];\n\n\nShould be\nstd::vector<T> buffer;\n\nExpanding it a bit you could of course add operator-overloads instead of the at()-functions:\nconst T &operator()(int x, int y) const\n{\n return buffer[y * width + x];\n}\n\nand\nT &operator()(int x, int y)\n{\n return buffer[y * width + x];\n}\n\nExample:\nint main()\n{\n Array2D<int, 800, 800> a;\n a(10, 10) = 50;\n std::cout << \"A(10, 10)=\" << a(10, 10) << std::endl;\n return 0;\n}\n\n", "You could do a vector of vectors, but that would have some overhead. For a z-buffer the more typical method would be to create an array of size 800*800=640000.\nconst int width = 800;\nconst int height = 800;\nunsigned int* z_buffer = new unsigned int[width*height];\n\nThen access the pixels as follows:\nunsigned int z = z_buffer[y*width+x];\n\n", "I might create a single dimension array of 800*800. It is probably more efficient to use a single allocation like this, rather than allocating 800 separate vectors.\nint *ary=new int[800*800];\n\nThen, probably encapsulate that in a class that acted like a 2D array.\nclass _2DArray\n{\n public:\n int *operator[](const size_t &idx)\n {\n return &ary[idx*800];\n }\n const int *operator[](const size_t &idx) const\n {\n return &ary[idx*800];\n }\n};\n\nThe abstraction shown here has a lot of holes, e.g, what happens if you access out past the end of a \"row\"? The book \"Effective C++\" has a pretty good discussion of writing good multi dimensional arrays in C++.\n", "There's the C like way of doing:\nconst int xwidth = 800;\nconst int ywidth = 800;\nint* array = (int*) new int[xwidth * ywidth];\n// Check array is not NULL here and handle the allocation error if it is\n// Then do stuff with the array, such as zero initialize it\nfor(int x = 0; x < xwidth; ++x)\n{\n for(int y = 0; y < ywidth; ++y)\n {\n array[y * xwidth + x] = 0;\n }\n}\n// Just use array[y * xwidth + x] when you want to access your class.\n\n// When you're done with it, free the memory you allocated with\ndelete[] array;\n\nYou could encapsulate the y * xwidth + x inside a class with an easy get and set method (possibly with overloading the [] operator if you want to start getting into more advanced C++). 
I'd recommend getting to this slowly though if you're just starting with C++ and not start creating re-usable fully class templates for n-dimension arrays which will just confuse you when you're starting off.\nAs soon as you get into graphics work you might find that the overhead of having extra class calls might slow down your code. However don't worry about this until your application isn't fast enough and you can profile it to show where the time is lost, rather than making it more difficult to use at the start with possible unnecessary complexity.\nI found that the C++ lite FAQ was great for information such as this. In particular your question is answered by:\nhttp://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.16\n", "One thing you can do is change the stack size (if you really want the array on the stack); with VC, the flag to do this is [/F](http://msdn.microsoft.com/en-us/library/tdkhxaks(VS.80).aspx). \nBut the solution you probably want is to put the memory in the heap rather than on the stack, for that you should use a vector of vectors.\nThe following line declares a vector of 800 elements, each element is a vector of 800 ints and saves you from managing the memory manually.\nstd::vector<std::vector<int> > arr(800, std::vector<int>(800));\n\nNote the space between the two closing angle brackets (> >) which is required in order to disambiguate it from the shift right operator (which will no longer be needed in C++0x).\n", "Or you could try something like:\nboost::shared_array<int> zbuffer(new int[width*height]);\n\nYou should still be able to do this too:\n++zbuffer[0];\n\nNo more worries about managing the memory, no custom classes to take care of, and it's easy to throw around.\n", "You can allocate the array on static storage (in file's scope, or add static qualifier in function scope), if you need only one instance.\nint array[800][800];\n\nvoid fn()\n{\n static int array[800][800];\n}\n\nThis way it will not go to the stack, and you do not have to deal with dynamic memory.\n" ]
[ 12, 10, 4, 3, 2, 1, 1, 1, 1 ]
[ "Well, building on what Niall Ryan started, if performance is an issue, you can take this one step further by optimizing the math and encapsulating this into a class.\nSo we'll start with a bit of math. Recall that 800 can be written in powers of 2 as:\n800 = 512 + 256 + 32 = 2^5 + 2^8 + 2^9\n\nSo we can write our addressing function as:\nint index = y << 9 + y << 8 + y << 5 + x;\n\nSo if we encapsulate everything into a nice class we get:\nclass ZBuffer\n{\npublic:\n const int width = 800;\n const int height = 800;\n\n ZBuffer()\n {\n for(unsigned int i = 0, *pBuff = zbuff; i < width * height; i++, pBuff++)\n *pBuff = 0;\n }\n\n inline unsigned int getZAt(unsigned int x, unsigned int y)\n {\n return *(zbuff + y << 9 + y << 8 + y << 5 + x);\n }\n\n inline unsigned int setZAt(unsigned int x, unsigned int y, unsigned int z)\n {\n *(zbuff + y << 9 + y << 8 + y << 5 + x) = z;\n }\nprivate:\n unsigned int zbuff[width * height];\n};\n\n" ]
[ -1 ]
[ "2d", "arrays", "c++", "graphics", "zbuffer" ]
stackoverflow_0000061680_2d_arrays_c++_graphics_zbuffer.txt
Q: C# Casting vs. Parse Which of the following is better code in c# and why? ((DateTime)g[0]["MyUntypedDateField"]).ToShortDateString() or DateTime.Parse(g[0]["MyUntypedDateField"].ToString()).ToShortDateString() Ultimately, is it better to cast or to parse? A: If g[0]["MyUntypedDateField"] is really a DateTime object, then the cast is the better choice. If it's not really a DateTime, then you have no choice but to use the Parse (you would get an InvalidCastException if you tried to use the cast) A: Casting is the only good answer. You have to remember, that ToString and Parse results are not always exact - there are cases, when you cannot safely roundtrip between those two functions. The documentation of ToString says, it uses current thread culture settings. The documentation of Parse says, it also uses current thread culture settings (so far so good - they are using the same culture), but there is an explicit remark, that: Formatting is influenced by properties of the current DateTimeFormatInfo object, which by default are derived from the Regional and Language Options item in Control Panel. One reason the Parse method can unexpectedly throw FormatException is if the current DateTimeFormatInfo.DateSeparator and DateTimeFormatInfo.TimeSeparator properties are set to the same value. So depending on the user's settings, the ToString/Parse code can and will unexpectedly fail... A: Your code suggests that the variable may be either a date or a string that looks like a date. Dates you can simply return with a cast, but strings must be parsed. Parsing comes with two caveats; if you aren't certain this string can be parsed, then use DateTime.TryParse(). Always include a reference to the culture you want to parse as. ToShortDateString() returns different outputs in different places. You will almost certainly want to parse using the same culture. I suggest this function dealing with both situations; private DateTime ParseDateTime(object data) { if (data is DateTime) { // already a date-time. return (DateTime)data; } else if (data is string) { // it's a local-format string. string dateString = (string)data; DateTime parseResult; if (DateTime.TryParse(dateString, CultureInfo.CurrentCulture, DateTimeStyles.AssumeLocal, out parseResult)) { return parseResult; } else { throw new ArgumentOutOfRangeException("data", "could not parse this datetime:" + data); } } else { // it's neither a DateTime nor a string; that's a problem. throw new ArgumentOutOfRangeException("data", "could not understand data of this type"); } } Then call like this; ParseDateTime(g[0]["MyUntypedDateField"]).ToShortDateString(); Note that bad data throws an exception, so you'll want to catch that. Also; the 'as' operator does not work with the DateTime data type, as this only works with reference types, and DateTime is a value type. A: As @Brian R. Bondy pointed out, it depends on the implementation of g[0]["MyUntypedDateField"]. Safe practice is to use DateTime.TryParse and the as operator. A: Parse requires a string for input, casting requires an object, so in the second example you provide above, then you are required to perform two casts: one from an object to a string, then from a string to a DateTime. The first does not. However, if there is a risk of an exception when you perform the cast, then you might want to go the second route so you can TryParse and avoid an expensive exception to be thrown. Otherwise, go the most efficient route and just cast once (from object to DateTime) rather than twice (from object to string to DateTime). 
A: There's a comparison of the different techniques at http://blogs.msdn.com/bclteam/archive/2005/02/11/371436.aspx.
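A sketch folding the advice above into one helper — cast when the boxed value is already a DateTime, fall back to TryParse for strings, fail loudly otherwise (names and error handling are illustrative only):

static DateTime ToDateTime(object value)
{
    if (value is DateTime)
        return (DateTime)value;          // unbox: cheap and exact

    string s = value as string;
    DateTime parsed;
    if (s != null && DateTime.TryParse(s, out parsed))
        return parsed;                   // culture-sensitive parse

    throw new ArgumentException("Value is neither a DateTime nor a parsable string.");
}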
C# Casting vs. Parse
Which of the following is better code in c# and why? ((DateTime)g[0]["MyUntypedDateField"]).ToShortDateString() or DateTime.Parse(g[0]["MyUntypedDateField"].ToString()).ToShortDateString() Ultimately, is it better to cast or to parse?
[ "If g[0][\"MyUntypedDateField\"] is really a DateTime object, then the cast is the better choice. If it's not really a DateTime, then you have no choice but to use the Parse (you would get an InvalidCastException if you tried to use the cast)\n", "Casting is the only good answer.\nYou have to remember, that ToString and Parse results are not always exact - there are cases, when you cannot safely roundtrip between those two functions.\nThe documentation of ToString says, it uses current thread culture settings. The documentation of Parse says, it also uses current thread culture settings (so far so good - they are using the same culture), but there is an explicit remark, that:\n\nFormatting is influenced by properties of the current DateTimeFormatInfo object, which by default are derived from the Regional and Language Options item in Control Panel. One reason the Parse method can unexpectedly throw FormatException is if the current DateTimeFormatInfo.DateSeparator and DateTimeFormatInfo.TimeSeparator properties are set to the same value.\n\nSo depending on the users settings, the ToString/Parse code can and will unexpectedly fail...\n", "Your code suggests that the variable may be either a date or a string that looks like a date. Dates you can simply return wit a cast, but strings must be parsed. Parsing comes with two caveats;\n\nif you aren't certain this string can be parsed, then use DateTime.TryParse(). \nAlways include a reference to the culture you want to parse as. ToShortDateString() returns different outputs in different places. You will almost certainly want to parse using the same culture. I suggest this function dealing with both situations;\nprivate DateTime ParseDateTime(object data)\n{\n if (data is DateTime)\n {\n // already a date-time.\n return (DateTime)data;\n }\n else if (data is string)\n {\n // it's a local-format string.\n string dateString = (string)data;\n DateTime parseResult;\n if (DateTime.TryParse(dateString, CultureInfo.CurrentCulture,\n DateTimeStyles.AssumeLocal, out parseResult))\n {\n return parseResult;\n }\n else\n {\n throw new ArgumentOutOfRangeException(\"data\", \n \"could not parse this datetime:\" + data);\n }\n }\n else\n {\n // it's neither a DateTime or a string; that's a problem.\n throw new ArgumentOutOfRangeException(\"data\", \n \"could not understand data of this type\");\n }\n}\n\n\nThen call like this;\nParseDateTime(g[0][\"MyUntypedDateField\").ToShortDateString();\n\nNote that bad data throws an exception, so you'll want to catch that.\nAlso; the 'as' operator does not work with the DateTime data type, as this only works with reference types, and DateTime is a value type.\n", "As @Brian R. Bondy pointed it depends on implementation of g[0][\"MyUntypedDateField\"]. Safe practice is to use DateTime.TryParse and as operator. \n", "Parse requires a string for input, casting requires an object, so in the second example you provide above, then you are required to perform two casts: one from an object to a string, then from a string to a DateTime. The first does not. \nHowever, if there is a risk of an exception when you perform the cast, then you might want to go the second route so you can TryParse and avoid an expensive exception to be thrown. Otherwise, go the most efficient route and just cast once (from object to DateTime) rather than twice (from object to string to DateTime).\n", "There's comparison of the different techniques at http://blogs.msdn.com/bclteam/archive/2005/02/11/371436.aspx. \n" ]
[ 12, 3, 1, 0, 0, 0 ]
[]
[]
[ "c#", "casting", "datetime", "parsing", "string" ]
stackoverflow_0000061733_c#_casting_datetime_parsing_string.txt
Q: iframe wikipedia article without the wrapper I want to embed a wikipedia article into a page but I don't want all the wrapper (navigation, etc.) that sits around the articles. I saw it done here: http://www.dayah.com/periodic/. Click on an element and the iframe is displayed and links to the article only (no wrapper). So how'd they do that? Seems like JavaScript handles showing the iframe and constructing the href but after browsing the page's javascript (http://www.dayah.com/periodic/Script/interactivity.js) I still can't figure out how the url is built. Thanks. A: The periodic table example loads the printer-friendly version of the wiki article into an iframe. http://en.wikipedia.org/wiki/Potasium?printable=yes it's done in function click_wiki(e) (line 534, interactivity.js) var article = el.childNodes[0].childNodes[n_name].innerHTML; ... window.frames["WikiFrame"].location.replace("http://" + language + ".wikipedia.org/w/index.php?title=" + encodeURIComponent(article) + "&printable=yes"); A: @VolkerK is right, they are using the printable version. Here is an easy way to find out when you know the site is displaying the page in an iframe. In Firefox right click anywhere inside the iframe, from the context menu select "This Frame" then "View frame info" You get the info you need including the Address: Address: http://en.wikipedia.org/w/index.php?title=Chromium&printable=yes A: The jQuery library lets you specify part of a page to retrieve by an Ajax call, with a CSS-like syntax: http://docs.jquery.com/Ajax/load
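Boiled down, the trick in the accepted answer is just pointing a frame at the printable view. A minimal sketch using the same URL pattern (the frame id and article name here are placeholders):

var article = "Chromium";
document.getElementById("WikiFrame").src =
    "http://en.wikipedia.org/w/index.php?title=" +
    encodeURIComponent(article) + "&printable=yes";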
iframe wikipedia article without the wrapper
I want to embed a wikipedia article into a page but I don't want all the wrapper (navigation, etc.) that sits around the articles. I saw it done here: http://www.dayah.com/periodic/. Click on an element and the iframe is displayed and links to the article only (no wrapper). So how'd they do that? Seems like JavaScript handles showing the iframe and constructing the href but after browsing the page's javascript (http://www.dayah.com/periodic/Script/interactivity.js) I still can't figure out how the url is built. Thanks.
[ "The periodic table example loads the printer-friendly version of the wiki artice into an iframe. http://en.wikipedia.org/wiki/Potasium?printable=yes\nit's done in function click_wiki(e) (line 534, interactivity.js)\n\nvar article = el.childNodes[0].childNodes[n_name].innerHTML;\n...\nwindow.frames[\"WikiFrame\"].location.replace(\"http://\" + language + \".wikipedia.org/w/index.php?title=\" + encodeURIComponent(article) + \"&printable=yes\");\n\n", "@VolkerK is right, they are using the printable version.\nHere is an easy way to find out when you know the site is displaying the page in an iframe.\nIn Firefox right click anywhere inside the iframe, from the context menu select \"This Frame\" then \"View frame info\"\nYou get the info you need including the Address:\n\nAddress: http://en.wikipedia.org/w/index.php?title=Chromium&printable=yes\n\n", "The jQuery library lets you specify part of a page to retrieve by an Ajax call, with a CSS-like syntax: http://docs.jquery.com/Ajax/load\n" ]
[ 15, 2, 0 ]
[ "You could always download the site and scrap it. I think everything inside <div id=\"bodyContent\"> is the content of the article - sans navigation, header, footer, etc..\nDon't forget to credit. ;)\n" ]
[ -3 ]
[ "iframe", "javascript", "wikipedia" ]
stackoverflow_0000061902_iframe_javascript_wikipedia.txt
Q: Is there any way to get rid of the long list of usings at the top of my .cs files? As I get more and more namespaces in my solution, the list of using statements at the top of my files grows longer and longer. This is especially the case in my unit tests where for each component that might be called I need to include the using for the interface, the IoC container, and the concrete type. With upward of 17 lines of usings in my integration test files it's just getting downright messy. Does anyone know if there's a way to define a macro for my base using statements? Any other solutions? A: I know I shouldn't say this out loud, but, maybe reconsider your design. 17 usings in 1 file = a lot of coupling (on the namespace level). A: Some people enjoy hiding the usings in a #region. Otherwise, I think you're out of luck. Unless you want to put the namespace on all your referents. A: Can't stand Resharper myself. But I also can't stand messy using statements. I use the Power Commands add-in for VS, which has a handy 'Remove and Sort' using statements command (among other good things). A: There are four possible problems here; The namespaces in your code are dividing your classes too finely. if you have, for example; using MyCompany.Drawing.Vector.Points; using MyCompany.Drawing.Vector.Shapes; using MyCompany.Drawing.Vector.Transformations; consider collapsing them to the single MyCompany.Drawing.Vector namespace. You probably aren't gaining by dividing too much. Visual Studio Code Analysis/FxCop has a rule for this, checking the number of classes in a namespace. Too few and it will warn you. You are putting too many tests into the same class. If you are referencing System.Data, System.Drawing, and System.IO in the same class, consider writing more atomic tests -- some which access databases, some which draw images, and some which access the file system. Then divide each type across three test classes. You are writing tests which do too much. If you are referencing a lot of namespaces, your tests may be coupling too many features together. This kind of coupling can often be buggy, so try to break big, wide-ranging functions into smaller parts, and test these in separate files. Many are redundant. Are they all used, or are they just copy-pasted from other files? Right-click on the code editor and choose from the 'Organise Using' options to remove unused statements. A: Does anyone know if there's a way to define a macro for my base using statements? Do you mean that namespaces you use often are automatically added to each new class? If yes, Resharper can do that too. Additionally it has a feature to put the usings in a region on code clean-up. Resharper may be the way to go (you won't regret it as I can say from my own experience). A: VS2008 added an "Organize Usings" context menu, which has a Sort, Remove, and "Remove and Sort" option which will do what you want per file. The Visual Studio Power Commands add-in adds a context menu in the solution explorer for projects and solutions which is a "Remove and Sort" for all files in the project and all projects in the solution, respectively. A: If you want to change the default using statements that are done when you create a new file, take a look in the C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\ItemTemplates\CSharp\Code\1033 directory. It contains a bunch of zip files that you can modify to change the templates for Code files (Obviously move up the directory structure to change other languages or other types of files). 
See here for more information. A: It may help to use aliasing. Not sure if it's worth it, but instead of: using System.Web.UI; using System.Web.Mail; using System.Web.Security; ... Control ... ... MailMessage ... ... Roles ... you can use: using W = System.Web; ... W.UI.Control ... ... W.Mail.MailMessage ... ... W.Security.Roles ... A: Resharper - the add-in for Visual Studio - has a feature that strips unused usings from a file, but I don't know anything that does quite what you describe. A: In VS2008, you can right click on the CS file and choose 'Organize Usings'. It will strip unused usings and sort them for you too. Other than that, I would just use #region. Also, CTRL+M+O will collapse all your regions, functions, etc. at design time. I use this shortcut A LOT!
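For the #region suggestion above, the collapsed form is simply this (the namespaces are placeholders):

#region Usings
using System;
using System.Collections.Generic;
using MyCompany.Tests.Fixtures;   // made-up namespace for illustration
#endregion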
Is there any way to get rid of the long list of usings at the top of my .cs files?
As I get more and more namespaces in my solution, the list of using statements at the top of my files grows longer and longer. This is especially the case in my unit tests where for each component that might be called I need to include the using for the interface, the IoC container, and the concrete type. With upward of 17 lines of usings in my integration test files it's just getting downright messy. Does anyone know if there's a way to define a macro for my base using statements? Any other solutions?
[ "I know I shouldn't say this out loud, but, maybe reconsider your design.\n17 usings in 1 file = a lot of coupling (on the namespace level).\n", "Some people enjoy hiding the usings in a #region. Otherwise, I think you're out of luck. Unless you want to put the namespace on all your referents.\n", "Can't stand Resharper myself. But I also can't stand messy using statements. I use the Power Commands add-in for VS, which has a handy 'Remove and Sort' using statements command (among other good things).\n", "There are four possible problems here;\nThe namespaces in your code are dividing your classes too finely. if you have, for example;\nusing MyCompany.Drawing.Vector.Points;\nusing MyCompany.Drawing.Vector.Shapes;\nusing MyCompany.Drawing.Vector.Transformations;\n\nconsider collapsing them to the single MyCompany.Drawing.Vector namespace. You probably aren't gaining by dividing too much. Visual Studio Code Analysis/FxCop has a rule for this, checking the number of classes in a namespace. Too few and it will warn you.\nYou are putting too many tests into the same class. If you are referencing System.Data, System.Drawing, and System.IO in the same class, consider writing more atomic tests -- some which access databases, some which draw images, and some which access the file system. Then divide each type across three test classes.\nYou are writing tests which do too much. If you are referencing a lot of namespaces, your tests may be coupling too many features together. This kind of coupling can often be buggy, so try to break big, wide-ranging functions into smaller parts, and test these in seperate files. \nMany are redundant. Are they all used, or are they just copy-pasted from other files. Right-click on the code editor and choose from the 'Organise Using' options to remove unused statements.\n", "\nDoes anyone know if theres a way to\n define a macro for my base using\n statements?\n\nDo you mean that namespaces you use often are automaticly added to each new class? If yes, Resharper can do that too. Additionaly it has a feature to put the usings in a region on code clean-up. Resharper may be the way to go (you won't regrett it as I can say from my own experience).\n", "VS2008 added an \"Organize Usings\" context menu, which has a Sort, Remove, and \"Remove and Sort\" option which will do what you want per file. The Visual Studio Power Commands add-in adds a context menu in the solution explorer for projects and solutions which is a \"Remove and Sort\" for all files in the project and all projects in the solution, respectively.\n", "If you want to change the default using statements that are done when you create a new file, take a look in the C:\\Program Files\\Microsoft Visual Studio 9.0\\Common7\\IDE\\ItemTemplates\\CSharp\\Code\\1033 directory. It contains a bunch of zip files that you can modify to change the templates for Code files (Obviously move up the directory structure to change other languages or other types of files).\nSee here for more information.\n", "It may help to use aliasing. Not sure it it's worth it, but instead of:\nusing System.Web.UI;\nusing System.Web.Mail;\nusing System.Web.Security;\n... Control ...\n... MailMessage ...\n... Roles ... \n\nyou can use:\nusing W = System.Web;\n... W.UI.Control ...\n... W.Mail.MailMessage ...\n... 
W.Security.Roles ...\n\n", "Resharper - the add-in for Visual Studio - has a feature that strips unused usings from a file, but I don't know anything that does quite what you describe.\n", "In VS2008, you can right click on the CS file and choose 'Organize Usings'. It will strip unused usings and sort them for you too. Other than that, I would just use #region. Also, CTRL+M+O will collapse all your regions, functions, etc. at design time. I use this shortcut A LOT!\n" ]
[ 6, 4, 2, 2, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ ".net", "c#", "namespaces", "visual_studio" ]
stackoverflow_0000062219_.net_c#_namespaces_visual_studio.txt
Q: Getting Subclipse in Aptana to work with the newest release of Subversion The version of Subclipse (1.2.4) currently available through Aptana's automatic Plugins Manager does not work with the newest version of Subversion. I see on the Subclipse website however that they have 1.4.2 out for Eclipse. So I added a new remote update site to my Update manager. When I tried to install it, it told me I needed Mylyn 3.0.0. So after much searching I found Mylyn 3.0.0 and added another new remote update site to my update manager. Then when I tried to install that, it told me I needed org.eclipse.ui 3.3.0 or equivalent. Looking at the configuration details for Aptana, it looks like it is built against eclipse 3.2.2. Does anyone know if there is a way to upgrade the version of Eclipse Aptana that is built against to 3.3.0? Or if there is some other way to get Subclipse to work with the very newest version of Subversion? I know this isn't necessarily a "programming" question, but I hope it's ok since it's highly relevant to the programming experience. A: Subclipse does not require Mylyn, but the update site includes a plugin that integrates Mylyn and Subclipse. This is intended for people that use Mylyn. In your case, you would want to just de-select Mylyn in the update dialog. Subclipse also requires Subversion 1.5 and the corresponding version of the JavaHL native libraries. I have written the start of an FAQ to help people understand JavaHL and how to get it. See: http://desktop-eclipse.open.collab.net/wiki/JavaHL A: I've had problems with JavaHL in Eclipse Ganymede, when it worked fine in Eclipse Europa. I'm not sure how Aptana is different, but try either upgrading JavaHL or switching to the pure-java SVNKit implementation within the Subclipse config. A: if you're not going to be using mylyn, just uncheck that dependency. I'm not really familiar with Aptana, but in eclipse you can expand what's being installed and uncheck anything you don't need. A: I used the update url and I installed the JavaHL adapter, the Subclipse project itself and the SVNKit adapter BETA. After this it worked fine for me, this is for the Linux platform, hope it works for you.
Getting Subclipse in Aptana to work with the newest release of Subversion
The version of Subclipse (1.2.4) currently available through Aptana's automatic Plugins Manager does not work with the newest version of Subversion. I see on the Subclipse website however that they have 1.4.2 out for Eclipse. So I added a new remote update site to my Update manager. When I tried to install it, it told me I needed Mylyn 3.0.0. So after much searching I found Mylyn 3.0.0 and added another new remote update site to my update manager. Then when I tried to install that, it told me I needed org.eclipse.ui 3.3.0 or equivalent. Looking at the configuration details for Aptana, it looks like it is built against eclipse 3.2.2. Does anyone know if there is a way to upgrade the version of Eclipse Aptana that is built against to 3.3.0? Or if there is some other way to get Subclipse to work with the very newest version of Subversion? I know this isn't necessarily a "programming" question, but I hope it's ok since it's highly relevant to the programming experience.
[ "Subclipse does not require Mylyn, but the update site includes a plugin that integrates Mylyn and Subclipse. This is intended for people that use Mylyn. In your case, you would want to just de-select Mylyn in the update dialog.\nSubclipse also requires Subversion 1.5 and the corresponding version of the JavaHL native libraries. I have written the start of an FAQ to help people understand JavaHL and how to get it. See: http://desktop-eclipse.open.collab.net/wiki/JavaHL\n", "I've had problems with JavaHL in Eclipse Ganymede, when it worked fine in Eclipse Europa. I'm not sure how Aptana is different, but try either upgrading JavaHL or switching to the pure-java SVNKit implementation within the Subclipse config.\n", "if you're not going to be using mylyn just uncheck that dependency. I'm not really familiar with Aptana, but in eclipse you can expand whats being installed and uncheck anything you don't need.\n", "I used the update url and I installed the JavaHL adapter, the Subclipse project itself and the SVNKit adapter BETA.\nAfter this it worked fine for me, this is for linux platform hope it works for you.\n" ]
[ 18, 7, 4, 3 ]
[]
[]
[ "aptana", "eclipse", "subclipse", "svn" ]
stackoverflow_0000000079_aptana_eclipse_subclipse_svn.txt
Q: reload a .sql schema without restarting mysqld Is it possible to reload a schema file without having to restart mysqld? I am working in just one db in a sea of many and would like to have my changes refreshed without doing a cold-restart. A: When you say "reload a schema file", I assume you're referring to a file that has all the SQL statements defining your database schema? i.e. creating tables, views, stored procedures, etc.? The solution is fairly simple - keep all the SQL that creates the tables, etc. in a file, and before all the CREATE statements, add a DELETE/DROP statement to remove what's already there. Then when you want to do a reload, just do: cat myschemafile.sql | mysql -u userid -p databasename
reload a .sql schema without restarting mysqld
Is it possible to reload a schema file without having to restart mysqld? I am working in just one db in a sea of many and would like to have my changes refreshed without doing a cold-restart.
[ "When you say \"reload a schema file\", I assume you're referring to a file that has all the SQL statements defining your database schema? i.e. creating tables, views, stored procedures, etc.?\nThe solution is fairly simple - keep all the SQL that creates the tables, etc. in a file, and before all the CREATE statements, add a DELETE/DROP statement to remove what's already there. Then when you want to do a reload, just do:\ncat myschemafile.sql | mysql -u userid -p databasename\n\n" ]
[ 1 ]
[]
[]
[ "mysql", "schema", "sql" ]
stackoverflow_0000062593_mysql_schema_sql.txt
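A minimal runnable sketch of the drop-and-recreate reload pattern described in the answer above. It uses Python's standard sqlite3 module purely for illustration (the original answer targets MySQL via the mysql command line); the schema text and database filename here are hypothetical:

import sqlite3

# Hypothetical schema file contents: each CREATE is preceded by a DROP
# so the script can be re-run against a live database without a restart.
schema_sql = """
DROP TABLE IF EXISTS emp;
CREATE TABLE emp (
    empno INTEGER PRIMARY KEY,
    ename TEXT NOT NULL,
    sal   REAL
);
"""

conn = sqlite3.connect("example.db")  # assumed filename
conn.executescript(schema_sql)        # runs every statement in order
conn.commit()
conn.close()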
Q: Merging records for Mnesia I am trying to refactor some code I have for software that collects current status of agents in a call queue. Currently, for each of the 6 or so events that I listen to, I check in a Mnesia table if an agent exists and change some values in the row depending on the event or add it as new if the agent doesn't exist. Currently I have this Mnesia transaction in each event and of course that is a bunch of repeated code for checking the existence of agents and so on. I'm trying to change it so that there is one function like change_agent/2 that I call from the events that handles this for me. My problems are of course records.... I find no way of dynamically creating them or merging 2 of them together or anything. Preferably there would be a function I could call like: change_agent("001", #agent{id = "001", name = "Steve"}). change_agent("001", #agent{id = "001", paused = 0, talking_to = "None"}). A: It is difficult to write generic access functions for records. One workaround for this is the 'exprecs' library, which will generate code for low-level record access functions. The thing you need to do is to add the following lines to a module: -compile({parse_transform, exprecs}). -export_records([...]). % name the records that you want to 'export' The naming convention for the access functions may look strange, but was inspired by a proposal from Richard O'Keefe. It is, at least, consistent, and unlikely to clash with existing functions. (: A: I wrote some code a while ago that merges two records. It is not entirely dynamic, but with macros you could easily use it for several records. It works like this: The merge/2 function takes two records and converts them to lists together with the empty record for reference (the record type is defined at compile time, and must be. This is the "undynamic" part). These are then run through the generic function merge/4 which works with lists and takes elements from A if they are defined, otherwise from B if they are defined, or lastly from Default (which is always defined). Here's the code (please excuse StackOverflow's poor Erlang syntax highlighting): %%%---------------------------------------------------------------------------- %%% @spec merge(RecordA, RecordB) -> #my_record{} %%% RecordA = #my_record{} %%% RecordB = #my_record{} %%% %%% @doc Merges two #my_record{} instances. The first takes precedence. %%% @end %%%---------------------------------------------------------------------------- merge(RecordA, RecordB) when is_record(RecordA, my_record), is_record(RecordB, my_record) -> list_to_tuple( lists:append([my_record], merge(tl(tuple_to_list(RecordA)), tl(tuple_to_list(RecordB)), tl(tuple_to_list(#my_record{})), []))). %%%---------------------------------------------------------------------------- %%% @spec merge(A, B, Default, []) -> [term()] %%% A = [term()] %%% B = [term()] %%% Default = [term()] %%% %%% @doc Merges the lists `A' and `B' into a new list taking %%% default values from `Default'. %%% %%% Each element of `A' and `B' is compared against the elements in %%% `Default'. If they match the default, the default is used. If one %%% of them differs from the other and the default value, that element is %%% chosen. If both differ, the element from `A' is chosen. 
%%% @end %%%---------------------------------------------------------------------------- merge([D|ATail], [D|BTail], [D|DTail], To) -> merge(ATail, BTail, DTail, [D|To]); % If default, take from D merge([D|ATail], [B|BTail], [D|DTail], To) -> merge(ATail, BTail, DTail, [B|To]); % If only A default, take from B merge([A|ATail], [_|BTail], [_|DTail], To) -> merge(ATail, BTail, DTail, [A|To]); % Otherwise take from A merge([], [], [], To) -> lists:reverse(To). Feel free to use it in any way you want.
Merging records for Mnesia
I am trying to refactor some code I have for software that collects current status of agents in a call queue. Currently, for each of the 6 or so events that I listen to, I check in a Mnesia table if an agent exists and change some values in the row depending on the event or add it as new if the agent doesn't exist. Currently I have this Mnesia transaction in each event and of course that is a bunch of repeated code for checking the existence of agents and so on. I'm trying to change it so that there is one function like change_agent/2 that I call from the events that handles this for me. My problems are of course records.... I find no way of dynamically creating them or merging 2 of them together or anything. Preferably there would be a function I could call like: change_agent("001", #agent{id = "001", name = "Steve"}). change_agent("001", #agent{id = "001", paused = 0, talking_to = "None"}).
[ "It is difficult to write generic access functions for records.\nOne workaround for this is the 'exprecs' library, which\nwill generate code for low-level record access functions.\nThe thing you need to do is to add the following lines to \na module:\n-compile({parse_transform, exprecs}).\n-export_records([...]). % name the records that you want to 'export'\n\nThe naming convention for the access functions may look strange, but was inspired by a proposal from Richard O'Keefe. It is, at least, consistent, and unlikely to clash with existing functions. (:\n", "I wrote some code a while ago that merges two records. It is not entirely dynamic, but with macros you could easily use it for several records.\nIt works like this: The merge/2 function takes two records and converts them to lists together with the empty record for reference (the record type is defined at compile time, and must be. This is the \"undynamic\" part). These are then run through the generic function merge/4 which works with lists and takes elements from A if they are defined, otherwise from B if they are defined, or lastly from Default (which is always defined).\nHere's the code (please excuse StackOverflow's poor Erlang syntax highlighting):\n%%%----------------------------------------------------------------------------\n%%% @spec merge(RecordA, RecordB) -> #my_record{}\n%%% RecordA = #my_record{}\n%%% RecordB = #my_record{}\n%%%\n%%% @doc Merges two #my_record{} instances. The first takes precedence.\n%%% @end\n%%%----------------------------------------------------------------------------\nmerge(RecordA, RecordB) when is_record(RecordA, my_record),\n is_record(RecordB, my_record) ->\n list_to_tuple(\n lists:append([my_record],\n merge(tl(tuple_to_list(RecordA)),\n tl(tuple_to_list(RecordB)),\n tl(tuple_to_list(#my_record{})),\n []))).\n\n%%%----------------------------------------------------------------------------\n%%% @spec merge(A, B, Default, []) -> [term()]\n%%% A = [term()]\n%%% B = [term()]\n%%% Default = [term()]\n%%%\n%%% @doc Merges the lists `A' and `B' into a new list taking\n%%% default values from `Default'.\n%%%\n%%% Each element of `A' and `B' is compared against the elements in\n%%% `Default'. If they match the default, the default is used. If one\n%%% of them differs from the other and the default value, that element is\n%%% chosen. If both differ, the element from `A' is chosen.\n%%% @end\n%%%----------------------------------------------------------------------------\nmerge([D|ATail], [D|BTail], [D|DTail], To) ->\n merge(ATail, BTail, DTail, [D|To]); % If default, take from D\nmerge([D|ATail], [B|BTail], [D|DTail], To) ->\n merge(ATail, BTail, DTail, [B|To]); % If only A default, take from B\nmerge([A|ATail], [_|BTail], [_|DTail], To) ->\n merge(ATail, BTail, DTail, [A|To]); % Otherwise take from A\nmerge([], [], [], To) ->\n lists:reverse(To).\n\nFeel free to use it in any way you want.\n" ]
[ 3, 2 ]
[]
[]
[ "erlang", "mnesia" ]
stackoverflow_0000062245_erlang_mnesia.txt
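The merge/4 rule in the record above (take from A unless it equals the default, else from B, else keep the default) is not Erlang-specific. Here is a hedged Python sketch of the same logic, with the record modelled as a dict of field defaults; all names are illustrative and not part of the original code:

# Defaults play the role of the empty #my_record{} instance.
DEFAULTS = {"id": None, "name": None, "paused": None, "talking_to": None}

def merge(a, b, defaults=DEFAULTS):
    """Merge two records field by field; values in `a` take precedence."""
    out = {}
    for key, default in defaults.items():
        a_val = a.get(key, default)
        b_val = b.get(key, default)
        if a_val != default:
            out[key] = a_val    # A differs from the default: take A
        elif b_val != default:
            out[key] = b_val    # only B differs: take B
        else:
            out[key] = default  # both are defaults: keep the default
    return out

existing = {"id": "001", "name": "Steve"}
update = {"id": "001", "paused": 0, "talking_to": "None"}
print(merge(update, existing))
# {'id': '001', 'name': 'Steve', 'paused': 0, 'talking_to': 'None'}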
Q: Performing validation on a databound object after the property has been updated I have a basic form with controls that are databound to an object implementing the INotifyPropertyChanged interface. I would like to add some validation to a couple of properties but don't want to go through implementing IDataErrorInfo for the sake of validating a couple of properties. I have created the functions that perform the validation and return the error message (if applicable) in the object. What I would like to do is call these functions from my form when the relevant properties on the object have changed, and set up the ErrorProvider control in my form with any error messages that have been returned from the validation functions. I have tried hooking up event handlers to the Validating and LostFocus events, but these seem to fire before my object is updated, and hence they are not validating the correct data. It's only when I leave the textbox, go back in and then leave again that the validation runs against the correct data. Is there another event that I can hook into so that I can call these validation functions after the property on my object has been updated? Or am I better off just implementing the IDataErrorInfo interface? A: I'm not sure exactly what the problem is: are you saying that you can't get the property to set until the control loses focus? If so, you need to set the binding to update OnPropertyChanged instead of OnValidation. Binding to OnPropertyChanged means the binding is updated immediately, while OnValidation only updates the underlying object when a Validation is triggered (which for most controls is when they lose focus). A: I think I've found a solution to the problem with the help of Cameron's post. I have changed the binding to update OnPropertyChanged and now when I wire up the event handler to the LostFocus event the validation is being performed on the "new" value from the textbox rather than what was previously held in the object.
Performing validation on a databound object after the property has been updated
I have a basic form with controls that are databound to an object implementing the INotifyPropertyChanged interface. I would like to add some validation to a couple of properties but don't want to go through implementing IDataErrorInfo for the sake of validating a couple of properties. I have created the functions that perform the validation and return the error message (if applicable) in the object. What I would like to do is call these functions from my form when the relevant properties on the object have changed, and set up the ErrorProvider control in my form with any error messages that have been returned from the validation functions. I have tried hooking up event handlers to the Validating and LostFocus events, but these seem to fire before my object is updated, and hence they are not validating the correct data. It's only when I leave the textbox, go back in and then leave again that the validation runs against the correct data. Is there another event that I can hook into so that I can call these validation functions after the property on my object has been updated? Or am I better off just implementing the IDataErrorInfo interface?
[ "I'm not sure exactly what the problem is: are you saying that you can't get the property to set until the control loses focus?\nIf so, you need to set the binding to update OnPropertyChanged instead of OnValidation.\nBinding to OnPropertyChanged means the binding is updated immediately, while OnValidation only updates the underlying object when a Validation is triggered (which for most controls is when they lose focus).\n", "I think I've found a solution to the problem with the help of Cameron's post. I have changed the binding to update OnPropertyChanged and now when I wire up the event handler to the LostFocus event the validation is being performed on the \"new\" value from the textbox rather than what was previously held in the object.\n" ]
[ 1, 1 ]
[]
[]
[ ".net", "data_binding", "validation", "vb.net" ]
stackoverflow_0000062686_.net_data_binding_validation_vb.net.txt
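The fix in the record above hinges on ordering: the bound property must be written before validation reads it. As a framework-neutral illustration (Python, with made-up names; the original concerns .NET data binding and WinForms events), a setter that runs its validators only after storing the new value reproduces the desired behaviour:

class Note:
    def __init__(self):
        self._text = ""
        self.errors = []

    @property
    def text(self):
        return self._text

    @text.setter
    def text(self, value):
        self._text = value  # update the object first (the OnPropertyChanged analogue)
        self._validate()    # then validate the *new* value, not the stale one

    def _validate(self):
        self.errors = []
        if not self._text.strip():
            self.errors.append("Text must not be empty")

n = Note()
n.text = "   "
print(n.errors)  # ['Text must not be empty'] - validation saw the updated value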
Q: What is the most efficient way to paginate my site when querying with SQL? I am trying to paginate the results of an SQL query for use on a web page. The language and the database backend are PHP and SQLite. The code I'm using works something like this (page numbering starts at 0) http://example.com/table?page=0 page = request(page) per = 10 // results per page offset = page * per // take one extra record so we know if a next link is needed resultset = query(select columns from table where conditions limit offset, per + 1) if(page > 0) show a previous link if(count(resultset) > per) show a next link unset(resultset[per]) display results Are there more efficient ways to do pagination than this? One problem that I can see with my current method is that I must store all 10 (or however many) results in memory before I start displaying them. I do this because PDO does not guarantee that the row count will be available. Is it more efficient to issue a COUNT(*) query to learn how many rows exist, then stream the results to the browser? Is this one of those "it depends on the size of your table, and whether the count(*) query requires a full table scan in the database backend", "do some profiling yourself" kind of questions? A: I've opted to go with the COUNT(*) two query method, because it allows me to create a link directly to the last page, which the other method does not allow. Performing the count first also allows me to stream the results, and so should work well with higher numbers of records with less memory. Consistency between pages is not an issue for me. Thank you for your help. A: There are several cases where I have a fairly complex (9-12 table join) query, returning many thousands of rows, which I need to paginate. Obviously to paginate nicely, you need to know the total size of the result. With MySQL databases, using the SQL_CALC_FOUND_ROWS directive in the SELECT can help you achieve this easily, although the jury is out on whether that will be more efficient for you to do. However, since you are using SQLite, I recommend sticking with the 2 query approach. Here is a very concise thread on the matter. A: I'd suggest just doing the count first. A count(primary key) is a very efficient query. A: I doubt that it will be a problem for your users to wait for the backend to return ten rows. (You can make it up to them by being good at specifying image dimensions, make the webserver negotiate compressed data transfers when possible, etc.) I don't think that it will be very useful for you to do a count(*) initially. If you are up to some complicated coding: When the user is looking at page x, use ajax-like magic to pre-load page x+1 for improved user experience. A general note about pagination: If the data changes while the user browses through your pages, it may be a problem if your solution demands a very high level of consistency. I've written a note about that elsewhere.
What is the most efficient way to paginate my site when querying with SQL?
I am trying to paginate the results of an SQL query for use on a web page. The language and the database backend are PHP and SQLite. The code I'm using works something like this (page numbering starts at 0) http://example.com/table?page=0 page = request(page) per = 10 // results per page offset = page * per // take one extra record so we know if a next link is needed resultset = query(select columns from table where conditions limit offset, per + 1) if(page > 0) show a previous link if(count(resultset) > per) show a next link unset(resultset[per]) display results Are there more efficient ways to do pagination than this? One problem that I can see with my current method is that I must store all 10 (or however many) results in memory before I start displaying them. I do this because PDO does not guarantee that the row count will be available. Is it more efficient to issue a COUNT(*) query to learn how many rows exist, then stream the results to the browser? Is this one of those "it depends on the size of your table, and whether the count(*) query requires a full table scan in the database backend", "do some profiling yourself" kind of questions?
[ "I've opted to go with the COUNT(*) two query method, because it allows me to create a link directly to the last page, which the other method does not allow. Performing the count first also allows me to stream the results, and so should work well with higher numbers of records with less memory.\nConsistency between pages is not an issue for me. Thank you for your help.\n", "There are several cases where I have a fairly complex (9-12 table join) query, returning many thousands of rows, which I need to paginate. Obviously to paginate nicely, you need to know the total size of the result. With MySQL databases, using the SQL_CALC_FOUND_ROWS directive in the SELECT can help you achieve this easily, although the jury is out on whether that will be more efficient for you to do.\nHowever, since you are using SQLite, I recommend sticking with the 2 query approach. Here is a very concise thread on the matter.\n", "I'd suggest just doing the count first. A count(primary key) is a very efficient query.\n", "I doubt that it will be a problem for your users to wait for the backend to return ten rows. (You can make it up to them by being good at specifying image dimensions, make the webserver negotiate compressed data transfers when possible, etc.)\nI don't think that it will be very useful for you to do a count(*) initially.\nIf you are up to some complicated coding: When the user is looking at page x, use ajax-like magic to pre-load page x+1 for improved user experience.\nA general note about pagination:\nIf the data changes while the user browses through your pages, it may be a problem if your solution demands a very high level of consistency. I've written a note about that elsewhere.\n" ]
[ 2, 2, 1, 1 ]
[]
[]
[ "pdo", "php", "sql", "sqlite" ]
stackoverflow_0000052723_pdo_php_sql_sqlite.txt
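A self-contained sketch of the chosen COUNT(*)-then-page approach from the record above, written with Python's sqlite3 for illustration (the original environment is PHP/PDO; the table, columns, and page size here are invented):

import sqlite3

PER = 10  # results per page

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, label TEXT)")
conn.executemany("INSERT INTO items (label) VALUES (?)",
                 [("item %d" % i,) for i in range(35)])

def fetch_page(page):
    total = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
    last_page = max((total - 1) // PER, 0)  # knowing the count allows a direct last-page link
    cursor = conn.execute(
        "SELECT id, label FROM items ORDER BY id LIMIT ? OFFSET ?",
        (PER, page * PER))
    rows = cursor.fetchall()  # or iterate the cursor to stream rows to the browser
    return rows, page > 0, page < last_page

rows, has_prev, has_next = fetch_page(0)
print(len(rows), has_prev, has_next)  # 10 False True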
Q: Predefined Dialog templates in VB.NET? In VB.NET is there a library of template dialogs I can use? It's easy to create a custom dialog and inherit from that, but it seems like there would be some templates for that sort of thing. I just need something simple like Save/Cancel, Yes/No, etc. Edit: MessageBox is not quite enough, because I want to add drop-down menus, listboxes, grids, etc. If I had a dialog form where I could ask for some pre-defined buttons, each of which returned a modal result and closed the form, then I could add those controls and the buttons would already be there. A: Do you need something more than what can be provided by MsgBox? MsgBox("Do you want to see this message?", MsgBoxStyle.OkCancel + MsgBoxStyle.Information, "Respond") A: Why not create your own template? I've done that with several types of forms, not just dialogs. It is a great way to give yourself a jump-start. Create your basic dialog, keeping it as generic as possible, then save it as a template. Here is an article that will help you: http://www.builderau.com.au/program/dotnet/soa/Save-time-with-Visual-Studio-2005-project-templates/0,339028399,339285540,00.htm And: http://msdn.microsoft.com/en-us/magazine/cc188697.aspx A: Are you unable to use the MessageBox class? A: Of course there's MessageBox (shorthand MsgBox in VB.Net) and also the Windows common dialogs like Open File, Save File, Print, ColorPicker, etc. However, none of those really qualify as templates. I can sympathize with wanting a better message box from time to time. You might try Code Project: I'll bet you'll see a dozen...
Predefined Dialog templates in VB.NET?
In VB.NET is there a library of template dialogs I can use? It's easy to create a custom dialog and inherit from that, but it seems like there would be some templates for that sort of thing. I just need something simple like Save/Cancel, Yes/No, etc. Edit: MessageBox is not quite enough, because I want to add drop-down menus, listboxes, grids, etc. If I had a dialog form where I could ask for some pre-defined buttons, each of which returned a modal result and closed the form, then I could add those controls and the buttons would already be there.
[ "Do you need something more than what can be provided by MsgBox?\nMsgBox(\"Do you want to see this message?\", MsgBoxStyle.OkCancel + MsgBoxStyle.Information, \"Respond\")\n\n", "Why not create your own template? I've done that with several types of forms, not just dialogs. It is a great way to give yourself a jump-start.\nCreate your basic dialog, keeping it as generic as possible, then save it as a template.\nHere is an article that will help you:\nhttp://www.builderau.com.au/program/dotnet/soa/Save-time-with-Visual-Studio-2005-project-templates/0,339028399,339285540,00.htm\nAnd:\nhttp://msdn.microsoft.com/en-us/magazine/cc188697.aspx\n", "Are you unable to use the MessageBox class?\n", "Of course there's MessageBox (shorthand MsgBox in VB.Net) and also the Windows common dialogs like Open File, Save File, Print, ColorPicker, etc.\nHowever, none of those really qualify as templates. \nI can sympathize with wanting a better message box from time to time. You might try Code Project: I'll bet you'll see a dozen...\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "dialog", "templates", "vb.net" ]
stackoverflow_0000062906_dialog_templates_vb.net.txt
Q: E4X : Assigning to root node I am using Adobe Flex/Air here, but as far as I know this applies to all of JavaScript. I have come across this problem a few times, and there must be an easy solution out there! Suppose I have the following XML (using e4x): var xml:XML = <root><example>foo</example></root> I can change the contents of the example node using the following code: xml.example = "bar"; However, if I have this: var xml:XML = <root>foo</root> How do I change the contents of the root node? xml = "bar"; Obviously doesn't work as I'm attempting to assign a string to an XML object. A: It seems you confuse variables with the values they contain. The assignment node = textInput.text; changes the value the variable node points to; it doesn't change anything with the object that node currently points to. To do what you want to do you can use the setChildren method of the XML class: node.setChildren(textInput.text) A: Ah thank you Theo - indeed seems I was confused there. I think the root of the confusion came from the fact I was able to assign textInput.text = node; which I now guess is just implicitly calling XML.toString() to convert XML->String. setChildren() is what I was looking for. A: If you're trying to change the root element of a document, you don't really need to-- just throw out the existing document and replace it. Alternatively, just wrap your element in a more proper root element (you shouldn't be editing the root node anyway) and you'd be set. Of course, that doesn't answer your question. There's an ugly JS hack that can do what you want, but bear in mind that it's likely far slower than doing the above. Anyway, here it is: var xml = <root>foo</root>; // </fix_syntax_highlighter> var parser = new DOMParser(); var serializer = new XMLSerializer(); // Parse xml as DOM document // Must inject "<root></root>" wrapper because // E4X's toString() method doesn't give it to us // Not sure if this is expected behaviour.. doesn't seem so to me. var xmlDoc = parser.parseFromString("<root>" + xml.toString() + "</root>", "text/xml"); // Make the change xmlDoc.documentElement.firstChild.nodeValue = "CHANGED"; // Serialize back to string and then to E4X XML() xml = new XML(serializer.serializeToString(xmlDoc)); You can ignore the fix_syntax_highlighter comment.
E4X : Assigning to root node
I am using Adobe Flex/Air here, but as far as I know this applies to all of JavaScript. I have come across this problem a few times, and there must be an easy solution out there! Suppose I have the following XML (using e4x): var xml:XML = <root><example>foo</example></root> I can change the contents of the example node using the following code: xml.example = "bar"; However, if I have this: var xml:XML = <root>foo</root> How do I change the contents of the root node? xml = "bar"; Obviously doesn't work as I'm attempting to assign a string to an XML object.
[ "It seems you confuse variables with the values they contain. The assignment\nnode = textInput.text;\n\nchanges the value the variable node points to; it doesn't change anything with the object that node currently points to. To do what you want to do you can use the setChildren method of the XML class:\nnode.setChildren(textInput.text)\n\n", "Ah thank you Theo - indeed seems I was confused there. I think the root of the confusion came from the fact I was able to assign \ntextInput.text = node; \n\nwhich I now guess is just implicitly calling XML.toString() to convert XML->String. setChildren() is what I was looking for.\n", "If you're trying to change the root element of a document, you don't really need to-- just throw out the existing document and replace it. Alternatively, just wrap your element in a more proper root element (you shouldn't be editing the root node anyway) and you'd be set.\nOf course, that doesn't answer your question. There's an ugly JS hack that can do what you want, but bear in mind that it's likely far slower than doing the above. Anyway, here it is:\nvar xml = <root>foo</root>; // </fix_syntax_highlighter>\nvar parser = new DOMParser();\nvar serializer = new XMLSerializer();\n\n// Parse xml as DOM document\n// Must inject \"<root></root>\" wrapper because \n// E4X's toString() method doesn't give it to us\n// Not sure if this is expected behaviour.. doesn't seem so to me.\nvar xmlDoc = parser.parseFromString(\"<root>\" + \n xml.toString() + \"</root>\", \"text/xml\");\n\n// Make the change\nxmlDoc.documentElement.firstChild.nodeValue = \"CHANGED\";\n\n// Serialize back to string and then to E4X XML()\nxml = new XML(serializer.serializeToString(xmlDoc));\n\nYou can ignore the fix_syntax_highlighter comment.\n" ]
[ 5, 1, 0 ]
[]
[]
[ "air", "apache_flex", "e4x", "javascript" ]
stackoverflow_0000062086_air_apache_flex_e4x_javascript.txt
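The variable-versus-value confusion resolved in the exchange above is not E4X-specific. A small Python analogue with the standard xml.etree.ElementTree module (names hypothetical) shows the same distinction between rebinding a variable and mutating the node it refers to:

import xml.etree.ElementTree as ET

node = ET.fromstring("<root>foo</root>")
node = "bar"  # rebinds the variable; the XML object is untouched

node = ET.fromstring("<root>foo</root>")
node.text = "bar"  # mutates the node itself, like setChildren() in E4X
print(ET.tostring(node).decode())  # <root>bar</root>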
Q: Shared/Static variable in Global.asax isolated per request? I have some ASP.NET web services which all share a common helper class they only need to instantiate one instance of per server. It's used for simple translation of data, but does spend some time during start-up loading things from the web.config file, etc. The helper class is 100% thread-safe. Think of it as a simple library of utility calls. I'd make all the methods shared on the class, but I want to load the initial configuration from web.config. We've deployed the web services to IIS 6.0 and using an Application Pool, with a Web Garden of 15 workers. I declared the helper class as a Private Shared variable in Global.asax, and added a lazy load Shared ReadOnly property like this: Private Shared _helper As MyHelperClass Public Shared ReadOnly Property Helper() As MyHelperClass Get If _helper Is Nothing Then _helper = New MyHelperClass() End If Return _helper End Get End Property I have logging code in the constructor for MyHelperClass(), and it shows the constructor running for each request, even on the same thread. I'm sure I'm just missing some key detail of ASP.NET but MSDN hasn't been very helpful. I've tried doing similar things using both Application("Helper") and Cache("Helper") and I still saw the constructor run with each request. A: You can place your Helper in the Application State. Do this in global.asax: void Application_Start(object sender, EventArgs e) { Application.Add("MyHelper", new MyHelperClass()); } You can use the Helper that way: MyHelperClass helper = (MyHelperClass)HttpContext.Current.Application["MyHelper"]; helper.Foo(); This results in a single instance of the MyHelperClass class that is created on application start and lives in application state. Since the instance is created in Application_Start, this happens only once for each HttpApplication instance and not per Request. A: I've done something like this in my own app in the past and it caused all kinds of weird errors. Every user will have access to everyone else's data in the property. Plus you could end up with one user being in the middle of using it and then getting cut off because it's being requested by another user. No, they're not isolated. A: It's not wise to use application state unless you absolutely require it; things are much simpler if you stick to using per-request objects. Any addition of state to the helper classes could cause all sorts of subtle errors. Use the HttpContext.Current items collection and initialise it per request. A VB module would do what you want, but you must be sure not to make it stateful.
Shared/Static variable in Global.asax isolated per request?
I have some ASP.NET web services which all share a common helper class they only need to instantiate one instance of per server. It's used for simple translation of data, but does spend some time during start-up loading things from the web.config file, etc. The helper class is 100% thread-safe. Think of it as a simple library of utility calls. I'd make all the methods shared on the class, but I want to load the initial configuration from web.config. We've deployed the web services to IIS 6.0 and using an Application Pool, with a Web Garden of 15 workers. I declared the helper class as a Private Shared variable in Global.asax, and added a lazy load Shared ReadOnly property like this: Private Shared _helper As MyHelperClass Public Shared ReadOnly Property Helper() As MyHelperClass Get If _helper Is Nothing Then _helper = New MyHelperClass() End If Return _helper End Get End Property I have logging code in the constructor for MyHelperClass(), and it shows the constructor running for each request, even on the same thread. I'm sure I'm just missing some key detail of ASP.NET but MSDN hasn't been very helpful. I've tried doing similar things using both Application("Helper") and Cache("Helper") and I still saw the constructor run with each request.
[ "You can place your Helper in the Application State. Do this in global.asax:\n void Application_Start(object sender, EventArgs e)\n {\n Application.Add(\"MyHelper\", new MyHelperClass());\n }\n\nYou can use the Helper that way:\n MyHelperClass helper = (MyHelperClass)HttpContext.Current.Application[\"MyHelper\"];\n helper.Foo();\n\nThis results in a single instance of the MyHelperClass class that is created on application start and lives in application state. Since the instance is created in Application_Start, this happens only once for each HttpApplication instance and not per Request.\n", "I've done something like this in my own app in the past and it caused all kinds of weird errors.\nEvery user will have access to everyone else's data in the property. Plus you could end up with one user being in the middle of using it and then getting cut off because it's being requested by another user.\nNo, they're not isolated.\n", "It's not wise to use application state unless you absolutely require it; things are much simpler if you stick to using per-request objects. Any addition of state to the helper classes could cause all sorts of subtle errors. Use the HttpContext.Current items collection and initialise it per request. A VB module would do what you want, but you must be sure not to make it stateful.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "asp.net", "vb.net", "web_services" ]
stackoverflow_0000062588_asp.net_vb.net_web_services.txt
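One point worth underlining for the record above: a Web Garden of 15 workers runs 15 separate processes, so even a correct per-process singleton is constructed once per worker, which may account for some of the repeated constructor logging. As a language-neutral sketch (Python, illustrative names only), the lazy, thread-safe initialisation being attempted looks like this:

import threading

class MyHelper:
    def __init__(self):
        print("constructor ran")  # expect one line per process, not per request

_helper = None
_lock = threading.Lock()

def get_helper():
    global _helper
    if _helper is None:          # cheap check without taking the lock
        with _lock:
            if _helper is None:  # re-check inside the lock to avoid double construction
                _helper = MyHelper()
    return _helper

a = get_helper()
b = get_helper()
print(a is b)  # True - a single instance per process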
Q: Yes/No dialog in Java ME I'm looking for a simple solution for a yes/no dialog to use in a Java ME midlet. I'd like to use it like this but other ways are okey. if (YesNoDialog.ask("Are you sure?") == true) { // yes was chosen } else { // no was chosen } A: You need an Alert: An alert is a screen that shows data to the user and waits for a certain period of time before proceeding to the next Displayable. An alert can contain a text string and an image. The intended use of Alert is to inform the user about errors and other exceptional conditions. With 2 commands ("Yes"/"No" in your case): If there are two or more Commands present on the Alert, it is automatically turned into a modal Alert, and the timeout value is always FOREVER. The Alert remains on the display until a Command is invoked. These are built-in classes supported in MIDP 1.0 and higher. Also your code snippet will never work. Such an API would need to block the calling thread awaiting for the user to select and answer. This goes exactly in the opposite direction of the UI interaction model of MIDP, which is based in callbacks and delegation. You need to provide your own class, implementing CommandListener, and prepare your code for asynchronous execution. Here is an (untested!) example class based on Alert: public class MyPrompter implements CommandListener { private Alert yesNoAlert; private Command softKey1; private Command softKey2; private boolean status; public MyPrompter() { yesNoAlert = new Alert("Attention"); yesNoAlert.setString("Are you sure?"); softKey1 = new Command("No", Command.BACK, 1); softKey2 = new Command("Yes", Command.OK, 1); yesNoAlert.addCommand(softKey1); yesNoAlert.addCommand(softKey2); yesNoAlert.setCommandListener(this); status = false; } public Displayable getDisplayable() { return yesNoAlert; } public boolean getStatus() { return status; } public void commandAction(Command c, Displayable d) { status = c.getCommandType() == Command.OK; // maybe do other stuff here. remember this is asynchronous } }; To use it (again, untested and on top of my head): MyPrompter prompt = new MyPrompter(); Display.getDisplay(YOUR_MIDLET_INSTANCE).setCurrent(prompt.getDisplayable()); This code will make the prompt the current displayed form in your app, but it won't block your thread like in the example you posted. You need to continue running and wait for a commandAction invocation.
Yes/No dialog in Java ME
I'm looking for a simple solution for a yes/no dialog to use in a Java ME midlet. I'd like to use it like this but other ways are okey. if (YesNoDialog.ask("Are you sure?") == true) { // yes was chosen } else { // no was chosen }
[ "You need an Alert:\n\nAn alert is a screen that shows data to the user and waits for a certain period of time before proceeding to the next Displayable. An alert can contain a text string and an image. The intended use of Alert is to inform the user about errors and other exceptional conditions.\n\nWith 2 commands (\"Yes\"/\"No\" in your case):\n\nIf there are two or more Commands present on the Alert, it is automatically turned into a modal Alert, and the timeout value is always FOREVER. The Alert remains on the display until a Command is invoked.\n\nThese are built-in classes supported in MIDP 1.0 and higher. Also your code snippet will never work. Such an API would need to block the calling thread awaiting for the user to select and answer. This goes exactly in the opposite direction of the UI interaction model of MIDP, which is based in callbacks and delegation. You need to provide your own class, implementing CommandListener, and prepare your code for asynchronous execution.\nHere is an (untested!) example class based on Alert:\npublic class MyPrompter implements CommandListener {\n\n private Alert yesNoAlert;\n\n private Command softKey1;\n private Command softKey2;\n\n private boolean status;\n\n public MyPrompter() {\n yesNoAlert = new Alert(\"Attention\");\n yesNoAlert.setString(\"Are you sure?\");\n softKey1 = new Command(\"No\", Command.BACK, 1);\n softKey2 = new Command(\"Yes\", Command.OK, 1);\n yesNoAlert.addCommand(softKey1);\n yesNoAlert.addCommand(softKey2);\n yesNoAlert.setCommandListener(this);\n status = false;\n }\n\n public Displayable getDisplayable() {\n return yesNoAlert;\n }\n\n public boolean getStatus() {\n return status;\n }\n\n public void commandAction(Command c, Displayable d) {\n status = c.getCommandType() == Command.OK;\n // maybe do other stuff here. remember this is asynchronous\n }\n\n};\n\nTo use it (again, untested and on top of my head):\nMyPrompter prompt = new MyPrompter();\nDisplay.getDisplay(YOUR_MIDLET_INSTANCE).setCurrent(prompt.getDisplayable());\n\nThis code will make the prompt the current displayed form in your app, but it won't block your thread like in the example you posted. You need to continue running and wait for a commandAction invocation.\n" ]
[ 7 ]
[ "I haven't programmed in Java ME, but I found in its reference for optional packages the\nAdvanced Graphics and User Interface API, and it's used like the Java SE API to create these dialogs with the JOptionPane Class\n\nint JOptionPane.showConfirmDialog(java.awt.Component parentComponent, java.lang.Object message, java.lang.String title, int optionType)\n\nReturn could be\nJOptionPane.YES_OPTION, JOptionPane.NO_OPTION, JOptionPane.CANCEL_OPTION...\n" ]
[ -2 ]
[ "java", "java_me", "midlet", "user_interface" ]
stackoverflow_0000056943_java_java_me_midlet_user_interface.txt
Q: How do I get an auto-scrolling text display on .NET forms - e.g. for credits Need to show a credits screen where I want to acknowledge the many contributors to my application. Want it to be an automatically scrolling box, much like the credits roll at the end of the film. A: An easy-to-use snippet would be to make a multiline textbox. With a timer you may insert line after line and scroll to the end after that: textbox1.SelectionStart = textbox1.Text.Length; textbox1.ScrollToCaret(); textbox1.Refresh(); Not the best method but it's simple and working. There are also some free controls available for exactly this auto-scrolling. A: A quick and dirty method would be to use a Panel with a long list of Label controls on it that list out the various people and contributions. Then you need to set the Panel to be AutoScroll so that it has a vertical scrollbar because the list of labels goes past the bottom of the displayed Panel. Then add a timer that updates the AutoScrollOffset by 1 vertical pixel each timer tick. When you get to the bottom you reset the offset to 0 and carry on. The only downside is the vertical scrollbar showing. A: Embed a WebBrowser control, and use a technique like this to do some javascript scrolling of the HTML content of your choice. A: If you're using a .NET form you can just flick to the HTML view and use the marquee html element: http://www.htmlcodetutorial.com/_MARQUEE.html To be honest it's not great and I wouldn't use it for a commercial job since it can come across as a bit tacky - mainly because it's been overused on so many bad sites in the past. However, it might just be a quick solution to your problem. Another option is to use some of the features of the Scriptaculous JavaScript library: http://script.aculo.us/ It has many functions for moving text around and is much more powerful.
How do I get an auto-scrolling text display on .NET forms - e.g. for credits
Need to show a credits screen where I want to acknowledge the many contributors to my application. Want it to be an automatically scrolling box, much like the credits roll at the end of the film.
[ "An easy-to-use snippet would be to make a multiline textbox. With a timer you may insert line after line and scroll to the end after that:\ntextbox1.SelectionStart = textbox1.Text.Length;\ntextbox1.ScrollToCaret();\ntextbox1.Refresh();\n\nNot the best method but it's simple and working. There are also some free controls available for exactly this auto-scrolling.\n", "A quick and dirty method would be to use a Panel with a long list of Label controls on it that list out the various people and contributions. Then you need to set the Panel to be AutoScroll so that it has a vertical scrollbar because the list of labels goes past the bottom of the displayed Panel. Then add a timer that updates the AutoScrollOffset by 1 vertical pixel each timer tick. When you get to the bottom you reset the offset to 0 and carry on. The only downside is the vertical scrollbar showing.\n", "Embed a WebBrowser control, and use a technique like this to do some javascript scrolling of the HTML content of your choice.\n", "If you're using a .NET form you can just flick to the HTML view and use the marquee html element:\nhttp://www.htmlcodetutorial.com/_MARQUEE.html\nTo be honest it's not great and I wouldn't use it for a commercial job since it can come across as a bit tacky - mainly because it's been overused on so many bad sites in the past. However, it might just be a quick solution to your problem.\nAnother option is to use some of the features of the Scriptaculous JavaScript library:\nhttp://script.aculo.us/\nIt has many functions for moving text around and is much more powerful.\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ ".net", "scroll", "textbox", "winforms" ]
stackoverflow_0000062940_.net_scroll_textbox_winforms.txt
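A runnable sketch of the timer-driven scrolling idea from the answers above, in Python/Tkinter for illustration (the originals target WinForms; the credits text and tick interval are made up):

import tkinter as tk

CREDITS = ["My Application", "", "Lead developer: A. Person",
           "Testing: B. Person", "Thanks to all contributors!"] * 5

root = tk.Tk()
text = tk.Text(root, height=10, width=40)
text.insert("end", "\n".join(CREDITS))
text.configure(state="disabled")  # read-only, like a credits roll

def scroll():
    text.yview_scroll(1, "units")  # move the view down one line per tick
    root.after(300, scroll)        # re-arm the timer (milliseconds)

text.pack()
root.after(300, scroll)
root.mainloop()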
Q: bitwise indexing in C? I'm trying to implement a data compression idea I've had, and since I'm imagining running it against a large corpus of test data, I had thought to code it in C (I mostly have experience in scripting languages like Ruby and Tcl.) Looking through the O'Reilly 'cow' books on C, I realize that I can't simply index the bits of a simple 'char' or 'int' type variable as I'd like to in order to do bitwise comparisons and operations. Am I correct in this perception? Is it reasonable for me to use an enumerated type for representing a bit (and make an array of these, and write functions to convert to and from char)? If so, is such a type and functions defined in a standard library already somewhere? Are there other (better?) approaches? Is there some example code somewhere that someone could point me to? Thanks - A: Following on from what Kyle has said, you can use a macro to do the hard work for you. It is possible. To set the nth bit, use OR: x |= (1 << 5); // sets the 6th-from right To clear a bit, use AND: x &= ~(1 << 5); // clears 6th-from-right To flip a bit, use XOR: x ^= (1 << 5); // flips 6th-from-right Or... #define GetBit(var, bit) ((var & (1 << bit)) != 0) // Returns true / false if bit is set #define SetBit(var, bit) (var |= (1 << bit)) #define FlipBit(var, bit) (var ^= (1 << bit)) Then you can use it in code like: int myVar = 0; SetBit(myVar, 5); if (GetBit(myVar, 5)) { // Do something } A: It is possible. To set the nth bit, use OR: x |= (1 << 5); // sets the 5th-from right To clear a bit, use AND: x &= ~(1 << 5); // clears 5th-from-right To flip a bit, use XOR: x ^= (1 << 5); // flips 5th-from-right To get the value of a bit use shift and AND: (x & (1 << 5)) >> 5 // gets the value (0 or 1) of the 5th-from-right note: the shift right 5 is to ensure the value is either 0 or 1. If you're just interested in 0/not 0, you can get by without the shift. A: Have a look at the answers to this question. A: Theory There is no C syntax for accessing or setting the n-th bit of a built-in datatype (e.g. a 'char'). However, you can access bits using a bitwise AND operation, and set bits using a bitwise OR operation. As an example, say that you have a variable that holds 1101 and you want to check the 2nd bit from the left. Simply perform a bitwise AND with 0100: 1101 0100 ---- AND 0100 If the result is non-zero, then the 2nd bit must have been set; otherwise it was not set. If you want to set the 3rd bit from the left, then perform a bitwise OR with 0010: 1101 0010 ---- OR 1111 You can use the C operators & (for AND) and | (for OR) to perform these tasks. You will need to construct the bit access patterns (the 0100 and 0010 in the above examples) yourself. The trick is to remember that the least significant bit (LSB) counts 1s, the next LSB counts 2s, then 4s etc. So, the bit access pattern for the n-th LSB (starting at 0) is simply the value of 2^n. The easiest way to compute this in C is to shift the binary value 0001 (in this four bit example) to the left by the required number of places. As this value is always equal to 1 in unsigned integer-like quantities, this is just '1 << n' Example unsigned char myVal = 0x65; /* in hex; this is 01100101 in binary. */ /* Q: is the 3-rd least significant bit set (again, the LSB is the 0th bit)? */ unsigned char pattern = 1; pattern <<= 3; /* Shift pattern left by three places.*/ if(myVal & pattern) {printf("Yes!\n");} /* Perform the test. */ /* Set the most significant bit. 
*/ myVal |= (char)(1<<7); This example hasn't been tested, but should serve to illustrate the general idea. A: To query the state of the bit with a specific index: int index_state = variable & ( 1 << bit_index ); To set a bit: variable |= 1 << bit_index; To clear a bit: variable &= ~( 1 << bit_index ); A: IF you want to index a bit you could: bit = (ch & 0x80) >> 7; gets the msb of a char. You could even leave out the right shift and do a test on 0. bit = ch & 0x80; if the bit is set the result will be > 0; obviously, you need to change the mask to get different bits (NB: the 0x80 is the bit mask if it is unclear). It is possible to define numerous masks e.g. #define BIT_0 0x1 // or 1 << 0 #define BIT_1 0x2 // or 1 << 1 #define BIT_2 0x4 // or 1 << 2 #define BIT_3 0x8 // or 1 << 3 etc... This gives you: bit = ch & BIT_1; You can use these definitions in the above code to successfully index a bit within either a macro or a function. To set a bit: ch |= BIT_2; To clear a bit: ch &= ~BIT_3; To toggle a bit: ch ^= BIT_4; Does this help? A: There is a standard library container for bits: std::vector<bool>. It is specialised in the library to be space efficient. There is also a boost dynamic_bitset class. These will let you perform operations on a set of boolean values, using one bit per value of underlying storage. Boost dynamic bitset documentation For the STL documentation, see your compiler documentation. Of course, you can also address the individual bits in other integral types by hand. If you do that, you should use unsigned types so that you don't get undefined behaviour if you decide to do a right shift on a value with the high bit set. However, it sounds like you want the containers. To the commenter who claimed this takes 32x more space than necessary: boost::dynamic_bitset and vector<bool> are specialised to use one bit per entry, and so there is not a space penalty, assuming that you actually want more than the number of bits in a primitive type. These classes allow you to address individual bits in a large container with efficient underlying storage. If you just want (say) 32 bits, by all means, use an int. If you want some large number of bits, you can use a library container. A: Try using bitfields. Be careful: the implementation can vary by compiler. http://publications.gbdirect.co.uk/c_book/chapter6/bitfields.html A: Individual bits can be indexed as follows. Define a struct like this one: struct { unsigned bit0 : 1; unsigned bit1 : 1; unsigned bit2 : 1; unsigned bit3 : 1; unsigned reserved : 28; } bitPattern; Now if I want to know the individual bit values of a var named "value", do the following: CopyMemory( &bitPattern, &value, sizeof(value) ); To see if bit 2 is high or low: int state = bitPattern.bit2; Hope this helps.
bitwise indexing in C?
I'm trying to implement a data compression idea I've had, and since I'm imagining running it against a large corpus of test data, I had thought to code it in C (I mostly have experience in scripting languages like Ruby and Tcl.) Looking through the O'Reilly 'cow' books on C, I realize that I can't simply index the bits of a simple 'char' or 'int' type variable as I'd like to in order to do bitwise comparisons and operations. Am I correct in this perception? Is it reasonable for me to use an enumerated type for representing a bit (and make an array of these, and write functions to convert to and from char)? If so, is such a type and functions defined in a standard library already somewhere? Are there other (better?) approaches? Is there some example code somewhere that someone could point me to? Thanks -
[ "Following on from what Kyle has said, you can use a macro to do the hard work for you.\n\nIt is possible.\nTo set the nth bit, use OR:\nx |= (1 << 5); // sets the 6th-from\n right\nTo clear a bit, use AND:\nx &= ~(1 << 5); // clears\n 6th-from-right\nTo flip a bit, use XOR:\nx ^= (1 << 5); // flips 6th-from-right\n\nOr...\n#define GetBit(var, bit) ((var & (1 << bit)) != 0) // Returns true / false if bit is set\n#define SetBit(var, bit) (var |= (1 << bit))\n#define FlipBit(var, bit) (var ^= (1 << bit))\n\nThen you can use it in code like:\nint myVar = 0;\nSetBit(myVar, 5);\nif (GetBit(myVar, 5))\n{\n // Do something\n}\n\n", "It is possible.\nTo set the nth bit, use OR:\nx |= (1 << 5); // sets the 5th-from right\n\nTo clear a bit, use AND:\nx &= ~(1 << 5); // clears 5th-from-right\n\nTo flip a bit, use XOR:\nx ^= (1 << 5); // flips 5th-from-right\n\nTo get the value of a bit use shift and AND:\n(x & (1 << 5)) >> 5 // gets the value (0 or 1) of the 5th-from-right\n\nnote: the shift right 5 is to ensure the value is either 0 or 1. If you're just interested in 0/not 0, you can get by without the shift.\n", "Have a look at the answers to this question.\n", "Theory\nThere is no C syntax for accessing or setting the n-th bit of a built-in datatype (e.g. a 'char'). However, you can access bits using a bitwise AND operation, and set bits using a bitwise OR operation.\nAs an example, say that you have a variable that holds 1101 and you want to check the 2nd bit from the left. Simply perform a bitwise AND with 0100:\n1101\n0100\n---- AND\n0100\n\nIf the result is non-zero, then the 2nd bit must have been set; otherwise it was not set.\nIf you want to set the 3rd bit from the left, then perform a bitwise OR with 0010:\n1101\n0010\n---- OR\n1111\n\nYou can use the C operators & (for AND) and | (for OR) to perform these tasks. You will need to construct the bit access patterns (the 0100 and 0010 in the above examples) yourself. The trick is to remember that the least significant bit (LSB) counts 1s, the next LSB counts 2s, then 4s etc. So, the bit access pattern for the n-th LSB (starting at 0) is simply the value of 2^n. The easiest way to compute this in C is to shift the binary value 0001 (in this four bit example) to the left by the required number of places. As this value is always equal to 1 in unsigned integer-like quantities, this is just '1 << n'\nExample\nunsigned char myVal = 0x65; /* in hex; this is 01100101 in binary. */\n\n/* Q: is the 3-rd least significant bit set (again, the LSB is the 0th bit)? */\nunsigned char pattern = 1;\npattern <<= 3; /* Shift pattern left by three places.*/\n\nif(myVal & pattern) {printf(\"Yes!\\n\");} /* Perform the test. */\n\n/* Set the most significant bit. */\nmyVal |= (char)(1<<7);\n\nThis example hasn't been tested, but should serve to illustrate the general idea.\n", "To query the state of the bit with a specific index:\nint index_state = variable & ( 1 << bit_index );\n\nTo set a bit:\nvariable |= 1 << bit_index;\n\nTo clear a bit:\nvariable &= ~( 1 << bit_index );\n\n", "IF you want to index a bit you could:\nbit = (ch & 0x80) >> 7;\n\ngets the msb of a char. You could even leave out the right shift and do a test on 0.\nbit = ch & 0x80;\n\nif the bit is set the result will be > 0;\nobviously, you need to change the mask to get different bits (NB: the 0x80 is the bit mask if it is unclear). 
It is possible to define numerous masks e.g.\n#define BIT_0 0x1 // or 1 << 0\n#define BIT_1 0x2 // or 1 << 1\n#define BIT_2 0x4 // or 1 << 2\n#define BIT_3 0x8 // or 1 << 3\n\netc...\nThis gives you:\nbit = ch & BIT_1;\n\nYou can use these definitions in the above code to successfully index a bit within either a macro or a function.\nTo set a bit:\nch |= BIT_2;\n\nTo clear a bit:\nch &= ~BIT_3;\n\nTo toggle a bit:\nch ^= BIT_4;\n\nDoes this help?\n", "There is a standard library container for bits: std::vector<bool>. It is specialised in the library to be space efficient. There is also a boost dynamic_bitset class.\nThese will let you perform operations on a set of boolean values, using one bit per value of underlying storage.\nBoost dynamic bitset documentation\nFor the STL documentation, see your compiler documentation.\nOf course, you can also address the individual bits in other integral types by hand. If you do that, you should use unsigned types so that you don't get undefined behaviour if you decide to do a right shift on a value with the high bit set. However, it sounds like you want the containers.\nTo the commenter who claimed this takes 32x more space than necessary: boost::dynamic_bitset and vector<bool> are specialised to use one bit per entry, and so there is not a space penalty, assuming that you actually want more than the number of bits in a primitive type. These classes allow you to address individual bits in a large container with efficient underlying storage. If you just want (say) 32 bits, by all means, use an int. If you want some large number of bits, you can use a library container.\n", "Try using bitfields. Be careful: the implementation can vary by compiler.\nhttp://publications.gbdirect.co.uk/c_book/chapter6/bitfields.html\n", "Individual bits can be indexed as follows. \nDefine a struct like this one:\nstruct\n{\n unsigned bit0 : 1;\n unsigned bit1 : 1;\n unsigned bit2 : 1;\n unsigned bit3 : 1;\n unsigned reserved : 28;\n} bitPattern; \n\nNow if I want to know the individual bit values of a var named \"value\", do the following:\nCopyMemory( &bitPattern, &value, sizeof(value) );\n\nTo see if bit 2 is high or low:\nint state = bitPattern.bit2;\n\nHope this helps.\n" ]
[ 10, 7, 3, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "bit_manipulation", "c", "coding_style" ]
stackoverflow_0000062689_bit_manipulation_c_coding_style.txt
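The get/set/clear/flip operations from the record above, collected into a short runnable Python sketch that may help readers coming from scripting languages (as the asker is); Python integers are arbitrary-width, but the operators behave like their C counterparts:

def get_bit(x, n):
    return (x >> n) & 1

def set_bit(x, n):
    return x | (1 << n)

def clear_bit(x, n):
    return x & ~(1 << n)

def flip_bit(x, n):
    return x ^ (1 << n)

x = 0b01100101            # the same value as 0x65 in the example above
print(get_bit(x, 3))      # 0 - the 3rd least significant bit is clear
x = set_bit(x, 7)         # set the most significant bit of a byte
print(format(x, "08b"))   # 11100101
x = clear_bit(x, 0)
print(format(x, "08b"))   # 11100100
x = flip_bit(x, 1)
print(format(x, "08b"))   # 11100110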
Q: SQL: aggregate function and group by Consider the Oracle emp table. I'd like to get the employees with the top salary with department = 20 and job = clerk. Also assume that there is no "empno" column, and that the primary key involves a number of columns. You can do this with: select * from scott.emp where deptno = 20 and job = 'CLERK' and sal = (select max(sal) from scott.emp where deptno = 20 and job = 'CLERK') This works, but I have to duplicate the test deptno = 20 and job = 'CLERK', which I would like to avoid. Is there a more elegant way to write this, maybe using a group by? BTW, if this matters, I am using Oracle. A: The following is slightly over-engineered, but is a good SQL pattern for "top x" queries. SELECT * FROM scott.emp WHERE (deptno,job,sal) IN (SELECT deptno, job, max(sal) FROM scott.emp WHERE deptno = 20 and job = 'CLERK' GROUP BY deptno, job ) Also note that this will work in Oracle and Postgres (I think) but not MS SQL. For something similar in MS SQL see question SQL Query to get latest price A: If I was certain of the targeted database I'd go with Mark Nold's solution, but if you ever want some dialect agnostic SQL*, try SELECT * FROM scott.emp e WHERE e.deptno = 20 AND e.job = 'CLERK' AND e.sal = ( SELECT MAX(e2.sal) FROM scott.emp e2 WHERE e.deptno = e2.deptno AND e.job = e2.job ) *I believe this should work everywhere, but I don't have the environments to test it. A: In Oracle I'd do it with an analytical function, so you'd only query the emp table once : SELECT * FROM (SELECT e.*, MAX (sal) OVER () AS max_sal FROM scott.emp e WHERE deptno = 20 AND job = 'CLERK') WHERE sal = max_sal It's simpler, easier to read and more efficient. If you want to modify it to list list this information for all departments, then you'll need to use the "PARTITION BY" clause in OVER: SELECT * FROM (SELECT e.*, MAX (sal) OVER (PARTITION BY deptno) AS max_sal FROM scott.emp e WHERE job = 'CLERK') WHERE sal = max_sal ORDER BY deptno A: That's great! I didn't know you could do a comparison of (x, y, z) with the result of a SELECT statement. This works great with Oracle. As a side-note for other readers, the above query is missing a "=" after "(deptno,job,sal)". Maybe the Stack Overflow formatter ate it (?). Again, thanks Mark. A: In Oracle you can also use the EXISTS statement, which in some cases is faster. For example... SELECT name, number FROM cust WHERE cust IN ( SELECT cust_id FROM big_table ) AND entered > SYSDATE -1 would be slow, but SELECT name, number FROM cust c WHERE EXISTS ( SELECT cust_id FROM big_table WHERE cust_id=c.cust_id ) AND entered > SYSDATE -1 would be very fast with proper indexing. You can also use this with multiple parameters. A: There are many solutions. You could also keep your original query layout by simply adding table aliases and joining on the column names, you would still only have DEPTNO = 20 and JOB = 'CLERK' in the query once. SELECT * FROM scott.emp emptbl WHERE emptbl.DEPTNO = 20 AND emptbl.JOB = 'CLERK' AND emptbl.SAL = ( select max(salmax.SAL) from scott.emp salmax where salmax.DEPTNO = emptbl.DEPTNO AND salmax.JOB = emptbl.JOB ) It could also be noted that the keyword "ALL" can be used for these types of queries which would allow you to remove the "MAX" function. SELECT * FROM scott.emp emptbl WHERE emptbl.DEPTNO = 20 AND emptbl.JOB = 'CLERK' AND emptbl.SAL >= ALL ( select salmax.SAL from scott.emp salmax where salmax.DEPTNO = emptbl.DEPTNO AND salmax.JOB = emptbl.JOB ) I hope that helps and makes sense.
SQL: aggregate function and group by
Consider the Oracle emp table. I'd like to get the employees with the top salary with department = 20 and job = clerk. Also assume that there is no "empno" column, and that the primary key involves a number of columns. You can do this with: select * from scott.emp where deptno = 20 and job = 'CLERK' and sal = (select max(sal) from scott.emp where deptno = 20 and job = 'CLERK') This works, but I have to duplicate the test deptno = 20 and job = 'CLERK', which I would like to avoid. Is there a more elegant way to write this, maybe using a group by? BTW, if this matters, I am using Oracle.
[ "The following is slightly over-engineered, but is a good SQL pattern for \"top x\" queries.\nSELECT \n * \nFROM \n scott.emp\nWHERE \n (deptno,job,sal) IN\n (SELECT \n deptno,\n job,\n max(sal) \n FROM \n scott.emp\n WHERE \n deptno = 20 \n and job = 'CLERK'\n GROUP BY \n deptno,\n job\n )\n\nAlso note that this will work in Oracle and Postgress (i think) but not MS SQL. For something similar in MS SQL see question SQL Query to get latest price\n", "If I was certain of the targeted database I'd go with Mark Nold's solution, but if you ever want some dialect agnostic SQL*, try\nSELECT * \nFROM scott.emp e\nWHERE e.deptno = 20 \nAND e.job = 'CLERK'\nAND e.sal = (\n SELECT MAX(e2.sal) \n FROM scott.emp e2\n WHERE e.deptno = e2.deptno \n AND e.job = e2.job\n)\n\n*I believe this should work everywhere, but I don't have the environments to test it.\n", "In Oracle I'd do it with an analytical function, so you'd only query the emp table once :\nSELECT *\n FROM (SELECT e.*, MAX (sal) OVER () AS max_sal\n FROM scott.emp e\n WHERE deptno = 20 \n AND job = 'CLERK')\n WHERE sal = max_sal\n\nIt's simpler, easier to read and more efficient. \nIf you want to modify it to list list this information for all departments, then you'll need to use the \"PARTITION BY\" clause in OVER:\nSELECT *\n FROM (SELECT e.*, MAX (sal) OVER (PARTITION BY deptno) AS max_sal\n FROM scott.emp e\n WHERE job = 'CLERK')\n WHERE sal = max_sal\nORDER BY deptno\n\n", "That's great! I didn't know you could do a comparison of (x, y, z) with the result of a SELECT statement. This works great with Oracle.\nAs a side-note for other readers, the above query is missing a \"=\" after \"(deptno,job,sal)\". Maybe the Stack Overflow formatter ate it (?).\nAgain, thanks Mark.\n", "In Oracle you can also use the EXISTS statement, which in some cases is faster.\nFor example...\nSELECT name, number\nFROM cust \nWHERE cust IN\n ( SELECT cust_id FROM big_table )\n AND entered > SYSDATE -1\nwould be slow.\nbut\nSELECT name, number\nFROM cust c\nWHERE EXISTS\n ( SELECT cust_id FROM big_table WHERE cust_id=c.cust_id )\n AND entered > SYSDATE -1\nwould be very fast with proper indexing. You can also use this with multiple parameters.\n", "There are many solutions. You could also keep your original query layout by simply adding table aliases and joining on the column names, you would still only have DEPTNO = 20 and JOB = 'CLERK' in the query once.\nSELECT \n * \nFROM \n scott.emp emptbl\nWHERE\n emptbl.DEPTNO = 20 \n AND emptbl.JOB = 'CLERK'\n AND emptbl.SAL = \n (\n select \n max(salmax.SAL) \n from \n scott.emp salmax\n where \n salmax.DEPTNO = emptbl.DEPTNO\n AND salmax.JOB = emptbl.JOB\n )\n\nIt could also be noted that the key word \"ALL\" can be used for these types of queries which would allow you to remove the \"MAX\" function.\nSELECT \n * \nFROM \n scott.emp emptbl\nWHERE\n emptbl.DEPTNO = 20 \n AND emptbl.JOB = 'CLERK'\n AND emptbl.SAL >= ALL \n (\n select \n salmax.SAL\n from \n scott.emp salmax\n where \n salmax.DEPTNO = emptbl.DEPTNO\n AND salmax.JOB = emptbl.JOB\n )\n\nI hope that helps and makes sense.\n" ]
[ 3, 2, 1, 0, 0, 0 ]
[]
[]
[ "aggregate", "oracle", "sql" ]
stackoverflow_0000051092_aggregate_oracle_sql.txt
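A minimal sketch of one more Oracle option the analytic answer hints at, not from the original thread: RANK() also needs only a single scan, and like the MAX(sal) comparison it keeps ties. Table and column names are reused from the scott.emp example above.

    SELECT *
      FROM (SELECT e.*, RANK() OVER (ORDER BY sal DESC) AS sal_rank
              FROM scott.emp e
             WHERE deptno = 20
               AND job = 'CLERK')
     WHERE sal_rank = 1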
Q: What kind of problems are state machines good for? What kind of programming problems are state machines most suited for? I have read about parsers being implemented using state machines, but would like to find out about problems that scream out to be implemented as a state machine. A: The easiest answer is probably that they are suited for practically any problem. Don't forget that a computer itself is also a state machine. Regardless of that, state machines are typically used for problems where there is some stream of input and the activity that needs to be done at a given moment depends on the last elements seen in that stream at that point. Examples of this stream of input: some text file in the case of parsing, a string for regular expressions, events such as player entered room for game AI, etc. Examples of activities: be ready to read a number (after another number followed by a + has appeared in the input in a parser for a calculator), turn around (after player approached and then sneezed), perform jumping kick (after player pressed left, left, right, up, up). A: A good resource is this free State Machine EBook. My own quick answer is below. When your logic must contain information about what happened the last time it was run, it must contain state. So a state machine is simply any code that remembers (or acts on) information that can only be gained by understanding what happened before. For instance, I have a cellular modem that my program must use. It has to perform the following steps in order: reset the modem initiate communications with the modem wait for the signal strength to indicate a good connection with a tower ... Now I could block the main program and simply go through all these steps in order, waiting for each to run, but I want to give my user feedback and perform other operations at the same time. So I implement this as a state machine inside a function, and run this function 100 times a second. enum states{reset,initsend, initresponse, waitonsignal,dial,ppp,...} modemfunction() { static currentstate switch(currentstate) { case reset: Do reset if reset was successful, nextstate=init else nextstate = reset break case initsend send "ATD" nextstate = initresponse break ... } currentstate=nextstate } More complex state machines implement protocols. For instance an ECU diagnostics protocol I used can only send 8 byte packets, but sometimes I need to send bigger packets. The ECU is slow, so I need to wait for a response. Ideally when I send a message I use one function and then I don't care what happens, but somewhere my program must monitor the line and send and respond to these messages, breaking them up into smaller pieces and reassembling the pieces of received messages into the final message. A: Stateful protocols such as TCP are often represented as state machines. However it's rare that you should want to implement anything as a state machine proper. Usually you will use a corruption of one, i.e. have it carrying out a repeated action while sitting in one state, logging data while it transitions, or exchanging data while remaining in one state. A: AI in games is very often implemented using State Machines. Helps create discrete logic that is much easier to build and test. A: Objects in games are often represented as state machines. An AI character might be: Guarding Aggressive Patrolling Asleep So you can see these might model some simple but effective states. Of course you could probably make a more complex continuous system. Another example would be a process such as making a purchase on Google Checkout. Google gives a number of states for Financial and Order, and then informs you of transitions such as the credit card clearing or getting rejected, and allows you to inform it that the order has been shipped. A: Regular expression matching, Parsing, Flow control in a complex system. Regular expressions are a simple form of state machine, specifically finite automata. They have a natural representation as such, although it is possible to implement them using mutually recursive functions. State machines when implemented well, will be very efficient. There is an excellent state machine compiler for a number of target languages, if you want to make a readable state machine. http://research.cs.queensu.ca/~thurston/ragel/ It also allows you to avoid the dreaded 'goto'. A: Workflow (see WF in .net 3.0) A: They have many uses, parsers being a notable one. I have personally used simplified state machines to implement complex multi-step task dialogs in applications. A: A parser example. I recently wrote a parser that takes a binary stream from another program. The meaning of the current element parsed indicates the size/meaning of the next elements. There are a (small) finite number of elements possible. Hence a state machine. A: They're great for modelling things that change status, and have logic that triggers on each transition. I'd use finite state machines for tracking packages by mail, or to keep track of the different states of a user during the registration process, for example. As the number of possible status values goes up, the number of transitions explodes. State machines help a lot in that case. A: Just as a side note, you can implement state machines with proper tail calls like I explained in the tail recursion question. In that example each room in the game is considered one state. Also, Hardware design with VHDL (and other logic synthesis languages) uses state machines everywhere to describe hardware. A: Any workflow application, especially with asynchronous activities. You have an item in the workflow in a certain state, and the state machine knows how to react to external events by placing the item in a different state, at which point some other activity occurs. A: The concept of state is very useful for applications to "remember" the current context of your system and react properly when a new piece of information arrives. Any non trivial application has that notion embedded in the code through variables and conditionals. So if your application has to react differently every time it receives a new piece of information because of the context you are in, you could model your system with a state machine. An example would be how to interpret the keys on a calculator, which depends on what you are processing at that point in time. On the contrary, if your computation does not depend on the context but solely on the input (like a function adding two numbers), you will not need a state machine (or better said, you will have a state machine with zero states) Some people design the whole application in terms of state machines since they capture the essential things to keep in mind in your project and then use some procedure or autocoders to make them executable. It takes some paradigm change to program in this way, but I found it very effective. A: Things that come to mind are: Robot/Machine manipulation... those robot arms in factories Simulation Games, (SimCity, Racing Game etc..) Generalizing: When you have a string of inputs that when interacting with any one of them, requires the knowledge of the previous inputs or in other words, when processing of any single input requires the knowledge of previous inputs. (that is, it needs to have "states") Not much that I know of that isn't reducible to a parsing problem though. A: If you need a simple stochastic process, you might use a Markov chain, which can be represented as a state machine (given the current state, at the next step the chain will be in state X with a certain probability).
What kind of problems are state machines good for?
What kind of programming problems are state machines most suited for? I have read about parsers being implemented using state machines, but would like to find out about problems that scream out to be implemented as a state machine.
[ "The easiest answer is probably that they are suited for practically any problem. Don't forget that a computer itself is also a state machine.\nRegardless of that, state machines are typically used for problems where there is some stream of input and the activity that needs to be done at a given moment depends the last elements seen in that stream at that point.\nExamples of this stream of input: some text file in the case of parsing, a string for regular expressions, events such as player entered room for game AI, etc.\nExamples of activities: be ready to read a number (after another number followed by a + have appear in the input in a parser for a calculator), turn around (after player approached and then sneezed), perform jumping kick (after player pressed left, left, right, up, up).\n", "A good resource is this free State Machine EBook. My own quick answer is below.\nWhen your logic must contain information about what happened the last time it was run, it must contain state.\nSo a state machine is simply any code that remembers (or acts on) information that can only be gained by understanding what happened before.\nFor instance, I have a cellular modem that my program must use. It has to perform the following steps in order:\n\nreset the modem\ninitiate communications with the modem\nwait for the signal strength to indicate a good connection with a tower\n...\n\nNow I could block the main program and simply go through all these steps in order, waiting for each to run, but I want to give my user feedback and perform other operations at the same time. So I implement this as a state machine inside a function, and run this function 100 times a second.\nenum states{reset,initsend, initresponse, waitonsignal,dial,ppp,...}\nmodemfunction()\n{\n static currentstate\n\n switch(currentstate)\n {\n case reset:\n Do reset\n if reset was successful, nextstate=init else nextstate = reset\n break\n case initsend\n send \"ATD\"\n nextstate = initresponse \n break\n ...\n }\ncurrentstate=nextstate\n}\n\nMore complex state machines implement protocols. For instance a ECU diagnostics protocol I used can only send 8 byte packets, but sometimes I need to send bigger packets. The ECU is slow, so I need to wait for a response. Ideally when I send a message I use one function and then I don't care what happens, but somewhere my program must monitor the line and send and respond to these messages, breaking them up into smaller pieces and reassembling the pieces of received messages into the final message.\n", "Stateful protocols such as TCP are often represented as state machines. However it's rare that you should want to implement anything as a state machine proper. Usually you will use a corruption of one, i.e. have it carrying out a repeated action while sitting in one state, logging data while it transitions, or exchanging data while remaining in one state.\n", "AI in games is very often implemented using State Machines. \nHelps create discrete logic that is much easier to build and test.\n", "Objects in games are often represented as state machines. An AI character might be:\n\nGuarding\nAggressive\nPatroling\nAsleep\n\nSo you can see these might model some simple but effective states. Of course you could probably make a more complex continuous system.\nAnother example would be a process such as making a purchase on Google Checkout. 
Google gives a number of states for Financial and Order, and then informs you of transistions such as the credit card clearing or getting rejected, and allows you to inform it that the order has been shipped.\n", "Regular expression matching, Parsing, Flow control in a complex system.\nRegular expressions are a simple form of state machine, specifically finite automata. They have a natural represenation as such, although it is possible to implement them using mutually recursive functions.\nState machines when implemented well, will be very efficient.\nThere is an excellent state machine compiler for a number of target languages, if you want to make a readable state machine.\nhttp://research.cs.queensu.ca/~thurston/ragel/\nIt also allows you to avoid the dreaded 'goto'.\n", "Workflow (see WF in .net 3.0)\n", "They have many uses, parsers being a notable one. I have personally used simplified state machines to implement complex multi-step task dialogs in applications. \n", "A parser example. I recently wrote a parser that takes a binary stream from another program. The meaning of the current element parsed indicates the size/meaning of the next elements. There are a (small) finite number of elements possible. Hence a state machine.\n", "They're great for modelling things that change status, and have logic that triggers on each transition.\nI'd use finite state machines for tracking packages by mail, or to keep track of the different stata of a user during the registration process, for example.\nAs the number of possible status values goes up, the number of transitions explodes. State machines help a lot in that case.\n", "Just as a side note, you can implement state machines with proper tail calls like I explained in the tail recursion question.\nIn that exemple each room in the game is considered one state.\nAlso, Hardware design with VHDL (and other logic synthesis languages) uses state machines everywhere to describe hardware.\n", "Any workflow application, especially with asynchronous activities. You have an item in the workflow in a certain state, and the state machine knows how to react to external events by placing the item in a different state, at which point some other activity occurs. \n", "The concept of state is very useful for applications to \"remember\" the current context of your system and react properly when a new piece of information arrives. Any non trivial application has that notion embedded in the code thru variables and conditionals.\nSo if your application has to react differently every time it receives a new piece of information because of the context you are in, you could model your system with with a state machines. An example would be how to interpret the keys on a calculator, which depends on what your are processing at that point in time. \nOn the contrary, if your computation does not depend of the context but solely on the input (like a function adding two numbers), you will not need an state machine (or better said, you will have a state machine with zero states)\nSome people design the whole application in terms of state machines since they capture the essential things to keep in mind in your project and then use some procedure or autocoders to make them executable. It takes some paradigm chance to program in this way, but I found it very effective.\n", "Things that comes to mind are:\n\n\nRobot/Machine manipulation... 
those robot arms in factories\nSimulation Games, (SimCity, Racing Game etc..)\n\n\nGeneralizing: When you have a string of inputs that when interacting with anyone of them, requires the knowledge of the previous inputs or in other words, when processing of any single input requires the knowledge of previous inputs. (that is, it needs to have \"states\")\nNot much that I know of that isn't reducible to a parsing problem though.\n", "If you need a simple stochastic process, you might use a Markov chain, which can be represented as a state machine (given the current state, at the next step the chain will be in state X with a certain probability).\n" ]
[ 16, 10, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "algorithm", "state" ]
stackoverflow_0000040602_algorithm_state.txt
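A minimal C# rendering of the enum-plus-switch pattern from the modem answer above, not from the original thread; the modem operations are hypothetical stubs, not a real device API.

    enum ModemState { Reset, InitSend, InitResponse, WaitOnSignal, Dial }

    class Modem
    {
        private ModemState _state = ModemState.Reset;

        // Called repeatedly (e.g. 100 times a second); never blocks.
        public void Tick()
        {
            switch (_state)
            {
                case ModemState.Reset:
                    _state = SendResetCommand() ? ModemState.InitSend : ModemState.Reset;
                    break;
                case ModemState.InitSend:
                    Send("ATD");
                    _state = ModemState.InitResponse;
                    break;
                case ModemState.InitResponse:
                    if (ResponseReceived()) _state = ModemState.WaitOnSignal;
                    break;
                // ... remaining states follow the same pattern.
            }
        }

        // Hypothetical hardware hooks, stubbed out for the sketch.
        private bool SendResetCommand() { return true; }
        private void Send(string command) { }
        private bool ResponseReceived() { return true; }
    }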
Q: How to handle file uploads to a dedicated image server? I got a webserver with a running application. There's a webpage with a form: some text data and a file upload field. Now, what I would like to have is it working like this: The file is sent to the dedicated server, different from the one the application is running on. The server should return some kind of path (or anything that identifies the uploaded and saved file and allows to create an URL). Then, both this path and user-filled data should be submitted to the webserver with application, for any kind of database storage. Problem is, there are 2 different servers, so I can't upload the file with javascript, can I? Another way would be just to use iframe and put the upload form in there - but then I think I can't access the result of the upload (still inside the iframe) with javascript to pass the file path to my main server. I could also just upload the file to same server my application is running on and then just rsync it to the other one - but I'd like to avoid it if I can, trying to minimize the traffic actually :) How do you handle such a thing in your applications? A: If you used an iframe, you could submit the upload form to the dedicated image server, and in the case of a successful result, have it in turn load a page from the original server with the info (eg. image path) "passed along" as a GET parameter. A: POST to dedicated server, server stores image and calls back to web server through a web service or other to give it any info required.
How to handle file uploads to a dedicated image server?
I got a webserver with a running application. There's a webpage with a form: some text data and a file upload field. Now, what I would like to have is it working like this: The file is sent to the dedicated server, different from the one the application is running on. The server should return some kind of path (or anything that identifies the uploaded and saved file and allows to create an URL). Then, both this path and user-filled data should be submitted to the webserver with application, for any kind of database storage. Problem is, there are 2 different servers, so I can't upload the file with javascript, can I? Another way would be just to use iframe and put the upload form in there - but then I think I can't access the result of the upload (still inside the iframe) with javascript to pass the file path to my main server. I could also just upload the file to same server my application is running on and then just rsync it to the other one - but I'd like to avoid it if I can, trying to minimize the traffic actually :) How do you handle such a thing in your applications?
[ "If you used an iframe, you could submit the upload form to the dedicated image server, and in the case of a successful result, have it in turn load a page from the original server with the info (eg. image path) \"passed along\" as a GET parameter.\n", "POST to dedicated server, server stores image and calls back to web server through a web service or other to give it any info required.\n" ]
[ 1, 0 ]
[]
[]
[ "file_upload", "webserver" ]
stackoverflow_0000063146_file_upload_webserver.txt
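To make the POST-and-call-back idea concrete, a hedged C# sketch of an ASP.NET handler that could live on the dedicated image server: it stores the upload and returns an identifier the main application then submits together with the form data. All names and paths here are invented.

    public class UploadHandler : System.Web.IHttpHandler
    {
        public void ProcessRequest(System.Web.HttpContext context)
        {
            var file = context.Request.Files["file"];
            if (file == null) { context.Response.StatusCode = 400; return; }

            // Generate a collision-free name, keeping the original extension.
            string name = System.Guid.NewGuid().ToString("N")
                        + System.IO.Path.GetExtension(file.FileName);
            file.SaveAs(context.Server.MapPath("~/uploads/" + name));

            // Return the identifier; the caller stores it in its database.
            context.Response.ContentType = "text/plain";
            context.Response.Write("/uploads/" + name);
        }

        public bool IsReusable { get { return false; } }
    }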
Q: ASP Server variable not working on local IIS I'm working on a simple ASP.Net page (handler, actually) where I check the value of the LOGON_USER server variable. This works using Visual Studio's built-in web server and it works in other sites deployed to the live intranet site. But it doesn't work on the IIS instance on my local XP machine. How can I fix it, or what's going on if I can't? A: What authentication do you have enabled in IIS? Anonymous, Basic, Digest, Integrated Windows? Sounds to me like anonymous access is enabled/allowed, and nothing else. This would mean that LOGON_USER is not populated. When you access your local IIS, try using http://127.0.0.1, in particular if you use IE. IE will recognize "localhost" as being in your local trusted zone and will automatically pass your XP login credentials through when Integrated Windows auth is enabled. A: In addition to Jon's answer, IIRC even if you have Integrated Authentication enabled, if Anonymous Authentication is enabled it will take precedence...
ASP Server variable not working on local IIS
I'm working on a simple ASP.Net page (handler, actually) where I check the value of the LOGON_USER server variable. This works using Visual Studio's built-in web server and it works in other sites deployed to the live intranet site. But it doesn't work on the IIS instance on my local XP machine. How can I fix it, or what's going on if I can't?
[ "What authentication do you have enabled in IIS? Anonmyous, Basic, Digest, Integrated Windows? Sounds to me like anonymous access is enabled/allowed, and nothing else. This would means that LOGON_USER is not populated. \nWhen you access your local IIS, trying using http://127.0.0.1 in particular if you use IE. IE will recognize \"localhost\" as being in your local trusted zone and will automatically pass your XP login credentials through when Integrated Windows auth is enabled.\n", "In addition to Jon's answer, IIRC even if you have Integrated Authentication enabled, if Anonymous Authentication is enabled it will take precedence...\n" ]
[ 2, 0 ]
[]
[]
[ "asp.net", "iis", "server_variables" ]
stackoverflow_0000059951_asp.net_iis_server_variables.txt
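To illustrate the diagnosis in the answers, a small C# check that would sit inside the handler's ProcessRequest(HttpContext context): when anonymous access wins, LOGON_USER comes back empty rather than throwing.

    string logonUser = context.Request.ServerVariables["LOGON_USER"];
    if (string.IsNullOrEmpty(logonUser))
    {
        // Anonymous authentication handled the request; adjust the IIS
        // directory security settings so Integrated Windows auth is used.
    }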
Q: Find all available JREs on Mac OS X from Java application installer If a Java application requires a certain JRE version, how can I check its availability on Mac OS X during installation? A: It should be as simple as looking at /System/Library/Frameworks/JavaVM.framework/Versions/ E.g. from my machine: manoa:~ stu$ ll /System/Library/Frameworks/JavaVM.framework/Versions/ total 56 774077 lrwxr-xr-x 1 root wheel 5 Jul 23 15:31 1.3 -> 1.3.1 167151 drwxr-xr-x 3 root wheel 102 Jan 14 2008 1.3.1 167793 lrwxr-xr-x 1 root wheel 5 Feb 21 2008 1.4 -> 1.4.2 774079 lrwxr-xr-x 1 root wheel 3 Jul 23 15:31 1.4.1 -> 1.4 166913 drwxr-xr-x 8 root wheel 272 Feb 21 2008 1.4.2 168494 lrwxr-xr-x 1 root wheel 5 Feb 21 2008 1.5 -> 1.5.0 166930 drwxr-xr-x 8 root wheel 272 Feb 21 2008 1.5.0 774585 lrwxr-xr-x 1 root wheel 5 Jul 23 15:31 1.6 -> 1.6.0 747415 drwxr-xr-x 8 root wheel 272 Jul 23 10:24 1.6.0 167155 drwxr-xr-x 8 root wheel 272 Jul 23 15:31 A 776765 lrwxr-xr-x 1 root wheel 1 Jul 23 15:31 Current -> A 774125 lrwxr-xr-x 1 root wheel 3 Jul 23 15:31 CurrentJDK -> 1.5 manoa:~ stu$ A: This article may help: http://developer.apple.com/technotes/tn2002/tn2110.html Summary: String javaVersion = System.getProperty("java.version"); if (javaVersion.startsWith("1.4")) { // New features for 1.4 }
Find all available JREs on Mac OS X from Java application installer
If a Java application requires a certain JRE version, how can I check its availability on Mac OS X during installation?
[ "It should be as simple as looking at /System/Library/Frameworks/JavaVM.framework/Versions/\nE.g. from my machine:\nmanoa:~ stu$ ll /System/Library/Frameworks/JavaVM.framework/Versions/\ntotal 56\n774077 lrwxr-xr-x 1 root wheel 5 Jul 23 15:31 1.3 -> 1.3.1\n167151 drwxr-xr-x 3 root wheel 102 Jan 14 2008 1.3.1\n167793 lrwxr-xr-x 1 root wheel 5 Feb 21 2008 1.4 -> 1.4.2\n774079 lrwxr-xr-x 1 root wheel 3 Jul 23 15:31 1.4.1 -> 1.4\n166913 drwxr-xr-x 8 root wheel 272 Feb 21 2008 1.4.2\n168494 lrwxr-xr-x 1 root wheel 5 Feb 21 2008 1.5 -> 1.5.0\n166930 drwxr-xr-x 8 root wheel 272 Feb 21 2008 1.5.0\n774585 lrwxr-xr-x 1 root wheel 5 Jul 23 15:31 1.6 -> 1.6.0\n747415 drwxr-xr-x 8 root wheel 272 Jul 23 10:24 1.6.0\n167155 drwxr-xr-x 8 root wheel 272 Jul 23 15:31 A\n776765 lrwxr-xr-x 1 root wheel 1 Jul 23 15:31 Current -> A\n774125 lrwxr-xr-x 1 root wheel 3 Jul 23 15:31 CurrentJDK -> 1.5\nmanoa:~ stu$ \n\n", "This artical may help:\nhttp://developer.apple.com/technotes/tn2002/tn2110.html\nSummery:\nString javaVersion = System.getProperty(\"java.version\");\nif (javaVersion.startsWith(\"1.4\")) {\n // New features for 1.4\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "installation", "java", "macos" ]
stackoverflow_0000063206_installation_java_macos.txt
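A rough Java sketch combining both answers: list the classic Apple JVM directory shown above and report the running version. It assumes the pre-OpenJDK layout from that ls output.

    import java.io.File;

    public class JreScan {
        public static void main(String[] args) {
            File versions = new File("/System/Library/Frameworks/JavaVM.framework/Versions");
            File[] entries = versions.listFiles();
            if (entries != null) {
                for (File entry : entries) {
                    // Prints entries like "1.5.0", "1.6.0", "CurrentJDK", ...
                    System.out.println(entry.getName());
                }
            }
            System.out.println("Running under: " + System.getProperty("java.version"));
        }
    }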
Q: Add Custom TextboxCell to a DataGridView control that contains a button to open the FileDialog I would like to add a DataGridViewTextBoxCell cell to a DataGridViewCell control, but as well as being able to type in the text cell as normal it must also contain a '...' button that, once clicked, brings up the OpenFileDialog window to allow the user to select a file. Once selected, the text cell will be populated with the full file path. What is the best way to go about this? Thanks A: This MSDN article explains how to add a custom control to a DataGridView. You should be able to make a UserControl that has a textbox and button on it and embed that in the DataGridView. A: You will need to create your own column and cell classes in order to do this. I would suggest using .NET Reflector to look at the implementation details of the DataGridViewTextBox as a starting point and then customizing to add display of a button at the end of it. Check out these tutorials to get started... MSDN Article MSDN Reference
Add Custom TextboxCell to a DataGridView control that contains a button to open the FileDialog
I would like to add a DataGridViewTextBoxCell cell to a DataGridViewCell control, but as well as being able to type in the text cell as normal it must also contain a '...' button that, once clicked, brings up the OpenFileDialog window to allow the user to select a file. Once selected, the text cell will be populated with the full file path. What is the best way to go about this? Thanks
[ "This MSDN article explains how to add a custom control to a DataGridView.\nYou should be able to make a UserControl that has a textbox and button on it and embed that in the DataGridView.\n", "You will need to create your own column and cell classes in order to do this. I would suggest using .NET Reflector to look at the implementation details of the DataGridViewTextBox as a starting point and then customizing to add display of a button at the end of it. Check out these tutorials to get started...\nMSDN Article\nMSDN Reference\n" ]
[ 1, 0 ]
[]
[]
[ ".net", ".net_2.0" ]
stackoverflow_0000063130_.net_.net_2.0.txt
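A hedged C# sketch of the UserControl the first answer suggests: a TextBox plus a '...' button that opens an OpenFileDialog and writes the full path back. The class and property names are invented, and hosting it in a cell still needs the custom column/cell plumbing from the linked articles.

    using System.Windows.Forms;

    public class FilePickerControl : UserControl
    {
        private readonly TextBox _text = new TextBox();
        private readonly Button _browse = new Button();

        public FilePickerControl()
        {
            _text.Dock = DockStyle.Fill;
            _browse.Text = "...";
            _browse.Dock = DockStyle.Right;
            _browse.Width = 24;
            _browse.Click += delegate
            {
                using (OpenFileDialog dialog = new OpenFileDialog())
                {
                    if (dialog.ShowDialog() == DialogResult.OK)
                        _text.Text = dialog.FileName; // full file path
                }
            };
            Controls.Add(_text);   // fills the remaining space
            Controls.Add(_browse); // docks to the right edge
        }

        public string FilePath
        {
            get { return _text.Text; }
            set { _text.Text = value; }
        }
    }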
Q: Column Tree Model doesn't expand node after EXPAND_NO_CHILDREN event I am displaying a list of items using a SAP ABAP column tree model, basically a tree of folders and files, with columns. I want to load the sub-nodes of folders dynamically, so I'm using the EXPAND_NO_CHILDREN event which is firing correctly. Unfortunately, after I add the new nodes and items to the tree, the folder is automatically collapsing again, requiring a second click to view the sub-nodes. Do I need to call a method when handling the event so that the folder stays open, or am I doing something else wrong? * Set up event handling. LS_EVENT-EVENTID = CL_ITEM_TREE_CONTROL=>EVENTID_EXPAND_NO_CHILDREN. LS_EVENT-APPL_EVENT = GC_X. APPEND LS_EVENT TO LT_EVENTS. CALL METHOD GO_MODEL->SET_REGISTERED_EVENTS EXPORTING EVENTS = LT_EVENTS EXCEPTIONS ILLEGAL_EVENT_COMBINATION = 1 UNKNOWN_EVENT = 2. SET HANDLER GO_APPLICATION->HANDLE_EXPAND_NO_CHILDREN FOR GO_MODEL. ... * Add new data to tree. CALL METHOD GO_MODEL->ADD_NODES EXPORTING NODE_TABLE = PTI_NODES[] EXCEPTIONS ERROR_IN_NODE_TABLE = 1. CALL METHOD GO_MODEL->ADD_ITEMS EXPORTING ITEM_TABLE = PTI_ITEMS[] EXCEPTIONS NODE_NOT_FOUND = 1 ERROR_IN_ITEM_TABLE = 2. A: It's been a while since I've played with SAP, but I always found the SAP Library to be particularly helpful when I got stuck... I managed to come up with this one for you: http://help.sap.com/saphelp_nw04/helpdata/en/47/aa7a18c80a11d3a6f90000e83dd863/frameset.htm, specifically: When you add new nodes to the tree model, set the flag ITEMSINCOM to 'X'. This informs the tree model that you want to load the items for that node on demand. Hope it helps? A: Your code looks fine, I would use the method ADD_NODES_AND_ITEMS myself if I were to add nodes and items ;) Beyond that, try to call EXPAND_NODE after you added the items/nodes and see if that helps.
Column Tree Model doesn't expand node after EXPAND_NO_CHILDREN event
I am displaying a list of items using a SAP ABAP column tree model, basically a tree of folders and files, with columns. I want to load the sub-nodes of folders dynamically, so I'm using the EXPAND_NO_CHILDREN event which is firing correctly. Unfortunately, after I add the new nodes and items to the tree, the folder is automatically collapsing again, requiring a second click to view the sub-nodes. Do I need to call a method when handling the event so that the folder stays open, or am I doing something else wrong? * Set up event handling. LS_EVENT-EVENTID = CL_ITEM_TREE_CONTROL=>EVENTID_EXPAND_NO_CHILDREN. LS_EVENT-APPL_EVENT = GC_X. APPEND LS_EVENT TO LT_EVENTS. CALL METHOD GO_MODEL->SET_REGISTERED_EVENTS EXPORTING EVENTS = LT_EVENTS EXCEPTIONS ILLEGAL_EVENT_COMBINATION = 1 UNKNOWN_EVENT = 2. SET HANDLER GO_APPLICATION->HANDLE_EXPAND_NO_CHILDREN FOR GO_MODEL. ... * Add new data to tree. CALL METHOD GO_MODEL->ADD_NODES EXPORTING NODE_TABLE = PTI_NODES[] EXCEPTIONS ERROR_IN_NODE_TABLE = 1. CALL METHOD GO_MODEL->ADD_ITEMS EXPORTING ITEM_TABLE = PTI_ITEMS[] EXCEPTIONS NODE_NOT_FOUND = 1 ERROR_IN_ITEM_TABLE = 2.
[ "It's been a while since I've played with SAP, but I always found the SAP Library to be particularly helpful when I got stuck...\nI managed to come up with this one for you:\nhttp://help.sap.com/saphelp_nw04/helpdata/en/47/aa7a18c80a11d3a6f90000e83dd863/frameset.htm, specifically: \n\nWhen you add new nodes to the tree model, set the flag ITEMSINCOM to 'X'.\n This informs the tree model that you want to load the items for that node on demand.\n\nHope it helps?\n", "Your code looks fine,\nI would use the method ADD_NODES_AND_ITEMS myself if I were to add nodes and items ;)\nBeyond that, try to call EXPAND_NODE after you added the items/nodes and see if that helps.\n" ]
[ 2, 0 ]
[]
[]
[ "abap", "events", "treeview" ]
stackoverflow_0000007558_abap_events_treeview.txt
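An untested ABAP sketch combining the two answers; the field and parameter names are from memory and PV_NODE_KEY is a placeholder, so check them against your release before use.

    * 1) When adding folder nodes, mark their items as loaded on demand:
    LS_NODE-ITEMSINCOM = GC_X.   "items incomplete: fetch on expand

    * 2) After appending the children in HANDLE_EXPAND_NO_CHILDREN,
    *    re-open the node that raised the event:
    CALL METHOD GO_MODEL->EXPAND_NODE
      EXPORTING
        NODE_KEY = PV_NODE_KEY.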
Q: .Net 2+: why does if( 1 == null ) no longer throw a compiler exception? I'm using int as an example, but this applies to any value type in .Net In .Net 1 the following would throw a compiler exception: int i = SomeFunctionThatReturnsInt(); if( i == null ) //compiler exception here Now (in .Net 2 or 3.5) that exception has gone. I know why this is: int? j = null; //nullable int if( i == j ) //this shouldn't throw an exception The problem is that because int? is nullable and int now has an implicit cast to int?. The syntax above is compiler magic. Really we're doing: Nullable<int> j = null; //nullable int //compiler is smart enough to do this if( (Nullable<int>) i == j) //and not this if( i == (int) j) So now, when we do i == null we get: if( (Nullable<int>) i == null ) Given that C# is doing compiler logic to calculate this anyway why can't it be smart enough to not do it when dealing with absolute values like null? A: Odd ... compiling this with VS2008, targeting .NET 3.5: static int F() { return 42; } static void Main(string[] args) { int i = F(); if (i == null) { } } I get a compiler warning warning CS0472: The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type 'int?' And it generates the following IL ... which presumably the JIT will optimize away L_0001: call int32 ConsoleApplication1.Program::F() L_0006: stloc.0 L_0007: ldc.i4.0 L_0008: ldc.i4.0 L_0009: ceq L_000b: stloc.1 L_000c: br.s L_000e Can you post a code snippet? A: I don't think this is a compiler problem per se; an integer value is never null, but the idea of equating them isn't invalid; it's a valid function that always returns false. And the compiler knows; the code bool oneIsNull = 1 == null; compiles, but gives a compiler warning: The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type '<null>'. So if you want the compiler error back, go to the project properties and turn on 'treat warnings as errors' for this error, and you'll start seeing them as build-breaking problems again. A: Compiler still generates warning when you compare non-nullable type to null, which is just the way it should be. Maybe your warning level is too low or this was changed in recent versions (I only did that in .net 3.5). A: The 2.0 framework introduced the nullable value type. Even though the literal constant "1" can never be null, its underlying type (int) can now be cast to a Nullable int type. My guess is that the compiler can no longer assume that int types are not nullable, even when it is a literal constant. I do get a warning when compiling 2.0: Warning 1 The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type 'int?' A: The warning is new (3.5 I think) - the error is the same as if I'd done 1 == 2, which it's smart enough to spot as never true. I suspect that with full 3.5 optimisations the whole statement will just be stripped out, as it's pretty smart with never true evaluations. While I might want 1==2 to compile (to switch off a function block while I test something else for instance) I don't want 1==null to. A: It ought to be a compile-time error, because the types are incompatible (value types can never be null). It's pretty sad that it isn't.
.Net 2+: why does if( 1 == null ) no longer throw a compiler exception?
I'm using int as an example, but this applies to any value type in .Net In .Net 1 the following would throw a compiler exception: int i = SomeFunctionThatReturnsInt(); if( i == null ) //compiler exception here Now (in .Net 2 or 3.5) that exception has gone. I know why this is: int? j = null; //nullable int if( i == j ) //this shouldn't throw an exception The problem is that because int? is nullable and int now has an implicit cast to int?. The syntax above is compiler magic. Really we're doing: Nullable<int> j = null; //nullable int //compiler is smart enough to do this if( (Nullable<int>) i == j) //and not this if( i == (int) j) So now, when we do i == null we get: if( (Nullable<int>) i == null ) Given that C# is doing compiler logic to calculate this anyway why can't it be smart enough to not do it when dealing with absolute values like null?
[ "Odd ... compiling this with VS2008, targetting .NET 3.5:\nstatic int F()\n{\n return 42;\n}\n\nstatic void Main(string[] args)\n{\n int i = F();\n\n if (i == null)\n {\n }\n}\n\nI get a compiler warning\nwarning CS0472: The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type 'int?'\n\nAnd it generates the following IL ... which presumably the JIT will optimize away\nL_0001: call int32 ConsoleApplication1.Program::F()\nL_0006: stloc.0 \nL_0007: ldc.i4.0 \nL_0008: ldc.i4.0 \nL_0009: ceq \nL_000b: stloc.1 \nL_000c: br.s L_000e\n\nCan you post a code snippet?\n", "I don't think this is a compiler problem per se; an integer value is never null, but the idea of equating them isn't invalid; it's a valid function that always returns false. And the compiler knows; the code\nbool oneIsNull = 1 == null;\n\ncompiles, but gives a compiler warning: The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type '<null>'.\nSo if you want the compiler error back, go to the project properties and turn on 'treat warnings as errors' for this error, and you'll start seeing them as build-breaking problems again.\n", "Compiler still generates warning when you compare non-nullable type to null, which is just the way it should be. May be your warning level is too low or this was changed in recent versions (I only did that in .net 3.5).\n", "The 2.0 framework introduced the nullable value type. Even though the literal constant \"1\" can never be null, its underlying type (int) can now be cast to a Nullable int type. My guess is that the compiler can no longer assume that int types are not nullable, even when it is a literal constant. I do get a warning when compiling 2.0:\nWarning 1 The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type 'int?'\n", "The warning is new (3.5 I think) - the error is the same as if I'd done 1 == 2, which it's smart enough to spot as never true.\nI suspect that with full 3.5 optimisations the whole statement will just be stripped out, as it's pretty smart with never true evaluations.\nWhile I might want 1==2 to compile (to switch off a function block while I test something else for instance) I don't want 1==null to.\n", "It ought to be a compile-time error, because the types are incompatible (value types can never be null). It's pretty sad that it isn't.\n" ]
[ 3, 3, 1, 1, 0, 0 ]
[]
[]
[ ".net_2.0", ".net_3.5", "c#", "compiler_construction" ]
stackoverflow_0000062606_.net_2.0_.net_3.5_c#_compiler_construction.txt
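A small C# program, not from the thread, that makes the lifted comparison explicit; compiled against .NET 2.0 or later, the first comparison draws warning CS0472.

    using System;

    class NullableCompare
    {
        static void Main()
        {
            int i = 42;
            int? j = null;

            Console.WriteLine(i == null); // CS0472: i is lifted to int?, always false
            Console.WriteLine(i == j);    // False here, but a meaningful comparison
            j = 42;
            Console.WriteLine(i == j);    // True
        }
    }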
Q: Access Global .resx file in ASP.Net View Page I am currently building in Version 3.5 of the .Net framework and I have a resource (.resx) file that I am trying to access in a web application. I have exposed the .resx properties as public access modifiers and am able to access these properties in the controller files or other .cs files in the web app. My question is this: Is it possible to access the name/value pairs within my view page? I'd like to do something like this... text="<%$ Resources: Namespace.ResourceFileName, NAME %>" or some other similar method in the view page. A: <%= Resources.<ResourceName>.<Property> %> A: Expose the resource property you want to consume in the page as a protected page property. Then you can just do use "this.ResourceName" A: If you are using ASP.NET 2.0 or higher, after you compile with the resource file, you can reference it through the Resources namespace: text = Resources.YourResourceFilename.YourProperty; You even get Intellisense on the filenames and properties.
Access Global .resx file in ASP.Net View Page
I am currently building in Version 3.5 of the .Net framework and I have a resource (.resx) file that I am trying to access in a web application. I have exposed the .resx properties as public access modifiers and am able to access these properties in the controller files or other .cs files in the web app. My question is this: Is it possible to access the name/value pairs within my view page? I'd like to do something like this... text="<%$ Resources: Namespace.ResourceFileName, NAME %>" or some other similar method in the view page.
[ "\n<%= Resources.<ResourceName>.<Property> %>\n\n", "Expose the resource property you want to consume in the page as a protected page property. Then you can just do use \"this.ResourceName\"\n", "If you are using ASP.NET 2.0 or higher, after you compile with the resource file, you can reference it through the Resources namespace:\ntext = Resources.YourResourceFilename.YourProperty;\n\nYou even get Intellisense on the filenames and properties.\n" ]
[ 6, 2, 1 ]
[]
[]
[ "asp.net", "localization", "resources" ]
stackoverflow_0000062995_asp.net_localization_resources.txt
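Pulling the answers together, a hypothetical example assuming a global resource file named Labels.resx (under App_GlobalResources) with a WelcomeText entry; both forms below resolve the same value.

    <%-- declarative expression syntax in the view --%>
    <asp:Label ID="Greeting" runat="server"
               Text="<%$ Resources: Labels, WelcomeText %>" />

    <%-- or inline, via the strongly typed Resources class --%>
    <%= Resources.Labels.WelcomeText %>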
Q: What's the best way to save and retrieve binary files with Oracle 10g? I'm about to implement a feature in our application that allows the user to 'upload' a PDF or Microsoft PowerPoint document, which the application will then make available to other users in a viewer (so they don't get to 'download' it in the 'Save as..' sense). I already know how to save and retrieve arbitrary binary information in database columns, but as this will be a commonly used feature of our application I fear that solution would lead to enormously large database tables (as we know one of our customers will want to put video in PowerPoint documents). I know there's a way to create a 'directory' object in Oracle, but is there a way to use this feature to store and retrieve binary files saved elsewhere on the Database Server? Or am I being overly paranoid about the database size? (for completeness our application is .Net WinForms using CoreLab / DevArt OraDirect.Net drivers to Oracle 10g) A: Couple of options: You could put the BLOB column in its own tablespace, with its own storage characteristics; you could store the BLOBs in their own table, linked to the other table by an ID column. In either case as you suggested you could define the column as a BFILE which means the actual file is stored externally from the database in a directory. What might be a concern there is that BFILE LOBs do not participate in transactions and are not recoverable with the rest of the database. This is all discussed in the Oracle 10gR2 SQL reference, chapter 2, starting on page 23. A: I guess it depends what you consider enormously large. It really does depend on the use case. If the documents are only being accessed rarely then putting it in the database would be fine (with the advantage of getting "free" backups, eg, with the database). If these are files which are going to be hit over and over again, you might be better to put them directly on disk and just store the location, or even (if its really high bandwidth) look into something like MogileFS No one is going to be able to give you a Yes or no answer for this. A: You could use a normal LOB column type and set the storage parameters for that field so it's on a separate tablespace. Create the tablespace somewhere that can handle having huge amounts of data thrown at it and you'll minimise the impact. To be seriously super paranoid about disk usage you could additionally compress the tablespace by marking it as such. Something along the lines of: CREATE TABLESPACE binary_data1 DATAFILE some_san_location DEFAULT COMPRESS STORAGE(...) A: In my experience, a simple VARCHAR2 field containing the file name of the attachments is a better and easier solution. File system size is a lot easier to manage than database size. A: The data has to live somewhere, whether it's internal to the DB or whether you just store a link to a (server) accessible file path, you're still chewing space. I've just used simple LOB fields in the past, it seemed to work fine. If you keep the data inside the DB at least you keep your backup hassles low - you may have a lot of data to back up but when you restore it, it'll all be there. Splitting the binary out means you potentially break the DB or lose data if you're not careful about what you backup. A: One reason to just store the link or an ID that can be used to build the link is that the storage that you usually use for Oracle DB's is rather expensive. If you have lots of large files, it is usually much more cost-effective to put them on a less expensive array of disks.
What's the best way to save and retrieve binary files with Oracle 10g?
I'm about to implement a feature in our application that allows the user to 'upload' a PDF or Microsoft PowerPoint document, which the application will then make available to other users in a viewer (so they don't get to 'download' it in the 'Save as..' sense). I already know how to save and retrieve arbitrary binary information in database columns, but as this will be a commonly used feature of our application I fear that solution would lead to enormously large database tables (as we know one of our customers will want to put video in PowerPoint documents). I know there's a way to create a 'directory' object in Oracle, but is there a way to use this feature to store and retrieve binary files saved elsewhere on the Database Server? Or am I being overly paranoid about the database size? (for completeness our application is .Net WinForms using CoreLab / DevArt OraDirect.Net drivers to Oracle 10g)
[ "Couple of options: You could put the BLOB column in its own tablespace, with its own storage characteristics; you could store the BLOBs in their own table, linked to the other table by an ID column. In either case as you suggested you could define the column as a BFILE which means the actual file is stored externally from the database in a directory. What might be a concern there is that BFILE LOBs do not participate in transactions and are not recoverable with the rest of the database.\nThis is all discussed in the Oracle 10gR2 SQL reference, chapter 2, starting on page 23.\n", "I guess it depends what you consider enormously large.\nIt really does depend on the use case. If the documents are only being accessed rarely then putting it in the database would be fine (with the advantage of getting \"free\" backups, eg, with the database).\nIf these are files which are going to be hit over and over again, you might be better to put them directly on disk and just store the location, or even (if its really high bandwidth) look into something like MogileFS\nNo one is going to be able to give you a Yes or no answer for this.\n", "You could use a normal LOB column type and set the storage parameters for that field so it's on a seperate tablespace. Create the tablespace somewhere that can handle having huge amounts of data thrown at it and you'll minimise the impact.\nTo be seriously super paranoid about disk usage you could additionally compress the tablespace by marking it as such. Something along the lines of:\nCREATE TABLESPACE\n binary_data1\nDATAFILE\n some_san_location\nDEFAULT COMPRESS STORAGE(...)\n", "In my experience, a simple VARCHAR2 field containing the file name of the attachments is a better and easier solution. File system size is a lot easier to manage than database size.\n", "The data has to live somewhere, whether it's internal to the DB or whether you just store a link to a (server) accessible file path, you're still chewing space.\nI've just used simple LOB fields in the past, it seemed to work fine. If you keep the data inside the DB at least you keep your backup hassles low - you may have a lot of data to back up but when you restore it, it'll all be there. Splitting the binary out means you potentially break the DB or lose data if you're not careful about what you backup.\n", "One reason to just store the link or an ID that can be used to build the link is that the storage that you usually use for Oracle DB's is rather expensive. If you have lots of large files, it is usually much more cost-effective to put them on a less expensive array of disks.\n" ]
[ 6, 1, 1, 1, 1, 1 ]
[]
[]
[ ".net", "binaryfiles", "oracle" ]
stackoverflow_0000062876_.net_binaryfiles_oracle.txt
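For concreteness, a hedged Oracle sketch of the "LOB column in its own tablespace" suggestion; every name here is invented and the tablespace is assumed to exist already.

    CREATE TABLE uploaded_docs (
      doc_id    NUMBER PRIMARY KEY,
      file_name VARCHAR2(255),
      contents  BLOB  -- a BFILE column here would instead point at an external file
    )
    LOB (contents) STORE AS (TABLESPACE binary_data1);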
Q: How should I store short text strings into a SQL Server database? varchar(255), varchar(256), nvarchar(255), nvarchar(256), nvarchar(max), etc? 256 seems like a nice, round, space-efficient number. But I've seen 255 used a lot. Why? What's the difference between varchar and nvarchar? A: In MS SQL Server (7.0 and up), varchar data is represented internally with up to three values: The actual string of characters, which will be from 0 to something over 8000 bytes (it’s based on page size, the other columns stored for the row, and a few other factors) Two bytes used to indicate how long the data string is (which produces a value from 0 to 8000+) If the column is nullable, one bit in the row’s null bitmask (so the null status of up to eight nullable columns can be represented in one byte) The important part is that two-byte data length indicator. If it was one byte, you could only properly record strings of length 0 to 255; with two bytes, you can record strings of length 0 to something over 64000+ (specifically, 2^16 -1). However, the SQL Server page length is 8k, which is where that 8000+ character limit comes from. (There's data overflow stuff in SQL 2005, but if your strings are going to be that long you should just go with varchar(max).) So, no matter how long you declare your varchar datatype column to be (15, 127, 511), what you will actually be storing for each and every row is: 2 bytes to indicate how long the string is The actual string, i.e. the number of characters in that string Which gets me to my point: a number of older systems used only 1 byte to store the string length, and that limited you to a maximum length of 255 characters, which isn’t all that long. With 2 bytes, you have no such arbitrary limit... and so I recommend picking a number that makes sense to the (presumed non-technically oriented) user. I like 50, 100, 250, 500, even 1000. Given that base of 8000+ bytes of storage, 255 or 256 is just as efficient as 200 or 250, and less efficient when it comes time to explain things to the end users. This applies to single byte data (i.e. ANSI, SQL_Latin1_General_CP1, et al.). If you have to store data for multiple code pages or languages using different alphabets, you’ll need to work with the nvarchar data type (which I think works the same, two bytes for number of characters, but each actual character of data requires two bytes of storage). If you have strings likely to go over 8000, or over 4000 in nvarchar, you will need to use the [n]varchar(max) datatypes. And if you want to know why it is so very important to take up space with extra bytes just to track how long the data is, check out http://www.joelonsoftware.com/articles/fog0000000319.html Philip A: VARCHAR(255). It won't use all 255 characters of storage, just the storage you need. It's 255 and not 256 because then you have space for 255 plus the null-terminator (or size byte). The "N" is for Unicode. Use if you expect non-ASCII characters. A: There are a couple of other points to consider when defining char/varchar and the N variations. First, there is some overhead to storing variable length strings in the database. A good general rule of thumb is to use CHAR for strings less than 10 chars long, since N/VARCHAR stores both the string and the length and the difference between storing short strings in N/CHAR vs. N/VARCHAR under 10 isn't worth the overhead of the string length. Second, a table in SQL server is stored on 8KB pages, so the max size of the row of data is 8060 bytes (the other 192 are used for overhead by SQL). That's why SQL allows a max defined column of VARCHAR(8000) and NVARCHAR(4000). Now, you can use VARCHAR(MAX) and the unicode version. But there can be extra overhead associated with that. If I'm not mistaken, SQL server will try to store the data on the same page as the rest of the row but, if you attempt to put too much data into a VARCHAR(Max) column, it will treat it as binary and store it on another page. Another big difference between CHAR and VARCHAR has to do with page splits. Given that SQL Server stores data in 8KB pages, you could have any number of rows of data stored on a page. If you UPDATE a VARCHAR column with a value that is large enough that the row will no longer fit on the page, the server will split that page, moving off some number of records. If the database has no available pages and the database is set to auto grow, the server will first grow the database to allocate blank pages to it, then allocate blank pages to the table and finally split the single page into two. A: If you will be supporting languages other than English, you will want to use nvarchar. HTML should be okay as long as it contains standard ASCII characters. I've used nvarchar mainly in databases that were multi-lingual support. A: Because there are 8 bits in 1 byte and so in 1 byte you can store up to 256 distinct values which is 0 1 2 3 4 5 ... 255 Note the first number is 0 so that's a total of 256 numbers. So if you use nvarchar(255) It'll use 1 byte to store the length of the string but if you tip over by 1 and use nvarchar(256) then you're wasting 1 more byte just for that extra 1 item off from 255 (since you need 2 bytes to store the number 256). That might not be the actual implementation of SQL server but I believe that is the typical reasoning for limiting things at 255 over 256 items. and nvarchar is for Unicode, which use 2+ bytes per character and varchar is for normal ASCII text which only use 1 byte A: IIRC, 255 is the max size of a varchar in MySQL before you had to switch to the text datatype, or was at some point (actually, I think it's higher now). So keeping it to 255 might buy you some compatibility there. You'll want to look this up before acting on it, though. varchar vs nvarchar is kinda like ascii vs unicode. varchar is limited to one byte per character, nvarchar can use two. That's why you can have a varchar(8000) but only an nvarchar(4000) A: Both varchar and nvarchar auto-size to the content, but the number you define when declaring the column type is a maximum. Values in "nvarchar" take up twice the disk/memory space as "varchar" because unicode is two-byte, but when you declare the column type you are declaring the number of characters, not bytes. So when you define a column type, you should determine the maximum number of characters that the column will ever need to hold and have that as the varchar (or nvarchar) size. A good rule of thumb is to estimate the maximum string length the column needs to hold, then add support for about 10% more characters to it to avoid problems with unexpectedly long data in the future. A: varchar(255) was also the maximum length in SQL Server 7.0 and earlier.
How should I store short text strings into a SQL Server database?
varchar(255), varchar(256), nvarchar(255), nvarchar(256), nvarchar(max), etc? 256 seems like a nice, round, space-efficient number. But I've seen 255 used a lot. Why? What's the difference between varchar and nvarchar?
[ "In MS SQL Server (7.0 and up), varchar data is represented internally with up to three values:\n\nThe actual string of characters, which will be from 0 to something over 8000 bytes (it’s based on page size, the other columns stored for the row, and a few other factors)\nTwo bytes used to indicate how long the data string is (which produces a value from 0 to 8000+)\nIf the column is nullable, one bit in the row’s null bitmask (so the null status of up to eight nullable columns can be represented in one byte)\n\nThe important part is that two-byte data length indicator. If it was one byte, you could only properly record strings of length 0 to 255; with two bytes, you can record strings of length 0 to something over 64000+ (specifically, 2^16 -1). However, the SQL Server page length is 8k, which is where that 8000+ character limit comes from. (There's data overflow stuff in SQL 2005, but if your strings are going to be that long you should just go with varchar(max).)\nSo, no matter how long you declare your varchar datatype column to be (15, 127, 511), what you will actually be storing for each and every row is:\n\n2 bytes to indicate how long the string is\nThe actual string, i.e. the number of characters in that string\n\nWhich gets me to my point: a number of older systems used only 1 byte to store the string length, and that limited you to a maximum length of 255 characters, which isn’t all that long. With 2 bytes, you have no such arbitrary limit... and so I recommend picking a number that makes sense to the (presumed non-technically oriented) user. , I like 50, 100, 250, 500, even 1000. Given that base of 8000+ bytes of storage, 255 or 256 is just as efficient as 200 or 250, and less efficient when it comes time to explain things to the end users.\nThis applies to single byte data (i.e. ansii, SQL_Latin1*_*General_CP1, et. al.). If you have to store data for multiple code pages or languages using different alphabets, you’ll need to work with the nvarchar data type (which I think works the same, two bytes for number of charactesr, but each actual character of data requires two bytes of storage). If you have strings likely to go over 8000, or over 4000 in nvarchar, you will need to use the [n]varchar(max) datatypes.\nAnd if you want to know why it is so very important to take up space with extra bytes just to track how long the data is, check out http://www.joelonsoftware.com/articles/fog0000000319.html\nPhilip\n", "VARCHAR(255). It won't use all 255 characters of storage, just the storage you need. It's 255 and not 256 because then you have space for 255 plus the null-terminator (or size byte).\nThe \"N\" is for Unicode. Use if you expect non-ASCII characters.\n", "There are a couple of other points to consider when defining char/varchar and the N variations.\nFirst, there is some overhead to storing variable length strings in the database. A good general rule of thumb is to use CHAR for strings less than 10 chars long, since N/VARCHAR stores both the string and the length and the difference between storing short strings in N/CHAR vs. N/VARCHAR under 10 isn't worth the overhead of the string length.\nSecond, a table in SQL server is stored on 8KB pages, so the max size of the row of data is 8060 bytes (the other 192 are used for overhead by SQL). That's why SQL allows a max defined column of VARCHAR(8000) and NVARCHAR(4000). Now, you can use VARCHAR(MAX) and the unicode version. But there can be extra overhead associated with that. 
\nIf I'm not mistaken, SQL server will try to store the data on the same page as the rest of the row but, if you attempt to put too much data into a VARCHAR(Max) column, it will treat it as binary and store it on another page.\nAnother big difference between CHAR and VARCHAR has to do with page splits. Given that SQL Server stores data in 8KB pages, you could have any number of rows of data stored on a page. If you UPDATE a VARCHAR column with a value that is large enough that the row will no longer fit on the page, the server will split that page, moving off some number of records. If the database has no available pages and the database is set to auto grow, the server will first grow the database to allocate blank pages to it, then allocate blank pages to the table and finally split the single page into two. \n", "If you will be supporting languages other than English, you will want to use nvarchar.\nHTML should be okay as long as it contains standard ASCII characters. I've used nvarchar mainly in databases that were multi-lingual support. \n", "Because there are 8-bits in 1 byte and so in 1 byte you can store up to 256 distinct values which is\n0 1 2 3 4 5 ... 255\n\nNote the first number is 0 so that's a total of 256 numbers.\nSo if you use nvarchar(255) It'll use 1 byte to store the length of the string but if you tip over by 1 and use nvarchar(256) then you're wasting 1 more byte just for that extra 1 item off from 255 (since you need 2 bytes to store the number 256).\nThat might not be the actual implementation of SQL server but I believe that is the typical reasoning for limiting things at 255 over 256 items.\nand nvarchar is for Unicode, which use 2+ bytes per character and\nvarchar is for normal ASCII text which only use 1 byte\n", "IIRC, 255 is the max size of a varchar in MySQL before you had to switch to the text datatype, or was at some point (actually, I think it's higher now). So keeping it to 255 might buy you some compatibility there. You'll want to look this up before acting on it, though.\nvarchar vs nvarchar is kinda like ascii vs unicode. varchar is limited to one byte per character, nvarchar can use two. That's why you can have a varchar(8000) but only an nvarchar(4000)\n", "Both varchar and nvarchar auto-size to the content, but the number you define when declaring the column type is a maximum.\nValues in \"nvarchar\" take up twice the disk/memory space as \"varchar\" because unicode is two-byte, but when you declare the column type you are declaring the number of characters, not bytes.\nSo when you define a column type, you should determine the maximum number of characters that the column will ever need to hold and have that as the varchar (or nvarchar) size.\nA good rule of thumb is to estimate the maximum sting length the column needs to hold, then add support for about 10% more characters to it to avoid problems with unexpectedly long data in the future.\n", "varchar(255) was also the maximum length in SQL Server 7.0 and earlier.\n" ]
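To see the character-count point in practice, here is a minimal C# ADO.NET sketch (my own illustration, not from the answers; the connection string, table, and columns are hypothetical). The size you pass for a parameter is a character count, matching the column declaration, while nvarchar doubles the bytes actually stored:

    using System.Data;
    using System.Data.SqlClient;

    class VarcharDemo
    {
        static void Main()
        {
            using (SqlConnection conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=SSPI"))
            using (SqlCommand cmd = new SqlCommand(
                "INSERT INTO People (Name, Bio) VALUES (@name, @bio)", conn))
            {
                // Sizes are in characters for both types; storage differs:
                // varchar(100)  -> up to 100 bytes of data + 2-byte length prefix
                // nvarchar(100) -> up to 200 bytes of data + 2-byte length prefix
                cmd.Parameters.Add("@name", SqlDbType.VarChar, 100).Value = "Husker Du";
                cmd.Parameters.Add("@bio", SqlDbType.NVarChar, 100).Value = "Hüsker Dü"; // non-ASCII needs nvarchar
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }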
[ 19, 11, 5, 3, 3, 2, 2, 2 ]
[]
[]
[ "database", "database_design", "sql", "sql_server" ]
stackoverflow_0000054512_database_database_design_sql_sql_server.txt
Q: C++ Exception code lookup Knowing an exception code, is there a way to find out more about what the actual exception that was thrown means? My exception in question: 0x64487347 Exception address: 0x1 The call stack shows no information. I'm reviewing a .dmp of a crash and not actually debugging in Visual Studio. A: A true C++ exception thrown from Microsoft's runtime will have an SEH code of 0xe06d7363 (E0 + 'msc'). You have some other exception. .NET generates SEH exceptions with the code 0xe0434f4d (E0 + 'COM'). NT's status codes are documented in ntstatus.h, and generally start with 0x80 (warnings) or 0xC0 (errors). The most famous is 0xC0000005, STATUS_ACCESS_VIOLATION. A: Because you're reviewing a crash dump I'll assume it came in from a customer and you cannot easily reproduce the fault with more instrumentation. I don't have much help to offer save to note that the exception code 0x64487347 is ASCII "dHsG", and developers often use the initials of the routine or fault condition when making up magic numbers like this. A little Googling turned up one hit for dHsG in the proper context, the name of a function in a Google Book search for "Using Visual C++ 6" by Kate Gregory. Unfortunately that alone was not helpful. A: If you know which block threw the exception, can you put more specific handlers in the catch block to try and isolate it that way? Are you throwing an exception that you rolled yourself? Edit: I forgot to point you towards this article on Visual C++ exceptions which I've found to be quite useful. Rob
C++ Exception code lookup
Knowing an exception code, is there a way to find out more about what the actual exception that was thrown means? My exception in question: 0x64487347 Exception address: 0x1 The call stack shows no information. I'm reviewing a .dmp of a crash and not actually debugging in Visual Studio.
[ "A true C++ exception thrown from Microsoft's runtime will have an SEH code of 0xe06d7363 (E0 + 'msc'). You have some other exception.\n.NET generates SEH exceptions with the code 0xe0434f4d (E0 + 'COM').\nNT's status codes are documented in ntstatus.h, and generally start 0x80 (warnings) or 0xC0 (errors). The most famous is 0xC0000005, STATUS_ACCESS_VIOLATION.\n", "Because you're reviewing a crash dump I'll assume it came in from a customer and you cannot easily reproduce the fault with more instrumentation.\nI don't have much help to offer save to note that the exception code 0x64487347 is ASCII \"dShG\", and developers often use the initials of the routine or fault condition when making up magic numbers like this.\nA little Googling turned up one hit for dHsg in the proper context, the name of a function in a Google Book search for \"Using Visual C++ 6\" By Kate Gregory. Unfortunately that alone was not helpful.\n", "If you know which block threw the exceptioon, can you put more specific handlers in the catch block to try and isolate it that way?\nAre you throwing an exception that you rolled yourself?\nEdit: I forgot to point you towards this article on Visual C++ exceptions which I've found to be quite useful.\nRob\n" ]
[ 7, 4, 0 ]
[]
[]
[ "c++", "crash", "exception", "memory_dump", "visual_c++" ]
stackoverflow_0000061402_c++_crash_exception_memory_dump_visual_c++.txt
Q: How can I change IE's homepage without opening IE? Here's an interesting problem. On a recently installed Server 2008 64bit I opened IE and through the Tools -> Options I changed the homepage to iGoogle.com. Clicked okay and then clicked the homepage button. IE crashes. Now you'd think that I could just remove iGoogle as the homepage but when I open IE it immediately goes to that page and crashes on open. Obviously I'd prefer to find a solution to why IE is crashing on the iGoogle page but just to get IE running again I need to remove iGoogle as the homepage. Is there any way to do this without opening IE? A: Control Panel -> Internet Options A: Looking at the registry, the start page seems to be stored in HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\Start Page A: You could do it through the control panel, but you could also supply a URL as a parameter to iexplore.exe. start » run » iexplore about:blank A: Two ways: Control Panel->Internet Options Start->Run... "%windir%\system32\inetcpl.cpl" A: Not sure about IE7 on Windows Server 2008, but for IE6 the start page is stored in a registry key "Start Page" in HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main. A: The answer is posted, but here's how you can discover the answer without having to ask: 1) Set the homepage to something random, e.g. FindMeKeyForURL.com 2) Search the registry for it 3) Extract it out and modify it; now you can deploy the .reg file
How can I change IE's homepage without opening IE?
Here's an interesting problem. On a recently installed Server 2008 64bit I opened IE and through the Tools -> Options I changed the homepage to iGoogle.com. Clicked okay and then clicked the homepage button. IE crashes. Now you'd think that I could just remove iGoogle as the homepage but when I open IE it immediately goes to that page and crashes on open. Obviously I'd prefer to find a solution to why IE is crashing on the iGoogle page but just to get IE running again I need to remove iGoogle as the homepage. Is there any way to do this without opening IE?
[ "Control Panel -> Internet Options\n", "Looking at the registry, the start page seems to be stored in\nHKEY_CURRENT_USER\\Software\\Microsoft\\Internet Explorer\\Main\\Start Page\n", "You could do it through the control panel, but you could also supply a url as a parameter to iexplore.exe.\nstart » run » iexplore about:blank\n\n", "Two ways:\n\nControl Panel->Internet Options\nStart->Run... \"%windir%\\system32\\inetcpl.cpl\"\n\n", "Not sure about IE7 on Windows Server 2008, but for IE6 the start page is stored in a registry key \"Start Page\" in HKEY_CURRENT_USER\\Software\\Microsoft\\Internet Explorer\\Main.\n", "The answer is posted, but here's how you can discover the answer without having to ask 1: Set the homepage to something random ie FindMeKeyForURL.com\n 2: Search the registry for it\n 3: Extract it out and modify it, now you can deploy the .reg file\n" ]
[ 10, 5, 4, 1, 0, 0 ]
[]
[]
[ "internet_explorer_7", "windows_server_2008" ]
stackoverflow_0000063343_internet_explorer_7_windows_server_2008.txt
Q: How do I implement an HTML cache for a PHP site? What is the best way of implementing a cache for a PHP site? Obviously, there are some things that shouldn't be cached (for example search queries), but I want to find a good solution that will make sure that I avoid the 'digg effect'. I know there is WP-Cache for WordPress, but I'm writing a custom solution that isn't built on WP. I'm interested in either writing my own cache (if it's simple enough), or you could point me to a nice, light framework. I don't know much about Apache though, so if it were a PHP framework then it would be a better fit. Thanks. A: You can use output buffering to selectively save parts of your output (those you want to cache) and display them to the next user if it hasn't been long enough. This way you're still rendering other parts of the page on-the-fly (e.g., customizable boxes, personal information). A: The best way to go is to use a proxy cache (Squid, Varnish) and serve appropriate Cache-Control/Expires headers, along with ETags: see Mark Nottingham's Caching Tutorial for a full description of how caches work and how you can get the most performance out of a caching proxy. Also check out memcached, and try to cache your database queries (or better yet, pre-rendered page fragments) in there. A: If a proxy cache is out of the question, and you're serving complete HTML files, you'll get the best performance by bypassing PHP altogether. Study how WP Super Cache works. Uncached pages are copied to a cache folder with a similar URL structure to your site. On later requests, mod_rewrite notes the existence of the cached file and serves it instead. Other RewriteCond directives are used to make sure commenters/logged in users see live PHP requests, but the majority of visitors will be served by Apache directly. A: I would recommend Memcached or APC. Both are in-memory caching solutions with dead-simple APIs and lots of libraries. The trouble with those 2 is you need to install them on your web server or another server if it's Memcached. APC Pros: Simple Fast Speeds up PHP execution also Cons Doesn't work for distributed systems, each machine stores its cache locally Memcached Pros: Fast(ish) Can be installed on a separate server for all web servers to use Highly tested, developed at LiveJournal Used by all the big guys (Facebook, Yahoo, Mozilla) Cons: Slower than APC Possible network latency Slightly more configuration I wouldn't recommend writing your own, there are plenty out there. You could go with a disk-based cache if you can't install software on your webserver, but there are possible race issues to deal with. One request could be writing to the file while another is reading. You actually could cache search queries, even for a few seconds to a minute. Unless your db is being updated more than a few times a second, some delay would be ok. A: The PHP Smarty template engine (http://www.smarty.net) includes a fairly advanced caching system. You can find details in the caching section of the Smarty manual: http://www.smarty.net/manual/en/caching.php A: You seem to be looking for a PHP cache framework. I recommend the template system TinyButStrong, which comes with a very good CacheSystem plugin. It's simple, light, customizable (you can cache whatever part of the html file you want), very powerful ^^ A: Simple caching of pages, or parts of pages - the Pear::CacheLite class. I also use APC and memcache for different things, but the other answers I've seen so far are more for complete, complex systems. If you just need to save some effort rebuilding a part of a page - Cache_lite with a file-backed store is entirely sufficient, and very simple to implement. A: Project Gazelle (an open source torrent site) provides a step by step guide on setting up Memcached on the site which you can easily use on any other website you might want to set up which will handle a lot of traffic. Grab down the source and read the documentation.
How do I implement an HTML cache for a PHP site?
What is the best way of implementing a cache for a PHP site? Obviously, there are some things that shouldn't be cached (for example search queries), but I want to find a good solution that will make sure that I avoid the 'digg effect'. I know there is WP-Cache for WordPress, but I'm writing a custom solution that isn't built on WP. I'm interested in either writing my own cache (if it's simple enough), or you could point me to a nice, light framework. I don't know much about Apache though, so if it were a PHP framework then it would be a better fit. Thanks.
[ "You can use output buffering to selectively save parts of your output (those you want to cache) and display them to the next user if it hasn't been long enough. This way you're still rendering other parts of the page on-the-fly (e.g., customizable boxes, personal information).\n", "The best way to go is to use a proxy cache (Squid, Varnish) and serve appropriate Cache-Control/Expires headers, along with ETags : see Mark Nottingham's Caching Tutorial for a full description of how caches work and how you can get the most performance out of a caching proxy.\nAlso check out memcached, and try to cache your database queries (or better yet, pre-rendered page fragments) in there.\n", "If a proxy cache is out of the question, and you're serving complete HTML files, you'll get the best performance by bypassing PHP altogether. Study how WP Super Cache works. \nUncached pages are copied to a cache folder with similar URL structure as your site. On later requests, mod_rewrite notes the existence of the cached file and serves it instead. other RewriteCond directives are used to make sure commenters/logged in users see live PHP requests, but the majority of visitors will be served by Apache directly.\n", "I would recommend Memcached or APC. Both are in-memory caching solutions with dead-simple APIs and lots of libraries.\nThe trouble with those 2 is you need to install them on your web server or another server if it's Memcached.\nAPC\n\nPros:\n\n\nSimple\nFast\nSpeeds up PHP execution also\n\n\nCons\n\n\nDoesn't work for distributed systems, each machine stores its cache locally\n\nMemcached\n\nPros:\n\n\nFast(ish)\nCan be installed on a separate server for all web servers to use\nHighly tested, developed at LiveJournal\nUsed by all the big guys (Facebook, Yahoo, Mozilla)\n\nCons:\nSlower than APC\nPossible network latency\nSlightly more configuration\n\nI wouldn't recommend writing your own, there are plenty out there. You could go with a disk-based cache if you can't install software on your webserver, but there are possible race issues to deal with. One request could be writing to the file while another is reading. \nYou actually could cache search queries, even for a few seconds to a minute. Unless your db is being updated more than a few times a second, some delay would be ok.\n", "The PHP Smarty template engine (http://www.smarty.net) includes a fairly advanced caching system.\nYou can find details in the caching section of the Smarty manual: http://www.smarty.net/manual/en/caching.php\n", "You seems to be looking for a PHP cache framework.\nI recommend you the template system TinyButStrong that comes with a very good CacheSystem plugin.\nIt's simple, light, customizable (you can cache whatever part of the html file you want), very powerful ^^\n", "Simple caching of pages, or parts of pages - the Pear::CacheLite class. I also use APC and memcache for different things, but the other answers I've seen so far are more for more complete, and complex systems. If you just need to save some effort rebuilding a part of a page - Cache_lite with a file-backed store is entirely sufficient, and very simple to implement.\n", "Project Gazelle (an open source torrent site) provides a step by step guide on setting up Memcached on the site which you can easily use on any other website you might want to set up which will handle a lot of traffic.\nGrab down the source and read the documentation.\n" ]
[ 16, 7, 7, 4, 2, 1, 1, 0 ]
[]
[]
[ "caching", "html", "php" ]
stackoverflow_0000055223_caching_html_php.txt
Q: MySQL statement that returns a SQL statement? I need to do a dump of a table on a remote server, but I can't access the server directly. The only access I have is through PHP scripts. Is there some way in which MySQL will return an INSERT INTO `table_name` (`field1`, `field2`) VALUES ('a', 'b'), ('c', 'd') statement, like what mysqldump will return? I don't have access to phpMyAdmin, and I preferably don't want to use exec, system or passthru. See this question for another export method A: 1) can you run mysqldump from exec or passthru 2) take a look at this: http://www.php-mysql-tutorial.com/perform-mysql-backup-php.php A: If you can use php-scripts on the server I would recommend phpmyadmin. Then you can do this from the web-interface. A: You should check out PHPMyAdmin, it is a php based MySQL administration tool. It supports backups and recovery for the database as well as a 'GUI' to the database server. It works very well. A: I'm pretty sure phpMyAdmin will do this for you. A: This select 'insert into table_name (field1, field2) values (' || table_name.field1 || ', ' || table_name.field2 || ');' from table_name should get you started. Replace || with the string concatenation operator for your db flavour. If field1 or field2 are strings you will have to come up with some trick for quoting/escaping. A: Here is one approach generating a lot of separate query statements. You can also use implode to more efficiently combine the strings, but this is easier to read for starters and derived from this you can come up with a million other approaches. $results = mysql_query("SELECT * FROM `table_name`"); while($row = mysql_fetch_assoc($results)) { $query = "INSERT INTO `table_name` "; $fields = '('; $values = '('; foreach($row as $field=>$value) { $fields .= "`".$field."`,"; $values .= "'".mysql_escape_string($value)."',"; } //drop the last comma off and close the parentheses $fields = substr($fields,0,-1) . ')'; $values = substr($values,0,-1) . ')'; $query .= $fields . " VALUES " . $values; //your final result echo $query; } See if that gets you started
MySQL statement that returns a SQL statement?
I need to do a dump of a table on a remote server, but I can't access the server directly. The only access I have is through PHP scripts. Is there some way in which MySQL will return an INSERT INTO `table_name` (`field1`, `field2`) VALUES ('a', 'b'), ('c', 'd') statement, like what mysqldump will return? I don't have access to phpMyAdmin, and I preferably don't want to use exec, system or passthru. See this question for another export method
[ "1) can you run mysqldump from exec or passthru\n2) take a look at this: http://www.php-mysql-tutorial.com/perform-mysql-backup-php.php\n", "If you can use php-scripts on the server i would recommend phpmyadmin. Then you can do this from the web-interface.\n", "You should check out PHPMyAdmin, it is a php based MySQL administration tool. It supports backups and recovery for the database as well as a 'GUI' to the database server. It works very well.\n", "I'm pretty sure phpMyAdmin will do this for you.\n", "This\nselect 'insert into table table_name (field1, field2) values'\n || table_name.field1 || ', ' || table_field2 || ');'\nfrom table_name\n\nshould get you started. Replace || with the string concatenation operator for your db flavour. If field1 or field2 are strings you will have to come up with some trick for quoting/escaping.\n", "Here is one approach generating a lot of separate query statements. You can also use implode to more efficiently combine the strings, but this is easier to read for starters and derived from this you can come up with a million other approaches.\n$results = mysql_query(\"SELECT * FROM `table_name`\");\nwhile($row = mysql_fetch_assoc($results)) {\n\n $query = \"INSERT INTO `table_name` \";\n $fields = '(';\n $values = '('; \n\n foreach($row as $field=>$value) {\n $fields .= \"'\".$field.\"',\";\n $values .= \"'\".mysql_escape_string($value).\"',\";\n }\n\n //drop the last comma off\n $fields = substr($fields,0,-1);\n $values = substr($values,0,-1);\n\n $query .= $fields . \" VALUES \" . $values;\n\n //your final result\n echo $query;\n}\n\nSee if that gets you started \n" ]
[ 5, 0, 0, 0, 0, 0 ]
[]
[]
[ "export", "mysql", "php" ]
stackoverflow_0000063399_export_mysql_php.txt
Q: Process vs Threads How to decide whether to use threads or create separate process altogether in your application to achieve parallelism. A: Threads are more light weight, and for the making several "workers" just to utilize all availabe CPUs or cores, you're better of with threads. When you need the workers to be better isolated and more robust, like with most servers, go with sockets. When one thread crashes badly, it usually takes down the entire process, including other threads working in that process. If a process turns sour and dies, it doesn't touch any other process, so they can happily go on with their bussiness as if nothing happened. A: Processes have more isolated memory. This is important for a number of reasons: It is harder for a single task to crash the other tasks. More memory will be available per process. This is important for large, high-performance applications like Apache or database servers, like Postgres. This is important for both allocated memory and memory mapped files. A: The degree of parallelism mainly depends on the physical processors / cores available on your machine. If you have a single-processor/core machine, then having seperate processes may cause too much overhead. Threads would generally be preferred in that case. If you have multiple cores/CPUs then depending on what each process/thread does, you may opt for processes if the overhead is justified. Processes obviously have a much better level of memory isolation than threads - but at the same time in Windows, processes are fairly heavy, compared to threads. Threads of course can share data in the same process - but again you would need to synchronize access to the shared data - to prevent corrupt state. Sharing data between processes is more involved, the overhead (which is greated than simple thread synchronization) depending on the mechanisms used such as Named pipes, custom sockets-based communication, using a remoting framework, shared file / database etc. A: Generally you should use processes when the individual execution streams don't need to share global data and you would like to have each protected from the other. A: A couple of links that could help you decide, I hope: http://blog.labnotes.org/2006/08/29/why-processes-scale-better-than-threads/ http://www.jroller.com/cpurdy/entry/fastcgi_not_so_fast A: In Windows, processes are heavier to create then threads. So if you have several smaller tasks a thread or thread pool would be better. Or use a process pool to recycle the processes. Also sharing state between processes is more work then sharing state between threads. But then again: Threads could destabilize a complete process taking other threads down with it. If you want to minimize the chance of that happening you could go for separate processes. .Net's AppDomains might be a middle ground between both.
Process vs Threads
How to decide whether to use threads or create a separate process altogether in your application to achieve parallelism.
[ "Threads are more light weight, and for the making several \"workers\" just to utilize all availabe CPUs or cores, you're better of with threads.\nWhen you need the workers to be better isolated and more robust, like with most servers, go with sockets. When one thread crashes badly, it usually takes down the entire process, including other threads working in that process. If a process turns sour and dies, it doesn't touch any other process, so they can happily go on with their bussiness as if nothing happened.\n", "Processes have more isolated memory. This is important for a number of reasons:\n\nIt is harder for a single task to crash the other tasks. \nMore memory will be available per process. This is important for large, high-performance applications like Apache or database servers, like Postgres. This is important for both allocated memory and memory mapped files.\n\n", "The degree of parallelism mainly depends on the physical processors / cores available on your machine. If you have a single-processor/core machine, then having seperate processes may cause too much overhead. Threads would generally be preferred in that case.\nIf you have multiple cores/CPUs then depending on what each process/thread does, you may opt for processes if the overhead is justified. Processes obviously have a much better level of memory isolation than threads - but at the same time in Windows, processes are fairly heavy, compared to threads.\nThreads of course can share data in the same process - but again you would need to synchronize access to the shared data - to prevent corrupt state. Sharing data between processes is more involved, the overhead (which is greated than simple thread synchronization) depending on the mechanisms used such as Named pipes, custom sockets-based communication, using a remoting framework, shared file / database etc.\n", "Generally you should use processes when the individual execution streams don't need to share global data and you would like to have each protected from the other.\n", "A couple of links that could help you decide, I hope:\nhttp://blog.labnotes.org/2006/08/29/why-processes-scale-better-than-threads/\nhttp://www.jroller.com/cpurdy/entry/fastcgi_not_so_fast\n", "In Windows, processes are heavier to create then threads. So if you have several smaller tasks a thread or thread pool would be better. Or use a process pool to recycle the processes. Also sharing state between processes is more work then sharing state between threads. But then again: Threads could destabilize a complete process taking other threads down with it. If you want to minimize the chance of that happening you could go for separate processes. .Net's AppDomains might be a middle ground between both. \n" ]
[ 15, 5, 4, 2, 2, 1 ]
[]
[]
[ "operating_system", "optimization" ]
stackoverflow_0000062921_operating_system_optimization.txt
Q: How do I sync between VSS and SVN I am forced to use VSS at work, but use SVN for a personal repository. What is the best way to sync between VSS and SVN? A: To get rid of the manual merge step, I could use a separate svn branch (svn://branches/VSS) as follows: Create a working copy of svn://branches/VSS Do a VSS Get Latest on this working copy svn commit svn merge from svn://trunk svn commit Do a VSS diff and checkout all files (without overwriting) with differences Check in those files reintegrate svn://branches/VSS into svn://trunk A: You could also treat this as a vendor supplied branch as defined in the redbean book: Vendor Branches With this, the basic flow would be: Have a vendor branch "branches/VSS/current" containing the latest code from VSS Tag the current version as "branches/VSS/2008-09-15" Next day, get the new files into "current" Tag again into "branches/VSS/2008-09-16" Merge differences between the two tags into trunk, resolving conflicts Delete old tags as required This is actually the technique we used when migrating from VSS to SVN. If you care about the return trip from SVN->VSS, you'll just have to diff between trunk and branches/VSS/current and apply the diffs to VSS. A: What I have done in the past is as follows: Make sure all my changes are committed to svn://trunk Do a get latest from VSS into my working copy. Manually merge the changes in my working copy. Commit the merged code into the svn://trunk Do a VSS diff and checkout any files with differences (without overwriting files) Check in those files.
How do I sync between VSS and SVN
I am forced to use VSS at work, but use SVN for a personal repository. What is the best way to sync between VSS and SVN?
[ "To get rid of the manual merge step, I could use a separate svn branch (svn://branches/VSS) as follows:\n\nCreate a working copy of svn://branches/VSS\nDo a VSS Get Latest on this working copy\nsvn commit\nsvn merge from svn://trunk\nsvn commit\nDo a VSS diff and checkout all files (without overwriting) with differences\nCheck in those files\nreintegrate svn://branches/VSS into svn://trunk\n\n", "You could also treat this as a vendor supplied branch as defined in the redbean book:\nVendor Branches\nWith this, the basic flow would be:\n\nHave a vendor branch \"branches/VSS/current\" containing the latest code from VSS\nTag the current version as \"branches/VSS/2008-09-15\"\nNext day, get the new files into \"current\"\nTag again into \"branches/VSS/2008-09-16\"\nMerge differences between the two tags into trunk, resolving conflicts\nDelete old tags as required\n\nThis is actually the technique we used when migrating from VSS to SVN.\nIf you care about the return trip from SVN->VSS, you'll just have to diff between trunk and branches/VSS/current and apply the diffs to VSS.\n", "What I have done in the past is as follows:\n\nMake sure all my changes are committed to svn://trunk\nDo a get latest from VSS into my working copy.\nManually merge the changes in my working copy.\nCommit the merged code into the svn://trunk\nDo a VSS diff and checkout any files with differences (without overwriting files)\nCheck in those files.\n\n" ]
[ 4, 2, 0 ]
[]
[]
[ "svn", "sync", "visual_sourcesafe" ]
stackoverflow_0000057372_svn_sync_visual_sourcesafe.txt
Q: ASP.NET WebService Returns Gibberish Characters When Throwing Exceptions I have a web service (ASMX) and in it, a web method that does some work and throws an exception if the input wasn't valid. [ScriptMethod] [WebMethod] public string MyWebMethod(string input) { string l_returnVal; if (!ValidInput(input)) { string l_errMsg = System.Web.HttpUtility.HtmlEncode(GetErrorMessage()); throw new Exception(l_errMsg); } // some work gets done... return System.Web.HttpUtility.HtmlEncode(l_returnVal); } Back in the client-side JavaScript on the Web page, on the error callback function, I display my error: function GetInputErrorCallback(error) { $get('input_error_msg_div').innerHTML = error.get_message(); } This works great and when my Web method returns (a string), it always looks perfect. However, if one of my error messages from my thrown exception contains a special character, it's displayed incorrectly in the browser. For example, if the error message were to contain the following: That input isn’t valid! (that's an ASCII #146 in there) It displays this: That input isn’t valid! Or: Do you like Hüsker Dü? (ASCII # 252) Becomes: Do you like Hüsker Dü? The content of the error messages comes from XML files with UTF-8 encoding: <?xml version="1.0" encoding="UTF-8"?> <ErrorMessages> <Message id="invalid_input">Your input isn’t valid!</Message> . . . </ErrorMessages> And as far as page encoding is concerned, in my Web.config, I have: <globalization enableClientBasedCulture="true" fileEncoding="utf-8" /> I also have an HTTP Module to set L10n parameters: Thread.CurrentThread.CurrentUICulture = m_selectedCulture; Encoding l_Enc = Encoding.GetEncoding(m_selectedCulture.TextInfo.ANSICodePage); HttpContext.Current.Response.ContentEncoding = l_Enc; HttpContext.Current.Request.ContentEncoding = l_Enc; I've tried disabling this HTTP Module but the result is the same. The values returned by the web service (in the l_errMsg variable) look fine in the VS debugger. It's just once the client script has a hold of it, it displays incorrectly. I've used Firebug to look at the response and special characters are mangled in there, too. So I find it pretty strange that strings returned by my web method look fine, even if there are special characters in them. Yet when I throw an exception from the web method, special characters in its message are incorrect. How can I fix this? A: Are you sure setting the "fileEncoding" is what you want, and not "responseEncoding"? Setting the fileEncoding determines how the web server will try to read physical .asmx/.aspx files from disk when it can't determine the encoding automatically. So, setting this to "utf-8" means you must save all your .asmx/.aspx files in utf-8. I don't think this is relevant, though. The mangling you're seeing is when text encoded as utf-8 is parsed using an 8-bit encoding (i.e. a utf-8 bytestream is decoded using an 8-bit decoder, such as, in your case, iso-8859-1/Windows-1252). So it's possible that the HtmlEncode() you're doing before throw()ing the Exception is wrong about the intended output encoding. So what happens if you don't HtmlEncode() the error message? (Technically, "ASCII # 252" isn't quite right; ASCII has 128 characters; the apostrophe you use is coming from an 8-bit encoding such as, in your case, iso-8859-1/Windows-1252.) Are you sure you've disabled that HTTP Module correctly? This line looks like it could be causing the problem: HttpContext.Current.Response.ContentEncoding = l_Enc; ...since it's most likely setting the output encoding to an 8-bit encoding (the ANSI code page equivalent). To support as many cultures as possible, you should set the response encoding to utf-8. This is the most supported Unicode format in browsers (I daresay all modern browsers support it), and Unicode is the only alternative to local encodings. That said, I don't fully understand what HTTP Module you are using and why you need it, so the situation may be more complex than I think.
ASP.NET WebService Returns Gibberish Characters When Throwing Exceptions
I have a web service (ASMX) and in it, a web method that does some work and throws an exception if the input wasn't valid. [ScriptMethod] [WebMethod] public string MyWebMethod(string input) { string l_returnVal; if (!ValidInput(input)) { string l_errMsg = System.Web.HttpUtility.HtmlEncode(GetErrorMessage()); throw new Exception(l_errMsg); } // some work gets done... return System.Web.HttpUtility.HtmlEncode(l_returnVal); } Back in the client-side JavaScript on the Web page, on the error callback function, I display my error: function GetInputErrorCallback(error) { $get('input_error_msg_div').innerHTML = error.get_message(); } This works great and when my Web method returns (a string), it always looks perfect. However, if one of my error messages from my thrown exception contains a special character, it's displayed incorrectly in the browser. For example, if the error message were to contain the following: That input isn’t valid! (that's an ASCII #146 in there) It displays this: That input isn’t valid! Or: Do you like Hüsker Dü? (ASCII # 252) Becomes: Do you like Hüsker Dü? The content of the error messages comes from XML files with UTF-8 encoding: <?xml version="1.0" encoding="UTF-8"?> <ErrorMessages> <Message id="invalid_input">Your input isn’t valid!</Message> . . . </ErrorMessages> And as far as page encoding is concerned, in my Web.config, I have: <globalization enableClientBasedCulture="true" fileEncoding="utf-8" /> I also have an HTTP Module to set L10n parameters: Thread.CurrentThread.CurrentUICulture = m_selectedCulture; Encoding l_Enc = Encoding.GetEncoding(m_selectedCulture.TextInfo.ANSICodePage); HttpContext.Current.Response.ContentEncoding = l_Enc; HttpContext.Current.Request.ContentEncoding = l_Enc; I've tried disabling this HTTP Module but the result is the same. The values returned by the web service (in the l_errMsg variable) look fine in the VS debugger. It's just once the client script has a hold of it, it displays incorrectly. I've used Firebug to look at the response and special characters are mangled in there, too. So I find it pretty strange that strings returned by my web method look fine, even if there are special characters in them. Yet when I throw an exception from the web method, special characters in its message are incorrect. How can I fix this?
[ "Are you sure setting the \"fileEncoding\" is what you want, and not \"responseEncoding\"? Setting the fileEncoding determines how the web server will try to read physical .asmx/.aspx files from disk when it can't determine the encoding automatically. So, settings this to \"utf-8\" means you must save all your .asmx/.aspx files in utf-8. I don't think is relevant though.\nThe mangling you're seeing is when text encoded as utf-8 is parsed using an 8-bit encoding (i.e. an utf-8 bytestream is decoded using an 8-bit decoder, such as, in your case, iso-8859-1/Windows-1252). So it's possible that the HtmlEncode() you're doing before throw()ing the Exception is wrong about the intended output encoding. So what happens if you don't HtmlEncode() the error message?\n(Technically, \"ASCII # 252\" isn't quite right; ASCII has 128 characters; the apostrophe you use is coming from an 8-bit encoding such as, in your case, iso-8859-1/Windows-1252.)\nAre you sure you've disabled that HTTP Module correctly? This line looks like it could be causing the problem:\nHttpContext.Current.Response.ContentEncoding = l_Enc;\n...since it's most likely setting the output encoding to an 8-bit encoding (the ANSI code page equivalent).\nTo support as many cultures as possible, you should set the response encoding to utf-8. This is the most supported Unicode format in browsers (I daresay all modern browsers support it), and Unicode is the only alternative to local encodings. That said, I don't fully understand what HTTP Module you are using and why you need it, so the situation may be more complex than I think.\n" ]
[ 1 ]
[]
[]
[ "ajax", "asp.net", "encoding", "exception", "web_services" ]
stackoverflow_0000062965_ajax_asp.net_encoding_exception_web_services.txt
Q: Detect DOM modification in Internet Explorer I am writing a Browser Helper Object for ie7, and I need to detect DOM modification (e.g. via AJAX). So far I couldn't find any feasible solution. A: You want to use IMarkupContainer2::CreateChangeLog. A: The best thing I could recommend is the Internet Explorer Developer Toolbar which allows you to view changes in the DOM.
Detect DOM modification in Internet Explorer
I am writing a Browser Helper Object for ie7, and I need to detect DOM modification (e.g. via AJAX). So far I couldn't find any feasible solution.
[ "You want to use IMarkupContainer2::CreateChangeLog.\n", "The best thing I could recommend is the Internet Explorer Developer Toolbar which allow you to view changes in the DOM.\n" ]
[ 2, 0 ]
[]
[]
[ "bho", "c++", "internet_explorer", "internet_explorer_7" ]
stackoverflow_0000034544_bho_c++_internet_explorer_internet_explorer_7.txt
Q: How do I get sun webserver to redirect from I have Sun webserver iws6 (iplanet 6) proxying my bea cluster. My cluster is under /portal/yadda. I want anyone who goes to http://the.domain.com/ to be quickly redirected to http://the.domain.com/portal/ I have an index.html that does a post and redirect, but the user sometimes sees it. Does anyone have a better way? Aaron I have tried the 3 replies below. None of them worked for me. Back to the drawing board. A A: Does this help? http://docs.sun.com/source/816-5691-10/essearch.htm#25618 To map a URL, perform the following steps: Open the Class Manager and select the server instance from the drop-down list. Choose the Content Mgmt tab. Click the Additional Document Directories link. The web server displays the Additional Document Directories page. (Optional) Add another directory by entering one of the following. URL prefix. For example: plans. Absolute physical path of the directory you want the URL mapped to. For example: C:/iPlanet/Servers/docs/marketing/plans Click OK. Click Apply. Edit one of the current additional directories listed by selecting one of the following: Edit Remove If editing, select edit next to the listed directory you wish to change. Enter a new prefix using ASCII format. (Optional) Select a style in the Apply Style drop-down list if you want to apply a style to the directory: For more information about styles, see Applying Configuration Styles. Click OK to add the new document directory. Click Apply. Choose Apply Changes to hard start /restart your server. A: You could also just add the below line in the .htaccess file Redirect permanent /oldpage.html http://www.example.com/newpage.html
How do I get sun webserver to redirect from
I have Sun webserver iws6 (iplanet 6) proxying my bea cluster. My cluster is under /portal/yadda. I want anyone who goes to http://the.domain.com/ to be quickly redirected to http://the.domain.com/portal/ I have an index.html that does a post and redirect, but the user sometimes sees it. Does anyone have a better way? Aaron I have tried the 3 replies below. None of them worked for me. Back to the drawing board. A
[ "Does this help?\nhttp://docs.sun.com/source/816-5691-10/essearch.htm#25618\n\nTo map a URL, perform the following steps:\nOpen the Class Manager and select the server instance from the drop-down list.\nChoose the Content Mgmt tab.\nClick the Additional Document Directories link.\nThe web server displays the Additional Document Directories page.\n(Optional) Add another directory by entering one of the following.\nURL prefix.\nFor example: plans.\nAbsolute physical path of the directory you want the URL mapped to.\nFor example:\nC:/iPlanet/Servers/docs/marketing/plans\nClick OK.\nClick Apply.\nEdit one of the current additional directories listed by selecting one of the following:\nEdit\nRemove\nIf editing, select edit next to the listed directory you wish to change.\nEnter a new prefix using ASCII format.\n(Optional) Select a style in the Apply Style drop-down list if you want to apply a style to the directory:\nFor more information about styles, see Applying Configuration Styles.\nClick OK to add the new document directory.\nClick Apply.\nChoose Apply Changes to hard start /restart your server.\n", "You could also just add the below line in the .htaccess file\nRedirect permanent /oldpage.html http://www.example.com/newpage.html\n" ]
[ 0, 0 ]
[ "You should be able to configure the webserver to do a header redirect (301 or 302 depending on your situation) so it redirects without ever loading an HTML page. This can be done in PHP as well:\n<?php\nheader(\"Location: http://www.example.com/\"); /* Redirect browser */\n\n/* Make sure that code below does not get executed when we redirect. */\nexit;\n?>\n\nIf you don't want to modify your server configuration.\nIf your server uses the .htaccess file, insert a line similar to the following:\nRedirect 301 /oldpage.html http://www.example.com/newpage.html\n\n-Adam\n" ]
[ -2 ]
[ "redirect", "sunone", "webserver" ]
stackoverflow_0000063295_redirect_sunone_webserver.txt
Q: Blocking part of a website I am trying to block Google Reader: reader.google.com www.google.com/reader The hard part is blocking the reader directory I blocked reader.google.com by changing my /etc/hosts file (this is for a Mac) Is there any way to block www.google.com/reader without buying software? Note this is for Safari so Greasemonkey won't work, and Leopard's Parental Controls throttle the CPU when they are turned on. Also I've tried OpenDNS, which is awesome, but doesn't work for this... Any thoughts? Update: This is for a laptop that travels a lot. So a router or a home proxy server won't work. Firefox would work, but I don't think I can uninstall Safari from a mac. A: Set up a proxy server and block it via that. -Adam A: You could do this with a proxy (for example Proxomitron). A: You can use Privoxy to filter about anything. A: Another option is to use a free service like www.opendns.com as your DNS servers; they allow you to block specific domains or turn on filtering etc. A: What about at the router level? My router has a URL blocker built in. A: Maybe you can find some sort of http proxy you could install to filter this content and use that when browsing. On Firefox you could easily define a rule for Adblock Plus. A: I did exactly what you're looking for using Safari AdBlock. Just define a few rules in Safari->Preferences->AdBlock and you should be good to go!
Blocking part of a website
I am trying to block Google Reader: reader.google.com www.google.com/reader The hard part is blocking the reader directory I blocked reader.google.com by changing my /etc/hosts file (this is for a Mac) Is there any way to block www.google.com/reader without buying software? Note this is for Safari so Greasemonkey won't work, and Leopard's Parental Controls throttle the CPU when they are turned on. Also I've tried OpenDNS, which is awesome, but doesn't work for this... Any thoughts? Update: This is for a laptop that travels a lot. So a router or a home proxy server won't work. Firefox would work, but I don't think I can uninstall Safari from a mac.
[ "Set up a proxy server and block it via that.\n-Adam\n", "You could do this with a proxy (for example Proxomitron).\n", "You can use Privoxy to filter about anything.\n", "Another option is to use a free service like www.opendns.com as your dns servers, they allow you to block specific domains or turn on filtering etc. \n", "What about at the router level? My router as an URL blocker built in.\n", "Maybe you can find some sort of http proxy you could install to filter this content and use that when browsing. On Firefox you could easily define a rule for Adblock Plus.\n", "I did exactly what you're looking for using Safari AdBlock. Just define a few rules in Safari->Preferences->AdBlock and you should be good to go!\n" ]
[ 1, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "dns", "google_reader", "osx_leopard" ]
stackoverflow_0000063126_dns_google_reader_osx_leopard.txt
Q: Rolling your own message loop, any pitfalls? This question is slightly related to this question about exception handling. The workaround I found there consists of rolling my own message loop. So my Main method now looks basically like this: [STAThread] static void Main() { // this is needed so an exception will actually be thrown by // Application.Run/Application.DoEvents, instead of the ThreadException // event being raised. Application.SetUnhandledExceptionMode(UnhandledExceptionMode.ThrowException); Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Form form = new MainForm(); form.Show(); // the loop is here to keep app running if non-fatal exception is caught. do { try { Application.DoEvents(); Thread.Sleep(100); } catch (Exception ex) { ExceptionHandler.ConsumeException(ex); } } while (!form.IsDisposed); } What I'm wondering though, is this a safe/decent way to replace the more typical 'Application.Run(new MainForm());', whether it's used for exception handling or for whatever else, or should I always stick to using Application.Run? On another app that's in testing now a similar approach is used for both loading (splashscreen) and exception handling, and I don't think it has caused any troubles (yet :-)) A: Pitfall 1: Thread.Sleep(100); Never. Use WaitMessage(). Otherwise, it is possible to roll out your own message loop, but in your scenario it seems somewhat pointless. You may also want to examine Application.Run() code (with .Net Reflector, for instance). A: If you want to customize message processing, consider implementing IMessageFilter, then call Application.AddMessageFilter to tell the standard message pump to call your filter function. A: Yes... I think some components won't work with that code. Some of them need to live in a thread that has an Application.Run in it to effectively pick up their messages.
Rolling your own message loop, any pitfalls?
This question is slightly related to this question about exception handling. The workaround I found there consists of rolling my own message loop. So my Main method now looks basically like this: [STAThread] static void Main() { // this is needed so an exception will actually be thrown by // Application.Run/Application.DoEvents, instead of the ThreadException // event being raised. Application.SetUnhandledExceptionMode(UnhandledExceptionMode.ThrowException); Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Form form = new MainForm(); form.Show(); // the loop is here to keep app running if non-fatal exception is caught. do { try { Application.DoEvents(); Thread.Sleep(100); } catch (Exception ex) { ExceptionHandler.ConsumeException(ex); } } while (!form.IsDisposed); } What I'm wondering though, is this a safe/decent way to replace the more typical 'Application.Run(new MainForm());', whether it's used for exception handling or for whatever else, or should I always stick to using Application.Run? On another app that's in testing now a similar approach is used for both loading (splashscreen) and exception handling, and I don't think it has caused any troubles (yet :-))
[ "Pitfall 1: \nThread.Sleep(100);\n\nNever. Use WaitMessage().\nOtherwise, it is possible roll out your own message loop, but in your scenario it seems somewhat pointless.\nYou may also want to examine Application.Run() code (with .Net Reflector, for instance).\n", "If you want to customize message processing, consider implementing IMessageFilter, then call Application.AddMessageFilter to tell the standard message pump to call your filter function.\n", "Yes... I think some components wont work with that code. Some of them require to live in a thread that has an Application.Run in it to effectively pick up their messages.\n" ]
[ 2, 2, 0 ]
[]
[]
[ ".net", "winforms" ]
stackoverflow_0000061366_.net_winforms.txt
Q: What technical considerations must a system/network administrator worry about when a site gets onto social bookmarking/sharing sites? The reason I ask is that Stack Overflow has been Slashdotted, and Redditted. First, what kinds of effect does this have on the servers that power a website? Second, what can be done by system administrators to ensure that their sites remain up and running as best as possible? A: Unfortunately, if you haven't planned for this before it happens, it's probably too late and your users will have a poor experience. Scalability is your first immediate concern. You may start getting more hits per second than you were getting per month. Your first line of defense is good programming and design. Make sure you're not doing anything stupid like reloading data from a database multiple times per request instead of caching it. Before the spike happens, you need to do some fairly realistic load tests to see where the bottlenecks are. For absurdly high traffic, consider the ability to switch some dynamic pages over to static pages. Having a server architecture that can scale also helps. Shared hosts generally don't scale. A single dedicated machine generally doesn't scale. Using something like Amazon's EC2 to host can help, especially if you plan for a cluster of servers from the beginning (even if your cluster is a single computer). Your next major concern is security. You're suddenly a much bigger target for the bad guys. Make sure you have a good security plan in place. This is something you should always have, but it becomes more important with high usage. A: Firstly, ask if you really want to spend weeks and thousands of $ on planning for something that might not even happen, and if it does happen, lasts about 5 hours. The easiest solution is to have a good way to switch to a page simply allowing a signup. People will sign up and you can email them when the storm has passed. More elaborate solutions rely on being able to scale quickly. That's firstly a software issue (can you connect to a db on another server, can you do load balancing). Secondly, your hosting solution needs to support fast expansion. Amazon EC2 comes to mind, or maybe slicehost. With both services you can easily start new instances ("Let's move the database to a different server") and expand your instances ("Let's upgrade the db server to 4GB RAM"). If you keep all data in the db (including sessions), you can easily have multiple front-end servers. For the database I'd usually try a single server with the highest resources available, but only because I haven't worked with db replication and it used to be quite hard to do, at least with mysql. Things might have improved. A: The app designer needs to think about scaling up (larger machines with more cores and higher performance) and/or scaling out (distributing workload across multiple systems). The IT guy needs to work out how to best support that. The network is what you look at first, because obviously everything rides on top of it. Starting at the border, that usually means network load balancers and redundant routers being served by multiple providers. You can also look at geographic caching services and apps such as cachefly. You want to reduce your bottlenecks as much as possible. You also want to design the environment such that it can be scaled out as needed without much work. Do the design work up front and it'll mean fewer headaches when you do get dugg. A: Some ideas (of what I used in the past and current projects): For boosting performance (if needed) you can put a reverse-proxying, caching squid in front of your server. Of course that only works if you don't have session keys and if the pages are somewhat static (meaning they change only once an hour or so) and not personalised. With the squid you can boost a bloated and slow CMS like typo3, thus having the performance of static websites with the comfort of a CMS. You can outsource large files to external services like Amazon S3, saving your server's bandwidth. And if you are able to spend some (three-figures per month) bucks, you can as well use a Content Delivery Network. With that in place you automatically have scaling, high-availability and low latencies for your users. Of course, your pages must be cacheable, so session keys and personalised pages are a no-no. If designed carefully and with CDNs in mind, you can at least cache SOME content, like pics and videos and static stuff. A: The load goes up, as other answers have mentioned. You'll also get an influx of new users/blog comments/votes from bored folks who are only really interested in vandalism. This is mostly a problem for blogs which allow completely anonymous commenting, where some dreadful stuff will be entered. The blog platform might have spam filters sufficient to block it, but manual intervention is frequently required to clean up remaining drivel. Even a little barrier to entry, like requiring a user name or email address even if no verification is done, will dramatically reduce the volume of the vandalism.
What technical considerations must a system/network administrator worry about when a site gets onto social bookmarking/sharing sites?
The reason I ask is that Stack Overflow has been Slashdotted, and Redditted. First, what kinds of effect does this have on the servers that power a website? Second, what can be done by system administrators to ensure that their sites remain up and running as best as possible?
[ "Unfortunately, if you haven't planned for this before it happens, it's probably too late and your users will have a poor experience. \nScalability is your first immediate concern. You may start getting more hits per second than you were getting per month. Your first line of defense is good programming and design. Make sure you're not doing anything stupid like reloading data from a database multiple times per request instead of caching it. Before the spike happens, you need to do some fairly realistic load tests to see where the bottlenecks are.\nFor absurdly high traffic, consider the ability to switch some dynamic pages over to static pages. \nHaving a server architecture that can scale also helps. Shared hosts generally don't scale. A single dedicated machine generally doesn't scale. Using something like Amazon's EC2 to host can help, especially if you plan for a cluster of servers from the beginning (even if your cluster is a single computer).\nYou're next major concern is security. You're suddenly a much bigger target for the bad guys. Make sure you have a good security plan in place. This is something you should always have, but it become more important with high usage.\n", "Firstly, ask if you really want to spend weeks and thousands of $ on planning for something that might not even happen, and if it does happen, lasts about 5 hours. \nEasiest solution is to have a good way to switch to a page simply allowing a signup. People will sign up and you can email them when the storm has passed.\nMore elaborate solutions rely on being able to scale quickly. That's firstly a software issue (can you connect to a db on another server, can you do load balancing). Secondly, your hosting solution needs to support fast expansion. Amazon EC2 comes to mind, or maybe slicehost. With both services you can easily start new instances (\"Let's move the database to a different server\") and expand your instances (\"Let's upgrade the db server to 4GB RAM\").\nIf you keep all data in the db (including sessions), you can easily have multiple front-end servers. For the database I'd usually try a single server with the highest resources available, but only because I haven't worked with db replication and it used to be quite hard to do, at least with mysql. Things might have improved.\n", "The app designer needs to think about scaling up (larger machines with more cores and higher performance) and/or scaling out (distributing workload across multiple systems). The IT guy needs to work out how to best support that. The network is what you look at first, because obviously everything rides on top of it. Starting at the border, that usually means network load balancers and redundant routers being served by multiple providers. You can also look at geographic caching services and apps such as cachefly.\nYou want to reduce your bottlenecks as much as possible. You also want to design the environment such that it can be scaled out as needed without much work. Do the design work up front and it'll mean less headaches when you do get dugg.\n", "Some ideas (of what I used in the past and current projects):\nFor boosting performance (if needed) you can put a reverse-proxying, caching squid in front of your server. Of course that only works if you don't have session keys and if the pages are somewhat static (means: they change only once an hour or so) and not personalised. 
\nWith the squid you can boost a bloated and slow CMS like typo3, thus having the performance of static websites with the comfort of a CMS.\nYou can outsource large files to external services like Amazon S3, saving your server's bandwidth.\nAnd if you are able to spend some (three-figures per month) bucks, you can as well use a Content Delivery Network. With that in place you automatically have scaling, high availability and low latencies for your users. Of course, your pages must be cacheable, so session keys and personalised pages are a no-no. If designed carefully and with CDNs in mind, you can at least cache SOME content, like pics and videos and static stuff.\n", "The load goes up, as other answers have mentioned.\nYou'll also get an influx of new users/blog comments/votes from bored folks who are only really interested in vandalism. This is mostly a problem for blogs which allow completely anonymous commenting, where some dreadful stuff will be entered. The blog platform might have spam filters sufficient to block it, but manual intervention is frequently required to clean up remaining drivel.\nEven a little barrier to entry, like requiring a user name or email address even if no verification is done, will dramatically reduce the volume of the vandalism.\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "system_administration" ]
stackoverflow_0000062403_system_administration.txt
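The caching advice in the first answer above ("don't reload data from a database multiple times per request") can be made concrete. Below is a minimal C# sketch of a page-level cache that regenerates at most once per minute, so a traffic spike hits the database once per interval instead of once per request; FrontPageCache and renderFromDatabase are invented names for illustration, not taken from any answer.

using System;

public static class FrontPageCache
{
    private static readonly object Sync = new object();
    private static string _html;           // last rendered page
    private static DateTime _expiresUtc;   // absolute expiry time

    // Delegate type instead of Func<string>, to stay .NET 2.0-friendly.
    public delegate string RenderPage();

    public static string GetFrontPage(RenderPage renderFromDatabase)
    {
        lock (Sync)
        {
            // Rebuild the page at most once per minute, no matter how
            // many concurrent requests arrive during the spike.
            if (_html == null || DateTime.UtcNow >= _expiresUtc)
            {
                _html = renderFromDatabase();
                _expiresUtc = DateTime.UtcNow.AddMinutes(1);
            }
            return _html;
        }
    }
}

A caller passes in its usual database-backed rendering routine; only the first request per minute pays the database cost, which is the same effect as switching a dynamic page to a static one, just done incrementally.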
Q: Does every Linux distro ship with gcc/g++ 4.* these days? I'm considering dumping boost as a dependency... atm the only thing that I really need is shared_ptr<>, and I can get that from std::tr1, available in gcc suite 4.* A: It's available on Fedora, installable via "yum" if you didn't pick "Development System" as your default install set. "yum search gcc" to get the package to install. A: These days, I believe most Linux distros do not ship with the development system by default. But I'm pretty sure g++ v4 is the 'standard' development C++ compiler if you install the C++ development environment at all. g++ v3 is usually just available as a special install. For openSUSE 11, gcc 4.3 is the current package installed when you pick the Base Development pattern. A: That depends on what you mean by ship? If you download and burn a CD or DVD, it will almost certainly be available, but not necessarily installed by default. Some distros (e.g. Fedora) allow choices during the install which will install development tools, but a default install generally does not include them. They are easily installed using whatever package management system the distro supports. Ubuntu includes a package called build-essential which installs gcc, g++, make, etc. so apt-get install build-essential is the first step for doing development on Ubuntu. A: No, on my debian systems I have to install it. But any half-decent system admin should be able to figure out how to install it. Edit: to be specific it is not always installed by default, but it should be available for most every distro. A: AFAIK, all of the distros package V 4.+ nowadays.
Does every Linux distro ship with gcc/g++ 4.* these days?
I'm considering dumping boost as a dependency... atm the only thing that I really need is shared_ptr<>, and I can get that from std::tr1, available in gcc suite 4.*
[ "It's available on Fedora, installable via \"yum\" if you didn't pick \"Development System\" as your default install set. \"yum search gcc\" to get the package to install.\n", "These days, I believe most Linux distros do not ship with the development system by default. But I'm pretty sure g++ v4 is the 'standard' development C++ compiler if you install the C++ development environment at all. g++ v3 is usually just available as a special install. For openSUSE 11, gcc 4.3 is the current package installed when you pick the Base Development pattern.\n", "That depends on what you mean by ship? If you download and burn a CD or DVD, it will almost certainly be available, but not necessarily installed by default. Some distros (e.g. Fedora) allow choices during the install which will install development tools, but a default install generally does not include them. They are easily installed using whatever package management system the distro supports. Ubuntu includes a package called build-essential which installs gcc, g++, make, etc. so apt-get install build-essential is the first step for doing development on Ubuntu.\n", "No, on my debian systems I have to install it. But any half-decent system admin should be able to figure out how to install it.\nEdit: to be specific it is not always installed by default, but it should be available for most every distro.\n", "AFAIK, all of the distros package V 4.+ nowadays.\n" ]
[ 2, 2, 1, 0, 0 ]
[]
[]
[ "c++", "distro", "gcc", "linux" ]
stackoverflow_0000062623_c++_distro_gcc_linux.txt
Q: MS Visual Studio "Package Load Failure" error I'm receiving a "Package Load Failure" error when I open VS 2005 after I installed the latest VisualSVN (v. 1.5.2). Anyone facing this error? Is there any tool out there to help identify which package didn't load and/or help unload a specific package? A: Installing the Visual Studio SDK will install the "Package Load Analyzer" package. This allows you to see what package failed to load and why. A: There should be VisualSVN log files in your temp folder (somewhat like "C:\Documents and Settings\\Local Settings\Temp\VisualSVN-2007-06-02-00-01-416.log"). Do you see anything in that file that helps?
MS Visual Studio "Package Load Failure" error
I'm receiving a "Package Load Failure" error when I open VS 2005 after I installed the latest VisualSVN (v. 1.5.2). Anyone facing this error? Is there any tool out there to help identify which package didn't load and/or help unload a specific package?
[ "Installing the Visual Studio SDK will install the \"Package Load Analyzer\" package. This allows you to see what package failed to load and why.\n", "There should be VisualSVN log files in your temp folder (somewhat like \n\"C:\\Documents and Settings\\\\Local \nSettings\\Temp\\VisualSVN-2007-06-02-00-01-416.log\").\nDo you see anything in that file that helps?\n" ]
[ 1, 0 ]
[]
[]
[ "visual_studio_2005", "visualsvn" ]
stackoverflow_0000045331_visual_studio_2005_visualsvn.txt
Q: How to create J2ME midlets for Nokia using Eclipse Nokia has stopped offering its Developer's Suite, relying on other IDEs, including Eclipse. Meanwhile, Nokia changed its own development tools again and EclipseMe has also changed. This leaves most documentation irrelevant. I want to know what does it take to make a simple Hello-World? (I already found out myself, so this is a Q&A for other people to use) A: Here's what's needed to make a simple hello world - Get Eclipse IDE for Java. I used Ganymede. Set it up. Get Sun's Wireless Toolkit. I used 2.5.2. Install it. Get Nokia's SDK (found here), in my case for S40 6230i Edition, and install it choosing the option to integrate with Sun's WTK Follow the instructions at http://www.eclipseme.org/ to download and install Mobile Tools Java (MTJ). I used version 1.7.9. When configuring devices profiles in MTJ (inside Eclipse) use the Nokia device from the WTK folder and NOT from Nokia's folder. Set the WTK root to the main installation folder - for instance c:\WTK2.5.2; Note that the WTK installer creates other folders apparently for backward compatibility. Get Antenna and set its location in MTJ's property page (in Eclipse). Here's an HelloWorld sample to test the configuration. Note: It worked for me on WindowsXP. Also note: This should work for S60 as well. Just replace the S40 SDK in phase 3 with S60's. A: Unless you need to do something Nokia-specific, I suggest avoiding the Nokia device definitions altogether. Develop for a generic device, then download your application to real, physical devices for final testing. The steps I suggest: Download and install Sun's Wireless Toolkit. Install EclipseME, using the method "installing via a downloaded archive". Configure EclipseME. Choose a generic device, such as the "DefaultColorPhone" to develop on. Create a new project "J2ME Midlet Suite" Right-click on the project, and create a new Midlet "HelloWorld" Enter the code, for example: public HelloWorld() { super(); myForm = new Form("Hello World!"); myForm.append( new StringItem(null, "Hello, world!")); myForm.addCommand(new Command("Exit", Command.EXIT, 0)); myForm.setCommandListener(this); } protected void startApp() throws MIDletStateChangeException { Display.getDisplay(this).setCurrent(myForm); } protected void pauseApp() {} protected void destroyApp(boolean arg0) throws MIDletStateChangeException {} public void commandAction(Command arg0, Displayable arg1) { notifyDestroyed(); } A: The most annoying issue with EclipseME for me was the "broken" debugger, which just wouldn't start. This is covered in docs, but it took me about an hour to find this tip when I first installed EclipseME, and another hour when I returned to JavaME development a year later, so I decided to share this piece of knowledge here, too. If the debugger won't start, open "Java > Debug" section in Eclipse "Preferences" menu, and uncheck "Suspend execution on uncaught exceptions" and "Suspend execution on compilation errors" and increase the "Debugger timeout" near the bottom of the dialog to at least 15000 ms. After that, Eclipse should be able to connect to KVM and run a midlet with a debugger attached.
How to create J2ME midlets for Nokia using Eclipse
Nokia has stopped offering its Developer's Suite, relying on other IDEs, including Eclipse. Meanwhile, Nokia changed its own development tools again and EclipseMe has also changed. This leaves most documentation irrelevant. I want to know what it takes to make a simple Hello-World. (I already found out myself, so this is a Q&A for other people to use)
[ "Here's what's needed to make a simple hello world -\n\nGet Eclipse IDE for Java. I used Ganymede. Set it up.\nGet Sun's Wireless Toolkit. I used 2.5.2. Install it.\nGet Nokia's SDK (found here), in my case for S40 6230i Edition, and install it choosing the option to integrate with Sun's WTK\nFollow the instructions at http://www.eclipseme.org/ to download and install Mobile Tools Java (MTJ). I used version 1.7.9. \nWhen configuring devices profiles in MTJ (inside Eclipse) use the Nokia device from the WTK folder and NOT from Nokia's folder.\nSet the WTK root to the main installation folder - for instance c:\\WTK2.5.2; Note that the WTK installer creates other folders apparently for backward compatibility.\nGet Antenna and set its location in MTJ's property page (in Eclipse).\n\nHere's an HelloWorld sample to test the configuration.\nNote: It worked for me on WindowsXP.\nAlso note: This should work for S60 as well. Just replace the S40 SDK in phase 3 with S60's.\n", "Unless you need to do something Nokia-specific, I suggest avoiding the Nokia device definitions altogether. Develop for a generic device, then download your application to real, physical devices for final testing. The steps I suggest:\n\nDownload and install Sun's Wireless Toolkit.\nInstall EclipseME, using the method \"installing via a downloaded archive\".\nConfigure EclipseME. Choose a generic device, such as the \"DefaultColorPhone\" to develop on.\nCreate a new project \"J2ME Midlet Suite\"\nRight-click on the project, and create a new Midlet \"HelloWorld\"\nEnter the code, for example:\n\npublic HelloWorld() {\n super();\n myForm = new Form(\"Hello World!\");\n myForm.append( new StringItem(null, \"Hello, world!\"));\n myForm.addCommand(new Command(\"Exit\", Command.EXIT, 0));\n myForm.setCommandListener(this);\n}\n\nprotected void startApp() throws MIDletStateChangeException {\n Display.getDisplay(this).setCurrent(myForm);\n}\n\nprotected void pauseApp() {}\n\nprotected void destroyApp(boolean arg0) throws MIDletStateChangeException {}\n\npublic void commandAction(Command arg0, Displayable arg1) {\n notifyDestroyed();\n}\n\n", "The most annoying issue with EclipseME for me was the \"broken\" debugger, which just wouldn't start. This is covered in docs, but it took me about an hour to find this tip when I first installed EclipseME, and another hour when I returned to JavaME development a year later, so I decided to share this piece of knowledge here, too.\nIf the debugger won't start,\n\nopen \"Java > Debug\" section in Eclipse \"Preferences\" menu, and uncheck \"Suspend execution on uncaught exceptions\" and \"Suspend execution on compilation errors\" and\nincrease the \"Debugger timeout\" near the bottom of the dialog to at least 15000 ms. \n\nAfter that, Eclipse should be able to connect to KVM and run a midlet with a debugger attached.\n" ]
[ 10, 5, 2 ]
[]
[]
[ "eclipse", "java", "java_me", "java_wireless_toolkit", "nokia" ]
stackoverflow_0000062491_eclipse_java_java_me_java_wireless_toolkit_nokia.txt
Q: How do I stop network flooding using Windows 2003 Network Load balancing? I know that MsNLB can be configured to use multicast with IGMP. However, if the switch does not support IGMP, what are the options? A: If you can find an old "dumb" hub, you can run the node NICs through it, or if your switch is manageable you can set the ports up so that they do not remember the MAC address to IP address mappings. I will say that I have had a horrible experience with WLBS (the 2003+ version of NLB) in regards to port flooding. We have an existing load balanced system where we have the load balanced NICs going into a VLAN to keep the traffic separate and we've turned off the MAC address to IP mapping in order to reduce the problem. We are, however, migrating the load balancing off of WLBS due to the poor reliability of this configuration.
How do I stop network flooding using Windows 2003 Network Load balancing?
I know that MsNLB can be configured to use multicast with IGMP. However, if the switch does not support IGMP, what are the options?
[ "If you can find an old \"dumb\" hub, you can run the node NIC's through it, or if your switch is managable you can set the ports up so that they do not remember the MAC address to IP address mappings.\nI will say that I have had horrible experience with WLBS (the 2003+ version of NLB) in regards to port flooding. We have an existing load balanced system where we have the load balanced NIC's going into a VLAN to keep the traffic separate and we've turned off the MAC address to IP mapping in order to reduce the problem. We are migrating the load balancing off of WLBS; however, due to the reliability of this configuration.\n" ]
[ 0 ]
[]
[]
[ "load_balancing", "windows_server_2003" ]
stackoverflow_0000063658_load_balancing_windows_server_2003.txt
Q: How do you retrofit unit tests into a code base? Do you have any strategies for retrofitting unit tests onto a code base that currently has no unit tests? A: Read Working Effectively With Legacy Code by Feathers. Jimmy Bogard has a good blog series on SOC. A: The best way to retrofit an existing project without any unit tests is to do it when fixing bugs. Write a test that fails on the logic that has the bug in it with the steps to reproduce the bug. Then refactor the code until the tests pass. Now you can have confidence that the bug is fixed and it will not be introduced later on in the cycle and you started introducing unit tests into the project. A: Here's another great article on testing. In particular, a somewhat relevant quote from it: Here’s a terrible idea - decide you are going to spend a whole week building a test suite for your project. First of all, you’ll likely just get frustrated and burn out on testing. Secondly, you’ll probably write bad tests at first, so even if you get a bunch of tests written, you’re going to need to go back and rewrite them once you figure out how slow, brittle, or unreadable they are. I think you are better off building tests one at a time as you are fixing bugs or adding new functionality... don't try to build missing test cases, you should have an end goal for each test, rather than just to improve coverage. A: Dale gets voted up. Yes, there is no gain in adding unit tests to code that's working. Let's say there are two unknown bugs X & Y. At some point X is revealed by typical field use. You fix it, add a unit test, and move on. Now let's assume Y is never uncovered over the entire lifetime of the program. Since Y never revealed itself it's as if it never existed; no need to waste the resources. Multiply this by hundreds or thousands of dormant bugs and you save yourself a great deal of superfluous maintenance. A: If you are ever trying to add unit tests to old Perl code I strongly recommend Perl Testing: A Developer's Notebook by Ian Langworth and chromatic. It has some very nice tricks on testing legacy and "untestable" code. A: Why do you want to add unit tests? Do you feel the code has bugs? Do you just want something to do? Are you about to embark on a new feature? If it is an older product that has been released for quite some time then I'd agree with the others and only add the tests when I find a bug or add a new feature. If it is a product that is still being developed and not released or only recently released, then I'd start by reviewing the code. If I saw something not quite right then I'd add a test for it. I'd probably make some tests to create some sample data. Creating sample data seems to offer quite a bang for your buck, and it can be useful too. I think there is benefit to writing the tests even when you don't have a bug to test - when you're adding new features or fixing bugs later, your tests confirm that you haven't introduced new bugs. A: Is it possible that we are in a panic and are getting confused between unit tests and performance tests? Is it that your application works fine with few users, but starts throwing errors when under heavier load? If so, unit tests are not the answer. Unit tests != Load tests. If unit tests are in fact the answer, retrofitting unit tests is a good idea as it will help clean up the code. Just be prepared to refactor a lot. Code written with TDD turns out looking a lot different than code written without TDD. In my case, I had a method HandleDisposition() which took care of a lot of cases. 
This kind of method would not have existed if we had written the code with TDD. When retrofitting unit tests, we refactored that function and now have methods like XDisposition(), YDisposition(), ZDisposition(), which are a lot easier to write unit tests against.
How do you retrofit unit tests into a code base?
Do you have any strategies for retrofitting unit tests onto a code base that currently has no unit tests?
[ "Read Working Effectively With Legacy Code by Feathers.\nJimmy Bogard has a good blog series on SOC.\n", "The best way to retrofit an existing project without any unit tests is to do it when fixing bugs. Write a test that fails on the logic that has the bug in it with the steps to reproduce the bug. Then refactor the code until the tests pass. Now you can have confidence that the bug is fixed and it will not be introduced later on in the cycle and you started introducing unit tests into the project.\n", "Here's another great article on testing. In particular, a somewhat relevant quote from it:\n\nHere’s a terrible idea - decide you are going to spend a whole week building a test suite for your project. First of all, you’ll likely just get frustrated and burn out on testing. Secondly, you’ll probably write bad tests at first, so even if you get a bunch of tests written, you’re going to need to go back and rewrite them one you figure out how slow, brittle, or unreadable they are.\n\nI think you are better off building tests 1 at a time as you are fixing bugs or adding new functionality... don't try to build missing test cases, you should have an end goal for each test, rather than just to improve coverage.\n", "Dale gets voted up. Yes, there is no gain for adding unit tests to code that's working. Lets say there are two unknown bugs X & Y. At some point X is revealed by typical field use. You fix it, add a unit test, and move on. Now lets assume Y is never uncovered over the entire lifetime of the program. Since Y never revealed itself it's as if it never existed; no need to waste the resources. Multiply this by hundreds or thousands of dormant bugs and you save yourself a great deal of superfluous maintenance.\n", "If ever you are trying to add unit tests to old perl code I strongly recommend\nPerl Testing: A Developer's Notebook by Ian Langworth and chromatic.\nIt has some very nice trick on testing legacy and \"untestable\" code.\n", "Why do you want to add unit tests? Do you feel the code has bugs? Do you just want something to do? Are you about to embark on a new feature?\nIf it is an older product that has been released for quite some time then I'd agree with the others and only add the tests when I find a bug or add a new feature.\nIf it is a product that is still being developed and not released or only recently released, then I'd start by reviewing the code. If I saw something not quite right then I'd add a test for it. I'd probably make some tests to create some sample data. Creating sample data seems to offer quite a bang for your buck, and it can be useful too.\nI think there is benefit to writing the tests even when you don't have a bug to test - when you're adding new features or fixing bugs later, your tests confirm that you haven't introduced new bugs.\n", "Is it possible that we are in a panic and are getting confused between unit tests and performance tests? Is it that your application works fine with few users, but starts throwing errors when under heavier load? If so, unit tests are not the answer. Unit tests != Load tests.\nIf unit tests are in fact the answer, retrofitting unit tests is a good idea as it will help clean up the code. Just be prepared to refactor a lot. Code written with TDD turns out looking a lot different than code written without TDD. In my case, I had a method HandleDisposition() which took care of a lot of cases. This kind of method would not have existed if we had written the code with TDD. 
When retrofitting unit tests, we refactored that function and now have methods like XDisposition(), YDisposition(), ZDisposition(), which are a lot easier to write unit tests against.\n" ]
[ 10, 6, 4, 2, 2, 2, 1 ]
[]
[]
[ "language_agnostic", "unit_testing" ]
stackoverflow_0000042785_language_agnostic_unit_testing.txt
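The fix-a-bug-first workflow from the second answer above looks roughly like this in practice. This is a hypothetical sketch using NUnit; InvoiceParser, its bug, and the choice of test framework are all illustrative assumptions, not taken from the answers.

using System.Globalization;
using NUnit.Framework;

// Minimal stand-in for the legacy class under test.
public static class InvoiceParser
{
    public static decimal ParseTotal(string s)
    {
        // The fix: accept thousands separators instead of choking on them.
        return decimal.Parse(s, NumberStyles.Number, CultureInfo.InvariantCulture);
    }
}

[TestFixture]
public class InvoiceParserRegressionTests
{
    // The bug report encoded as a test: the exact failing input is the
    // "steps to reproduce". It fails before the fix and passes after,
    // guarding against the bug being reintroduced later in the cycle.
    [Test]
    public void ParseTotal_AcceptsThousandsSeparator()
    {
        Assert.AreEqual(1250.00m, InvoiceParser.ParseTotal("1,250.00"));
    }
}

Each bug fixed this way leaves one more regression test behind, which is how the suite grows without the dedicated test-writing week the third answer warns against.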
Q: c# properties with repeated code I have a class with a bunch of properties that look like this: public string Name { get { return _name; } set { IsDirty = true; _name = value; } } It would be a lot easier if I could rely on C# 3.0 to generate the backing store for these, but is there any way to factor out the IsDirty=true; so that I can write my properties something like this and still get the same behaviour: [MakesDirty] public string Name { get; set; } A: No. Not without writing considerably more (arcane?) code than the original version (You'd have to use reflection to check for the attribute on the property and what not.. did I mention it being 'slower').. This is the kind of duplication I can live with. MS has the same need for raising events when a property is changed. INotifyPropertyChanged that is a vital interface for change notifications. Every implementation I've seen yet does set { _name = value; NotifyPropertyChanged("Name"); } If it was possible, I'd figure those smart guys at MS would already have something like that in place.. A: You could try setting up a code snippet to make it easy to create those. A: If you really want to go that way, to modify what the code does using an attribute, there are some ways to do it and they all are related to AOP (Aspect oriented programming). Check out PostSharp, which is an aftercompiler that can modify your code in a after compilation step. For example you could set up one custom attribute for your properties (or aspect, how it is called in AOP) that injects code inside property setters, that marks your objects as dirty. If you want some examples of how this is achieved you can check out their tutorials. But be careful with AOP and because you can just as easily create more problems using it that you're trying to solve if not used right. There are more AOP frameworks out there some using post compilation and some using method interception mechanisms that are present in .Net, the later have some performance drawbacks compared to the first. A: No, when you use automatic properties you don't have any control over the implementation. The best option is to use a templating tool, code snippets or create a private SetValue<T>(ref T backingField, T value) which encapsulates the setter logic. private void SetValue<T>(ref T backingField, T value) { if (backingField != value) { backingField = value; IsDirty = true; } } public string Name { get { return _name; } set { SetValue(ref _name, value); } } A: The other alternative might be a code generator such as codesmith to automate creating the properties. This would be especially useful if the properties you are creating are columns in a database table A: ContextBound object. If you create a class that extends context bound object and you create a ContextAttribute you can intercept the calls made to such a property and set the IsDirty. .NET will create a proxy to your class so all calls go over something like a remoting sink. The problem with such an approach though is that your proxy will only be invoked when called externally. I'll give you an example. class A { [Foo] public int Property1{get; set;} public int Property2{get {return variable;} set{ Property1 = value; variable = value; } } When property1 is called from another class, your proxy would be invoked. But if another class calls property2, even though the set of property2 will call into property1 no proxy will be invoked, (a proxy isn't necessary when you're in the class itself). 
There is a lot of sample code out there of using ContextBoundObjects, look into it. A: I can recommend using Enterprise Library for that purpose. Policy Application Block delivers the infrastructure to do "something" (something = you can code that on your own) whenever you enter/exit a method, for example. You can control the behavior with attributes. Take that as a hint and go into detail with the documentation of Enterprise Library. A: There's a DefaultValueAttribute that can be assigned to a property; this is mainly used by the designer tools so they can indicate when a property has been changed, but it might be a "tidy" way of describing what the default value for a property is, and thus being able to identify if it's changed. You'd need to use Reflection to identify property changes - which isn't actually that expensive unless you're doing lots of it! Caveat: You wouldn't be able to tell if a property had been changed BACK from a non-default value to the default one. A: I'd say that the best way of solving this is to use Aspect-Oriented Programming (AOP). Mats Helander did a write-up on this on InfoQ. The article is a bit messy, but it's possible to follow. There are a number of different products that do AOP in the .NET space; I recommend PostSharp. A: If you do go with Attributes, I'm fairly certain you'll have to roll your own logic to deduce what they mean and what to do about them. Whatever is using your custom class objects will have to have a way of performing these attribute actions/checks, preferably at instantiation. Otherwise, you're looking at using maybe events. You'd still have to add the event to every set method, but the benefit there would be you're not hard-coding what to do about dirty sets on every property and can control, in one place, what is to be done. That would, at the very least, introduce a bit more code re-use.
c# properties with repeated code
I have a class with a bunch of properties that look like this: public string Name { get { return _name; } set { IsDirty = true; _name = value; } } It would be a lot easier if I could rely on C# 3.0 to generate the backing store for these, but is there any way to factor out the IsDirty=true; so that I can write my properties something like this and still get the same behaviour: [MakesDirty] public string Name { get; set; }
[ "No. Not without writing considerably more (arcane?) code than the original version (You'd have to use reflection to check for the attribute on the property and what not.. did I mention it being 'slower').. This is the kind of duplication I can live with.\nMS has the same need for raising events when a property is changed. INotifyPropertyChanged that is a vital interface for change notifications. Every implementation I've seen yet\ndoes\nset\n{ \n _name = value; \n NotifyPropertyChanged(\"Name\"); \n}\n\nIf it was possible, I'd figure those smart guys at MS would already have something like that in place.. \n", "You could try setting up a code snippet to make it easy to create those.\n", "If you really want to go that way, to modify what the code does using an attribute, there are some ways to do it and they all are related to AOP (Aspect oriented programming). Check out PostSharp, which is an aftercompiler that can modify your code in a after compilation step. For example you could set up one custom attribute for your properties (or aspect, how it is called in AOP) that injects code inside property setters, that marks your objects as dirty. If you want some examples of how this is achieved you can check out their tutorials. \nBut be careful with AOP and because you can just as easily create more problems using it that you're trying to solve if not used right.\nThere are more AOP frameworks out there some using post compilation and some using method interception mechanisms that are present in .Net, the later have some performance drawbacks compared to the first.\n", "No, when you use automatic properties you don't have any control over the implementation. The best option is to use a templating tool, code snippets or create a private SetValue<T>(ref T backingField, T value) which encapsulates the setter logic.\nprivate void SetValue<T>(ref T backingField, T value)\n{\n if (backingField != value)\n {\n backingField = value;\n IsDirty = true;\n }\n}\n\npublic string Name\n{\n get\n {\n return _name;\n }\n set\n {\n SetValue(ref _name, value);\n }\n}\n\n", "The other alternative might be a code generator such as codesmith to automate creating the properties. This would be especially useful if the properties you are creating are columns in a database table\n", "ContextBound object. If you create a class that extends context bound object and you create a ContextAttribute you can intercept the calls made to such a property and set the IsDirty. .NET will create a proxy to your class so all calls go over something like a remoting sink.\nThe problem with such an approach though is that your proxy will only be invoked when called externally. I'll give you an example.\nclass A\n{\n [Foo]\n public int Property1{get; set;}\n public int Property2{get {return variable;} set{ Property1 = value; variable = value; }\n}\n\nWhen property1 is called from another class, your proxy would be invoked. But if another class calls property2, even though the set of property2 will call into property1 no proxy will be invoked, (a proxy isn't necessary when you're in the class itself).\nThere is a lot of sample code out there of using ContextBoundObjects, look into it.\n", "I can recommend to use Enterprise Library for that purpose. Policy Application Block delivers the infrastructure to do \"something\" (something = you can code that on your own) whenever you enter/exit a method for example. You can control the behavior with attributes. 
Take that as a hint and go into detail with the documentation of Enterprise Library.\n", "There's a DefaultValueAttribute that can be assigned to a property; this is mainly used by the designer tools so they can indicate when a property has been changed, but it might be a \"tidy\" way of describing what the default value for a property is, and thus being able to identify if it's changed.\nYou'd need to use Reflection to identify property changes - which isn't actually that expensive unless you're doing lots of it!\nCaveat: You wouldn't be able to tell if a property had been changed BACK from a non-default value to the default one.\n", "I'd say that the best way of solving this is to use Aspect-Oriented Programming (AOP). Mats Helander did a write-up on this on InfoQ. The article is a bit messy, but it's possible to follow. \nThere are a number of different products that do AOP in the .NET space; I recommend PostSharp.\n", "If you do go with Attributes, I'm fairly certain you'll have to roll your own logic to deduce what they mean and what to do about them. Whatever is using your custom class objects will have to have a way of performing these attribute actions/checks, preferably at instantiation.\nOtherwise, you're looking at using maybe events. You'd still have to add the event to every set method, but the benefit there would be you're not hard-coding what to do about dirty sets on every property and can control, in one place, what is to be done. That would, at the very least, introduce a bit more code re-use.\n" ]
[ 5, 3, 3, 2, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "attributes", "c#", "properties" ]
stackoverflow_0000063556_attributes_c#_properties.txt
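The event-based route from the last answer above can be combined with the INotifyPropertyChanged pattern the first answer mentions, so the IsDirty policy lives in a single handler instead of being repeated in every setter. A hedged sketch in C# 2.0 (no lambdas, per the question); the Note class is invented for illustration.

using System.ComponentModel;

public class Note : INotifyPropertyChanged
{
    private string _name;
    private bool _isDirty;

    public event PropertyChangedEventHandler PropertyChanged;

    public Note()
    {
        // One place decides what any property change means.
        PropertyChanged += delegate { _isDirty = true; };
    }

    public bool IsDirty
    {
        get { return _isDirty; }
    }

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

The setters still carry one extra line each, but the dirty-tracking rule is no longer duplicated, which is the trade-off the answers describe short of bringing in an AOP tool like PostSharp.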
Q: Does generated code need to be human readable? I'm working on a tool that will generate the source code for an interface and a couple classes implementing that interface. My output isn't particularly complicated, so it's not going to be hard to make the output conform to our normal code formatting standards. But this got me thinking: how human-readable does auto-generated code need to be? When should extra effort be expended to make sure the generated code is easily read and understood by a human? In my case, the classes I'm generating are essentially just containers for some data related to another part of the build with methods to get the data. No one should ever need to look at the code for the classes themselves, they just need to call the various getters the classes provide. So, it's probably not too important if the code is "clean", well formatted and easily read by a human. However, what happens if you're generating code that has more than a small amount of simple logic in it? A: I think it's just as important for generated code to be readable and follow normal coding styles. At some point, someone is either going to need to debug the code or otherwise see what is happening "behind the scenes". A: Yes, absolutely! I can even throw in a story for you to explain why it is important that a human can easily read the auto generated code... I once got the opportunity to work on a new project. Now, one of the first things you need to do when you start writing code is to create some sort of connection and data representation to and from the database. But instead of just writing this code by hand, we had someone who had developed his own code generator to automatically build base classes from a database schema. It was really neat, the tedious job of writing all this code was now out of our hands... The only problem was, the generated code was far from readable for a normal human. Of course we didn't care about that, because hey, it just saved us a lot of work. But after a while things started to go wrong, data was incorrectly read from the user input (or so we thought), corruptions occurred inside the database while we were only reading. Strange.. because reading doesn't change any data (again, so we thought)... Like any good developer we started to question our own code, but after days of searching.. even rewriting code, we could not find anything... and then it dawned on us, the auto generated code was broken! So now an even bigger task awaited us, checking auto generated code that no sane person could understand in a reasonable amount of time... I'm talking about non-indented, really bad style code with unpronounceable variable and function names... It turned out that it would even be faster to rewrite the code ourselves, instead of trying to figure out how the code actually worked. Eventually the developer who wrote the code generator remade it later on, so it now produces readable code, in case something went wrong like before. Here is a link I just found about the topic at hand; I was actually looking for a link to one of the chapters from the "pragmatic programmer" book to point out why we looked in our code first. A: I think that depends on how the generated code will be used. If the code is not meant to be read by humans, i.e. it's regenerated whenever something changes, I don't think it has to be readable. However, if you are using code generation as an intermediate step in "normal" programming, the generated code should have the same readability as the rest of your source code. 
In fact, making the generated code "unreadable" can be an advantage, because it will discourage people from "hacking" generated code, and rather implement their changes in the code-generator instead—which is very useful whenever you need to regenerate the code for whatever reason and not lose the changes your colleague did because he thought the generated code was "finished". A: Yes it does. Firstly, you might need to debug it -- you will be making it easy on yourself. Secondly it should adhere to any coding conventions you use in your shop because someday the code might need to be changed by hand and thus become human code. This scenario typically ensues when your code generation tool does not cover one specific thing you need and it is not deemed worthwhile modifying the tool just for that purpose. A: Look up active code generation vs. passive code generation. With respect to passive code generation, absolutely yes, always. With regards to active code generation, when the code achieves the goal of being transparent, which is acting exactly like a documented API, then no. A: I would say that it is imperative that the code is human readable; unless your code-gen tool has an excellent debugger, you (or an unfortunate co-worker) will probably be the one waist deep in the code trying to track that oh so elusive bug in the system. My own excursion into 'code from UML' left a bitter taste in my mouth as I could not get to grips with the supposedly 'fancy' debugging process. A: You will kill yourself if you have to debug your own generated code. Don't start thinking you won't. Keep in mind that when you trust your code to generate code then you've already introduced two errors into the system - You've inserted yourself twice. There is absolutely NO reason NOT to make it human parseable, so why in the world would you want to do so? -Adam A: The whole point of generated code is to do something "complex" that is more easily defined in some higher-level language. Due to it being generated, the actual maintenance of this generated code should be within the subroutine that generates the code, not the generated code. Therefore, human readability should have a lower priority; things like runtime speed or functionality are far more important. This is particularly the case when you look at tools like bison and flex, which use the generated code to pre-generate speedy lookup tables to do pattern matching, which would simply be insane to manually maintain. A: One more aspect of the problem which was not mentioned is that the generated code should also be "version control-friendly" (as far as it is feasible). I found it useful many times to double-check diffs in generated code vs the source code. That way you could even occasionally find bugs in tools which generate code. A: It's quite possible that somebody in the future will want to go through and see what your code does. So making it somewhat understandable is a good thing. You also might want to include at the top of each generated file a comment saying how and why this file was generated and what its purpose is. A: Generally, if you're generating code that needs to be human-modified later, it needs to be as human-readable as possible. However, even if it's code that will be generated and never touched again, it still needs to be readable enough that you (as the developer writing the code generator) can debug the generator - if your generator spits out bad code, it may be hard to track down if it's difficult to understand. 
A: I would think it's worth it to take the extra time to make it human readable just to make it easier to debug. A: Generated code should be readable (format etc. can usually be handled by a half-decent IDE). At some stage in the code's lifetime it is going to be viewed by someone and they will want to make sense of it. A: I think for data containers or objects with very straightforward workings, human readability is not very important. However, as soon as a developer may have to read the code to understand how something happens, it needs to be readable. What if the logic has a bug? How will anybody ever discover it if no one is able to read and understand the code? I would go so far as generating comments for the more complicated logic sections, to express the intent, so it's easier to determine if there really is a bug. A: Logic should always be readable. If someone else is going to read the code, try to put yourself in their place and see if you would fully understand the code in high (and low?) level without reading that particular piece of code. I wouldn't spend too much time with code that never would be read, but if it's not too much time I would go through the generated code. If not, at least make a comment to cover the loss of readability. A: If this code is likely to be debugged, then you should seriously consider generating it in a human readable format. A: There are different types of generated code, but the most simple types would be: Generated code that is not meant to be seen by the developer. e.g., xml-ish code that defines layouts (think .frm files, or the horrible files generated by SSIS) Generated code that is meant to be a basis for a class that will be later customized by your developer, e.g., code is generated to reduce typing tedium If you're making the latter, you definitely want your code to be human readable. Classes and interfaces, no matter how "off limits" to developers you think they should be, would almost certainly fall under generated code type number 2. They will be hit by the debugger at one point or another -- applying code formatting is the least you can do to ease that debugging process when the debugger hits those generated classes A: Like virtually everybody else here, I say make it readable. It costs nothing extra in your generation process and you (or your successor) will appreciate it when they go digging. For a real world example - look at anything Visual Studio generates. Well formatted, with comments and everything. A: Generated code is code, and there's no reason any code shouldn't be readable and nicely formatted. This is cheap especially in generated code: you don't need to apply formatting yourself, the generator does it for you every time! :) As a secondary option in case you're really that lazy, how about piping the code through a beautifier utility of your choice before writing it to disk to ensure at least some level of consistency. Nevertheless, almost all good programmers I know format their code rather pedantically and there's a good reason for it: there's no write-only code. A: Absolutely yes, for tons of good reasons already said above. And one more is that if your code needs to be checked by an assessor (for safety and dependability issues), it is far better if the code is human readable. If not, the assessor will refuse to assess it and your project will be rejected by authorities. The only solution is then to assess... 
the code generator (that's usually much more difficult ;)) A: It depends on whether the code will only be read by a compiler or also by a human. In addition, it matters whether the code is supposed to be super-fast or whether readability is important. When in doubt, put in the extra effort to generate readable code. A: I think the answer is: it depends. *It depends upon whether you need to configure and store the generated code as an artefact. For example, people very rarely keep or configure the object code output from a c-compiler, because they know they can reproduce it from the source every time. I think there may be a similar analogy here. *It depends upon whether you need to certify the code to some standard, e.g. Misra-C or DO178. *It depends upon whether the source will be generated via your tool every time the code is compiled, or if it will be stored for inclusion in a build at a later time. Personally, if all you want to do is build the code, compile it into an executable and then throw the intermediate code away, then I can't see any point in making it too pretty.
Does generated code need to be human readable?
I'm working on a tool that will generate the source code for an interface and a couple classes implementing that interface. My output isn't particularly complicated, so it's not going to be hard to make the output conform to our normal code formatting standards. But this got me thinking: how human-readable does auto-generated code need to be? When should extra effort be expended to make sure the generated code is easily read and understood by a human? In my case, the classes I'm generating are essentially just containers for some data related to another part of the build with methods to get the data. No one should ever need to look at the code for the classes themselves, they just need to call the various getters the classes provide. So, it's probably not too important if the code is "clean", well formatted and easily read by a human. However, what happens if you're generating code that has more than a small amount of simple logic in it?
[ "I think it's just as important for generated code to be readable and follow normal coding styles. At some point, someone is either going to need to debug the code or otherwise see what is happening \"behind the scenes\".\n", "Yes!, absolutely!; I can even throw in a story for you to explain why it is important that a human can easily read the auto generated code...\nI once got the opportunity to work on a new project. Now, one of the first things you need to do when you start writing code is to create some sort of connection and data representation to and from the database. But instead of just writing this code by hand, we had someone who had developed his own code generator to automatically build base classes from a database schema. It was really neat, the tedious job of writing all this code was now out of our hands... The only problem was, the generated code was far from readable for a normal human.\nOf course we didn't about that, because hey, it just saved us a lot of work.\nBut after a while things started to go wrong, data was incorrectly read from the user input (or so we thought), corruptions occurred inside the database while we where only reading. Strange.. because reading doesn't change any data (again, so we thought)...\nLike any good developer we started to question our own code, but after days of searching.. even rewriting code, we could not find anything... and then it dawned on us, the auto generated code was broken!\nSo now an even bigger task awaited us, checking auto generated code that no sane person could understand in a reasonable amount of time... I'm talking about non indented, really bad style code with unpronounceable variable and function names... It turned out that it would even be faster to rewrite the code ourselves, instead of trying to figure out how the code actually worked.\nEventually the developer who wrote the code generator remade it later on, so it now produces readable code, in case something went wrong like before.\nHere is a link I just found about the topic at hand; I was acctually looking for a link to one of the chapters from the \"pragmatic programmer\" book to point out why we looked in our code first.\n", "I think that depends on how the generated code will be used. If the code is not meant to be read by humans, i.e. it's regenerated whenever something changes, I don't think it has to be readable. However, if you are using code generation as an intermediate step in \"normal\" programming, the generated could should have the same readability as the rest of your source code. \nIn fact, making the generated code \"unreadable\" can be an advantage, because it will discourage people from \"hacking\" generated code, and rather implement their changes in the code-generator instead—which is very useful whenever you need to regenerate the code for whatever reason and not lose the changes your colleague did because he thought the generated code was \"finished\".\n", "Yes it does.\nFirstly, you might need to debug it -- you will be making it easy on yourself. \nSecondly it should adhere to any coding conventions you use in your shop because someday the code might need to be changed by hand and thus become human code. This scenario typically ensues when your code generation tool does not cover one specific thing you need and it is not deemed worthwhile modifying the tool just for that purpose.\n", "Look up active code generation vs. passive code generation. With respect to passive code generation, absolutely yes, always. 
With regards to active code generation, when the code achieves the goal of being transparent, which is acting exactly like a documented API, then no.\n", "I would say that it is imperative that the code is human readable, unless your code-gen tool has an excellent debugger you (or unfortunate co-worker) will probably by the one waist deep in the code trying to track that oh so elusive bug in the system. My own excursion into 'code from UML' left a bitter tast in my mouth as I could not get to grips with the supposedly 'fancy' debugging process. \n", "You will kill yourself if you have to debug your own generated code. Don't start thinking you won't. Keep in mind that when you trust your code to generate code then you've already introduced two errors into the system - You've inserted yourself twice.\nThere is absolutely NO reason NOT to make it human parseable, so why in the world would you want to do so?\n-Adam\n", "The whole point of generated code is to do something \"complex\" that is easier defined in some higher level language. Due to it being generated, the actual maintenance of this generated code should be within the subroutine that generates the code, not the generated code.\nTherefor, human readability should have a lower priority; things like runtime speed or functionality are far more important. This is particularly the case when you look at tools like bison and flex, which use the generated code to pre-generate speedy lookup tables to do pattern matching, which would simply be insane to manually maintain.\n", "One more aspect of the problem which was not mentioned is that the generated code should also be \"version control-friendly\" (as far as it is feasible).\nI found it useful many times to double-check diffs in generated code vs the source code. \nThat way you could even occasionally find bugs in tools which generate code.\n", "It's quite possible that somebody in the future will want to go through and see what your code does. So making it somewhat understandable is a good thing.\nYou also might want to include at the top of each generated file a comment saying how and why this file was generated and what it's purpose is.\n", "Generally, if you're generating code that needs to be human-modified later, it needs to be as human-readable as possible. However, even if it's code that will be generated and never touched again, it still needs to be readable enough that you (as the developer writing the code generator) can debug the generator - if your generator spits out bad code, it may be hard to track down if it's difficult to understand.\n", "I would think it's worth it to take the extra time to make it human readable just to make it easier to debug. \n", "Generated code should be readable, (format etc can usually be handled by a half decent IDE). At some stage in the codes lifetime it is going to be viewed by someone and they will want to make sense of it.\n", "I think for data containers or objects with very straightforward workings, human readability is not very important. \nHowever, as soon as a developer may have to read the code to understand how something happens, it needs to be readable. What if the logic has a bug? How will anybody ever discover it if no one is able to read and understand the code? I would go so far as generating comments for the more complicated logic sections, to express the intent, so it's easier to determine if there really is a bug.\n", "Logic should always be readable. 
If someone else is going to read the code, try to put yourself in their place and see if you would fully understand the code in high (and low?) level without reading that particular piece of code. \nI wouldn't spend too much time with code that never would be read, but if it's not too much time i would go through the generated code. If not, at least make comment to cover the loss of readability.\n", "If this code is likely to be debugged, then you should seriously consider to generate it in a human readable format. \n", "There are different types of generated code, but the most simple types would be:\n\nGenerated code that is not meant to be seen by the developer. e.g., xml-ish code that defines layouts (think .frm files, or the horrible files generated by SSIS)\nGenerated code that is meant to be a basis for a class that will be later customized by your developer, e.g., code is generated to reduce typing tedium\n\nIf you're making the latter, you definitely want your code to be human readable.\nClasses and interfaces, no matter how \"off limits\" to developers you think they should be, would almost certainly fall under generated code type number 2. They will be hit by the debugger at one point of another -- applying code formatting is the least you can do the ease that debugging process when the compiler hits those generated classes\n", "Like virtually everybody else here, I say make it readable. It costs nothing extra in your generation process and you (or your successor) will appreciate it when they go digging.\nFor a real world example - look at anything Visual Studio generates. Well formatted, with comments and everything.\n", "Generated code is code, and there's no reason any code shouldn't be readable and nicely formatted. This is cheap especially in generated code: you don't need to apply formatting yourself, the generator does it for you everytime! :)\nAs a secondary option in case you're really that lazy, how about piping the code through a beautifier utility of your choice before writing it to disk to ensure at least some level of consistency. Nevertheless, almost all good programmers I know format their code rather pedantically and there's a good reason for it: there's no write-only code.\n", "Absolutely yes for tons of good reasons already said above. And one more is that if your code need to be checked by an assesor (for safety and dependability issues), it is pretty better if the code is human redeable. If not, the assessor will refuse to assess it and your project will be refected by authorities. The only solution is then to assess... the code generator (that's usually much more difficult ;))\n", "It depends on whether the code will only be read by a compiler or also by a human. In addition, it matters whether the code is supposed to be super-fast or whether readability is important. When in doubt, put in the extra effort to generate readable code.\n", "I think the answer is: it depends.\n*It depends upon whether you need to configure and store the generated code as an artefact. For example, people very rarely keep or configure the object code output from a c-compiler, because they know they can reproduce it from the source every time. I think there may be a similar analogy here.\n*It depends upon whether you need to certify the code to some standard, e.g. 
Misra-C or DO178.\n*It depends upon whether the source will be generated via your tool every time the code is compiled, or if it will be stored for inclusion in a build at a later time.\nPersonally, if all you want to do is build the code, compile it into an executable and then throw the intermediate code away, then I can't see any point in making it too pretty.\n" ]
[ 17, 7, 3, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "code_generation", "language_agnostic", "readability" ]
stackoverflow_0000063257_code_generation_language_agnostic_readability.txt
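One answer above suggests putting a header comment at the top of each generated file saying how and why it was generated. With .NET's CodeDOM that is only a few lines; the sketch below is a hedged illustration, and the generator name DataContainerGen is invented.

using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class Generator
{
    static void Main()
    {
        CodeCompileUnit unit = new CodeCompileUnit();
        CodeNamespace ns = new CodeNamespace("Generated");
        unit.Namespaces.Add(ns);

        // The suggested header: how, why, and a warning not to hand-edit.
        ns.Comments.Add(new CodeCommentStatement(
            "Generated by DataContainerGen from build metadata."));
        ns.Comments.Add(new CodeCommentStatement(
            "Do not edit by hand; change the generator instead."));

        // A stub data-container type, standing in for the real output.
        ns.Types.Add(new CodeTypeDeclaration("NoteData"));

        using (StringWriter writer = new StringWriter())
        {
            new CSharpCodeProvider().GenerateCodeFromCompileUnit(
                unit, writer, new CodeGeneratorOptions());
            Console.WriteLine(writer.ToString());
        }
    }
}

CodeDOM also indents and formats the emitted source for free, which covers most of the readability concerns raised in the answers without extra effort in the generator.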
Q: Split out ints from string Let's say I have a web page that currently accepts a single ID value via a url parameter: http://example.com/mypage.aspx?ID=1234 I want to change it to accept a list of ids, like this: http://example.com/mypage.aspx?IDs=1234,4321,6789 So it's available to my code as a string via context.Request.QueryString["IDs"]. What's the best way to turn that string value into a List<int>? Edit: I know how to do .split() on a comma to get a list of strings, but I ask because I don't know how to easily convert that string list to an int list. This is still in .Net 2.0, so no lambdas. A: No offense to those who provided clear answers, but many people seem to be answering your question instead of addressing your problem. You want multiple IDs, so you think you could do this: http://example.com/mypage.aspx?IDs=1234,4321,6789 The problem is that this is a non-robust solution. In the future, if you want multiple values, what do you do if they have commas? A better solution (and this is perfectly valid in a query string) is to use multiple parameters with the same name: http://example.com/mypage.aspx?ID=1234;ID=4321;ID=6789 Then, whatever query string parser you use should be able to return a list of IDs. If it can't handle this (and also handle semi-colons instead of ampersands), then it's broken. A: Something like this might work: public static IList<int> GetIdListFromString(string idList) { string[] values = idList.Split(','); List<int> ids = new List<int>(values.Length); foreach (string s in values) { int i; if (int.TryParse(s, out i)) { ids.Add(i); } } return ids; } Which would then be used: string intString = "1234,4321,6789"; IList<int> list = GetIdListFromString(intString); foreach (int i in list) { Console.WriteLine(i); } A: You can instantiate a List<T> from an array. VB.NET: Dim lstIDs As New List(Of Integer)(ids.Split(","c)) This is prone to casting errors though if the array contains non-int elements A: All I can think of is to loop over the list of strings (which you have got from performing a split) and doing something like int.TryParse() on them one after the other and putting them into a new List<int>. Encapsulate it in a nice little helper method somewhere and it won't be too horrid. A: If you like the functional style, you can try something like string ids = "1,2,3,4,5"; List<int> l = new List<int>(Array.ConvertAll( ids.Split(','), new Converter<string, int>(int.Parse))); No lambdas, but you do have Converters and Predicates and other nice things that can be made from methods. A: I see my answer came rather late, i.e. several others had written the same. Therefore I present an alternative method using regular expressions to validate and divide the string. class Program { //Accepts one or more groups of one or more digits, separated by commas. private static readonly Regex CSStringPattern = new Regex(@"^(\d+,?)*\d+$"); //A single ID inside the string. 
Must only be used after validation private static readonly Regex SingleIdPattern = new Regex(@"\d+"); static void Main(string[] args) { string queryString = "1234,4321,6789"; int[] ids = ConvertCommaSeparatedStringToIntArray(queryString); } private static int[] ConvertCommaSeparatedStringToIntArray(string csString) { if (!CSStringPattern.IsMatch(csString)) throw new FormatException(string.Format("Invalid comma separated string '{0}'", csString)); List<int> ids = new List<int>(); foreach (Match match in SingleIdPattern.Matches(csString)) { ids.Add(int.Parse(match.Value)); //No need to TryParse since string has been validated } return ids.ToArray(); } } A: split is the first thing that comes to mind, but that returns an array, not a List; you could try something like: List<int> intList = new List<int>(); foreach (string tempString in ids.Split(',')) { intList.Add(Convert.ToInt32(tempString)); } A: Final code snippet that takes what I hope is the best from all the suggestions: Function GetIDs(ByVal IDList As String) As List(Of Integer) Dim SplitIDs() As String = IDList.Split(New Char() {","c}, StringSplitOptions.RemoveEmptyEntries) GetIDs = New List(Of Integer)(SplitIDs.Length) Dim CurID As Integer For Each id As String In SplitIDs If Integer.TryParse(id, CurID) Then GetIDs.Add(CurID) Next id End Function I was hoping to be able to do it in one or two lines of code inline. One line to create the string array and hopefully find something in the framework I didn't already know to handle importing it to a List<int> that could handle the cast intelligently. But if I must move it to a method then I will. And yes, I'm using VB. I just prefer C# for asking questions because they'll get a larger audience and I'm just about as fluent. A: You can use string.Split() to split the values once you have extracted them from the URL. string[] splitIds = ids.Split(','); A: You'll just have to foreach through them and int.TryParse each one of them. After that, just add to the list. Nevermind - @Splash beat me to it A: List<int> convertIDs = new List<int>(); string[] splitIds = ids.Split(','); foreach(string s in splitIds) { convertIDs.Add(int.Parse(s)); } For completeness you will want to put try/catches around the for loop (or around the int.Parse() call) and handle the error based on your requirements. You can also do a TryParse() like so: List<int> convertIDs = new List<int>(); string[] splitIds = ids.Split(','); foreach(string s in splitIds) { int i; int.TryParse(s, out i); if (i != 0) convertIDs.Add(i); } A: To continue on the previous answer, quite simply iterating through the array returned by Split and converting to a new array of ints. This sample below in C#: string[] splitIds = stringIds.Split(','); int[] ids = new int[splitIds.Length]; for (int i = 0; i < ids.Length; i++) { ids[i] = Int32.Parse(splitIds[i]); } A: I think the easiest way is to split as shown before, and then loop through the values and try to convert to int. 
class Program { static void Main(string[] args) { string queryString = "1234,4321,6789"; int[] ids = ConvertCommaSeparatedStringToIntArray(queryString); } private static int[] ConvertCommaSeparatedStringToIntArray(string csString) { //splitting string to substrings string[] idStrings = csString.Split(','); //initializing int-array of same length int[] ids = new int[idStrings.Length]; //looping all substrings for (int i = 0; i < idStrings.Length; i++) { string idString = idStrings[i]; //trying to convert one substring to int int id; if (!int.TryParse(idString, out id)) throw new FormatException(String.Format("Query string contained malformed id '{0}'", idString)); //writing value back to the int-array ids[i] = id; } return ids; } }
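A hedged aside on the first answer's multiple-parameter suggestion, which it gives without code: a minimal sketch for a standard ASP.NET page, assuming the conventional & separator (Request.QueryString.GetValues returns each repeated parameter separately; everything here is .NET 2.0-safe):

// Sketch only: handles /mypage.aspx?ID=1234&ID=4321&ID=6789
string[] rawIds = Request.QueryString.GetValues("ID");
List<int> ids = new List<int>();
if (rawIds != null)
{
    foreach (string raw in rawIds)
    {
        int id;
        if (int.TryParse(raw, out id))  // silently skip malformed values
        {
            ids.Add(id);
        }
    }
}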
Split out ints from string
Let's say I have a web page that currently accepts a single ID value via a url parameter: http://example.com/mypage.aspx?ID=1234 I want to change it to accept a list of ids, like this: http://example.com/mypage.aspx?IDs=1234,4321,6789 So it's available to my code as a string via context.Request.QueryString["IDs"]. What's the best way to turn that string value into a List<int>? Edit: I know how to do .split() on a comma to get a list of strings, but I ask because I don't know how to easily convert that string list to an int list. This is still in .Net 2.0, so no lambdas.
[ "No offense to those who provided clear answers, but many people seem to be answering your question instead of addressing your problem. You want multiple IDs, so you think you could this this:\nhttp://example.com/mypage.aspx?IDs=1234,4321,6789\nThe problem is that this is a non-robust solution. In the future, if you want multiple values, what do you do if they have commas? A better solution (and this is perfectly valid in a query string), is to use multiple parameters with the same name:\nhttp://example.com/mypage.aspx?ID=1234;ID=4321;ID=6789\nThen, whatever query string parser you use should be able to return a list of IDs. If it can't handle this (and also handle semi-colons instead of ampersands), then it's broken.\n", "Something like this might work:\npublic static IList<int> GetIdListFromString(string idList)\n{\n string[] values = idList.Split(',');\n\n List<int> ids = new List<int>(values.Length);\n\n foreach (string s in values)\n {\n int i;\n\n if (int.TryParse(s, out i))\n {\n ids.Add(i);\n }\n }\n\n return ids;\n}\n\nWhich would then be used:\nstring intString = \"1234,4321,6789\";\n\nIList<int> list = GetIdListFromString(intString);\n\nforeach (int i in list)\n{\n Console.WriteLine(i);\n}\n\n", "You can instantiate a List<T> from an array.\nVB.NET:\nDim lstIDs as new List(of Integer)(ids.split(','))\n\nThis is prone to casting errors though if the array contains non-int elements\n", "All I can think of is to loop over the list of strings (which you have got from performing a split) and doing something like int.TryParse() on them one after the other and putting them into a new List<int>. Encapsulate it in a nice little helper method somewhere and it won't be too horrid.\n", "If you like the functional style, you can try something like\n string ids = \"1,2,3,4,5\";\n\n List<int> l = new List<int>(Array.ConvertAll(\n ids.Split(','), new Converter<string, int>(int.Parse)));\n\nNo lambdas, but you do have Converters and Predicates and other nice things that can be made from methods.\n", "I see my answer came rather late, i.e. several other had written the same. Therefore I present an alternative method using regular expressions to validate and divide the string.\nclass Program\n{\n //Accepts one or more groups of one or more digits, separated by commas.\n private static readonly Regex CSStringPattern = new Regex(@\"^(\\d+,?)*\\d+$\");\n\n //A single ID inside the string. 
Must only be used after validation\n private static readonly Regex SingleIdPattern = new Regex(@\"\\d+\");\n\n static void Main(string[] args)\n {\n string queryString = \"1234,4321,6789\";\n\n int[] ids = ConvertCommaSeparatedStringToIntArray(queryString);\n }\n\n private static int[] ConvertCommaSeparatedStringToIntArray(string csString)\n {\n if (!CSStringPattern.IsMatch(csString))\n throw new FormatException(string.Format(\"Invalid comma separated string '{0}'\",\n csString));\n\n List<int> ids = new List<int>();\n foreach (Match match in SingleIdPattern.Matches(csString))\n {\n ids.Add(int.Parse(match.Value)); //No need to TryParse since string has been validated\n }\n return ids.ToArray();\n }\n}\n\n", "split is the first thing that comes to mind, but that returns an array, not a List;\nyou could try something like:\n\nList<int> intList = new List<int>;\n\nforeach (string tempString in ids.split(',')\n{\n intList.add (convert.int32(tempString));\n}\n\n\n", "Final code snippet that takes what I hope is the best from all the suggestions:\nFunction GetIDs(ByVal IDList As String) As List(Of Integer)\n Dim SplitIDs() As String = IDList.Split(new Char() {\",\"c}, StringSplitOptions.RemoveEmptyEntries)\n GetIDs = new List(Of Integer)(SplitIDs.Length)\n Dim CurID As Integer\n For Each id As String In SplitIDs\n If Integer.TryParse(id, CurID) Then GetIDs.Add(CurID)\n Next id\nEnd Function\n\nI was hoping to be able to do it in one or two lines of code inline. One line to create the string array and hopefully find something in the framework I didn't already know to handle importing it to a List<int> that could handle the cast intelligently. But if I must move it to a method then I will. And yes, I'm using VB. I just prefer C# for asking questions because they'll get a larger audience and I'm just about as fluent.\n", "You can use string.Split() to split the values once you have extracted them from the URL.\nstring[] splitIds = ids.split(',');\n\n", "You'll just have to foreach through them and int.TryParse each one of them. after that just add to the list.\nNevermind - @Splash beat me to it\n", "List<int> convertIDs = new List<int>;\nstring[] splitIds = ids.split(',');\nforeach(string s in splitIds)\n{\n convertIDs.Add(int.Parse(s));\n}\n\nFor completeness you will want to put try/catches around the for loop (or around the int.Parse() call) and handle the error based on your requirements. You can also do a tryparse() like so:\nList<int> convertIDs = new List<int>;\nstring[] splitIds = ids.split(',');\nforeach(string s in splitIds)\n{\n int i;\n int.TryParse(out i);\n if (i != 0)\n convertIDs.Add(i);\n}\n\n", "To continue on previous answer, quite simply iterating through the array returned by Split and converting to a new array of ints. 
This sample below in C#:\n string[] splitIds = stringIds.Split(',');\n\n int[] ids = new int[splitIds.Length];\n for (int i = 0; i < ids.Length; i++) {\n ids[i] = Int32.Parse(splitIds[i]);\n }\n\n", "I think the easiest way is to split as shown before, and then loop through the values and try to convert to int.\nclass Program\n{\n static void Main(string[] args)\n {\n string queryString = \"1234,4321,6789\";\n\n int[] ids = ConvertCommaSeparatedStringToIntArray(queryString);\n }\n\n private static int[] ConvertCommaSeparatedStringToIntArray(string csString)\n {\n //splitting string to substrings\n string[] idStrings = csString.Split(',');\n\n //initializing int-array of same length\n int[] ids = new int[idStrings.Length];\n\n //looping all substrings\n for (int i = 0; i < idStrings.Length; i++)\n {\n string idString = idStrings[i];\n\n //trying to convert one substring to int\n int id;\n if (!int.TryParse(idString, out id))\n throw new FormatException(String.Format(\"Query string contained malformed id '{0}'\", idString));\n\n //writing value back to the int-array\n ids[i] = id;\n }\n\n return ids;\n }\n}\n\n" ]
[ 13, 12, 4, 2, 2, 2, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ ".net", ".net_2.0", "string" ]
stackoverflow_0000063463_.net_.net_2.0_string.txt
Q: What are my options for having the RadioButtonList functionality of ASP.NET in WinForms? Is this type of control only available in a 3rd-party library? Has someone implemented an open source version? A: I believe you can include radio buttons in a grid, though that's more cumbersome than it needs to be. Also, I don't think it'd be that hard to make your own control that creates the radio buttons dynamically using a flowlayout panel.
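Since the answer stops at the idea, here is a minimal, hypothetical sketch of the FlowLayoutPanel approach (assumes System.Windows.Forms; the MakeRadioList name is illustrative, not a framework member):

// Build a vertical radio-button list at runtime from a string array.
// RadioButtons sharing one parent container are mutually exclusive.
private FlowLayoutPanel MakeRadioList(string[] items)
{
    FlowLayoutPanel panel = new FlowLayoutPanel();
    panel.FlowDirection = FlowDirection.TopDown;
    panel.AutoSize = true;
    foreach (string item in items)
    {
        RadioButton rb = new RadioButton();
        rb.Text = item;
        rb.AutoSize = true;
        panel.Controls.Add(rb);
    }
    return panel;
}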
What are my options for having the RadioButtonList functionality of ASP.NET in WinForms?
Is this type of control only available in a 3rd-party library? Has someone implemented an open source version?
[ "I believe you can include radio buttons in a grid, though that's more cumbersome than it needs to be.\nAlso, I don't think it'd be that hard to make your own control that creates the radio buttons dynamically using a flowlayout panel.\n" ]
[ 1 ]
[]
[]
[ "radio_button", "radiobuttonlist", "webforms", "winforms" ]
stackoverflow_0000063778_radio_button_radiobuttonlist_webforms_winforms.txt
Q: Parsing attributes with regex in Perl Here's a problem I ran into recently. I have attributes strings of the form "x=1 and y=abc and z=c4g and ..." Some attributes have numeric values, some have alpha values, some have mixed, some have dates, etc. Every string is supposed to have "x=someval and y=anotherval" at the beginning, but some don't. I have three things I need to do. Validate the strings to be certain that they have x and y. Actually parse the values for x and y. Get the rest of the string. Given the example at the top, this would result in the following variables: $x = 1; $y = "abc"; $remainder = "z=c4g and ..." My question is: Is there a (reasonably) simple way to parse these and validate with a single regular expression? i.e.: if ($str =~ /someexpression/) { $x = $1; $y = $2; $remainder = $3; } Note that the string may consist of only x and y attributes. This is a valid string. I'll post my solution as an answer, but it doesn't meet my single-regex preference. A: Assuming you also want to do something with the other name=value pairs this is how I would do it ( using Perl version 5.10 ): use 5.10.0; use strict; use warnings; my %hash; while( $string =~ m{ (?: ^ | \G ) # start of string or previous match \s* (?<key> \w+ ) # word characters = (?<value> \S+ ) # non spaces \s* # get to the start of the next match (?: and )? }xgi ){ $hash{$+{key}} = $+{value}; } # to make sure that x & y exist die unless exists $hash{x} and exists $hash{y}; On older Perls ( at least Perl 5.6 ); use strict; use warnings; my %hash; while( $string =~ m{ (?: ^ | \G ) # start of string or previous match \s* ( \w+ ) = ( \S+ ) \s* # get to the start of the next match (?: and )? }xgi ){ $hash{$1} = $2; } # to make sure that x & y exist die unless exists $hash{x} and exists $hash{y}; These have the added benefit of continuing to work if you need to work with more data. A: I'm not the best at regular expressions, but this seems pretty close to what you're looking for: /x=(.+) and y=([^ ]+)( and (.*))?/ Except you use $1, $2, and $4. In use: my @strs = ("x=1 and y=abc and z=c4g and w=v4l", "x=yes and y=no", "z=nox and w=noy"); foreach (@strs) { if ($_ =~ /x=(.+) and y=([^ ]+)( and (.*))?/) { $x = $1; $y = $2; $remainder = $4; print "x: $x; y: $y; remainder: $remainder\n"; } else { print "Failed.\n"; } } Output: x: 1; y: abc; remainder: z=c4g and w=v4l x: yes; y: no; remainder: Failed. This of course leaves out plenty of error checking, and I don't know everything about your inputs, but this seems to work. A: As a fairly simple modification to Rudd's version, /^x=(.+) and y=([^ ]+)(?: and (.*))?/ will allow you to use $1, $2 and $3 (the ?: makes it a noncapturing group), and will ensure that the string starts with "x=" rather than allowing a "not_x=" to match If you have better knowledge of what the x and y values will be, this should be used to tighten the regex further: my @strs = ("x=1 and y=abc and z=c4g and w=v4l", "x=yes and y=no", "z=nox and w=noy", "not-x=nox and y=present", "x=yes and w='there is no and y=something arg here'"); foreach (@strs) { if ($_ =~ /^x=(.+) and y=([^ ]+)(?: and (.*))?/) { $x = $1; $y = $2; $remainder = $3; print "x: {$x}; y: {$y}; remainder: {$remainder}\n"; } else { print "$_ Failed.\n"; } } Output: x: {1}; y: {abc}; remainder: {z=c4g and w=v4l} x: {yes}; y: {no}; remainder: {} z=nox and w=noy Failed. not-x=nox and y=present Failed. 
x: {yes and w='there is no}; y: {something}; remainder: {} Note that the missing part of the last test is due to the current version of the y test requiring no spaces; if the x test had the same restriction, that string would have failed. A: Rudd and Cebjyre have gotten you most of the way there but they both have certain problems: Rudd suggested: /x=(.+) and y=([^ ]+)( and (.*))?/ Cebjyre modified it to: /^x=(.+) and y=([^ ]+)(?: and (.*))?/ The second version is better because it will not confuse "not_x=foo" with "x=foo" but will accept things such as "x=foo z=bar y=baz" and set $1 = "foo z=bar" which is undesirable. This is probably what you are looking for: /^x=(\w+) and y=(\w+)(?: and (.*))?/ This disallows anything between the x= and y= options and allows an optional " and..." which will be in $3 A: Here's basically what I did to solve this: ($x_str, $y_str, $remainder) = split(/ and /, $str, 3); if ($x_str !~ /x=(.*)/) { # error } $x = $1; if ($y_str !~ /y=(.*)/) { # error } $y = $1; I've omitted some additional validation and error handling. This technique works, but it's not as concise or pretty as I would have liked. I'm hoping someone will have a better suggestion for me.
Parsing attributes with regex in Perl
Here's a problem I ran into recently. I have attributes strings of the form "x=1 and y=abc and z=c4g and ..." Some attributes have numeric values, some have alpha values, some have mixed, some have dates, etc. Every string is supposed to have "x=someval and y=anotherval" at the beginning, but some don't. I have three things I need to do. Validate the strings to be certain that they have x and y. Actually parse the values for x and y. Get the rest of the string. Given the example at the top, this would result in the following variables: $x = 1; $y = "abc"; $remainder = "z=c4g and ..." My question is: Is there a (reasonably) simple way to parse these and validate with a single regular expression? i.e.: if ($str =~ /someexpression/) { $x = $1; $y = $2; $remainder = $3; } Note that the string may consist of only x and y attributes. This is a valid string. I'll post my solution as an answer, but it doesn't meet my single-regex preference.
[ "Assuming you also want to do something with the other name=value pairs this is how I would do it ( using Perl version 5.10 ):\nuse 5.10.0;\nuse strict;\nuse warnings;\n\nmy %hash;\nwhile(\n $string =~ m{\n (?: ^ | \\G ) # start of string or previous match\n \\s*\n\n (?<key> \\w+ ) # word characters\n =\n (?<value> \\S+ ) # non spaces\n\n \\s* # get to the start of the next match\n (?: and )?\n }xgi\n){\n $hash{$+{key}} = $+{value};\n}\n\n# to make sure that x & y exist\ndie unless exists $hash{x} and exists $hash{y};\n\nOn older Perls ( at least Perl 5.6 );\nuse strict;\nuse warnings;\n\nmy %hash;\nwhile(\n $string =~ m{\n (?: ^ | \\G ) # start of string or previous match\n \\s*\n\n ( \\w+ ) = ( \\S+ )\n\n \\s* # get to the start of the next match\n (?: and )?\n }xgi\n){\n $hash{$1} = $2;\n}\n\n# to make sure that x & y exist\ndie unless exists $hash{x} and exists $hash{y};\n\nThese have the added benefit of continuing to work if you need to work with more data.\n", "I'm not the best at regular expressions, but this seems pretty close to what you're looking for:\n/x=(.+) and y=([^ ]+)( and (.*))?/\n\nExcept you use $1, $2, and $4. In use:\nmy @strs = (\"x=1 and y=abc and z=c4g and w=v4l\",\n \"x=yes and y=no\",\n \"z=nox and w=noy\");\n\nforeach (@strs) {\n if ($_ =~ /x=(.+) and y=([^ ]+)( and (.*))?/) {\n $x = $1;\n $y = $2;\n $remainder = $4;\n print \"x: $x; y: $y; remainder: $remainder\\n\";\n } else {\n print \"Failed.\\n\";\n }\n}\n\nOutput:\nx: 1; y: abc; remainder: z=c4g and w=v4l\nx: yes; y: no; remainder: \nFailed.\n\nThis of course leaves out plenty of error checking, and I don't know everything about your inputs, but this seems to work.\n", "As a fairly simple modification to Rudd's version,\n/^x=(.+) and y=([^ ]+)(?: and (.*))?/\n\nwill allow you to use $1, $2 and $3 (the ?: makes it a noncapturing group), and will ensure that the string starts with \"x=\" rather than allowing a \"not_x=\" to match\nIf you have better knowledge of what the x and y values will be, this should be used to tighten the regex further:\nmy @strs = (\"x=1 and y=abc and z=c4g and w=v4l\",\n \"x=yes and y=no\",\n \"z=nox and w=noy\",\n \"not-x=nox and y=present\",\n \"x=yes and w='there is no and y=something arg here'\");\n\nforeach (@strs) {\n if ($_ =~ /^x=(.+) and y=([^ ]+)(?: and (.*))?/) {\n $x = $1;\n $y = $2;\n $remainder = $3;\n print \"x: {$x}; y: {$y}; remainder: {$remainder}\\n\";\n } else {\n print \"$_ Failed.\\n\";\n }\n}\n\nOutput:\nx: {1}; y: {abc}; remainder: {z=c4g and w=v4l}\nx: {yes}; y: {no}; remainder: {}\nz=nox and w=noy Failed.\nnot-x=nox and y=present Failed.\nx: {yes and w='there is no}; y: {something}; remainder: {}\n\nNote that the missing part of the last test is due to the current version of the y test requiring no spaces, if the x test had the same restriction that string would have failed.\n", "Rudd and Cebjyre have gotten you most of the way there but they both have certain problems:\nRudd suggested:\n\n/x=(.+) and y=([^ ]+)( and (.*))?/\n\nCebjyre modified it to:\n\n/^x=(.+) and y=([^ ]+)(?: and (.*))?/\n\nThe second version is better because it will not confuse \"not_x=foo\" with \"x=foo\" but will accept things such as \"x=foo z=bar y=baz\" and set $1 = \"foo z=bar\" which is undesirable.\nThis is probably what you are looking for:\n\n/^x=(\\w+) and y=(\\w+)(?: and (.*))?/\n\nThis disallows anything between the x= and y= options, places and allows and optional \" and...\" which will be in $3\n", "Here's basically what I did to solve this:\n($x_str, $y_str, 
$remainder) = split(/ and /, $str, 3);\n\nif ($x_str !~ /x=(.*)/)\n{\n # error\n}\n\n$x = $1;\n\nif ($y_str !~ /y=(.*)/)\n{\n # error\n}\n\n$y = $1;\n\nI've omitted some additional validation and error handling. This technique works, but it's not as concise or pretty as I would have liked. I'm hoping someone will have a better suggestion for me.\n" ]
[ 3, 1, 1, 1, 0 ]
[]
[]
[ "perl", "regex" ]
stackoverflow_0000010533_perl_regex.txt
Q: Debugger for unix pipe commands As I build *nix piped commands I find that I want to see the output of one stage to verify correctness before building the next stage but I don't want to re-run each stage. Does anybody know of a program that will help with that? It would keep the output of the last stage automatically to use for any new stages. I usually do this by sending the result of each command to a temporary file (i.e. tee or run each command one at a time) but it would be nice for a program to handle this. I envision something like a tabbed interface where each tab is labeled with each pipe command and selecting a tab shows the output (at least a hundred lines) of applying that command to the previous result. A: Use 'tee' to copy the intermediate results out to some file as well as pass them on to the next stage of the pipe, like so: cat /var/log/syslog | tee /tmp/syslog.out | grep something | tee /tmp/grep.out | sed 's/foo/bar/g' | tee /tmp/sed.out | cat >>/var/log/syslog.cleaned A: You can also use pipes if you need bidirectional communication (e.g. with netcat): mknod backpipe p nc -l -p 80 0<backpipe | tee -a inflow | nc localhost 81 | tee -a outflow 1>backpipe (via) A: tee(1) is your friend. It sends its input to both the specified file and stdout. Stick it between your pipes. For example: ls | tee /tmp/out1 | sort | tee /tmp/out2 | sed 's/foo/bar/g' A: There's also the "pv" command - available in Debian/Ubuntu repositories - which shows you the throughput of your pipes. An example from the man page: Transferring a file from another process and passing the expected size to pv: cat file | pv -s 12345 | nc -w 1 somewhere.com 3000
Debugger for unix pipe commands
As I build *nix piped commands I find that I want to see the output of one stage to verify correctness before building the next stage but I don't want to re-run each stage. Does anybody know of a program that will help with that? It would keep the output of the last stage automatically to use for any new stages. I usually do this by sending the result of each command to a temporary file (i.e. tee or run each command one at a time) but it would be nice for a program to handle this. I envision something like a tabbed interface where each tab is labeled with each pipe command and selecting a tab shows the output (at least a hundred lines) of applying that command to the previous result.
[ "Use 'tee' to copy the intermediate results out to some file as well as pass them on to the next stage of the pipe, like so:\ncat /var/log/syslog | tee /tmp/syslog.out | grep something | tee /tmp/grep.out | sed 's/foo/bar/g' | tee /tmp/sed.out | cat >>/var/log/syslog.cleaned\n\n", "You can also use pipes if you need bidirectional communication (i.e. with netcat): \nmknod backpipe p\nnc -l -p 80 0<backpipe | tee -a inflow | nc localhost 81 | tee -a outflow 1>backpipe\n\n(via)\n", "tee(1) is your friend. It sends its input to both the specified file and stdout. \nStick it between your pipes. For example:\nls | tee /tmp/out1 | sort | tee /tmp/out2 | sed 's/foo/bar/g'\n\n", "There's also the \"pv\" command - available in debian / ubuntu repostitories which shows you the throughput of your pipes.\nAn example from the man page :\nTransferring a file from another process and passing the expected size to pv:\n cat file | pv -s 12345 | nc -w 1 somewhere.com 3000\n\n" ]
[ 5, 2, 1, 1 ]
[]
[]
[ "shell", "terminal" ]
stackoverflow_0000063771_shell_terminal.txt
Q: Create new page in Webtop How do I create a new webpage in the Documentum front end Webtop? A: The short answer is that it can not be done. WebTop is Documentum's generic application for browsing their back-end content repository. Think of it as a web-based Windows Explorer on steroids. It's a tool for storing, versioning, and sharing electronic documents (Word, Excel, etc.) - it's not a tool for creating web pages. Documentum's Web Content Management product is called Web Publisher. It is the tool that companies use to allow non-technical business users to create and edit web pages. A: Why WebTop? You should use Web Publisher which is built on WebTop with the specific purpose of managing web content. Is this an OOTB installation? Web Publisher / WebTop requires significant amount of customization in order to start being useful. Do you have templates defined? If so, then just go to File New and select your template. http://www.dmdeveloper.com/ Is a good site with some very good how-to's.
Create new page in Webtop
How do I create a new webpage in the Documentum front end Webtop?
[ "The short answer is that it can not be done. \nWebTop is Documentum's generic application for browsing their back-end content repository. Think of it as a web-based Windows Explorer on steroids. It's a tool for storing, versioning, and sharing electronic documents (Word, Excel, etc.) - it's not a tool for creating web pages.\nDocumentum's Web Content Management product is called Web Publisher. It is the tool that companies use to allow non-technical business users to create and edit web pages.\n", "Why WebTop? You should use Web Publisher which is built on WebTop with the specific purpose of managing web content. Is this an OOTB installation? Web Publisher / WebTop requires significant amount of customization in order to start being useful. Do you have templates defined? If so, then just go to File New and select your template. \nhttp://www.dmdeveloper.com/ Is a good site with some very good how-to's.\n" ]
[ 2, 1 ]
[]
[]
[ "documentum", "webtop" ]
stackoverflow_0000040021_documentum_webtop.txt
Q: How do you view SQL Server 2005 Reporting Services reports from ReportViewer Control in DMZ I want to be able to view a SQL Server 2005 Reporting Services report from an ASP.NET application in a DMZ through a ReportViewer control. The SQL and SSRS servers are behind the firewall. A: So I had to change the way an ASP.NET 2.0 application called reports from pages. Originally, I used JavaScript to open a new window. ViewCostReport.OnClientClick = "window.open('" + Report.GetProjectCostURL(_PromotionID) + "','ProjectCost','resizable=yes')"; The issue I had was that the window.open call would only work within the client network and not on a new web server located in their DMZ. I had to create a new report WebForm that embedded a ReportViewer control to view the reports. The other issue I had is that the Report Server had to be accessed with Windows Authentication since it was being used by another application for reports and that app used roles for report access. So off I went to get my ReportViewer control to impersonate a Windows user. I found the solution to be this: Create a new class which implements the Microsoft.Reporting.WebForms.IReportServerCredentials interface for accessing the reports. public class ReportCredentials : Microsoft.Reporting.WebForms.IReportServerCredentials { string _userName, _password, _domain; public ReportCredentials(string userName, string password, string domain) { _userName = userName; _password = password; _domain = domain; } public System.Security.Principal.WindowsIdentity ImpersonationUser { get { return null; } } public System.Net.ICredentials NetworkCredentials { get { return new System.Net.NetworkCredential(_userName, _password, _domain); } } public bool GetFormsCredentials(out System.Net.Cookie authCoki, out string userName, out string password, out string authority) { userName = _userName; password = _password; authority = _domain; authCoki = new System.Net.Cookie(".ASPXAUTH", ".ASPXAUTH", "/", "Domain"); return true; } } Then I created an event for the button to call the report: protected void btnReport_Click(object sender, EventArgs e) { ReportParameter[] parm = new ReportParameter[1]; parm[0] = new ReportParameter("PromotionID",_PromotionID); ReportViewer.ShowCredentialPrompts = false; ReportViewer.ServerReport.ReportServerCredentials = new ReportCredentials("Username", "Password", "Domain"); ReportViewer.ProcessingMode = Microsoft.Reporting.WebForms.ProcessingMode.Remote; ReportViewer.ServerReport.ReportServerUrl = new System.Uri("http://ReportServer/ReportServer"); ReportViewer.ServerReport.ReportPath = "/ReportFolder/ReportName"; ReportViewer.ServerReport.SetParameters(parm); ReportViewer.ServerReport.Refresh(); }
How do you view SQL Server 2005 Reporting Services reports from ReportViewer Control in DMZ
I want to be able to view a SQL Server 2005 Reporting Services report from an ASP.NET application in a DMZ through a ReportViewer control. The SQL and SSRS servers are behind the firewall.
[ "`So I had to change the way an ASP.NET 2.0 application called reports from pages. Originally, I used JavaScript to open a new window.\nViewCostReport.OnClientClick = \"window.open('\" + Report.GetProjectCostURL(_PromotionID) + \"','ProjectCost','resizable=yes')\";\n\nThe issue I had was that the window.open call would only work within the client network and not on a new web server located in their DMZ. I had to create a new report WebForm that embedded a ReportViewer control to view the reports.\nThe other issue I had is that the Report Server had to be accessed with windows Authentication since it was being used by another application for reports and that app used roles for report access. So off I went to get my ReportViewer control to impersonate a windows user. I found the solution to be this:\nCreate a new class which implements the Microsoft.Reporting.WebForms.IReportServerCredentials interface for accessing the reports.\npublic class ReportCredentials : Microsoft.Reporting.WebForms.IReportServerCredentials\n{\n string _userName, _password, _domain;\n public ReportCredentials(string userName, string password, string domain)\n {\n _userName = userName;\n _password = password;\n _domain = domain;\n }\n\n public System.Security.Principal.WindowsIdentity ImpersonationUser\n {\n get\n {\n return null;\n }\n }\n\n public System.Net.ICredentials NetworkCredentials\n {\n get\n {\n return new System.Net.NetworkCredential(_userName, _password, _domain);\n }\n }\n\n public bool GetFormsCredentials(out System.Net.Cookie authCoki, out string userName, out string password, out string authority)\n {\n userName = _userName;\n password = _password;\n authority = _domain;\n authCoki = new System.Net.Cookie(\".ASPXAUTH\", \".ASPXAUTH\", \"/\", \"Domain\");\n return true;\n }\n}\n\nThen I created an event for the button to call the report:\nprotected void btnReport_Click(object sender, EventArgs e)\n{\n ReportParameter[] parm = new ReportParameter[1];\n parm[0] =new ReportParameter(\"PromotionID\",_PromotionID);\n ReportViewer.ShowCredentialPrompts = false;\n ReportViewer.ServerReport.ReportServerCredentials = new ReportCredentials(\"Username\", \"Password\", \"Domain\");\n ReportViewer.ProcessingMode = Microsoft.Reporting.WebForms.ProcessingMode.Remote;\n ReportViewer.ServerReport.ReportServerUrl = new System.Uri(\"http://ReportServer/ReportServer\");\n ReportViewer.ServerReport.ReportPath = \"/ReportFolder/ReportName\";\n ReportViewer.ServerReport.SetParameters(parm);\n ReportViewer.ServerReport.Refresh();\n}\n\n" ]
[ 4 ]
[]
[]
[ "asp.net", "reporting_services", "reportingservices_2005", "reportviewer" ]
stackoverflow_0000063882_asp.net_reporting_services_reportingservices_2005_reportviewer.txt
Q: iPhone app loading When I load my iPhone app it always loads a black screen first then pops up the main window. This happens even with a simple empty app with a single window loaded. I've noticed that when loading, most apps zoom in on the main window (or scale it to fit the screen, however you want to think about it) and then load the content of the screen, with no black screen (see the Contacts app for an example). How do I achieve this effect? A: Add a Default.png to your project. This should be the image you want shown instead of the black launch screen. A: Also just to save you some time, there is no way to change this image during the runtime of your application. If you look at Apple's Clock application you can see how depending on the last state of the application, the Default.png changes. You cannot do this in your own app because of permission limits. Also, make sure to read the iPhone HIG for best practices on Default.png use; in short, don't use it as a splash screen like Twitteriffic. A: You can also take a screenshot of your app as an aid to creating the Default.png - while holding the Home button, press and release the Lock Sleep/Wake button. The screenshot can be found in your Camera Roll library in the Photos app and can be synced back to your desktop. A: When the app transitions from the launch image to the actual app content, it should not be jarring to a user - content (text/images) can be added to the screen, but content should never change. If all this leaves you with is an empty blue header, a white body, and a blue footer - then that's all you should have. If you have a persistent tab bar on the bottom & a localized app (different text descriptions), then the launch image should appear with icons but no text. (See Clock.app & Facebook.app for examples.) Screenshots can also be taken in Xcode using the Screenshot tab in the Organizer window and a plugged-in device.
iPhone app loading
When I load my iPhone app it always loads a black screen first then pops up the main window. This happens even with a simple empty app with a single window loaded. I've noticed that when loading, most apps zoom in on the main window (or scale it to fit the screen, however you want to think about it) and then load the content of the screen, with no black screen (see the Contacts app for an example). How do I achieve this effect?
[ "Add a Default.png to your project. This should be the image you want shown instead of the black launch screen.\n", "Also just to save you some time, there is no way to change this image during the runtime of your application. If you look at Apple's Clock application you can see how depending on the last state of the application, the Default.png changes. You cannot do this in your own app because of permission limits. Also, make sure to read the iPhone HIG for best practices on Default.png use, in short, dont use it as a splash screen like Twitteriffic.\n", "You can also take a screenshot of your app as an aid to creating the Default.png - while holding the Home button, press and release the Lock Sleep/Wake button. The screenshot can be find in your Camery Roll library in the Photos app and can be synced back to your desktop.\n", "When the app transitions from the launch image to the actual app content, it should not be jarring to a user - content (text/images) can be added to the screen, but content should never change. If all this leaves you with is an empty blue header, a white body, and a blue footer - then that's all you should have. If you have a persistent tab bar on the bottom & a localized app (different text descriptions), then then launch image should appear with icons but no text. (See Clock.app & Facebook.app for examples.)\nScreenshots can also be taken in XCode using the Screenshot tab in the Organizer window and a plugged-in device.\n" ]
[ 19, 8, 3, 2 ]
[]
[]
[ "ios", "iphone" ]
stackoverflow_0000063408_ios_iphone.txt
Q: Passing on named variable arguments in python Say I have the following methods: def methodA(arg, **kwargs): pass def methodB(arg, *args, **kwargs): pass In methodA I wish to call methodB, passing on the kwargs. However, it seems that if I define methodA as follows, the second argument will be passed on as positional rather than named variable arguments. def methodA(arg, **kwargs): methodB("argvalue", kwargs) How do I make sure that the **kwargs in methodA gets passed as **kwargs to methodB? A: Put the asterisks before the kwargs variable. This makes Python pass the variable (which is assumed to be a dictionary) as keyword arguments. methodB("argvalue", **kwargs) A: As an aside: When using functions instead of methods, you could also use functools.partial: import functools def foo(arg, **kwargs): ... bar = functools.partial(foo, "argvalue") The last line will define a function "bar" that, when called, will call foo with the first argument set to "argvalue" and all other functions just passed on: bar(5, myarg="value") will call foo("argvalue", 5, myarg="value") Unfortunately that will not work with methods. A: Some experimentation and I figured this one out: def methodA(arg, **kwargs): methodB("argvalue", **kwargs) Seems obvious now...
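A small runnable sketch tying the answers together, including the case where positional and keyword arguments are forwarded at the same time (standard Python only, no extra assumptions):

def methodB(arg, *args, **kwargs):
    print(arg, args, kwargs)

def methodA(arg, *args, **kwargs):
    # One * unpacks the positional tuple, ** unpacks the keyword dict.
    methodB("argvalue", *args, **kwargs)

methodA("ignored", 1, 2, name="Fernando")
# prints: argvalue (1, 2) {'name': 'Fernando'}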
Passing on named variable arguments in python
Say I have the following methods: def methodA(arg, **kwargs): pass def methodB(arg, *args, **kwargs): pass In methodA I wish to call methodB, passing on the kwargs. However, it seems that if I define methodA as follows, the second argument will be passed on as positional rather than named variable arguments. def methodA(arg, **kwargs): methodB("argvalue", kwargs) How do I make sure that the **kwargs in methodA gets passed as **kwargs to methodB?
[ "Put the asterisks before the kwargs variable. This makes Python pass the variable (which is assumed to be a dictionary) as keyword arguments.\nmethodB(\"argvalue\", **kwargs)\n\n", "As an aside: When using functions instead of methods, you could also use functools.partial:\nimport functools\n\ndef foo(arg, **kwargs):\n ...\n\nbar = functools.partial(foo, \"argvalue\")\n\nThe last line will define a function \"bar\" that, when called, will call foo with the first argument set to \"argvalue\" and all other functions just passed on:\nbar(5, myarg=\"value\")\n\nwill call\nfoo(\"argvalue\", 5, myarg=\"value\")\n\nUnfortunately that will not work with methods.\n", "Some experimentation and I figured this one out:\ndef methodA(arg, **kwargs):\n methodB(\"argvalue\", **kwargs)\nSeems obvious now...\n" ]
[ 34, 2, 1 ]
[]
[]
[ "python", "variadic_functions" ]
stackoverflow_0000051412_python_variadic_functions.txt
Q: How can I show a grey transparent overlay in C#? How can I show a grey transparent overlay in C#? It should overlay other processes which are not owned by the application doing the overlay. A: Create a transparent window the size of the whole screen, mark it always-on-top, calculate the regions of your other application windows, and make the non-window regions of the top window grey. I suppose you could just position your own application windows on top of the transparent grey one, with it being above all the other ones, but getting a tricky z-order scenario like that right, especially in conjunction with other apps that might also be doing z-order tricks, is tough. A: Here's a little app which does more or less the functionality you want: http://www.anappaday.com/downloads/2006/09/day-10-jedi-concentrate.html
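For the first answer's always-on-top idea, a minimal Windows Forms sketch (assuming System.Windows.Forms and System.Drawing) that covers the primary screen with a half-transparent grey form; the per-window region cut-outs the answer describes are left out:

// Sketch: full-screen, semi-transparent, always-on-top overlay.
Form overlay = new Form();
overlay.FormBorderStyle = FormBorderStyle.None;   // no borders or title bar
overlay.ShowInTaskbar = false;
overlay.TopMost = true;                           // stay above other windows
overlay.BackColor = Color.Gray;
overlay.Opacity = 0.5;                            // 50% transparent
overlay.StartPosition = FormStartPosition.Manual;
overlay.Bounds = Screen.PrimaryScreen.Bounds;     // cover the whole screen
overlay.Show();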
How can I show a grey transparent overlay in C#?
How can I show a grey transparent overlay in C#? It should overlay other processes which are not owned by the application doing the overlay.
[ "Create a transparent window the size of the whole screen, mark it always-on-top, calculate the regions of your other application windows, and make the non-window regions of the top window grey.\nI suppose you could just position your own application windows on top of the transparent grey one, with it being above all the other ones, but getting a tricky z-order scenario like that right, especially in conjunction with other apps that might also be doing z-order tricks, is tough.\n", "Here a little app which do more or less the functionnality you want :\nhttp://www.anappaday.com/downloads/2006/09/day-10-jedi-concentrate.html\n" ]
[ 1, 1 ]
[]
[]
[ "c#" ]
stackoverflow_0000060785_c#.txt
Q: How I hide empty Velocity variable names? I am using Struts + Velocity in a Java application, but after I submit a form, the confirmation page (Velocity template) shows the variable names instead an empty label, like the Age in following example: Name: Fernando Age: {person.age} Sex: Male I would like to know how to hide it! A: You can mark variables as "silent" like this: $!variable If $variable is null, nothing will be rendered. If it is not null, its value will render as it normally would. A: You will also need to be sure and use the proper syntax. Your example is missing the dollar before the variable. It should be $!{person.age}, not just {person.age}.
How I hide empty Velocity variable names?
I am using Struts + Velocity in a Java application, but after I submit a form, the confirmation page (Velocity template) shows the variable names instead an empty label, like the Age in following example: Name: Fernando Age: {person.age} Sex: Male I would like to know how to hide it!
[ "You can mark variables as \"silent\" like this:\n$!variable\n\nIf $variable is null, nothing will be rendered. If it is not null, its value will render as it normally would.\n", "You will also need to be sure and use the proper syntax. Your example is missing the dollar before the variable. It should be $!{person.age}, not just {person.age}.\n" ]
[ 78, 14 ]
[]
[]
[ "java", "struts", "templates", "velocity" ]
stackoverflow_0000023853_java_struts_templates_velocity.txt
Q: Can I submit a Struts form that references POJO (i.e. not just String or boolean) fields? I have a Struts (1.3x) ActionForm that has several String and boolean properties/fields, but also has some POJO fields. so my form looks something like: MyForm extends ActionForm { private String name; private int id; private Thing thing; ...getters/setters... } In the JSP I can reference the POJO's fields thusly: <html:text property="thing.thingName" /> ...and the values display correctly, but if I try to submit the form I get the ServletException: BeanUtils.populate error. There seems to be a lot of information about this general topic on the web, but none really addresses my specific question, which is: shouldn't I be able to submit a form in Struts that contains fields that are POJOs? A: You can, as long as the fields follow the JavaBean conventions and the setter takes something Struts can understand. So Thing needs getThingName() and setThingName(String).
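To make the accepted answer concrete, a hypothetical Thing written to the JavaBean conventions BeanUtils.populate relies on could look like this (the field name is only an example):

public class Thing {
    private String thingName;

    // A public no-argument constructor plus matching getter/setter
    // pairs let BeanUtils populate "thing.thingName" on submit.
    public Thing() {
    }

    public String getThingName() {
        return thingName;
    }

    public void setThingName(String thingName) {
        this.thingName = thingName;
    }
}

Note that the form's getThing() must also return a non-null Thing when the request is populated, or the nested property lookup fails.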
Can I submit a Struts form that references POJO (i.e. not just String or boolean) fields?
I have a Struts (1.3x) ActionForm that has several String and boolean properties/fields, but also has some POJO fields. so my form looks something like: MyForm extends ActionForm { private String name; private int id; private Thing thing; ...getters/setters... } In the JSP I can reference the POJO's fields thusly: <html:text property="thing.thingName" /> ...and the values display correctly, but if I try to submit the form I get the ServletException: BeanUtils.populate error. There seems to be a lot of information about this general topic on the web, but none really addresses my specific question, which is: shouldn't I be able to submit a form in Struts that contains fields that are POJOs?
[ "You can, as long as the fields follow the JavaBean conventions and the setter takes something Struts can understand. \nSo Thing needs getThingName() and setThingName(String).\n" ]
[ 2 ]
[]
[]
[ "java", "jsp", "struts" ]
stackoverflow_0000063935_java_jsp_struts.txt
Q: When did I last talk to my Domain Server? How can my app get a valid "last time connected to domain" timestamp from Windows, even when the app is running offline? Background: I am writing an application that is run on multiple client machines throughout my company. All of these client machines are on one of the AD domains implemented by my company. This application needs to take certain measures if the client machine has not communicated with the AD for a period of time. An example might be that a machine running this app is stolen. After e.g. 4 weeks, the application refuses to work because it detects that the machine has not communicated with its AD domain for 4 weeks. Note that this must not be tied to a user account because the app might be running as a Local Service account. It's the computer-domain relationship that I'm interested in. I have considered and rejected using WinNT://<domain>/<machine>$,user because it doesn't work while offline. Also, any LDAP://... lookups won't work while offline. I have also considered and rejected scheduling this query on a daily basis and storing the timestamp in the registry or a file. This solution requires too much setup and coding. Besides, this value simply MUST be stored locally by Windows. A: I don't believe this value is stored on the client machine. It's stored in Active Directory, and you can get a list of inactive machines using the Dsquery tool. The best option is to have your program do a simple test such as connecting to a DC, and then store the timestamp of that action. A: IMHO I don't think the client machine would store a timestamp of the last time it communicated with AD. This information is stored in Active Directory itself (i.e. on the DC). Once a user logs into, say, a Windows machine, the credentials are cached. If that machine is disconnected from the network the credentials will last forever. You can turn this feature off with group policies, so that the machine does not cache any credentials.
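A hedged sketch of the first answer's fallback (test connectivity yourself and record a timestamp), assuming .NET with System.DirectoryServices.ActiveDirectory available; the registry key name is purely illustrative, and the exact exception thrown when the DC is unreachable can vary:

// Sketch: record the last successful domain contact in the registry.
try
{
    // Throws if the computer's domain cannot be located/contacted.
    System.DirectoryServices.ActiveDirectory.Domain.GetComputerDomain();
    Microsoft.Win32.Registry.SetValue(
        @"HKEY_LOCAL_MACHINE\SOFTWARE\MyApp",   // hypothetical key
        "LastDomainContact",
        DateTime.UtcNow.ToString("o"));
}
catch (System.DirectoryServices.ActiveDirectory.ActiveDirectoryObjectNotFoundException)
{
    // Offline: read the stored timestamp back and apply
    // the four-week policy here.
}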
When did I last talk to my Domain Server?
How can my app get a valid "last time connected to domain" timestamp from Windows, even when the app is running offline? Background: I am writing an application that is run on multiple client machines throughout my company. All of these client machines are on one of the AD domains implemented by my company. This application needs to take certain measures if the client machine has not communicated with the AD for a period of time. An example might be that a machine running this app is stolen. After e.g. 4 weeks, the application refuses to work because it detects that the machine has not communicated with its AD domain for 4 weeks. Note that this must not be tied to a user account because the app might be running as a Local Service account. It the computer-domain relationship that I'm interested in. I have considered and rejected using WinNT://<domain>/<machine>$,user because it doesn't work while offline. Also, any LDAP://... lookups won't work while offline. I have also considered and rejected scheduling this query on a dayly basis and storing the timestamp in the registry or a file. This solutions requires too much setup and coding. Besides this value simply MUST be stored locally by Windows.
[ "I don't believe this value is stored on the client machine. It's stored in Active Directory, and you can get a list of inactive machines using the Dsquery tool.\nThe best option is to have your program do a simple test such as connection to a DC, and then store the timestamp of that action.\n", "IMHO i dont think the client machine would store a timestamp of the last time it communicated with AD. This information is stored in active directory itself (ie. on the DC)\nOnce a user logs into say a Windows machine the credentials are cached. If that machine is disconnected from the network the credentials will last forever. You can turn this feature off with group policies, so that the machine does not cache any credentials.\n" ]
[ 1, 0 ]
[]
[]
[ "active_directory", "windows" ]
stackoverflow_0000063345_active_directory_windows.txt
Q: Does Java impose any further restrictions on filenames other than the underlying operating system? Does Java impose any extra restrictions of its own? Windows (up to Vista) does not allow names to include \ / < > ? * : I know HOW to validate names (a regular expression). I need to validate filenames entered by users. My application does not need to run on any other platform, though, of course, I would prefer to be platform independent! A: No, you can escape any character that Java doesn't allow in String literals but the filesystem allows. Also, if trying to port a Windows app to Mac or Unix it is best to use: File.separator To determine the correct file separator to use on each platform. A: When you create a new File the inputted arguments will be normalized by a platform-specific implementation of the java.io.FileSystem class. There are no Java-specific restrictions that I know of. And yes, always use File.separator. A: Java supports any String that can be expressed in Unicode (subject to some ridiculously long maximum length, Integer.MAX_VALUE), and file names are just another kind of String. Of course, this means that you can try and refer to a file using a name that isn't supported by the underlying Operating System. If you do this, you'll get some kind of IOException when you try and use the File reference...
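Pulling the answers together, a small sketch showing File.separator for portable path building and the IOException the OS raises for a name it rejects (the file name is just an example; on Windows the angle brackets make creation fail):

import java.io.File;
import java.io.IOException;

public class FileNameCheck {
    public static void main(String[] args) {
        // Platform-independent path assembly via File.separator.
        File f = new File("." + File.separator + "bad<name>.txt");
        try {
            // createNewFile makes the OS validate the name.
            System.out.println("created: " + f.createNewFile());
        } catch (IOException e) {
            System.out.println("rejected by the OS: " + e.getMessage());
        }
    }
}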
Does Java impose any further restrictions on filenames other than the underlying operating system?
Does Java impose any extra restrictions of its own? Windows (up to Vista) does not allow names to include \ / < > ? * : I know HOW to validate names (a regular expression). I need to validate filenames entered by users. My application does not need to run on any other platform, though, of course, I would prefer to be platform independent!
[ "No, you can escape any character that Java doesn't allow in String literals but the filesystem allows.\nAlso, if trying to port an Windows app to Mac or Unix it is best to use:\nFile.separator\n\nTo determine the correct file separator to use on each platform.\n", "When you create a new File the inputted arguments will be normalized by a platform specific implementation of the java.io.FileSystem class. There are no Java specific restrictions that I know of.\nand yes, always use File.separator.\n", "Java supports any String that can be expressed in Unicode (subject to some ridiculously long maximum length, Integer.MAX_VALUE), and file names are just another kind of String.\nOf course, this means that you can try and refer to a file using a name that isn't supported by the underlying Operating System. If you do this, you'll get some kind of IOException when you try and use the File reference...\n" ]
[ 2, 0, 0 ]
[]
[]
[ "filesystems", "java", "operating_system" ]
stackoverflow_0000063800_filesystems_java_operating_system.txt
Q: Reading data from a log file as a separate application is writing to it I would like to monitor a log file that is being written to by an application. I want to process the file line by line as, or shortly after, it is written. I have not found a way of detecting that a file has been extended after reaching eof. The code needs to work on Mac and PC, and can be in any language, though I am most familiar with C++ and Perl. Does anybody have a suggestion for the best way to do it? A: In Perl, the File::Tail module does exactly what you need. A: A generic enough answer: Most languages, on EOF, return that no data were read. You can re-try reading after an interval, and if the file has grown since, this time the operating system will return data. A: The essense of tail -f is the following loop: open IN, $file; while(1) { my $line = <IN>; if($line) { #process line... } else { sleep(1); seek(IN,0,1); } } close IN; The seek call is to clear the EOF flag. A: You should be able to use read the standard io from tail -f A: I'd have thought outputting the actions via tee, and thence tail'ing (or using the loop above) the file created by tee some use.
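Since the accepted answer names File::Tail without showing it, here is a minimal sketch of its documented interface (File::Tail is a CPAN module, not core Perl; the log path is illustrative):

use File::Tail;

# Blocks until new lines are appended, polling at most every 2 seconds.
my $file = File::Tail->new(name => "/var/log/app.log", maxinterval => 2);
while (defined(my $line = $file->read)) {
    print "got: $line";   # process each line as it arrives
}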
Reading data from a log file as a separate application is writing to it
I would like to monitor a log file that is being written to by an application. I want to process the file line by line as, or shortly after, it is written. I have not found a way of detecting that a file has been extended after reaching eof. The code needs to work on Mac and PC, and can be in any language, though I am most familiar with C++ and Perl. Does anybody have a suggestion for the best way to do it?
[ "In Perl, the File::Tail module does exactly what you need.\n", "A generic enough answer:\nMost languages, on EOF, return that no data were read. You can re-try reading after an interval, and if the file has grown since, this time the operating system will return data.\n", "The essense of tail -f is the following loop:\nopen IN, $file;\nwhile(1) {\n my $line = <IN>;\n if($line) {\n #process line...\n } else {\n sleep(1);\n seek(IN,0,1);\n }\n}\nclose IN;\n\nThe seek call is to clear the EOF flag.\n", "You should be able to use read the standard io from tail -f\n", "I'd have thought outputting the actions via tee, and thence tail'ing (or using the loop above) the file created by tee some use.\n" ]
[ 7, 3, 3, 2, 0 ]
[]
[]
[ "c++", "file_io", "logging", "macos", "perl" ]
stackoverflow_0000062832_c++_file_io_logging_macos_perl.txt
Q: Reading model objects mapped in Velocity Templates I have a Struts + Velocity structure like, for example, a Person class, whose one property is a Car object (with its own getter/setter methods) and it is mapped to a Velocity form that submits to an Action, using ModelDriven and getModel structure. I want to put a button on the form that shows "View Car" if car property is not null or car.id != 0 or show another button "Choose Car" if car is null or car.id = 0. How do I code this? I tried something like that in the template file: #if($car != null) #ssubmit("name=view" "value=View Car") #else #ssubmit("name=new" "value=Choose Car") #end But I keep getting an error about a null value in the #if line. I also created a boolean method hasCar() in Person to try, but I can't access it and I don't know why. And Velocity + Struts tutorials are difficult to find or have good information. Thanks A: You should change the #if line to: #if($car) A: In the upcoming Velocity 1.6 release, you will be able to do #if( $car == $null ) without error messages. This will allow you to distinguish easily between when $car is null and when it is false. To do that now requires #if( $car && $car != false ), which just isn't as friendly.
Reading model objects mapped in Velocity Templates
I have a Struts + Velocity structure: for example, a Person class, one of whose properties is a Car object (with its own getter/setter methods), and it is mapped to a Velocity form that submits to an Action, using ModelDriven and getModel structure. I want to put a button on the form that shows "View Car" if the car property is not null or car.id != 0, or show another button "Choose Car" if car is null or car.id = 0. How do I code this? I tried something like this in the template file: #if($car != null) #ssubmit("name=view" "value=View Car") #else #ssubmit("name=new" "value=Choose Car") #end But I keep getting an error about a Null value in the #if line. I also created a boolean method hasCar() in Person to try, but I can't access it and I don't know why. And Velocity + Struts tutorials are difficult to find or lack good information. Thanks
[ "You should change the #if line to:\n#if($car)\n\n", "In the upcoming Velocity 1.6 release, you will be able to do #if( $car == $null ) without error messages. This will allow you to distinguish easily between when $car is null and when it is false. To do that now requires #if( $car && $car != false ), which just isn't as friendly.\n" ]
[ 6, 2 ]
[]
[]
[ "java", "struts", "velocity" ]
stackoverflow_0000024495_java_struts_velocity.txt
Q: How to make Flex RIA contents accessible to search engines like Google? How would you make the contents of Flex RIA applications accessible to Google, so that Google can index the content and show links to the right items in your Flex RIA? Consider an online shop, created in Flex, where the offered items shall be indexed by Google. Then a link on Google should open the corresponding product in the RIA. A: Currently the best technique for making an RIA indexable by search engines is called progressive enhancement (or graceful degradation, depending on which way you see it). Basically you create a simple HTML version of the application using the same data as the application loads. This version should be dynamically generated by some kind of backend server technology. This HTML version can be indexed by Google, but each page also contains a check that determines if the visitor is capable of viewing the rich version, and if so replaces the HTML content with the Flash, Flex or Silverlight application, preferably in such a way that the application starts in a state where it shows the same data as the current page. "Replaces" can mean that it just embeds the application on top of the HTML content, or that it redirects the user to a page that embeds it. The former solution is preferable, because the latter can be considered cloaking. One way of keeping the HTML and RIA versions of a shop synchronized is to decide on a URL scheme and make sure that the RIA uses some kind of deep linking technique. If a visitor arrives at a specific item via a search engine, say /items/345, the corresponding pseudo-URL in the RIA should be the same, so that you can embed the RIA on top of the page and set that URL as a parameter to make the RIA display that same page as soon as it has loaded. This summer, Google and Yahoo! announced that they would begin using a custom version of Flash Player to index Flash based applications by exploring them "in the same way that a person would". Now, two months later there is still no evidence that this is actually happening. Ryan Stewart had to cancel his Flex SEO competition because it became evident that no one could win. The problem seems to be that even though the technique may very well work (although I'm sceptical), the custom Flash Player needs some kind of network interface to be able to load any referenced resources, like XML data, other SWFs, etc., and this is currently not implemented by Google. This means that for an application that loads all its data dynamically, like, say, all that I can think of, Googlebot will not actually see anything relevant. Yahoo! ignores SWF based content altogether. Oh, and it just so happens that I talk about Flex and SEO on the latest episode of the Flex show =) A: There is a massive thread available here: http://tech.groups.yahoo.com/group/flexcoders/message/58926 But essentially, Google already indexes .SWF files (you can test this out yourself by restricting search results to just .SWF files). It can search any text content within the SWF file. However, if the text information in your site comes from a database / web server, then it won't be able to access this information easily. One example of getting this to work is using an XML file as your index page, then using an XSLT transform to render it using Flex. "Ted On Flex" has good information about this. http://flex.org/consultants
How to make Flex RIA contents accessible to search engines like Google?
How would you make the contents of Flex RIA applications accessible to Google, so that Google can index the content and show links to the right items in your Flex RIA? Consider an online shop, created in Flex, where the offered items shall be indexed by Google. Then a link on Google should open the corresponding product in the RIA.
[ "Currently the best technique for making an RIA indexable by search engines is called progressive enhancement (or graceful degradation, depending on which way you see it). Basically you create a simple HTML version of the application using the same data as the application loads. This version should be dynamically generated by some kind of backend server technology. This HTML version can be indexed by Google, but each page also contains a check that determines if the visitor is capable of viewing the rich version, and if so replaces the HTML content with the Flash, Flex or Silverlight application, preferably in such a way that the application starts in a state where it shows the same data as the current page. \"Replaces\" can mean that it just embeds the application on top of the HTML content, or that it redirects the user to a page that embeds it. The former solution is preferable, because the latter can be considered cloaking.\nOne way of keeping the HTML and RIA versions of a shop synchronized is to decide on a URL scheme and make sure that RIA uses some kind of deep linking technique. If a visitor arrives to a specific item via a search engine, say /items/345 the corresponding pseudo-URL in the RIA should be the same, so that you can embed the RIA on top of the page and set that URL as a parameter to make the RIA display that same page as soon as it has loaded.\nThis summer, Google and Yahoo! announced that they would begin using a custom version of Flash Player to index Flash based applications by exploring them \"in the same way that a person would\". Now, two months later there is still no evidence that this is actually happening. Ryan Stweart had to cancel his Flex SEO competition because it became evident that no one could win. The problem seems to be that event though the technique may very well work (although I'm sceptical), the custom Flash Player needs some kind of network interface to be able to load any referenced resources, like XML data, other SWFs, etc., and this is currently not implemented by Google. This means that for an application that loads all it's data dynamically, like say, all that I can think of, Googlebot will not actually see anything relevant. Yahoo! ignores SWF based content altogether.\nOh, and it just so happens that I talk about Flex and SEO on the latest episode of the Flex show =)\n", "There is a massive thread available here:\nhttp://tech.groups.yahoo.com/group/flexcoders/message/58926\nBut essentially, google already indexes .SWF files (you can test this out yourself by restricting search results to just .SWF files). It can search any text content within the SWF file.\nHowever, if the text information in your site comes from a database / web server. Then it won't be able to access this information easily.\nOne example of getting this to work is using an XML file as your index page, then using an XSLT transform to render it using Flex. \"Ted On Flex\" has good information about this.\nhttp://flex.org/consultants\n" ]
[ 5, 1 ]
[]
[]
[ "apache_flex", "google_search", "googlebot", "ria" ]
stackoverflow_0000063232_apache_flex_google_search_googlebot_ria.txt
Q: How to view web pages at different resolutions I am developing a web site and need to see how it will look at different resolutions. The catch is that it must work on our Intranet. Is there a free solution? A: For Firefox, Web Developer Toolbar (https://addons.mozilla.org/en-US/firefox/addon/60) A: Type in the address bar of your favorite browser: javascript:resizeTo(1024,768) Then adjust to your desired resolution. You can even save these as bookmarklets in your favorites/bookmarks. A: For Internet Explorer there's the Internet Explorer Developer Toolbar. It lets you select resolutions quite easily. A: This may not work if you are limited to testing internally, but Browsercam is a fantastic service when you want to check how well your website performs on various OS/browser combinations. It takes the guesswork out of browser testing. If you must stay within your internal network then why don't you set up a virtual PC with the software you need? It's very easy to maintain a few sets of virtual PCs and simply boot the ones you need to test with. And of course you can test with various add-ons etc. using this method. A: I use a product called UltraMon. Technically, it's a product that allows you to have easier management of your multiple monitors. The cool thing (and what is important to this question) is that you can set up multiple "Display Profiles". I have two set up: My default 1280*1024 on both monitors One monitor at 1280*1024 and the other at 1024*768 It allows you to set up as many profiles as you want and I just switch between them to check different resolutions.
How to view web pages at different resolutions
I am developing a web site and need to see how it will look at different resolutions. The catch is that it must work on our Intranet. Is there a free solution?
[ "For Firefox, Web Developer Toolbar (https://addons.mozilla.org/en-US/firefox/addon/60)\n", "Type in the address bar of your favorite browser: javascript:resizeTo(1024,768)\nThen adjust to your desired resolution. You can even save these as bookmarklets in your favorites/bookmarks.\n", "For Internet Explorer there's the Internet Explorer Developer Toolbar. It lets you select resolutions quite easily.\n", "This may not work if you are limited to test internally but Browsercam is a fantastic service when you want to check how well your website performs on various OS/browser combinations. It takes the guesswork out of browsertesting.\nIf you must stay within yout internal network then why don't you setup a virtual PC with the software you need? It's very easy to maintan a few sets of virtual PCs and simply boot the ones you need to test with. And of course you can test with various add-ons etc. using this method.\n", "I use a product called UltraMon. Technically, it's a product that allows you to have an easier management of your multiple monitor's. The cool thing (and what is important to this question) is that you can set up multiple \"Display Profile's\". I have two set up:\n\nMy default 1280*1024 on both monitors\nOne monitor at 1280*1024 and the other at 1024*768\n\nIt allows you to setup as many profiles as you want and I just switch between them to check different resolutions.\n" ]
[ 8, 3, 2, 0, 0 ]
[ "Also on Internet Explorer 7 is IE7Pro. It also provides some gadgets that aren't in the Developer Toolbar. I have both installed, and use both quite often.\n" ]
[ -1 ]
[ "internet_explorer", "resolution" ]
stackoverflow_0000062232_internet_explorer_resolution.txt
Q: How can I determine the type of a blessed reference in Perl? In Perl, an object is just a reference to any of the basic Perl data types that has been blessed into a particular class. When you use the ref() function on an unblessed reference, you are told what data type the reference points to. However, when you call ref() on a blessed reference, you are returned the name of the package that reference has been blessed into. I want to know the actual underlying type of the blessed reference. How can I determine this? A: Scalar::Util::reftype() is the cleanest solution. The Scalar::Util module was added to the Perl core in version 5.7 but is available for older versions (5.004 or later) from CPAN. You can also probe with UNIVERSAL::isa(): $x->isa('HASH') # if $x is known to be an object UNIVERSAL::isa($x, 'HASH') # if $x might not be an object or reference Obviously, you'd also have to check for ARRAY and SCALAR types. The UNIVERSAL module (which serves as the base class for all objects) has been part of the core since Perl 5.003. Another way -- easy but a little dirty -- is to stringify the reference. Assuming that the class hasn't overloaded stringification you'll get back something resembling Class=HASH(0x1234ABCD), which you can parse to extract the underlying data type: my ($type) = ($object =~ /=(.+)\(0x[0-9a-f]+\)$/i); A: You probably shouldn't do this. The underlying type of an object is an implementation detail you shouldn't mess with. Why would you want to know this? A: And my first thought on this was: "Objects in Perl are always hash refs, so what the heck?" But, Scalar::Util::reftype is the answer. Thanks for putting the question here. Here is a code snippet to prove this (in case it is of any use to anyone). $> perl -e 'use strict; use warnings "all"; my $x = [1]; bless ($x, "ABC::Def"); use Data::Dumper; print Dumper $x; print ref($x) . "\n"; use Scalar::Util "reftype"; print reftype($x) . "\n"' Output: $VAR1 = bless( [ 1 ], 'ABC::Def' ); ABC::Def ARRAY
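A small sketch tying the first answer together — blessed(), also from Scalar::Util, tells you whether $x is an object at all before you ask reftype() for the underlying type:

use Scalar::Util qw(blessed reftype);

if (defined blessed($x)) {
    # $x is a blessed reference: report class name plus underlying type
    print "object of class ", blessed($x),
          " built on a ", reftype($x), " reference\n";
} else {
    print "\$x is not a blessed reference\n";
}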
How can I determine the type of a blessed reference in Perl?
In Perl, an object is just a reference to any of the basic Perl data types that has been blessed into a particular class. When you use the ref() function on an unblessed reference, you are told what data type the reference points to. However, when you call ref() on a blessed reference, you are returned the name of the package that reference has been blessed into. I want to know the actual underlying type of the blessed reference. How can I determine this?
[ "Scalar::Util::reftype() is the cleanest solution. The Scalar::Util module was added to the Perl core in version 5.7 but is available for older versions (5.004 or later) from CPAN.\nYou can also probe with UNIVERSAL::isa():\n$x->isa('HASH') # if $x is known to be an object\nUNIVERSAL::isa($x, 'HASH') # if $x might not be an object or reference\n\nObviously, you'd also have to check for ARRAY and SCALAR types. The UNIVERSAL module (which serves as the base class for all objects) has been part of the core since Perl 5.003.\nAnother way -- easy but a little dirty -- is to stringify the reference. Assuming that the class hasn't overloaded stringification you'll get back something resembling Class=HASH(0x1234ABCD), which you can parse to extract the underlying data type:\nmy $type = ($object =~ /=(.+)\\(0x[0-9a-f]+\\)$/i);\n\n", "You probably shouldn't do this. The underlying type of an object is an implementation detail you shouldn't mess with. Why would you want to know this?\n", "And my first thought on this was: \"Objects in Perl are always hash refs, so what the hack?\"\nBut, Scalar::Util::reftype is the answer. Thanks for putting the question here.\nHere is a code snippet to prove this.. (in case it is of any use to anyone).\n\n$> perl -e 'use strict; use warnings \"all\";\n my $x = [1]; bless ($x, \"ABC::Def\");\n use Data::Dumper; print Dumper $x;\n print ref($x) . \"\\n\";\n use Scalar::Util \"reftype\"; print reftype($x) . \"\\n\"'`\n\nOutput:\n\n$VAR1 = bless( [\n 1\n ], 'ABC::Def' );\nABC::Def\nARRAY\n\n" ]
[ 20, 6, 2 ]
[]
[]
[ "perl", "reference", "types" ]
stackoverflow_0000011085_perl_reference_types.txt
Q: How can I fix an issue in IE where borders don't show up when the mouse isn't hovered over an image I am trying to create a rather simple effect on a set of images. When an image doesn't have the mouse over it, I'd like it to have a simple, gray border. When it does have the mouse over it, I'd like it to have a different, "selected", border. The following CSS works great in Firefox: .myImage a img { border: 1px solid grey; padding: 3px; } .myImage a:hover img { border: 3px solid blue; padding: 1px; } However, in IE, borders do not appear when the mouse isn't hovered over the image. My Google-fu tells me there is a bug in IE that is causing this problem. Unfortunately, I can't seem to locate a way to fix that bug. A: Try using a different colour. I'm not sure IE understands 'grey' (instead, use 'gray'). A: The following works in IE7, IE6, and FF3. The key was to use a:link:hover. IE6 turned the A element into a block element which is why I added the float stuff to shrink-wrap the contents. Note that it's in Standards mode. Don't know what would happen in quirks mode. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title></title> <style type="text/css"> a, a:visited, a:link, a *, a:visited *, a:link * { border: 0; } .myImage a { float: left; clear: both; border: 0; margin: 3px; padding: 1px; } .myImage a:link:hover { float: left; clear: both; border: 3px solid blue; padding: 1px; margin: 0; display:block; } </style> </head> <body> <div class="myImage"><a href="#"><img src="http://stackoverflow.com/Content/Img/stackoverflow-logo-250.png"></a></div> <div class="myImage"><a href="#"><img src="http://stackoverflow.com/Content/Img/stackoverflow-logo-250.png"></a></div> </body> </html> A: In my experience IE doesn't work well with pseudo-classes. I think the most universal way to handle this is to use Javascript to apply the CSS class to the element. CSS: .standard_border { border: 1px solid grey; padding: 3px; } .hover_border { border: 3px solid blue; padding: 1px; } Inline Javascript: <img src="image.jpg" alt="" class="standard_border" onmouseover="this.className='hover_border'" onmouseout="this.className='standard_border'" /> A: IE has problems with the :hover pseudo-class on anything other than anchor elements so you need to change the element the hover is affecting to the anchor itself. So, if you added a class like "image" to your anchor and altered your markup to something like this: <div class="myImage"><a href="..." class="image"><img .../></a></div> You could then alter your CSS to look like this: .myImage a.image { border: 1px solid grey; padding: 3px; } .myImage a.image:hover { border: 3px solid blue; padding: 1px; } Which should mimic the desired effect by placing the border on the anchor instead of the image. Just as a note, you may need something like the following in your CSS to eliminate the image's default border: .myImage a img { border: none; } A: Try using the background instead of the border. It is not the same but it works in IE (take a look at the menu on my site: www.monex-finance.net). A: <!--[if lt IE 7]> <script src="http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE7.js" type="text/javascript"></script> <![endif]--> put that in your header, should fix some of the IE bugs.
How can I fix an issue in IE where borders don't show up when the mouse isn't hovered over an image
I am trying to create a rather simple effect on a set of images. When an image doesn't have the mouse over it, I'd like it to have a simple, gray border. When it does have the mouse over it, I'd like it to have a different, "selected", border. The following CSS works great in Firefox: .myImage a img { border: 1px solid grey; padding: 3px; } .myImage a:hover img { border: 3px solid blue; padding: 1px; } However, in IE, borders do not appear when the mouse isn't hovered over the image. My Google-fu tells me there is a bug in IE that is causing this problem. Unfortunately, I can't seem to locate a way to fix that bug.
[ "Try using a different colour. I'm not sure IE understands 'grey' (instead, use 'gray').\n", "The following works in IE7, IE6, and FF3. The key was to use a:link:hover. IE6 turned the A element into a block element which is why I added the float stuff to shrink-wrap the contents.\nNote that it's in Standards mode. Dont' know what would happen in quirks mode.\n<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\" \"http://www.w3.org/TR/html4/loose.dtd\">\n<html>\n <head>\n <title></title>\n <style type=\"text/css\">\n a, a:visited, a:link, a *, a:visited *, a:link * { border: 0; }\n .myImage a\n {\n float: left;\n clear: both;\n border: 0;\n margin: 3px;\n padding: 1px;\n }\n .myImage a:link:hover\n {\n float: left;\n clear: both;\n border: 3px solid blue;\n padding: 1px;\n margin: 0;\n display:block;\n }\n </style>\n </head>\n <body>\n <div class=\"myImage\"><a href=\"#\"><img src=\"http://stackoverflow.com/Content/Img/stackoverflow-logo-250.png\"></a></div>\n <div class=\"myImage\"><a href=\"#\"><img src=\"http://stackoverflow.com/Content/Img/stackoverflow-logo-250.png\"></a></div>\n </body>\n</html>\n\n", "In my experience IE doesn't work well with pseudo-classes. I think the most universal way to handle this is to use Javascript to apply the CSS class to the element.\nCSS:\n.standard_border\n{\n border: 1px solid grey;\n padding: 3px;\n}\n.hover_border\n{\n border: 3px solid blue;\n padding: 1px;\n}\n\nInline Javascript:\n<img src=\"image.jpg\" alt=\"\" class=\"standard_border\" onmouseover=\"this.className='hover_border'\" onmouseout=\"this.className='standard_border'\" />\n\n", "IE has problems with the :hover pseudo-class on anything other than anchor elements so you need to change the element the hover is affecting to the anchor itself. So, if you added a class like \"image\" to your anchor and altered your markup to something like this:\n<div class=\"myImage\"><a href=\"...\" class=\"image\"><img .../></a></div>\n\nYou could then alter your CSS to look like this:\n.myImage a.image\n{\n border: 1px solid grey;\n padding: 3px;\n}\n.myImage a.image:hover\n{\n border: 3px solid blue;\n padding: 1px;\n}\n\nWhich should mimic the desired effect by placing the border on the anchor instead of the image. Just as a note, you may need something like the following in your CSS to eliminate the image's default border:\n.myImage a img {\n border: none;\n}\n\n", "Try using the background instead of the border.\nIt is not the same but it works in IE (take a look at the menu on my site: www.monex-finance.net).\n", "<!--[if lt IE 7]>\n<script src=\"http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE7.js\" type=\"text/javascript\"></script>\n<![endif]-->\n\nput that in your header, should fix some of the ie bugs.\n" ]
[ 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "css", "image", "internet_explorer" ]
stackoverflow_0000063885_css_image_internet_explorer.txt
Q: Draining Standard Error in Java When launching a process from Java, both stderr and stdout can block on output if I don't read from the pipes. Currently I have a thread that pro-actively reads from one and the main thread blocks on the other. Is there an easy way to join the two streams or otherwise cause the subprocess to continue while not losing the data in stderr? A: Set the redirectErrorStream property on ProcessBuilder to send stderr output to stdout: ProcessBuilder builder = new ProcessBuilder(command); builder.redirectErrorStream(true); You should then create a thread to deal with the process stream, something like the following: Process p = builder.start(); InputHandler outHandler = new InputHandler(p.getInputStream()); Where InputHandler is defined as: private static class InputHandler extends Thread { private final InputStream is; private final ByteArrayOutputStream os; public InputHandler(InputStream input) { this.is = input; this.os = new ByteArrayOutputStream(); } public void run() { try { int c; while ((c = is.read()) != -1) { os.write(c); } } catch (Throwable t) { throw new IllegalStateException(t); } } public String getOutput() { try { os.flush(); } catch (Throwable t) { throw new IllegalStateException(t); } return os.toString(); } } Alternatively, just create two InputHandlers for the InputStream and ErrorStream. Knowing that the program will block if you don't read them is 90% of the battle :) A: Just have two threads, one reading from stdout, one from stderr?
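A minimal usage sketch for the accepted answer above — it assumes the InputHandler class from that answer is visible (e.g., as a nested class), and the command line is a placeholder, not from the original post:

public class DrainDemo {
    public static void main(String[] args) throws Exception {
        ProcessBuilder builder = new ProcessBuilder("ls", "-l"); // placeholder command
        builder.redirectErrorStream(true);  // stderr is merged into stdout

        Process p = builder.start();
        InputHandler handler = new InputHandler(p.getInputStream());
        handler.start();   // drain the single merged pipe in the background

        p.waitFor();       // safe: the child can no longer block on a full pipe
        handler.join();    // make sure every byte has been captured
        System.out.println(handler.getOutput());
    }
}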
Draining Standard Error in Java
When launching a process from Java, both stderr and stdout can block on output if I don't read from the pipes. Currently I have a thread that pro-actively reads from one and the main thread blocks on the other. Is there an easy way to join the two streams or otherwise cause the subprocess to continue while not losing the data in stderr?
[ "Set the redirectErrorStream property on ProcessBuilder to send stderr output to stdout:\nProcessBuilder builder = new ProcessBuilder(command);\nbuilder.redirectErrorStream(true);\n\nYou should then create a thread to deal with the process stream, something like the following:\nProcess p = builder.start();\n\nInputHandler outHandler = new InputHandler(p.getInputStream());\n\nWhere InputHandler is defined as:\nprivate static class InputHandler extends Thread {\n\n private final InputStream is;\n\n private final ByteArrayOutputStream os;\n\n public InputHandler(InputStream input) {\n this.is = input;\n this.os = new ByteArrayOutputStream();\n }\n\n public void run() {\n try {\n int c;\n while ((c = is.read()) != -1) {\n os.write(c);\n }\n } catch (Throwable t) {\n throw new IllegalStateException(t);\n }\n }\n\n public String getOutput() {\n try {\n os.flush();\n } catch (Throwable t) {\n throw new IllegalStateException(t);\n }\n return os.toString();\n }\n\n}\n\nAlternatively, just create two InputHandlers for the InputStream and ErrorStream. Knowing that the program will block if you don't read them is 90% of the battle :)\n", "Just have two threads, one reading from stdout, one from stderr?\n" ]
[ 4, 0 ]
[]
[]
[ "java", "multithreading", "process_management" ]
stackoverflow_0000064000_java_multithreading_process_management.txt
Q: Hide directories in wxGenericDirCtrl I am using a wxGenericDirCtrl, and I would like to know if there is a way to hide directories; I'd especially like to hide siblings of parent nodes. For example if my directory structure looks like this: +-a | +-b | | | +-whatever | +-c | | | +-d | | | +-e | | | +-f | +-g | +-whatever If my currently selected directory is /a/c/d is there any way to hide b and g, so that the tree looks like this in my ctrl: +-a | +-c | +-[d] | +-e | +-f I'm currently working with a directory structure that has lots and lots of directories that are irrelevant to most users, so it would be nice to be able to clean it up. Edit: If it makes a difference, I am using wxPython, and so far, I have only tested my code on linux using the GTK backend, but I do plan to make it multi-platform and use it on Windows and Mac using the native backends. A: Listing/walking directories in Python is very easy, so I would recommend trying to "roll your own" using one of the simple tree controls (such as TreeCtrl or CustomTreeCtrl). It should really be quite easy to call the directory listing code when some directory is expanded and return the result. A: I don't think that's possible. It would be relatively easy to add this functionality to the underlying C++ wxWidgets control, but since you're using wxPython, you'd then have to rebuild that as well which is a tremendous issue.
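A rough wxPython sketch of the "roll your own" suggestion from the first answer above — children are listed only when a node is expanded, which is also the natural place to filter out the directories you want hidden. The method names follow the classic wx.TreeCtrl API of that era; adjust for your wxPython version:

import os
import wx

class LazyDirTree(wx.TreeCtrl):
    """Directory tree that lists children only on expand."""
    def __init__(self, parent, root_path):
        wx.TreeCtrl.__init__(self, parent)
        root = self.AddRoot(root_path)
        self.SetPyData(root, root_path)
        self.SetItemHasChildren(root, True)
        self.Bind(wx.EVT_TREE_ITEM_EXPANDING, self.on_expand)

    def on_expand(self, event):
        item = event.GetItem()
        if self.GetChildrenCount(item) == 0:   # not populated yet
            path = self.GetPyData(item)
            for name in sorted(os.listdir(path)):
                full = os.path.join(path, name)
                if os.path.isdir(full):        # filter unwanted dirs here
                    child = self.AppendItem(item, name)
                    self.SetPyData(child, full)
                    self.SetItemHasChildren(child, True)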
Hide directories in wxGenericDirCtrl
I am using a wxGenericDirCtrl, and I would like to know if there is a way to hide directories; I'd especially like to hide siblings of parent nodes. For example if my directory structure looks like this: +-a | +-b | | | +-whatever | +-c | | | +-d | | | +-e | | | +-f | +-g | +-whatever If my currently selected directory is /a/c/d is there any way to hide b and g, so that the tree looks like this in my ctrl: +-a | +-c | +-[d] | +-e | +-f I'm currently working with a directory structure that has lots and lots of directories that are irrelevant to most users, so it would be nice to be able to clean it up. Edit: If it makes a difference, I am using wxPython, and so far, I have only tested my code on linux using the GTK backend, but I do plan to make it multi-platform and use it on Windows and Mac using the native backends.
[ "Listing/walking directories in Python is very easy, so I would recommend trying to \"roll your own\" using one of the simple tree controls (such as TreeCtrl or CustomTreeCtrl). It should really be quite easy to call the directory listing code when some directory is expanded and return the result.\n", "I don't think that's possible.\nIt would be relatively easy to add this functionality to the underlying C++ wxWidgets control, but since you're using wxPython, you'd then have to rebuild that as well which is a tremendous issue.\n" ]
[ 1, 0 ]
[]
[]
[ "wxpython", "wxwidgets" ]
stackoverflow_0000052844_wxpython_wxwidgets.txt
Q: Does emacs have something like vi's "set number"? Does emacs have something like vi's “set number”, so that each line starts with its line number? A: Take a look at this article. It explains various ways to add line numbers to emacs: http://www.emacswiki.org/cgi-bin/wiki/LineNumbers A: Try adding linum.el to your emacs dir / .emacs file.
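For the linum.el suggestion above, activation is usually a couple of lines in your .emacs — this assumes linum.el is on your load-path, and older versions of the library may only provide the per-buffer command:

(require 'linum)
(global-linum-mode 1)  ; line numbers in every buffer
;; or enable it per buffer with M-x linum-mode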
Does emacs have something like vi's "set number"?
Does emacs have something like vi's “set number”, so that each line starts with its line number?
[ "Take a look at this article. It explains various ways to add line numbers to emacs:\nhttp://www.emacswiki.org/cgi-bin/wiki/LineNumbers\n", "Try adding linum.el to your emacs dir / .emacs file.\n" ]
[ 2, 1 ]
[]
[]
[ "editor", "emacs" ]
stackoverflow_0000064293_editor_emacs.txt
Q: UserControl Property of type Enum displays in designer as bool or not at all I have a usercontrol that has several public properties. These properties automatically show up in the properties window of the VS2005 designer under the "Misc" category. Except two of the properties which are enumerations don't show up correctly. The first one uses the following enum: public enum VerticalControlAlign { Center, Top, Bottom } This does not show up in the designer at all. The second uses this enum: public enum AutoSizeMode { None, KeepInControl } This one shows up, but the designer seems to think it's a bool and only shows True and False. And when you build a project using the controls it will say that it can't convert type bool to AutoSizeMode. Also, these enums are declared globally in the namespace, so they are accessible everywhere. Any ideas? A: I made a little test with your problem (I'm not sure if I understood it correctly), and these properties show up in the designer correctly, and all enums are shown appropriately. If this isn't what you're looking for, then please explain yourself further. Don't get hung up on the _Ugly part thrown in there. I just used it for a quick test. using System.ComponentModel; using System.Windows.Forms; namespace SampleApplication { public partial class CustomUserControl : UserControl { public CustomUserControl() { InitializeComponent(); } /// <summary> /// We're hiding AutoSizeMode in UserControl here. /// </summary> public new enum AutoSizeMode { None, KeepInControl } public enum VerticalControlAlign { Center, Top, Bottom } /// <summary> /// Note that you cannot have a property /// called VerticalControlAlign if it is /// already defined in the scope. /// </summary> [DisplayName("VerticalControlAlign")] [Category("stackoverflow.com")] [Description("Sets the vertical control align")] public VerticalControlAlign VerticalControlAlign_Ugly { get { return m_align; } set { m_align = value; } } private VerticalControlAlign m_align; /// <summary> /// Note that you cannot have a property /// called AutoSizeMode if it is /// already defined in the scope. /// </summary> [DisplayName("AutoSizeMode")] [Category("stackoverflow.com")] [Description("Sets the auto size mode")] public AutoSizeMode AutoSizeMode_Ugly { get { return m_autoSize; } set { m_autoSize = value; } } private AutoSizeMode m_autoSize; } } A: For starters, the second enum, AutoSizeMode, is declared in System.Windows.Forms. So that might cause the designer some issues. Secondly, you might find the following page on MSDN useful: http://msdn.microsoft.com/en-us/library/tk67c2t8.aspx A: Some things to try (designer mode in VS2005 I have found to be somewhat flaky): Open your web.config and add: batch="false" to your <compilation> tag. Try setting defaults to your enums: public enum VerticalControlAlign { Center = 0, Top = 1, Bottom = 2 } A: You do not need to make your enums global in order for them to be visible in the designer. Clarify please: if you add another value to your AutoSizeMode enum, does it still appear as a boolean? If (instead) you change the name of the enum, does it still appear as a boolean?
UserControl Property of type Enum displays in designer as bool or not at all
I have a usercontrol that has several public properties. These properties automatically show up in the properties window of the VS2005 designer under the "Misc" category. Except two of the properties which are enumerations don't show up correctly. The first one uses the following enum: public enum VerticalControlAlign { Center, Top, Bottom } This does not show up in the designer at all. The second uses this enum: public enum AutoSizeMode { None, KeepInControl } This one shows up, but the designer seems to think it's a bool and only shows True and False. And when you build a project using the controls it will say that it can't convert type bool to AutoSizeMode. Also, these enums are declared globally in the namespace, so they are accessible everywhere. Any ideas?
[ "I made a little test with your problem (I'm not sure if I understood it correctly), and these properties shows up in the designer correctly, and all enums are shown appropriately. If this isn't what you're looking for, then please explain yourself further. \nDon't get hang up on the _Ugly part thrown in there. I just used it for a quick test.\nusing System.ComponentModel;\nusing System.Windows.Forms;\n\nnamespace SampleApplication\n{\n public partial class CustomUserControl : UserControl\n {\n public CustomUserControl()\n {\n InitializeComponent();\n }\n\n /// <summary>\n /// We're hiding AutoSizeMode in UserControl here.\n /// </summary>\n public new enum AutoSizeMode { None, KeepInControl }\n public enum VerticalControlAlign { Center, Top, Bottom }\n\n /// <summary>\n /// Note that you cannot have a property \n /// called VerticalControlAlign if it is \n /// already defined in the scope.\n /// </summary>\n [DisplayName(\"VerticalControlAlign\")]\n [Category(\"stackoverflow.com\")]\n [Description(\"Sets the vertical control align\")]\n public VerticalControlAlign VerticalControlAlign_Ugly\n {\n get { return m_align; }\n set { m_align = value; }\n }\n private VerticalControlAlign m_align; \n\n /// <summary>\n /// Note that you cannot have a property \n /// called AutoSizeMode if it is \n /// already defined in the scope.\n /// </summary>\n [DisplayName(\"AutoSizeMode\")]\n [Category(\"stackoverflow.com\")]\n [Description(\"Sets the auto size mode\")]\n public AutoSizeMode AutoSizeMode_Ugly\n {\n get { return m_autoSize; }\n set { m_autoSize = value; }\n }\n private AutoSizeMode m_autoSize; \n }\n}\n\n", "For starters, the second enum, AutoSizeMode is declared in System.Windows.Forms. So that might cause the designer some issues.\nSecondly, you might find the following page on MSDN useful:\nhttp://msdn.microsoft.com/en-us/library/tk67c2t8.aspx\n", "Some things to try (designer mode in VS2005 I have found to be somewhat flaky):\n\nOpen your web.config and add: batch=\"false\" to your <compilation> tag.\nTry setting defaults to your enums:\npublic enum VerticalControlAlign\n{\n Center = 0,\n Top = 1,\n Bottom = 2\n}\n\n\n", "You do not need to make your enums global in order for them to be visible in the designer.\nClarify please: \n\nif you add another value to your AutoSizeMode enum, does it still appear as a boolean? \nIf (instead) you change the name of enum, does it still appear as a boolean?\n\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "c#", "enums", "user_controls", "visual_studio" ]
stackoverflow_0000064139_c#_enums_user_controls_visual_studio.txt
Q: Copying databases to remote locations Our EPOS system copies data by compressing the database into a zip file, and manually copying to each till, using shared directories. Each branch is linked to the main location, using a VPN, which can be problematic, but is required for the file sharing to work correctly. Since our database system currently does not support replication, is there another solution for copying data or should we migrate our software to another database? A: Replication is the "right" way to go, so if migrating to another database is an option (is it really?), that's the best route. You might consider a utility that queries all the tables for raw data (in CSV?), sending that to files. Then at least you don't have to take the database down to do the backup.
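A rough sketch of the query-to-CSV idea from the answer above, using Python's DB-API and csv modules — the driver, database name, and table names are all placeholders, not from the original post:

import csv
import sqlite3  # stand-in driver; any DB-API module works the same way

def dump_table(conn, table, out_path):
    """Write one table's rows to a CSV file, header row included."""
    cur = conn.execute("SELECT * FROM %s" % table)  # table names come from our own list
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])
        writer.writerows(cur)

conn = sqlite3.connect("epos.db")               # placeholder database
for table in ("sales", "stock", "prices"):      # placeholder table names
    dump_table(conn, table, table + ".csv")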
Copying databases to remote locations
Our EPOS system copies data by compressing the database into a zip file, and manually copying to each till, using shared directories. Each branch is linked to the main location, using a VPN, which can be problematic, but is required for the file sharing to work correctly. Since our database system currently does not support replication, is there another solution for copying data or should we migrate our software to another database?
[ "Replication is the \"right\" way to go, so if migrating to another database is an option (is it really?), that's the best route.\nYou might consider a utility that queries all the tables for raw data (in CSV?), sending that to files. Then at least you don't have to take the database down to do the backup.\n" ]
[ 1 ]
[]
[]
[ "database", "file_sharing", "point_of_sale", "vpn" ]
stackoverflow_0000064314_database_file_sharing_point_of_sale_vpn.txt
Q: Is there a way to "diff" two XMLs element-wise? I need to check the differences between two XMLs, but not "blindly". Given that both use the same DTD, I'm actually interested in verifying whether they have the same number of elements or if there are differences. A: xmldiff from Logilab diffxml A commercial one included in XMLSpy A: oXygen has good XML diff (and merge) support.
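Since the question is really about comparing element counts, here is a quick standard-library sketch in Python — an alternative to the tools in the answers above, with file paths taken from the command line:

import sys
from collections import Counter
from xml.etree import ElementTree

def tag_counts(path):
    # Count how many elements of each tag the document contains
    tree = ElementTree.parse(path)
    return Counter(el.tag for el in tree.iter())

a, b = tag_counts(sys.argv[1]), tag_counts(sys.argv[2])
for tag in sorted(set(a) | set(b)):
    if a[tag] != b[tag]:
        print("%s: %d vs %d" % (tag, a[tag], b[tag]))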
Is there a way to "diff" two XMLs element-wise?
I need to check the differences between two XMLs, but not "blindly". Given that both use the same DTD, I'm actually interested in verifying whether they have the same number of elements or if there are differences.
[ "\nxmldiff from Logilab\ndiffxml \nA commercial one include in XMLSpy\n\n", "oXygen has good XML diff (and merge) support.\n" ]
[ 1, 0 ]
[]
[]
[ "comparison", "diff", "dtd", "xml" ]
stackoverflow_0000063756_comparison_diff_dtd_xml.txt