Dataset schema:
- content: string (86 to 88.9k characters)
- title: string (0 to 150 characters)
- question: string (1 to 35.8k characters)
- answers: list
- answers_scores: list
- non_answers: list
- non_answers_scores: list
- tags: list
- name: string (30 to 130 characters)
Q: Throwing exceptions in ASP.NET C# Is there a difference between just saying throw; and throw ex; assuming ex is the exception you're catching? A: throw ex; will erase your stacktrace. Don't do this unless you mean to clear the stacktrace. Just use throw; A: Here is a simple code snippet that will help illustrate the difference. The difference is that throw ex will reset the stack trace as if the line "throw ex;" were the source of the exception. Code: using System; namespace StackOverflowMess { class Program { static void TestMethod() { throw new NotImplementedException(); } static void Main(string[] args) { try { //example showing the output of throw ex try { TestMethod(); } catch (Exception ex) { throw ex; } } catch (Exception ex) { Console.WriteLine(ex.ToString()); } Console.WriteLine(); Console.WriteLine(); try { //example showing the output of throw try { TestMethod(); } catch (Exception ex) { throw; } } catch (Exception ex) { Console.WriteLine(ex.ToString()); } Console.ReadLine(); } } } Output (notice the different stack trace): System.NotImplementedException: The method or operation is not implemented. at StackOverflowMess.Program.Main(String[] args) in Program.cs:line 23 System.NotImplementedException: The method or operation is not implemented. at StackOverflowMess.Program.TestMethod() in Program.cs:line 9 at StackOverflowMess.Program.Main(String[] args) in Program.cs:line 43 A: You have two options: throw;, or throw the original exception as the inner exception of a new exception, depending on what you need.
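A hedged sketch of the third option: wrapping the caught exception as the InnerException of a new one, so both stack traces survive. The wrapper type and message below are illustrative, not part of the original answers; later framework versions (4.5 and up) also offer ExceptionDispatchInfo.Capture(ex).Throw() for rethrowing elsewhere with the trace intact.

try
{
    TestMethod();
}
catch (Exception ex)
{
    // Option 1: "throw;" rethrows and preserves the original stack trace.
    // Option 2: "throw ex;" rethrows but resets the trace to this line.
    // Option 3: wrap it, keeping the original trace via InnerException:
    throw new InvalidOperationException("TestMethod failed", ex);
}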
Throwing exceptions in ASP.NET C#
Is there a difference between just saying throw; and throw ex; assuming ex is the exception you're catching?
[ "throw ex; will erase your stacktrace. Don't do this unless you mean to clear the stacktrace. Just use throw;\n", "Here is a simple code snippet that will help illustrate the difference. The difference being that throw ex will reset the stack trace as if the line \"throw ex;\" were the source of the exception.\nCode:\nusing System;\n\nnamespace StackOverflowMess\n{\n class Program\n {\n static void TestMethod()\n {\n throw new NotImplementedException();\n }\n\n static void Main(string[] args)\n {\n try\n {\n //example showing the output of throw ex\n try\n {\n TestMethod();\n }\n catch (Exception ex)\n {\n throw ex;\n }\n }\n catch (Exception ex)\n {\n Console.WriteLine(ex.ToString());\n }\n\n Console.WriteLine();\n Console.WriteLine();\n\n try\n {\n //example showing the output of throw\n try\n {\n TestMethod();\n }\n catch (Exception ex)\n {\n throw;\n }\n }\n catch (Exception ex)\n {\n Console.WriteLine(ex.ToString());\n }\n\n Console.ReadLine();\n }\n }\n}\n\nOutput (notice the different stack trace):\n\nSystem.NotImplementedException: The method or operation is not implemented.\nat StackOverflowMess.Program.Main(String[] args) in Program.cs:line 23\nSystem.NotImplementedException: The method or operation is not implemented.\nat StackOverflowMess.Program.TestMethod() in Program.cs:line 9\nat StackOverflowMess.Program.Main(String[] args) in Program.cs:line 43\n\n", "You have two options throw; or throw the orginal exceptional as an innerexception of a new exception. Depending on what you need.\n" ]
[ 44, 18, 2 ]
[]
[]
[ ".net", "c#", "exception" ]
stackoverflow_0000088490_.net_c#_exception.txt
Q: How to replace $*=1 with an alternative now $* is no longer supported I'm a complete perl novice, am running a perl script using perl 5.10 and getting this warning: $* is no longer supported at migrate.pl line 380. Can anyone describe what $* did and what the recommended replacement of it is now? Alternatively if you could point me to documentation that describes this that would be great. The script I'm running is to migrate a source code database from vss to svn and can be found here: http://www.x2systems.com/files/migrate.pl.txt The two snippets of code that use it are: $* = 1; $/ = ':'; $cmd = $SSCMD . " Dir -I- \"$proj\""; $_ = `$cmd`; # what this next expression does is to merge wrapped lines like: # $/DeviceAuthority/src/com/eclyptic/networkdevicedomain/deviceinterrogator/excep # tion: # into: # $/DeviceAuthority/src/com/eclyptic/networkdevicedomain/deviceinterrogator/exception: s/\n((\w*\-*\.*\w*\/*)+\:)/$1/g; $* = 0; and then some ways later on: $cmd = $SSCMD . " get -GTM -W -I-Y -GL\"$localdir\" -V$version \"$file\" 2>&1"; $out = `$cmd`; # get rid of stupid VSS warning messages $* = 1; $out =~ s/\n?Project.*rebuilt\.//g; $out =~ s/\n?File.*rebuilt\.//g; $out =~ s/\n.*was moved out of this project.*rebuilt\.//g; $out =~ s/\nContinue anyway.*Y//g; $* = 0; many thanks, Rory A: From perlvar: Use of $* is deprecated in modern Perl, supplanted by the /s and /m modifiers on pattern matching. If you have access to the place where it's being matched just add it to the end: $haystack =~ m/.../sm; If you only have access to the string, you can surround the expression with qr/(?ms-ix:$expr)/; Or in your case: s/\n((\w*\-*\.*\w*\/*)+\:)/$1/gsm; A: From Perl 5.8 version of perlvar: Set to a non-zero integer value to do multi-line matching within a string [...] Use of $* is deprecated in modern Perl, supplanted by the /s and /m modifiers on pattern matching. While using /s and /m is much better, you need to set the modifiers (appropriately!) for each regular expression. perlvar also says "This variable influences the interpretation of only ^ and $." which gives the impression that it's equivalent to /m only and not /s. Note that $* is a global variable. Because the change to it is not made local with the local keyword, it will affect all regular expressions in the program, not just those that follow it in the block. This will make it more difficult to update the script correctly. A: From perldoc perlvar: $* Set to a non-zero integer value to do multi-line matching within a string, 0 (or undefined) to tell Perl that it can assume that strings contain a single line, for the purpose of optimizing pattern matches. Pattern matches on strings containing multiple newlines can produce confusing results when $* is 0 or undefined. Default is undefined. (Mnemonic: * matches multiple things.) This variable influences the interpretation of only ^ and $. A literal newline can be searched for even when $* == 0. Use of $* is deprecated in modern Perl, supplanted by the /s and /m modifiers on pattern matching. Assigning a non-numerical value to $* triggers a warning (and makes $* act as if $* == 0), while assigning a numerical value to $* makes that an implicit int is applied on the value. A: It turns on multi-line mode. 
Since perl 5.0 (from 1994), the correct way to do that is adding the m and/or the s modifier to your regexps, like this s/\n?Project.*rebuilt\.//msg A: It was basically a way of saying that in subsequent regexes (s/// or m//), the ^ or $ assertions should be able to match before or after newlines embedded in the string. The recommended equivalent is the m modifier at the end of your regex (e.g., s/\n((\w*\-*\.*\w*\/*)+\:)/$1/gm;).
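Since perlvar says $* influenced only the ^ and $ anchors, /m is the closer equivalent, while /s separately controls whether . crosses newlines. A small hedged sketch of the distinction (the strings are invented, not taken from the migration script):

my $text = "first line\nsecond line\n";

# /m: ^ and $ also match at embedded newlines -- the behaviour $* = 1 enabled.
my @starts = $text =~ /^(\w+)/mg;          # ('first', 'second')

# /s: . matches newlines too; a separate concern often conflated with $*.
my ($span) = $text =~ /(first.*second)/s;  # the capture spans the newline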
How to replace $*=1 with an alternative now $* is no longer supported
I'm a complete perl novice, am running a perl script using perl 5.10 and getting this warning: $* is no longer supported at migrate.pl line 380. Can anyone describe what $* did and what the recommended replacement of it is now? Alternatively if you could point me to documentation that describes this that would be great. The script I'm running is to migrate a source code database from vss to svn and can be found here: http://www.x2systems.com/files/migrate.pl.txt The two snippets of code that use it are: $* = 1; $/ = ':'; $cmd = $SSCMD . " Dir -I- \"$proj\""; $_ = `$cmd`; # what this next expression does is to merge wrapped lines like: # $/DeviceAuthority/src/com/eclyptic/networkdevicedomain/deviceinterrogator/excep # tion: # into: # $/DeviceAuthority/src/com/eclyptic/networkdevicedomain/deviceinterrogator/exception: s/\n((\w*\-*\.*\w*\/*)+\:)/$1/g; $* = 0; and then some ways later on: $cmd = $SSCMD . " get -GTM -W -I-Y -GL\"$localdir\" -V$version \"$file\" 2>&1"; $out = `$cmd`; # get rid of stupid VSS warning messages $* = 1; $out =~ s/\n?Project.*rebuilt\.//g; $out =~ s/\n?File.*rebuilt\.//g; $out =~ s/\n.*was moved out of this project.*rebuilt\.//g; $out =~ s/\nContinue anyway.*Y//g; $* = 0; many thanks, Rory
[ "From perlvar:\n\nUse of $* is deprecated in modern Perl, supplanted by the /s and /m modifiers on pattern matching.\n\nIf you have access to the place where it's being matched just add it to the end:\n $haystack =~ m/.../sm;\n\nIf you only have access to the string, you can surround the expression with \n qr/(?ms-ix:$expr)/;\n\nOr in your case: \ns/\\n((\\w*\\-*\\.*\\w*\\/*)+\\:)/$1/gsm;\n\n", "From Perl 5.8 version of perlvar:\n\nSet to a non-zero integer value to do\n multi-line matching within a string\n [...] Use of $* is deprecated in\n modern Perl, supplanted by the /s and\n /m modifiers on pattern matching.\n\nWhile using /s and /m is much better, you need to set the modifiers (appropriately!) for each regular expression.\nperlvar also says \"This variable influences the interpretation of only ^ and $.\" which gives the impression that it's equivalent to /m only and not /s.\nNote that $* is a global variable. Because the change to it is not made local with the local keyword, it will affect all regular expressions in the program, not just those that follow it in the block. This will make it more difficult to update the script correctly.\n", "From perldoc perlvar:\n\n$*\nSet to a non-zero integer value to do multi-line matching within a string, 0 (or undefined) to tell Perl that it can assume that strings contain a single line, for the purpose of optimizing pattern matches. Pattern matches on strings containing multiple newlines can produce confusing results when $* is 0 or undefined. Default is undefined. (Mnemonic: * matches multiple things.) This variable influences the interpretation of only ^ and $. A literal newline can be searched for even when $* == 0.\nUse of $* is deprecated in modern Perl, supplanted by the /s and /m modifiers on pattern matching.\nAssigning a non-numerical value to $* triggers a warning (and makes $* act as if $* == 0), while assigning a numerical value to $* makes that an implicit int is applied on the value.\n\n", "It turns on multi-line mode. Since perl 5.0 (from 1994), the correct way to do that is adding a m and/or the s modifier to your regexps, like this\n s/\\n?Project.*rebuilt\\.//msg\n\n", "It was basically a way of saying that in subsequent regexes (s/// or m//), the ^ or $ assertions should be able to match before or after newlines embedded in the string.\nThe recommended equivalent is the m modifier at the end of your regex (e.g., s/\\n((\\w*-*.*\\w*/*)+:)/$1/gm;).\n" ]
[ 13, 4, 4, 1, 1 ]
[]
[]
[ "migrate", "perl" ]
stackoverflow_0000088518_migrate_perl.txt
Q: Building Apps for Motorola Cell Phone I have an L6 phone from motorola, a usb cable to connect it to my computer, and the Tools for Phones software so I can do things like upload my own custom ringtones or download pictures from the phone's camera. I have some ideas for programs I'd like to run on the phone, and it supports java, but I don't see anything in the software that I have for uploading them. Even if they did, I wouldn't know where to get started. Does anyone have any info on how to get started building apps for this or similar phones? A: I've never used Motorola's SDK but from my limited work in JME the real hook in the 3rd party tools is the emulators. Setting up a JME dev environment quickly is something that Sun got surprisingly right. Just get NetBeans with the JME pack and there is a regular emulator right in the IDE, and then you can hook in other proprietary emulators such as those from Motorola. Not sure what kind of apps you are looking to do, but if you're interested in games I thought Beginning Mobile Phone Game Programming was a great starting point. A: Perhaps Motorola's own site link A: I have not used the new Motorola development studio, because my experience with Motorola's development tools has not been a joyous one. When working with Motorola devices I tend to stick to the standard emulator (or sometimes the Sony Ericsson emulators as those are the best I have worked with by far). The problem with Motorola's tools is that I always seemed to spend way too much time trying to figure out how to work around them. I would run into emulator specific issues and bugs, and I honestly don't have time to waste trying to figure out why the application runs on the target device but crashes on the emulator. It should be the opposite. A good emulator is very important for mobile development though as that is where you will do 90% of your development, testing and tweaking, only periodically trying it out on the phone. Finally, I agree with bpapa...Netbeans is an excellent IDE for J2ME development and here is a book that I recommend (get the original if possible, not the second edition as the second edition focuses way too much on MIDP 2.0 and assumes you know the basics). http://www.amazon.com/J2ME-Game-Programming-Development/dp/1592001181/ref=pd_bbs_sr_3?ie=UTF8&s=books&qid=1221692983&sr=1-3 A: Yeah, the act of asking the question pointed me in the direction of an answer, and I found this: https://developer.motorola.com/docstools/motodevstudio/ I could still use some pointers from someone about what to expect if anyone has done this before.
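For a concrete first step, the programs these tools build are MIDlets. Below is a minimal hedged sketch of the standard MIDP lifecycle (class and label names invented), the kind of skeleton the NetBeans JME pack and the emulators mentioned above can run:

import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

public class HelloMidlet extends MIDlet {
    protected void startApp() {
        Form form = new Form("Hello");            // a simple screen
        form.append("Hello from Java ME");
        Display.getDisplay(this).setCurrent(form);
    }

    protected void pauseApp() { }                 // required lifecycle hook

    protected void destroyApp(boolean unconditional) { }
}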
Building Apps for Motorola Cell Phone
I have an L6 phone from motorola, a usb cable to connect it to my computer, and the Tools for Phones software so I can do things like upload my own custom ringtones or download pictures from the phone's camera. I have some ideas for programs I'd like to run on the phone, and it supports java, but I don't see anything in the software that I have for uploading them. Even if they did, I wouldn't know where to get started. Does anyone have any info on how to get started building apps for this or similar phones?
[ "I've never used Morotolla's SDK but from my limited work in JME the real hook in the 3rd party tools are the emulators. Setting up a JME dev environment quickly is something that Sun got surprisingly right. Just get NetBeans with the JME pack and there is a regular emulator right in the IDE, and then you can hook in other proprietary emulators such as those from Motorolla.\nNot sure what kind of apps you are looking to do, but if you're interested in games I thought Beginning Mobile Phone Game Programming was a great starting point: \n", "Perhaps Motorola's own site\nlink\n", "I have not used the new Motorola development studio, because my experience with Motorola's development tools has not been a joyous one. When working with Motorola devices I tend to stick to the standard emulator (or sometimes the Sony Ericsson emulators as those are the best I have worked with by far).\nThe problem with Motorola's tools is that I always seemed to spend way too much time trying to figure out how to work around them. I would run into emulator specific issues and bugs, and I honestly don't have time to waste trying to figure out why the application runs on the target device but crashes on the emulator. It should be the opposite.\nA good emulator is very important for mobile development though as that is where you will do 90% of your development, testing and tweaking, only periodically trying it out on the phone.\nFinally, I agree with bpapa...Netbeans is an excellent IDE for J2ME development and here is a book that I recommend (get the original if possible, not the second edition as the second edition focuses way too much on MIDP 2.0 and assumes you know the basics).\nhttp://www.amazon.com/J2ME-Game-Programming-Development/dp/1592001181/ref=pd_bbs_sr_3?ie=UTF8&s=books&qid=1221692983&sr=1-3\n", "Yeah, the act of asking the question pointed me in the direction of an answer, and I found this:\nhttps://developer.motorola.com/docstools/motodevstudio/\nI could still use some pointers from someone of what to expect if anyone has done this before.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "java", "mobile_phones" ]
stackoverflow_0000052701_java_mobile_phones.txt
Q: kSOAP Marshalling help needed Does anyone have a good complex object marshalling example using the kSOAP package? A: Although this example is not compilable and complete, the basic idea is to have a class that tells kSOAP how to turn an XML tag into an object (i.e. readInstance()) and how to turn an object into an XML tag (i.e. writeInstance()). public class MarshalBase64File implements Marshal { public static Class FILE_CLASS = File.class; public Object readInstance(XmlPullParser parser, String namespace, String name, PropertyInfo expected) throws IOException, XmlPullParserException { return Base64.decode(parser.nextText()); } public void writeInstance(XmlSerializer writer, Object obj) throws IOException { File file = (File)obj; int total = (int)file.length(); FileInputStream in = new FileInputStream(file); byte b[] = new byte[4096]; int pos = 0; int num = b.length; if ((pos + num) > total) { num = total - pos; } int len = in.read(b, 0, num); while ((len != -1) && ((pos + len) < total)) { writer.text(Base64.encode(b, 0, len, null).toString()); pos += len; if ((pos + num) > total) { num = total - pos; } len = in.read(b, 0, num); } if (len != -1) { writer.text(Base64.encode(b, 0, len, null).toString()); } } public void register(SoapSerializationEnvelope cm) { cm.addMapping(cm.xsd, "base64Binary", MarshalBase64File.FILE_CLASS, this); } } Later, when you invoke the SOAP service, you'll map the object type (in this case, File objects) to the marshalling class. The SOAP envelope will automatically match the object type of each argument and, if it is not a built-in type, invoke the associated marshaller to convert it to/from XML. public class MarshalDemo { public String storeFile(File file) throws IOException, XmlPullParserException { SoapObject soapObj = new SoapObject("http://www.example.com/ws/service/file/1.0", "storeFile"); soapObj.addProperty("file", file); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); new MarshalBase64File().register(envelope); envelope.encodingStyle = SoapEnvelope.ENC; envelope.setOutputSoapObject(soapObj); HttpTransport ht = new HttpTransport(new URL(server, "/soap/file")); ht.call("http://www.example.com/ws/service/file/1.0/storeFile", envelope); String retVal = ""; SoapObject writeResponse = (SoapObject)envelope.bodyIn; Object obj = writeResponse.getProperty("statusString"); if (obj instanceof SoapPrimitive) { SoapPrimitive statusString = (SoapPrimitive)obj; String content = statusString.toString(); retVal = content; } return retVal; } } In this case, I am using Base64 encoding to marshal File objects.
kSOAP Marshalling help needed
Does anyone have a good complex object marshalling example using the kSOAP package?
[ "Although this example is not compilable and complete, the basic idea is to have a class that tells kSOAP how to turn an XML tag into an object (i.e. readInstance()) and how to turn an object into an XML tag (i.e. writeInstance()).\npublic class MarshalBase64File implements Marshal {\n\n public static Class FILE_CLASS = File.class;\n\n public Object readInstance(XmlPullParser parser, String namespace, String name, PropertyInfo expected)\n throws IOException, XmlPullParserException {\n return Base64.decode(parser.nextText());\n }\n\n public void writeInstance(XmlSerializer writer, Object obj) throws IOException {\n File file = (File)obj;\n int total = (int)file.length();\n FileInputStream in = new FileInputStream(file);\n byte b[] = new byte[4096];\n int pos = 0;\n int num = b.length;\n if ((pos + num) > total) {\n num = total - pos;\n }\n int len = in.read(b, 0, num);\n while ((len != -1) && ((pos + len) < total)) {\n writer.text(Base64.encode(b, 0, len, null).toString());\n pos += len;\n if ((pos + num) > total) {\n num = total - pos;\n }\n len = in.read(b, 0, num);\n }\n if (len != -1) {\n writer.text(Base64.encode(b, 0, len, null).toString());\n }\n }\n\n public void register(SoapSerializationEnvelope cm) {\n cm.addMapping(cm.xsd, \"base64Binary\", MarshalBase64File.FILE_CLASS, this);\n }\n}\n\nLater, when you invoke the SOAP service, you'll map the object type (in this case, File objects) to the marshalling class. The SOAP envelope will automatically match the object type of each argument and, if it is not a built-in type, invoke the associated marshaller to convert it to/from XML.\npublic class MarshalDemo {\n\n public String storeFile(File file) throws IOException, XmlPullParserException {\n SoapObject soapObj = new SoapObject(\"http://www.example.com/ws/service/file/1.0\", \"storeFile\");\n soapObj.addProperty(\"file\", file);\n\n SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);\n new MarshalBase64File().register(envelope);\n envelope.encodingStyle = SoapEnvelope.ENC;\n envelope.setOutputSoapObject(soapObj);\n\n HttpTransport ht = new HttpTransport(new URL(server, \"/soap/file\"));\n ht.call(\"http://www.example.com/ws/service/file/1.0/storeFile\", envelope);\n\n String retVal = \"\";\n SoapObject writeResponse = (SoapObject)envelope.bodyIn;\n Object obj = writeResponse.getProperty(\"statusString\");\n if (obj instanceof SoapPrimitive) {\n SoapPrimitive statusString = (SoapPrimitive)obj;\n String content = statusString.toString();\n retVal = content;\n }\n return retVal;\n }\n}\n\nIn this case, I am using Base64 encoding to marshal File objects.\n" ]
[ 4 ]
[]
[]
[ "java_me", "ksoap", "soap" ]
stackoverflow_0000077131_java_me_ksoap_soap.txt
Q: Why does Imake interpret certain words in an Imakefile as numerical values? I've found it very difficult to find any existing documentation on this. What I'm trying to find out is why Imake would interpret a word such as unix, linux or i386 as the number 1 in the produced Make-file? I'm sure it is a function of indicating whether or not you're on that system. I've not been able to find that this is a #define set somewhere, so is this something that's built in? A: imake produces Makefiles by running cpp, the C preprocessor, which usually has a variety of builtin definitions. You can get a list by running gcc -E -dM emptyfile.c
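To see those builtin definitions concretely, a hedged sketch of the suggested command (output abbreviated; the exact macro list depends on the compiler and system):

$ touch empty.c
$ gcc -E -dM empty.c | grep -iE 'unix|linux|i386'
#define unix 1
#define __unix__ 1
#define linux 1
#define __linux__ 1

Imake feeds the Imakefile through this same preprocessor, so any bare word that matches one of these predefined macros gets substituted with its value, typically 1.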
Why does Imake interpret certain words in an Imakefile as numerical values?
I've found it very difficult to find any existing documentation on this. What I'm trying to find out is why Imake would interpret a word such as unix, linux or i386 as the number 1 in the produced Make-file? I'm sure it is a function of indicating whether or not you're on that system. I've not been able to find that this is a #define set somewhere, so is this something that's built in?
[ "imake produces Makefiles by running cpp, the C preprocessor, which usually has a variety of builtin definitions. You can get a list by running \ngcc -E -dM emptyfile.c\n\n" ]
[ 2 ]
[]
[]
[ "linux", "makefile", "unix" ]
stackoverflow_0000086047_linux_makefile_unix.txt
Q: In Python, how do you take tokenized input such as with C++? In C++, I can take input like this: cin >> a >> b >> c; And a can be int, b can be float, and c can be whatever... How do I do the same in python? input() and raw_input(), the way I'm using them, don't seem to be giving me the desired results. A: You generally shouldn't use input() in production code. If you want an int and then a float, try this: >>> line = raw_input().split() >>> a = int(line[0]) >>> b = float(line[1]) >>> c = " ".join(line[2:]) It all depends on what exactly you're trying to accomplish, but remember that readability counts. Obscure one-liners may seem cool but in the face of maintainability, try to choose something sensible :) (P.S.: Don't forget to check for errors with try: ... except (ValueError, IndexError):) A: Since the C++ cin reads from sys.stdin, you'll often do something more like the following. import sys tokens= sys.stdin.read().split() try: a= int(tokens[0]) b= float(tokens[1]) except ValueError, e: print e # handle the invalid input A: Depending upon what you are doing, something like the getopt module could be useful, but only in certain situations and I'm not sure if it would apply in yours.
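Bridging the two answers, a hedged sketch of a small cin-like token reader. The helper name is invented; the answers above target Python 2, but next() works on 2.6+ and 3:

import sys

def read_tokens():
    # Yield whitespace-separated tokens from stdin, roughly like cin >>.
    for line in sys.stdin:
        for tok in line.split():
            yield tok

t = read_tokens()
a = int(next(t))     # like cin >> a for an int
b = float(next(t))   # like cin >> b for a float
c = next(t)          # left as a string -- the "whatever" case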
In Python, how do you take tokenized input such as with C++?
In C++, I can take input like this: cin >> a >> b >> c; And a can be int, b can be float, and c can be whatever... How do I do the same in python? input() and raw_input(), the way I'm using them, don't seem to be giving me the desired results.
[ "You generally shouldn't use input() in production code. If you want an int and then a float, try this:\n>>> line = raw_input().split()\n>>> a = int(line[0])\n>>> b = float(line[1])\n>>> c = \" \".join(line[2:])\n\nIt all depends on what exactly you're trying to accomplish, but remember that readability counts. Obscure one-liners may seem cool but in the face of maintainability, try to choose something sensible :)\n(P.S.: Don't forget to check for errors with try: ... except (ValueError, IndexError):)\n", "Since the C++ cin reads from sys.stdin, you'll often do something more like the following.\nimport sys\ntokens= sys.stdin.read().split()\ntry:\n a= int(token[0])\n b= float(token[1])\nexcept ValueError, e:\n print e # handle the invalid input\n\n", "Depending upon what you are doing, something like the getopt module could be useful, but only in certain situations and I'm not sure if it would apply in yours.\n" ]
[ 4, 3, 0 ]
[]
[]
[ "c++", "input", "python" ]
stackoverflow_0000088554_c++_input_python.txt
Q: VB6/Microsoft Access/DAO to VB.NET/SQL Server... Got Advice? I can make a DAO recordset in VB6/Access do anything - add data, clean data, move data, get data dressed in the morning and take it to school. But I don't even know where to start in .NET. I'm not having any problems retrieving data from the database, but what do real people do when they need to edit data and put it back? What's the easiest and most direct way to edit, update and append data into related tables in .NET and SQL Server? A: try to use oledbConnection , oledbCommand and oledbDataReader from System.data.oledb if you are using sqlserver DB, then use SqlConnection , sqlCommand and sqlDataReader from System.data.SqlClient A: A natural progression IMO from DAO is ADO.net. I think you would find it pretty easy to pick up having the understanding/foundation of DAO. It uses DataAdapters and DataSets similar to recordsets. Modifying Data in ADO.NET. I would suggest looking into Linq when you get a chance. A: The DataSet class is the place to start. As the linked article says, the steps for creating a DataSet, modifying it, then updating the database are typically: Build and fill each DataTable in a DataSet with data from a data source using a DataAdapter. Change the data in individual DataTable objects by adding, updating, or deleting DataRow objects. Invoke the GetChanges method to create a second DataSet that features only the changes to the data. Call the Update method of the DataAdapter, passing the second DataSet as an argument. Invoke the Merge method to merge the changes from the second DataSet into the first. Invoke the AcceptChanges on the DataSet. Alternatively, invoke RejectChanges to cancel the changes. A: Is there a reason why ms-access was added as a tag here? It seems to me that the question has nothing but the most trivial relevance to Access, since once you're working with .NET, Access is completely out of the picture.
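A hedged VB.NET sketch of the DataAdapter/DataSet round trip outlined in the answers above. The connection string, table, and column names are placeholders; SqlCommandBuilder is one convenient way to have the INSERT/UPDATE/DELETE commands generated from the SELECT:

Imports System.Data
Imports System.Data.SqlClient

Module DataSetSketch
    Sub Main()
        Using connection As New SqlConnection("...connection string...")
            Dim adapter As New SqlDataAdapter("SELECT Id, Name FROM Employees", connection)
            Dim builder As New SqlCommandBuilder(adapter) ' derives the update commands
            Dim table As New DataTable()
            adapter.Fill(table)                           ' opens and closes the connection itself

            table.Rows(0)("Name") = "Edited"              ' edit an existing row
            Dim row As DataRow = table.NewRow()           ' append a new row
            row("Name") = "Added"
            table.Rows.Add(row)

            adapter.Update(table)                         ' writes all changes back
        End Using
    End Sub
End Module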
VB6/Microsoft Access/DAO to VB.NET/SQL Server... Got Advice?
I can make a DAO recordset in VB6/Access do anything - add data, clean data, move data, get data dressed in the morning and take it to school. But I don't even know where to start in .NET. I'm not having any problems retrieving data from the database, but what do real people do when they need to edit data and put it back? What's the easiest and most direct way to edit, update and append data into related tables in .NET and SQL Server?
[ "try to use oledbConnection , oledbCommand and oledbDataReader\nfrom System.data.oledb\n\nif you are using sqlserver DB, then use SqlConnection , sqlCommand and sqlDataReader\nfrom System.data.SqlClient\n\n", "A natural progression IMO from DAO is ADO.net. I think you would find it pretty easy to pick up having the understanding/foundation of DAO. It uses DataAdapters and DataSets similar to recordsets. Modifying Data in ADO.NET.\nI would suggest looking into Linq when you get a chance. \n", "The DataSet class is the place to start. As the linked article says, the steps for creating a DataSet, modifying it, then updating the database are typically:\n\nBuild and fill each DataTable in a DataSet with data from a data source using a DataAdapter.\nChange the data in individual DataTable objects by adding, updating, or deleting DataRow objects.\nInvoke the GetChanges method to create a second DataSet that features only the changes to the data.\nCall the Update method of the DataAdapter, passing the second DataSet as an argument.\nInvoke the Merge method to merge the changes from the second DataSet into the first.\nInvoke the AcceptChanges on the DataSet. Alternatively, invoke RejectChanges to cancel the changes.\n\n", "Is there a reason why ms-access was added as a tag here? It seems to me that the question has nothing but the most trivial relevance to Access, since once you're working with .NET, Access is completely out of the picture.\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "dao", "sql_server", "vb.net", "vb6", "vb6_migration" ]
stackoverflow_0000086129_dao_sql_server_vb.net_vb6_vb6_migration.txt
Q: How can I avoid the warning from an unused parameter in PL/SQL? Sometimes, in PL/SQL you want to add a parameter to a Package, Function or Procedure in order to prepare future functionality. For example: create or replace function doGetMyAccountMoney( Type_Of_Currency IN char := 'EUR') return number is Result number(12,2); begin Result := 10000; IF char <> 'EUR' THEN -- ERROR NOT IMPLEMENTED YET END IF; return(Result); end doGetMyAccountMoney; Also, it can lead to lots of warnings like Compilation errors for FUNCTION APPUEMP_PRAC.DOGETMYACCOUNTMONEY Error: Hint: Parameter 'Currency' is declared but never used in 'doGetMyAccountMoney' Line: 1 What would be the best way to avoid those warnings? A: I believe that this is controlled by the parameter PLSQL_WARNINGS, documented for 10gR2 here: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams166.htm#REFRN10249 A: If you didn't have the ability to alter the warning levels, you could just bind the parameter value to a dummy value and document that they are for future use. A: Well, your example has several errors. Most importantly, you would need to change "char" to "Currency" in the IF statement; which as far as I can see would avoid the warning as well. A: Disable non-severe PL/SQL warnings: ALTER SESSION SET PLSQL_WARNINGS='ENABLE:SEVERE'; A: Well, are you sure you have the name and the right in the correct order in that declaration? It complains about a parameter named 'Currency', but you aren't actually using it, are you? On the other hand, you are using something called char, what is that? Or perhaps my knowledge of PL/SQL is way off, if so, leave a comment and I'll delete this.
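Two hedged sketches of the fixes the answers point at. The first actually references the parameter (repairing the char/Type_Of_Currency mix-up one answer notes), so the "declared but never used" hint disappears; the error code is illustrative. The second keeps the parameter unused but compiles with only severe warnings enabled:

create or replace function doGetMyAccountMoney(
    Type_Of_Currency IN char := 'EUR') return number is
  Result number(12,2);
begin
  Result := 10000;
  IF Type_Of_Currency <> 'EUR' THEN
    RAISE_APPLICATION_ERROR(-20001, 'Only EUR is implemented yet');
  END IF;
  return(Result);
end doGetMyAccountMoney;
/

ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:SEVERE';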
How can I avoid the warning from an unused parameter in PL/SQL?
Sometimes, in PL/SQL you want to add a parameter to a Package, Function or Procedure in order to prepare future functionality. For example: create or replace function doGetMyAccountMoney( Type_Of_Currency IN char := 'EUR') return number is Result number(12,2); begin Result := 10000; IF char <> 'EUR' THEN -- ERROR NOT IMPLEMENTED YET END IF; return(Result); end doGetMyAccountMoney; Also, it can lead to lots of warnings like Compilation errors for FUNCTION APPUEMP_PRAC.DOGETMYACCOUNTMONEY Error: Hint: Parameter 'Currency' is declared but never used in 'doGetMyAccountMoney' Line: 1 What would be the best way to avoid those warnings?
[ "I believe that this is controlled by the parameter PLSQL_WARNINGS, documented for 10gR2 here: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams166.htm#REFRN10249\n", "If you didn't have the ability to alter the warning levels, you could just bind the parameter value to a dummy value and document that they are for future use.\n", "Well, your example has several errors. Most importantly, you would need to change \"char\" to \"Currency\" in the IF statement; which as far as I can see would avoid the warning as well.\n", "Disable non-severe PL/SQL warnings:\nALTER SESSION SET PLSQL_WARNINGS='ENABLE:SEVERE';\n\n", "Well, are you sure you have the name and the right in the correct order in that declaration?\nIt complains about a parameter named 'Currency', but you aren't actually using it, are you?\nOn the other hand, you are using something called char, what is that?\nOr perhaps my knowledge of PL/SQL is way off, if so, leave a comment and I'll delete this.\n" ]
[ 3, 2, 1, 1, 0 ]
[]
[]
[ "function", "oracle", "parameters", "plsql" ]
stackoverflow_0000084661_function_oracle_parameters_plsql.txt
Q: Is There Any Advantage in Passing a UI wrapper to a view? Most of the MVC samples I have seen pass an instance of the view to the controller like this public class View { Controller controller = new Controller(this); } Is there any advantage to passing a class which provides access to just the properties and events the controller is interested in, like this: public class UIWrapper { private TextBox textBox; public TextBox TextBox { get {return textBox;} } public UIWrapper(ref TextBox textBox) { this.textBox = textBox; } public class View { UIWrapper wrapper = new UIWrapper(this); Controller controller = new Controller(wrapper) } A: There's a good series of posts by Jeremy Miller in relation to the MVC/MVP triad. In particular you might be interested in part 6, which goes into detail about the comms between the view and the controller. A: It depends on your architecture. If you're all on the same tier, then you can go without the wrapper, though I'd probably pass an interface that View implements to Controller. Functionally, and from a coupling perspective, the interface approach and the wrapper approach are equivalent. However, if UI is on one tier and the controller is on another, then passing/serializing an entire View object might be awkward, or even inefficient. In cases like this, you might want to pass a DTO back and forth, which would be easier to serialize and probably more efficient. I tend to favor the DTO approach, since if your architecture scales up, all you need to do is serialize and de-serialize. Also, if your View is complicated and has lots of pieces of data to pass back and forth with the Controller, you might start fighting the Long Parameter List smell with Introduce Parameter Object anyway, which is essentially a DTO. One more thing for you to chew on: for a complicated View, Controller will end up needing to map lots of pieces of data to various text boxes and other UI controls in the View. The same thing in reverse: View ends up grabbing data from a lot of controls and passing them to various methods in your Controller. I like separating these two. I give Views a DTO and I consider it the View's job to know where to "plug in" and "plug out" each piece of data. Both View and Controller are now coupled only to the DTO, as far as data is concerned anyway.
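A hedged sketch of the interface variant suggested in the second answer: the controller depends only on the members it needs, and a test double can implement the same interface. All names here are illustrative:

using System;

public interface IEmployeeView
{
    string EmployeeName { get; set; }
    event EventHandler SaveClicked;
}

public class Controller
{
    private readonly IEmployeeView view;

    public Controller(IEmployeeView view)
    {
        this.view = view;
        view.SaveClicked += (sender, args) => Save();
    }

    private void Save()
    {
        // read view.EmployeeName, update the model, push results back
    }
}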
Is There Any Advantage in Passing a UI wrapper to a view?
Most of the MVC samples I have seen pass an instance of the view to the controller like this public class View { Controller controller = new Controller(this); } Is there any advantage to passing a class which provides access to just the properties and events the controller is interested in, like this: public class UIWrapper { private TextBox textBox; public TextBox TextBox { get {return textBox;} } public UIWrapper(ref TextBox textBox) { this.textBox = textBox; } public class View { UIWrapper wrapper = new UIWrapper(this); Controller controller = new Controller(wrapper) }
[ "There's a good series of posts by Jeremy Miller in relation to MVC/MVP triad. In particular you might be interested in part 6 it goes into detail about the comms between the view and the controller.\n", "It depends on your architecture. If you're all on the same tier, then you can go without the wrapper, though I'd probably pass an interface that View implements to Controller. Functionally, and from a coupling perspective, the interface approach and the wrapper approach are equivalent.\nHowever, if UI is on one tier and the controller is on another, then passing/serializing an entire View object might be awkward, or even inefficient. In cases like this, you might want to pass a DTO back and forth, which would be easier to serialize and probably more efficient.\nI tend to favor the DTO approach, since if your architecture scales up, all you need to do is serialize and de-serialize. \nAlso, if your View is complicated and has lots of pieces of data to pass back and forth with the Controller, you might start fighting the Long Parameter List smell with Introduce Parameter Object anyway, which is essentially a DTO.\nOne more thing for you to chew on: for a complicated View, Controller will end up needing to map lots of pieces of data to various text boxes and other UI controls in the View. The same thing in reverse: View ends up grabbing data from a lot of controls and passing them to various methods in your Controller. \nI like separating these two. I give Views a DTO and I consider it the View's job to know where to \"plug in\" and \"plug out\" each piece of data. Both View and Controller are now coupled only to the DTO, as far as data is concerned anyway.\n" ]
[ 1, 1 ]
[]
[]
[ "design_patterns", "oop" ]
stackoverflow_0000088454_design_patterns_oop.txt
Q: Converting floating point exceptions into C++ exceptions Is it possible to convert floating point exceptions (signals) into C++ exceptions on x86 Linux? This is for debugging purposes, so nonportability and imperfection is okay (e.g., if it isn't 100% guaranteed that all destructors are called). A: If your C++ standard library implementation supports the TR1 functions fetestexcept, feraiseexcept and feclearexcept (mine doesn't yet so I can't test this) you can detect five kinds of floating point errors and then you can throw whatever exceptions you want. See here for a description of these functions. I also recommend section 12.3, "Managing the Floating Point Environment," of the book The C++ Standard Library Extensions: A Tutorial and Reference by Pete Becker, ISBN-13: 9780321412997, for an excellent description of these functions with sample code. A: Due to the way signals and exceptions work, you can't do it immediately when the signal is thrown - exceptions rely on certain aspects of the stack that aren't true when a signal handler gets called. You can set a global variable in the signal handler, and then check this at key points in the program and throw an exception if it's set. This doesn't give you the exact information about the thrown exception, though. A: the gcc option -fnon-call-exceptions might be of some use to you. Couldn't find any documentation on it though so your mileage may vary. A: I don't have a ready made solution, but one thing you could look at are signals (not sure whether you can safely throw C++ exceptions from them, but it should help for debugging anyway.) You could install a signal handler for SIGFPE, and use that for your debugging purposes. A: The basic idea will be for you to install the appropriate signal handlers for floating point exceptions. Inside your signal handler, you can throw an exception (or send a user-defined signal to another process which will raise the exception, or send a message to another thread for something similar, etc. etc. etc). There are any number of ways to actually throw the exception - the main thing is to handle the signal.
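Pulling the answers together, a debugging-only sketch for glibc on x86 Linux: unmask the traps with feenableexcept (a glibc extension, not standard C++), install a SIGFPE handler, and compile with g++ -fnon-call-exceptions so the throw has a chance to unwind. Throwing from a signal handler is not guaranteed by the standard and destructors may be skipped, which matches the question's stated tolerance:

#include <fenv.h>      // feenableexcept; g++ on Linux defines _GNU_SOURCE by default
#include <csignal>
#include <cstdio>
#include <stdexcept>

extern "C" void on_sigfpe(int)
{
    feclearexcept(FE_ALL_EXCEPT);                 // reset the FP status flags
    throw std::runtime_error("floating point exception");
}

int main()
{
    feenableexcept(FE_DIVBYZERO | FE_INVALID | FE_OVERFLOW);
    std::signal(SIGFPE, on_sigfpe);
    try {
        volatile double zero = 0.0;
        volatile double r = 1.0 / zero;           // traps instead of yielding inf
        (void)r;
    } catch (const std::exception& e) {
        std::printf("caught: %s\n", e.what());    // approximate failure point only
    }
    return 0;
}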
Converting floating point exceptions into C++ exceptions
Is it possible to convert floating point exceptions (signals) into C++ exceptions on x86 Linux? This is for debugging purposes, so nonportability and imperfection is okay (e.g., if it isn't 100% guaranteed that all destructors are called).
[ "If your C++ standard library implementation supports the TR1 functions\nfetestexcept, feraiseexcept and feclearexcept (mine doesn't yet so I can't test this) you can detect five kinds of floating point errors and then you can throw whatever exceptions you want.\nSee here for a description of these functions.\nI also recommend section 12.3, \"Managing the Floating Point Environment,\" of the book The C++ Standard Library Extensions: A Tutorial and Reference by Pete Becker, ISBN-13: 9780321412997, for an excellent description of these functions with sample code.\n\n", "Due to the way signals and exceptions work, you can't do it immediately when the signal is thrown - exceptions rely on certain aspects of the stack that aren't true when a signal handler gets called.\nYou can set a global variable in the signal handler, and then check this at key points in the program and throw an exception if it's set. This doesn't give you the exact information about the thrown exception, though.\n", "the gcc option -fnon-call-exceptions might be of some use to you. Couldn't find any documentation on it though so your mileage may vary.\n", "I don't have a ready made solution, but one thing you could look at are signals (not sure whether you can safely throw C++ exceptions from them, but it should help for debugging anyway.)\nYou could install a signal handler for SIGFPE, and use that for your debugging purposes.\n", "The basic idea will be for you to install the appropriate signal handlers for floating point exceptions. Inside your signal handler, you can throw an exception (or send a user-defined signal to another process which will raise the exception, or send a message to another thread for something similar, etc. etc. etc). There are any number of ways to actually throw the exception - the main thing is to handle the signal.\n" ]
[ 8, 3, 3, 1, 0 ]
[]
[]
[ "c++", "exception", "floating_point", "signals" ]
stackoverflow_0000085726_c++_exception_floating_point_signals.txt
Q: Get Methods: One vs Many getEmployeeNameByBatchId(int batchID) getEmployeeNameBySSN(Object SSN) getEmployeeNameByEmailId(String emailID) getEmployeeNameBySalaryAccount(SalaryAccount salaryAccount) or getEmployeeName(int typeOfIdentifier, byte[] identifier) -> In this methods the typeOfIdentifier tells if identifier is batchID/SSN/emailID/salaryAccount Which one of the above is better way implement a get method? These methods would be in a Servlet and calls would be made from an API which would be provided to the customers. A: Why not overload the getEmployeeName(??) method? getEmployeeName(int BatchID) getEmployeeName(object SSN)(bad idea) getEmployeeName(String Email) etc. Seems a good 'many' approach to me. A: You could use something like that: interface Employee{ public String getName(); int getBatchId(); } interface Filter{ boolean matches(Employee e); } public Filter byName(final String name){ return new Filter(){ public boolean matches(Employee e) { return e.getName().equals(name); } }; } public Filter byBatchId(final int id){ return new Filter(){ public boolean matches(Employee e) { return e.getBatchId() == id; } }; } public Employee findEmployee(Filter sel){ List<Employee> allEmployees = null; for (Employee e:allEmployees) if (sel.matches(e)) return e; return null; } public void usage(){ findEmployee(byName("Gustav")); findEmployee(byBatchId(5)); } If you do the filtering by an SQL query you would use the Filter interface to compose a WHERE clause. The good thing with this approach is that you can combine two filters easily with: public Filter and(final Filter f1,final Filter f2){ return new Filter(){ public boolean matches(Employee e) { return f1.matches(e) && f2.matches(e); } }; } and use it like that: findEmployee(and(byName("Gustav"),byBatchId(5))); What you get is similar to the Criteria API in Hibernate. A: I'd go with the "many" approach. It seems more intuitive to me and less prone to error. A: I don't like getXByY() - that might be cool in PHP, but I just don't like it in Java (ymmv). I'd go with overloading, unless you have properties of the same datatype. In that case, I'd do something similar to your second option, but instead of using ints, I'd use an Enum for type safety and clarity. And instead of byte[], I'd use Object (because of autoboxing, this also works for primitives). A: The methods are perfect example for usage of overloading. getEmployeeName(int batchID) getEmployeeName(Object SSN) getEmployeeName(String emailID) getEmployeeName(SalaryAccount salaryAccount) If the methods have common processing inside, just write one more getEmplyeeNameImpl(...) and extract there the common code to avoid duplication A: First option, no question. Be explicit. It will greatly aid in maintainability and there's really no downside. A: @Stephan: it is difficult to overload a case like this (in general) because the parameter types might not be discriminative, e.g., getEmployeeNameByBatchId(int batchId) getEmployeeNameByRoomNumber(int roomNumber) See also the two methods getEmployeeNameBySSN, getEmployeeNameByEmailId in the original posting. A: Sometimes it can be more conveniant to use the specification pattern. Eg: GetEmployee(ISpecification<Employee> specification) And then start defining your specifications... 
NameSpecification : ISpecification<Employee> { private string name; public NameSpecification(string name) { this.name = name; } public bool IsSatisFiedBy(Employee employee) { return employee.Name == this.name; } } NameSpecification spec = new NameSpecification("Tim"); Employee tim = MyService.GetEmployee(spec); A: I will use explicit method names. Everyone that maintains that code and me later will understand what that method is doing without having to write xml comments. A: I would use the first option, or overload it in this case, seeing as you have 4 different parameter signatures. However, being specific helps with understanding the code 3 months from now. A: The first is probably the best in Java, considering it is typesafe (unlike the other). Additionally, for "normal" types, the second solution seems to only provide cumbersome usage for the user. However, since you are using Object as the type for SSN (which has a semantic meaning beyond Object), you probably won't get away with that type of API. All-in-all, in this particular case I would have used the approach with many getters. If all identifiers have their own class type, I might have gone the second route, but switching internally on the class instead of a provided/application-defined type identifier. A: Is the logic inside each of those methods largely the same? If so, the single method with identifier parameter may make more sense (simple and reducing repeated code). If the logic/procedures vary greatly between types, a method per type may be preferred. A: As others suggested the first option seems to be the good one. The second might make sense when you're writing a code, but when someone else comes along later on, it's harder to figure out how to use code. ( I know, you have comments and you can always dig deep into the code, but GetemployeeNameById is more self-explanatory) Note: Btw, usage of Enums might be something to consider in some cases. A: In a trivial case like this, I would go with overloading. That is: getEmployeeName( int batchID ); getEmployeeName( Object SSN ); etc. Only in special cases would I specify the argument type in the method name, i.e. if the type of argument is difficult to determine, if there are several types of arguments tha has the same data type (batchId and employeeId, both int), or if the methods for retrieving the employee is radically different for each argument type. I can't see why I'd ever use this getEmployeeName(int typeOfIdentifier, byte[] identifier) as it requires both callee and caller to cast the value based on typeOfIdentifier. Bad design. A: If you rewrite the question you can end up asking: "SELECT name FROM ... " "SELECT SSN FROM ... " "SELECT email FROM ... " vs. "SELECT * FROM ..." And I guess the answer to this is easy and everyone knows it. What happens if you change the Employee class? E.g.: You have to remove the email and add a new filter like department. With the second solution you have a huge risk of not noticing any errors if you just change the order of the int identifier "constants". With the first solution you will always notice if you are using the method in some long forgotten classes you would otherwise forget to modify to the new identifier. A: I personally prefer to have the explicit naming "...ByRoomNumber" because if you end up with many "overloads" you will eventually introduce unwanted errors. Being explicit is imho the best way. A: I agree with Stephan: One task, one method name, even if you can do it multiple ways. 
The method overloading feature was provided exactly for your case. getEmployeeName(int BatchID) getEmployeeName(String Email) etc. And avoid your second solution at all cost. It smells like "thy olde void * of C". Likewise, passing a Java "Object" is almost as poor style as a C "void *". A: If you have a good design you should be able to determine if you can use the overloading approach or if you're going to run into a problem where if you overload you're going to end up having two methods with the same parameter type. Overloading seems like the best way initially, but if you end up not being able to add a method in future and messing things up with naming it's going to be a hassle. Personally I'd go for the approach of a unique name per method, that way you don't run into problems later with trying to overload the same parameter Object methods. Also, if someone extended your class in the future and implemented another void getEmployeeName(String name) it wouldn't override yours. To summarise, go with a unique method name for each method, overloading can only cause problems in the long run. A: The decoupling between the search process and the search criteria jrudolf proposes in his example is excellent. I wonder why it isn't the most voted solution. Do I miss something? A: I'd go with Query Objects. They work well for accessing tables directly. If you are confined to stored procedures, they lose some of their power, but you can still make it work.
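A last hedged sketch, addressing the objection above that two int-based identifiers (say batch id and room number) cannot share an overload: tiny wrapper types give each identifier its own overload target. The names are invented for illustration:

final class BatchId {
    final int value;
    BatchId(int value) { this.value = value; }
}

final class RoomNumber {
    final int value;
    RoomNumber(int value) { this.value = value; }
}

class EmployeeDirectory {
    String getEmployeeName(BatchId id)    { return findByBatch(id.value); }
    String getEmployeeName(RoomNumber nr) { return findByRoom(nr.value); }

    private String findByBatch(int id) { return "..."; } // placeholder lookup
    private String findByRoom(int nr)  { return "..."; } // placeholder lookup
}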
Get Methods: One vs Many
getEmployeeNameByBatchId(int batchID) getEmployeeNameBySSN(Object SSN) getEmployeeNameByEmailId(String emailID) getEmployeeNameBySalaryAccount(SalaryAccount salaryAccount) or getEmployeeName(int typeOfIdentifier, byte[] identifier) -> In this method the typeOfIdentifier tells if identifier is batchID/SSN/emailID/salaryAccount Which one of the above is the better way to implement a get method? These methods would be in a Servlet and calls would be made from an API which would be provided to the customers.
[ "Why not overload the getEmployeeName(??) method? \n getEmployeeName(int BatchID)\ngetEmployeeName(object SSN)(bad idea)\ngetEmployeeName(String Email)\netc.\nSeems a good 'many' approach to me.\n", "You could use something like that:\ninterface Employee{\n public String getName();\n int getBatchId();\n}\ninterface Filter{\n boolean matches(Employee e);\n}\npublic Filter byName(final String name){\n return new Filter(){\n public boolean matches(Employee e) {\n return e.getName().equals(name);\n }\n };\n}\npublic Filter byBatchId(final int id){\n return new Filter(){\n public boolean matches(Employee e) {\n return e.getBatchId() == id;\n }\n };\n}\npublic Employee findEmployee(Filter sel){\n List<Employee> allEmployees = null;\n for (Employee e:allEmployees)\n if (sel.matches(e))\n return e;\n return null;\n}\npublic void usage(){\n findEmployee(byName(\"Gustav\"));\n findEmployee(byBatchId(5));\n}\n\nIf you do the filtering by an SQL query you would use the Filter interface to compose a WHERE clause.\nThe good thing with this approach is that you can combine two filters easily with:\npublic Filter and(final Filter f1,final Filter f2){\n return new Filter(){\n public boolean matches(Employee e) {\n return f1.matches(e) && f2.matches(e);\n }\n };\n}\n\nand use it like that:\nfindEmployee(and(byName(\"Gustav\"),byBatchId(5)));\n\nWhat you get is similar to the Criteria API in Hibernate.\n", "I'd go with the \"many\" approach. It seems more intuitive to me and less prone to error.\n", "I don't like getXByY() - that might be cool in PHP, but I just don't like it in Java (ymmv).\nI'd go with overloading, unless you have properties of the same datatype. In that case, I'd do something similar to your second option, but instead of using ints, I'd use an Enum for type safety and clarity. And instead of byte[], I'd use Object (because of autoboxing, this also works for primitives).\n", "The methods are perfect example for usage of overloading. \ngetEmployeeName(int batchID)\ngetEmployeeName(Object SSN)\ngetEmployeeName(String emailID)\ngetEmployeeName(SalaryAccount salaryAccount)\n\nIf the methods have common processing inside, just write one more getEmplyeeNameImpl(...) and extract there the common code to avoid duplication\n", "First option, no question. Be explicit. It will greatly aid in maintainability and there's really no downside.\n", "@Stephan: it is difficult to overload a case like this (in general) because the parameter types might not be discriminative, e.g.,\n\ngetEmployeeNameByBatchId(int batchId)\ngetEmployeeNameByRoomNumber(int roomNumber)\n\nSee also the two methods getEmployeeNameBySSN, getEmployeeNameByEmailId in the original posting.\n", "Sometimes it can be more conveniant to use the specification pattern.\nEg: GetEmployee(ISpecification<Employee> specification)\nAnd then start defining your specifications...\nNameSpecification : ISpecification<Employee>\n{\n private string name;\n public NameSpecification(string name) { this.name = name; }\n public bool IsSatisFiedBy(Employee employee) { return employee.Name == this.name; }\n}\n\nNameSpecification spec = new NameSpecification(\"Tim\");\nEmployee tim = MyService.GetEmployee(spec);\n", "I will use explicit method names. Everyone that maintains that code and me later will understand what that method is doing without having to write xml comments.\n", "I would use the first option, or overload it in this case, seeing as you have 4 different parameter signatures. 
However, being specific helps with understanding the code 3 months from now.\n", "The first is probably the best in Java, considering it is typesafe (unlike the other). Additionally, for \"normal\" types, the second solution seems to only provide cumbersome usage for the user. However, since you are using Object as the type for SSN (which has a semantic meaning beyond Object), you probably won't get away with that type of API.\nAll-in-all, in this particular case I would have used the approach with many getters. If all identifiers have their own class type, I might have gone the second route, but switching internally on the class instead of a provided/application-defined type identifier.\n", "Is the logic inside each of those methods largely the same?\nIf so, the single method with identifier parameter may make more sense (simple and reducing repeated code).\nIf the logic/procedures vary greatly between types, a method per type may be preferred.\n", "As others suggested the first option seems to be the good one. The second might make sense when you're writing a code, but when someone else comes along later on, it's harder to figure out how to use code. ( I know, you have comments and you can always dig deep into the code, but GetemployeeNameById is more self-explanatory)\nNote: Btw, usage of Enums might be something to consider in some cases.\n", "In a trivial case like this, I would go with overloading. That is:\ngetEmployeeName( int batchID );\ngetEmployeeName( Object SSN );\n\netc.\n\nOnly in special cases would I specify the argument type in the method name, i.e. if the type of argument is difficult to determine, if there are several types of arguments tha has the same data type (batchId and employeeId, both int), or if the methods for retrieving the employee is radically different for each argument type.\nI can't see why I'd ever use this\ngetEmployeeName(int typeOfIdentifier, byte[] identifier)\n\nas it requires both callee and caller to cast the value based on typeOfIdentifier. Bad design.\n", "If you rewrite the question you can end up asking:\n\"SELECT name FROM ... \"\n\"SELECT SSN FROM ... \"\n\"SELECT email FROM ... \"\nvs.\n\"SELECT * FROM ...\"\nAnd I guess the answer to this is easy and everyone knows it.\nWhat happens if you change the Employee class? E.g.: You have to remove the email and add a new filter like department. With the second solution you have a huge risk of not noticing any errors if you just change the order of the int identifier \"constants\".\nWith the first solution you will always notice if you are using the method in some long forgotten classes you would otherwise forget to modify to the new identifier.\n", "I personally prefer to have the explicit naming \"...ByRoomNumber\" because if you end up with many \"overloads\" you will eventually introduce unwanted errors. Being explicit is imho the best way.\n", "I agree with Stephan: One task, one method name, even if you can do it multiple ways.\nMethod overloading feature was provided exactly for your case.\n\ngetEmployeeName(int BatchID)\ngetEmployeeName(String Email)\netc.\n\nAnd avoid your second solution at all cost. It smells like \"thy olde void * of C\". 
Likewise, passing a Java \"Object\" is almost as poor style as a C \"void *\".\n", "If you have a good design you should be able to determine if you can use the overloading approach, or if you're going to run into the problem where overloading leaves you with two methods that have the same parameter type.\nOverloading seems like the best way initially, but if you end up not being able to add a method in the future, and messing things up with naming, it's going to be a hassle.\nPersonally I'd go for the approach of a unique name per method; that way you don't run into problems later with trying to overload methods that take the same parameter type. Also, if someone extended your class in the future and implemented another void getEmployeeName(String name), it wouldn't override yours.\nTo summarise: go with a unique method name for each method; overloading can only cause problems in the long run.\n", "The decoupling between the search process and the search criteria that jrudolf proposes in his example is excellent. I wonder why it isn't the most voted solution. Am I missing something?\n", "I'd go with Query Objects. They work well for accessing tables directly. If you are confined to stored procedures, they lose some of their power, but you can still make it work.\n" ]
[ 9, 7, 3, 3, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "stick all your options in an enum, the have something like the following\nGetEmployeeName(Enum identifier)\n{\n switch (identifier)\n case eBatchID:\n {\n // Do stuff\n }\n case eSSN:\n {\n }\n case eEmailId:\n {\n }\n case eSalary:\n {\n }\n default:\n {\n // No match\n return 0;\n }\n}\n\nenum Identifier\n{\n eBatchID,\n eSSN,\n eEmailID,\n eSalary\n}\n\n", "You are thinking C/C++.\nUse objects instead of an identifier byte (or int).\nMy Bad, the overload approach is better and using the SSN as a primary key is not so good\npublic ??? getEmployeeName(Object obj){\n\nif (obj instanceof Integer){\n\n ...\n\n} else if (obj instanceof String){\n\n...\n\n} else if .... // and so on\n\n\n} else throw SomeMeaningFullRuntimeException()\n\nreturn employeeName\n}\n\nI think it is better to use Unchecked Exceptions to signaling incorrect input.\nDocument it so the customer knows what objects to expect. Or create your own wrappers. I prefer the first option.\n" ]
[ -1, -2 ]
[ "jakarta_ee", "java", "oop" ]
stackoverflow_0000080892_jakarta_ee_java_oop.txt
Q: JSF selectItem label formatting Trying to keep all the presentation stuff in the xhtml on this project, I need to format some values in a selectItem tag that have a BigDecimal value, and I need to make them look like currency. Is there any way to apply a <f:convertNumber pattern="$#,##0.00"/> inside a <f:selectItem> tag? Any way to do this, or a workaround that doesn't involve pushing this into the Java code? A: After doing some more research here I'm pretty convinced this isn't possible with the current implementation of JSF. There just isn't an opportunity to transform the value. http://java.sun.com/javaee/javaserverfaces/1.2/docs/tlddocs/f/selectItem.html The tld shows the itemLabel property as being a ValueExpression and the body content of <f:selectItem> as being empty. So nothing is allowed to exist inside one of these tags, and the label has to point to a verbatim value in the Java model. So it has to be formatted coming out of the Java model. A: Being a beginner to JSF I had a similar problem; maybe my solution is helpful, maybe it's not in the "JSF spirit". I just created a custom taglib and extended the class (in my case org.apache.myfaces.component.html.ext.HtmlCommandButton) and overrode the setters to apply custom parameters. So instead of <t:commandButton/> I used <mytags:commandButton/>, which is as flexible as I want. A: You could set up a converter with that pattern, but that sounds like the exact opposite of what you want.
JSF selectItem label formatting
Trying to keep all the presentation stuff in the xhtml on this project, I need to format some values in a selectItem tag that have a BigDecimal value, and I need to make them look like currency. Is there any way to apply a <f:convertNumber pattern="$#,##0.00"/> inside a <f:selectItem> tag? Any way to do this, or a workaround that doesn't involve pushing this into the Java code?
[ "After doing some more research here I'm pretty convinced this isn't possible with the current implementation of JSF. There just isn't an opportunity to transform the value.\nhttp://java.sun.com/javaee/javaserverfaces/1.2/docs/tlddocs/f/selectItem.html\nThe tld shows the itemLabel property as being a ValueExpression and the body content of <f:selectItem> as being empty. So nothing is allowed to exist inside one of these tags, and the label has to point to a verbatim value in the Java model. So it has be be formatted coming out of the Java model.\n", "being a beginner to jsf i had a similar problem, maybe my solution is helpful, maybe its not in the \"jsf spirit\"\ni just created a custom taglib and extended the class (in my case org.apache.myfaces.component.html.ext.HtmlCommandButton) and overrided the setters to apply custom parameters.\nso instead of <t:commandButton/> i used <mytags:commandButton/>, which is as flexible as i want.\n", "You could setup a converter with that pattern, but that sounds like the exact opposite to what you want.\n" ]
[ 4, 2, 0 ]
[]
[]
[ "java", "jsf" ]
stackoverflow_0000086531_java_jsf.txt
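Following up on the selectItem record above: since the label has to be formatted coming out of the Java model, a minimal backing-bean sketch could look like the following (the PriceBean name, the loadPrices() helper, and the US locale are illustrative assumptions, not part of the original answers):

import java.math.BigDecimal;
import java.text.NumberFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import javax.faces.model.SelectItem;

public class PriceBean {
    // Build the SelectItem list with each label pre-formatted as currency,
    // so the page no longer needs a converter inside <f:selectItem>.
    public List<SelectItem> getPriceItems() {
        NumberFormat currency = NumberFormat.getCurrencyInstance(Locale.US); // "$#,##0.00"-style output
        List<SelectItem> items = new ArrayList<SelectItem>();
        for (BigDecimal price : loadPrices()) {
            items.add(new SelectItem(price, currency.format(price)));
        }
        return items;
    }

    // Hypothetical data source, stubbed so the sketch is self-contained.
    private List<BigDecimal> loadPrices() {
        return Arrays.asList(new BigDecimal("1234.5"), new BigDecimal("99.99"));
    }
}

The page would then bind <f:selectItems value="#{priceBean.priceItems}"/>; the trade-off is that the model now owns the label's presentation.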
Q: What's the difference between XML-RPC and SOAP? I've never really understood why a web service implementer would choose one over the other. Is XML-RPC generally found in older systems? Any help in understanding this would be greatly appreciated. A: Differences? SOAP is more powerful, and is much preferred by software tool vendors (MSFT .NET, Java Enterprise Edition, that sort of thing). SOAP was for a long time (2001-2007ish) seen as the protocol of choice for SOA. xml-rpc not so much. REST is the new SOA darling, although it's not a protocol. SOAP is more verbose, but more capable. SOAP is not supported in some of the older stuff. For example, no SOAP libs for classic ASP (that I could find). SOAP is not well supported in python. XML-RPC has great support in python, in the standard library. SOAP supports document-level transfer, whereas xml-rpc is more about values transfer, although it can transfer structures such as structs, lists, etc. xml-rpc is really about program-to-program, language-agnostic transfer. It primarily goes over http/https. SOAP messages can go over email as well. xml-rpc is more unixy. It lets you do things simply, and when you know what you're doing, it's very fast to deploy quality web services, even when using terminal text editors. Doing SOAP that way is a zoo; you really need a good IDE to make it feasible. Knowing SOAP, though, will look much better on your resume/CV if you're vying for a Fortune 500 IT job. xml-rpc has some issues with non-ASCII character sets. XML-RPC does not support named parameters. They must be in the correct order. Not sure about SOAP, but I think so. A: Just to add to the other answers, I would encourage you to look at actual textual representations of SOAP and XML-RPC calls, perhaps by capturing one with Ethereal. The whole "XML-RPC is simpler" argument doesn't make much sense until you see how incredibly verbose a SOAP call is. Many of the fairly popular web sites out there shy away from SOAP as their API due to just the amount of bandwidth it would consume if people started using it extensively. A: Kate Rhodes has a great essay on the differences at http://weblog.masukomi.org/2006/11/21/xml-rpc-vs-soap
What's the difference between XML-RPC and SOAP?
I've never really understood why a web service implementer would choose one over the other. Is XML-RPC generally found in older systems? Any help in understanding this would be greatly appreciated.
[ "Differences?\nSOAP is more powerful, and is much preferred by software tool vendors (MSFT .NET, Java Enterprise edition, that sort of things).\nSOAP was for a long time (2001-2007ish) seen as the protocol of choice for SOA. xml-rpc not so much. REST is the new SOA darling, although it's not a protocol.\nSOAP is more verbose, but more capable. \nSOAP is not supported in some of the older stuff. For example, no SOAP libs for classic ASP (that I could find).\nSOAP is not well supported in python. XML-RPC has great support in python, in the standard library.\nSOAP supports document-level transfer, whereas xml-rpc is more about values transfer, although it can transfer structures such as structs, lists, etc. \nxm-rpc is really about program to program language agnostic transfer. It primarily goes over http/https. SOAP messages can go over email as well.\nxml-rpc is more unixy. It lets you do things simply, and when you know what you're doing, it's very fast to deploy quality web services, even when using terminal text editors. Doing SOAP that way is a zoo; you really need a good IDE to make it feasible.\nKnowing SOAP, though, will look much better on your resume/CV if you're vying for a Fortune 500 IT job.\nxml-rpc has some issues with non-ascii character sets. \nXML-RPC does not support named parameters. They must be in correct order. Not sure about SOAP, but think so.\n", "Just to add to the other answers, I would encourage you to look at actual textual representations of SOAP and XML-RPC calls, perhaps by capturing one with Ethereal. The whole, \"XML-RPC is simpler\" argument doesn't make much sense until you see how incredibly verbose a SOAP call is. Many of the fairly popular web sites out there shy away from SOAP as their API due to just the amount of bandwidth it would consume if people started using it extensively.\n", "Kate Rhodes has a great essay on the differences at http://weblog.masukomi.org/2006/11/21/xml-rpc-vs-soap\n" ]
[ 92, 14, 6 ]
[]
[]
[ "soap", "web_services", "xml", "xml_rpc" ]
stackoverflow_0000080112_soap_web_services_xml_xml_rpc.txt
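To make the "look at the actual textual representations" advice concrete, the XML-RPC side can be exercised from Python's standard library in a few lines (a minimal sketch; the endpoint URL and method name are made up for illustration):

import xmlrpclib  # Python 2 standard library, as the first answer notes

proxy = xmlrpclib.ServerProxy("http://example.com/rpc")  # hypothetical endpoint
# Each call is POSTed as a small <methodCall> XML document; parameters are
# positional only, which is the "no named parameters" limitation mentioned above.
print proxy.examples.getStateName(41)

The equivalent SOAP exchange wraps the same payload in an Envelope/Body plus namespace declarations and usually a WSDL contract, which is where most of the verbosity difference the second answer describes comes from.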
Q: What can prevent an MS Access 2000 form from closing? My Access 2000 DB causes me problems - sometimes (haven't pinpointed the cause) the "book" form won't close. Clicking its close button does nothing, File -> Close does nothing, even closing Access results in no action. I don't have an OnClose handler for this form. The only workaround I can find involves opening the Vba editor, making a change to the code for that form (even adding a space and then immediately deleting the space), and then going back to close the "book" form, closing it, and saying "no, I don't want to save the changes". Only then will it close. Any help? A: Here's a forum post describing, I think, the same problem you face. Excerpt belows states a workaround. What I do is to put code on the close button that reassigns the sourceobject of any subforms to a blank form, such as: me!subParts.sourceobject = "subBlank" 'subBlank is my form that is totally blank, free of code and controls, etc. docmd.close acForm, "fParts", acSaveNo The above 2 lines is the only way I've found to prevent the Access prompt from popping up. http://bytes.com/forum/thread681889.html A: Another alternative is (Me.Checkbox) or my preferred syntax: (Me!Checkbox) It seems to me that there is much confusion in the posts in this topic. The answer that was chosen by the original poster cites an article where the user had a prompt to save design changes to the form, but the problem described here seems like it's a failure of the form to close, not a save issue (the save issue came up only in the workaround describing going to the VBE and making a code change). I wonder if the original user might have incorrect VBE options set? If you open the VBE and go to TOOLS | OPTIONS, on the GENERAL tab, you'll see several choices about error handling. BREAK ON UNHANDLED ERRORS or BREAK IN CLASS MODULE should be chosen, but it's important to recognize that if you use the former, you may not see certain kinds of errors. There's not really enough detail to diagnose much more, other than the fact that the reference to the checkbox control seemed to have been causing the problem, but there are a number of Access coding best practices that can help you avoid some of these oddities. The code-related recommendations in Tony Toews's Best Practices page are a good place to start. A: That sure is weird. Do you have any timer controls on the form? If you do, try disabling it in the OnClose. A: There is a possibility that the message box that asks if you want to save changes is being displayed behind the form. I believe that this message box is modal so you must click yes or no before you can do anything with the form which is why you can't close it. A: Does your form have an unload event? That can be canceled, and if it is, the form won't close when it's in form view. It will only close in design view, which, when you edit the vba code is what the form does in the Access window when you're editing the code. A: Does your form have a checkbox, toggle button or option button? There's a bug in Access 2000 where Access won't close if you test the value without explicitly using the Value property in the vba code, like this: If Me.chkbox Then versus: If Me.chkbox.Value Then
What can prevent an MS Access 2000 form from closing?
My Access 2000 DB causes me problems - sometimes (I haven't pinpointed the cause) the "book" form won't close. Clicking its close button does nothing, File -> Close does nothing, and even closing Access results in no action. I don't have an OnClose handler for this form. The only workaround I can find involves opening the VBA editor, making a change to the code for that form (even adding a space and then immediately deleting the space), then going back to the "book" form, closing it, and saying "no, I don't want to save the changes". Only then will it close. Any help?
[ "Here's a forum post describing, I think, the same problem you face. Excerpt belows states a workaround.\n\nWhat I do is to put code on the close button that reassigns the sourceobject\n of any subforms to a blank form, such as:\nme!subParts.sourceobject = \"subBlank\" 'subBlank is my form that is\n totally blank, free of code and controls, etc.\n docmd.close acForm, \"fParts\", acSaveNo\nThe above 2 lines is the only way I've found to prevent the Access prompt\n from popping up.\nhttp://bytes.com/forum/thread681889.html\n\n", "Another alternative is\n(Me.Checkbox) \n\nor my preferred syntax:\n(Me!Checkbox)\n\nIt seems to me that there is much confusion in the posts in this topic. The answer that was chosen by the original poster cites an article where the user had a prompt to save design changes to the form, but the problem described here seems like it's a failure of the form to close, not a save issue (the save issue came up only in the workaround describing going to the VBE and making a code change).\nI wonder if the original user might have incorrect VBE options set? If you open the VBE and go to TOOLS | OPTIONS, on the GENERAL tab, you'll see several choices about error handling. BREAK ON UNHANDLED ERRORS or BREAK IN CLASS MODULE should be chosen, but it's important to recognize that if you use the former, you may not see certain kinds of errors.\nThere's not really enough detail to diagnose much more, other than the fact that the reference to the checkbox control seemed to have been causing the problem, but there are a number of Access coding best practices that can help you avoid some of these oddities. The code-related recommendations in Tony Toews's Best Practices page are a good place to start.\n", "That sure is weird. Do you have any timer controls on the form? If you do, try disabling it in the OnClose.\n", "There is a possibility that the message box that asks if you want to save changes is being displayed behind the form. I believe that this message box is modal so you must click yes or no before you can do anything with the form which is why you can't close it.\n", "Does your form have an unload event? That can be canceled, and if it is, the form won't close when it's in form view. It will only close in design view, which, when you edit the vba code is what the form does in the Access window when you're editing the code.\n", "Does your form have a checkbox, toggle button or option button? There's a bug in Access 2000 where Access won't close if you test the value without explicitly using the Value property in the vba code, like this:\nIf Me.chkbox Then\n\nversus:\nIf Me.chkbox.Value Then\n\n" ]
[ 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "forms", "ms_access" ]
stackoverflow_0000084332_forms_ms_access.txt
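One way the Unload suggestion above can be checked concretely: if the form (or code merged in from a library) contains a handler like this sketch, the form will silently refuse to close in form view, which matches the symptoms described (the Me.Dirty condition is purely illustrative):

Private Sub Form_Unload(Cancel As Integer)
    ' If anything sets Cancel here, Access abandons the close with no message.
    If Me.Dirty Then
        Cancel = True
    End If
End Sub

Searching the form's module for "Cancel = True" inside Form_Unload is a quick way to rule this cause in or out.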
Q: Resources on techniques used for collision detection in 2D? What are, in your opinion, the best resources (books or web pages) describing algorithms or techniques to use for collision detection in a 2D environment? I'm just eager to learn different techniques to make more sophisticated and efficient games. A: Collision detection is often a two-phase process. Some sort of "broad phase" algorithm for determining if two objects even have a chance of overlapping (to try to avoid n^2 compares), followed by a "narrow phase" collision detection algorithm, which is based on the geometry requirements of your application. Sweep and Prune is a well established efficient broad phase algorithm (with a handful of variants that may or may not suit your application) for objects undergoing relatively physical movement (things that move crazy fast or have vastly different sizes and bounding regions might make this unsuitable). The Bullet library has a 3d implementation for reference. Narrow phase collision can often be as simple as "CircleIntersectCircle." Again the Bullet libraries have good reference implementations. In 3d land when more precise detection is required for arbitrary objects, GJK is among the current cream of the crop - nothing in my knowledge would prevent it from being adapted to 2d (but it might end up slower than just brute forcing all your edges ;) Finally, after you have collision detection, you are often in need of some sort of collision response. Box 2d is a good starting point for a physical response solution. A: Personally, I love the work of Paul Bourke. Also, Paul Nettle used to write on the topic. He has a full 3D collision detection library, but you may be more interested in the ideas behind such libraries (which are very applicable to 2D). For that, see General Collision Detection for Games Using Ellipsoids. A: Metanet Software has published some relevant tutorials. Metanet develops N (Flash-based, for Windows, Mac, Linux) and N+ (for the X360, DS, and PSP). A: The book 'Real-Time Collision Detection' by Christer Ericson (ISBN: 1-55860-732-3) is a recent (2005) and widely praised book which should give you some good answers. It starts with a basic primer of some of the maths you will need to know, and then goes into various types of bounding volumes (spheres, axis-aligned bounding boxes, oriented bounding boxes) commonly used in collision detection. Next up for discussion are numerous algorithms for detecting collisions between various combinations of primitives, such as lines, triangles, spheres, polygons, planes, bounding volumes etc. Also of importance is the coverage of some of the major methods of spatial division and organisation of your objects (volume hierarchies, BSP trees, Octrees, etc.). This essentially speeds up collision detection, as it allows you to subdivide your objects so you can avoid unnecessary comparisons between objects (e.g. I know from my data structures that object A is too far away to hit object B, so I won't even do a distance check). It also includes some coverage of how to actually check for collisions between moving objects (intervals, etc.) but be aware that even though this is a fairly hefty book and covers the material well, it is for collision detection, not resolution or response. So it will help you determine whether two objects have collided, but not really what to do about it, i.e. how to resolve it. 
The intersection tests will usually give you the data you need to make such decisions, but in terms of the general problem of writing a solver, which uses collision detection routines to detect collisions and then decide what to do about them, this book does not cover that in depth. A: If your objects are represented as points in 2D space you can use line intersection to determine if two objects have collided. You can use similar logic to check if an object is inside another object (and thus they have collided, even if none of their lines are currently intersecting). The math to do this is quite simple, and should be covered by any textbook on basic geometry. Detecting if an object has passed completely through an object might be a bit more tricky though.
Resources on techniques used for collision detection in 2D?
What are, in your opinion, the best resources (books or web pages) describing algorithms or techniques to use for collision detection in a 2D environment? I'm just eager to learn different techniques to make more sophisticated and efficient games.
[ "Collision detection is often a two phase process. Some sort of \"broad phase\" algorithm for determinining if two objects even have a chance of overlapping (to try to avoid n^2 compares) followed by a \"narrow phase\" collision detection algorithm, which is based on the geometry requirements of your application.\nSweep and Prune is a well established efficient broad phase algorithm (with a handful of variants that may or may not suit your application) for objects undergoing relatively physical movement (things that move crazy fast or have vastly different sizes and bounding regions might make this unsuitable). The Bullet library has a 3d implementation for reference.\nNarrow phase collision can often be as simple as \"CircleIntersectCircle.\" Again the Bullet libraries have good reference implementations. In 3d land when more precise detection is required for arbitrary objects, GJK is among the current cream of the crop - nothing in my knowledge would prevent it from being adapted to 2d (but it might end up slower than just brute forcing all your edges ;)\nFinally, after you have collision detection, you are often in need of some sort of collision response. Box 2d is a good starting point for a physical response solution.\n", "Personally, I love the work of Paul Bourke.\nAlso, Paul Nettle used to write on the topic. He has a full 3D collision detection library, but you may be more interested in the ideas behind such libraries (which are very applicable to 2D). For that, see General Collision Detection for Games Using Ellipsoids.\n", "Metanet Software has published some relevant tutorials. Metanet develops N (Flash-based, for Windows, Mac, Linux) and N+ (for the X360, DS, and PSP).\n", "The book 'Real-Time Collision Detection' by Christer Ericson (ISBN: 1-55860-732-3) is a recent (2005) and widely praised book which should give you some good answers.\nIt starts with a basic primer of some of the maths you will need to know, and then goes into various types of bounding volumes (spheres, axis-aligned bounding boxes, oriented bounding boxes) commonly used in collision detection.\nNext up for discussion are numerous algorithms for detecting collisions between various combinations of primitives, such as lines, triangles, spheres, polygons, planes, bounding volumes etc.\nAlso of importance is the coverage of some of the major methods of spatial division and organisation of your objects (volume hierarchies, BSP trees, Octrees, etc.). This essentially speeds up collision detection, as it allows you to subdivide your objects so you can avoid unnecessary comparisons between objects (e.g. I know from my data structures that object A is too far away to hit object B, so I won't even do a distance check).\nIt also includes some coverage of how to actually check for collisions between moving objects (intervals, etc) but be aware that even though this is a fairly hefty book and covers the material well, it is for collision detection, not resolution or response. So it will help you determine whether two objects have collided, but not really what to do about it, i.e. how to resolve it. The intersection tests will usually give you the data you need to make such decisions, but in terms of the general problem of writing a solver, which uses collision detection routines to detect collisions and then decide what to do about them, this book does not cover that in depth.\n", "If your objects are represented as points in 2D space you can use line intersection to determine if two objects have collided. 
You can use similar logic to check if an object is inside another object (and thus they have collided, even if none of their lines are currently intersecting). The math to do this is quite simple, and should be covered by any textbook on basic geometry. Detecting if an object has passed completely through an object might be a bit more tricky though. \n" ]
[ 14, 2, 2, 1, 0 ]
[]
[]
[ "algorithm", "collision_detection" ]
stackoverflow_0000031158_algorithm_collision_detection.txt
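As a concrete narrow-phase example in the spirit of the "CircleIntersectCircle" test mentioned above (a minimal sketch, not taken from any of the cited libraries):

def circles_intersect(x1, y1, r1, x2, y2, r2):
    # Two circles overlap exactly when the distance between their centers
    # is no more than the sum of their radii; comparing squared distances
    # avoids the square root.
    dx = x2 - x1
    dy = y2 - y1
    return dx * dx + dy * dy <= (r1 + r2) ** 2

A broad phase (sweep and prune, a uniform grid, or a quadtree) would decide which pairs ever reach a test like this.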
Q: Using DateAdd in umbraco xslt to display next year's date I'm trying to display the date for a year from now in an xslt file using umbraco like so: <xsl:variable name="now" select="umbraco.library:CurrentDate()"/> <xsl:value-of select="umbraco.library:DateAdd($now, 'year', 1)"/> The value-of tag outputs today's date. How can I get the DateAdd to add a year to the current date? A: The constant 'year' is wrong. It expects just 'y'. <xsl:variable name="now" select="umbraco.library:CurrentDate()"/> <xsl:value-of select="umbraco.library:DateAdd($now, 'y', 1)"/>
Using DateAdd in umbraco xslt to display next year's date
I'm trying to display the date for a year from now in an xslt file using umbraco like so: <xsl:variable name="now" select="umbraco.library:CurrentDate()"/> <xsl:value-of select="umbraco.library:DateAdd($now, 'year', 1)"/> The value-of tag outputs today's date. How can I get the DateAdd to add a year to the current date?
[ "The constant 'year' is wrong. It expects just 'y'.\n<xsl:variable name=\"now\" select=\"umbraco.library:CurrentDate()\"/>\n<xsl:value-of select=\"umbraco.library:DateAdd($now, 'y', 1)\"/>\n\n" ]
[ 2 ]
[]
[]
[ "umbraco", "xslt" ]
stackoverflow_0000087870_umbraco_xslt.txt
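If the added year also needs friendlier output, umbraco.library is commonly assumed to expose a FormatDateTime helper that can be chained with the corrected DateAdd call (treat both the function and the pattern as assumptions to verify against your Umbraco version):

<xsl:variable name="now" select="umbraco.library:CurrentDate()"/>
<xsl:variable name="nextYear" select="umbraco.library:DateAdd($now, 'y', 1)"/>
<xsl:value-of select="umbraco.library:FormatDateTime($nextYear, 'dd MMMM yyyy')"/>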
Q: What is a heuristic fencepost? And why does gdb seem to "hit" it? A: According to this page, GDB is searching backward in the object code to find the beginning of a function, and it is hitting an imposed limit. If you can set the fence post limit to 0 or increase it, you might avoid the error, but it will take longer to run.
What is a heuristic fencepost?
And why does gdb seem to "hit" it?
[ "According to this page, GDB is searching backward in the object code to find the beginning of a function, and it is hitting an imposed limit. If you can set the fence post limit to 0 or increase it, you might avoid the error, but it will take longer to run.\n" ]
[ 3 ]
[]
[]
[ "gdb" ]
stackoverflow_0000088923_gdb.txt
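The limit the answer refers to is exposed as a GDB setting on targets that rely on this backward search (MIPS is the classic example); a short session sketch, following the answer's "set it to 0 or increase it" advice:

(gdb) show heuristic-fence-post
(gdb) set heuristic-fence-post 0
(gdb) set heuristic-fence-post 40000

A value of 0 lets the search run unbounded (thorough but slower), while a larger explicit value just widens the window before the fencepost is "hit".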
Q: VS2008 Crashing on Project Load I am unable to load any existing projects after starting VS2008. When I try to open an existing project VS2008 will crash. It looks like it is crashing when trying to load a floating window in VS but I can't tell which one. When I launch the debugger on the crashed instance I get the following message, which is not very useful. Unhandled exception at 0x00740078 in devenv.exe: 0xC0000005: Access violation writing location 0x7e429ed9. I have previously had SP1 installed but have now removed this. I also have used Resharper 4.0 but have uninstalled this as well and am still getting the problem. Does anyone have tips on how to solve VS2008 crashing problems? I really don't want to have to do a reinstall of the product. As a workaround I have found that if I create a new class library project it will fail because of a write to protected memory error. If I try and create another new class library project it will work and then I can load an existing project. A: You can also try running devenv.exe with the /ResetSettings argument (which will reset any custom settings you have) or with the /SafeMode flag. /SafeMode won't help you fix your problem but it will at least narrow down the issue to the things that are different between safe and regular mode. A: Try renaming the: projectName.csproj.user file solutionName.suo file solutionName.ncb file ... and see if the project opens.
VS2008 Crashing on Project Load
I am unable to load any existing projects after starting VS2008. When I try to open an existing project VS2008 will crash. It looks like it is crashing when trying to load a floating window in VS but I can't tell which one. When I launch the debugger on the crashed instance I get the following message, which is not very useful. Unhandled exception at 0x00740078 in devenv.exe: 0xC0000005: Access violation writing location 0x7e429ed9. I have previously had SP1 installed but have now removed this. I also have used Resharper 4.0 but have uninstalled this as well and am still getting the problem. Does anyone have tips on how to solve VS2008 crashing problems? I really don't want to have to do a reinstall of the product. As a workaround I have found that if I create a new class library project it will fail because of a write to protected memory error. If I try and create another new class library project it will work and then I can load an existing project.
[ "You can also try running devenv.exe with the /ResetSettings argument (which will reset any custom settings you have) or with the /SafeMode flag.\n/SafeMode won't help you fix your problem but it will at least narrow down the issue to the things that are different between safe and regular mode.\n", "Try renaming the:\nprojectName.csproj.user file\nsolutionName.suo file\nsolutionName.ncb file\n... and see if the project opens. \n" ]
[ 3, 1 ]
[]
[]
[ "visual_studio_2008" ]
stackoverflow_0000088797_visual_studio_2008.txt
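The devenv switches from the first answer are run from a Visual Studio 2008 command prompt, and combined with the rename suggestion they make a quick triage sequence (the solution file names below are placeholders):

devenv.exe /SafeMode
devenv.exe /ResetSettings
ren MySolution.suo MySolution.suo.bak
ren MySolution.ncb MySolution.ncb.bak

If the project loads under /SafeMode, the culprit is most likely an add-in or package rather than the solution's .suo/.ncb caches.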
Q: Do Delphi class vars have global or thread local storage? My guess is that class variables ("class var") are truly global in storage (that is, one instance for the entire application). But I am wondering whether this is the case, or whether they have thread-local storage (e.g. similar to a "threadvar") - one instance per thread. Anyone know? Edit: changed "scope" to "storage" as this is in fact the correct terminology, and what I am after (thanks Barry) A: Class variables are scoped according to their member visibility attributes, and have global storage, not thread storage. Scope is a syntactic concept, and relates to what identifiers are visible from where. It is the storage of the variable that is of concern here. A: Yes, class variables are globally scoped. Have a look in the RTL source for details of how threadvars are implemented. Under Win32 each thread can have a block of memory allocated automatically to it on thread creation. This extra data area is what is used to contain your threadvars. A: Class variables are just like classes: global and unique for the application.
Do Delphi class vars have global or thread local storage?
My guess is that class variables ("class var") are truly global in storage (that is, one instance for the entire application). But I am wondering whether this is the case, or whether they have thread-local storage (e.g. similar to a "threadvar") - one instance per thread. Anyone know? Edit: changed "scope" to "storage" as this is in fact the correct terminology, and what I am after (thanks Barry)
[ "Class variables are scoped according to their member visibility attributes, and have global storage, not thread storage.\nScope is a syntactic concept, and relates to what identifiers are visible from where. It is the storage of the variable that is of concern here.\n", "Yes, class variables are globally scoped. Have a look in the RTL source for details of how threadvars are implemented. Under Win32 each thread can have a block of memory allocated automatically to it on thread creation. This extra data area is what is used to contain your threadvars.\n", "Class variables are just like classes: global and unique for the application.\n" ]
[ 11, 8, 1 ]
[]
[]
[ "delphi", "multithreading" ]
stackoverflow_0000082113_delphi_multithreading.txt
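A minimal Delphi sketch that makes the distinction in the answers concrete (the identifiers are illustrative):

type
  TCounter = class
  public
    class var SharedCount: Integer;  // one instance for the entire application
  end;

threadvar
  ThreadLocalCount: Integer;         // one instance per thread

Every thread that increments TCounter.SharedCount touches the same storage (so concurrent access needs synchronization), while each thread sees its own copy of ThreadLocalCount.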
Q: Disable (Politely) a website when the sql server is offline I work at a college and have been developing an ASP.NET site with many, many reports about students, attendance stats... The basis for the data is an MSSQL server DB which is the back end to our student management system. This has a regular maintenance period on Thursday mornings for an unknown length of time (dependent on what has to be done). Most of the staff are aware of this but the less regular users seem to be forever ringing me up. What is the easiest way to disable the site during maintenance? Obviously I can just try a DB query to test if it is up, but I am unsure of the best way to, for instance, redirect all users to a "The website is down for maintenance" message, bearing in mind they could have started a session prior to the website going down. Hopefully, something can be implemented globally rather than per page. A: Drop an html file called "app_offline.htm" into the root of your virtual directory. Simple as that. Scott Guthrie on the subject and friendly errors. A: You could display a message to people who have logged in saying "the site will be down for maintenance in xxx minutes", then run a service to log everyone out after xxx minutes. Then set a flag somewhere that every page can access, and at the top of every page (or just the template page) you test if that flag is set; if it is, send a redirect header to a "site is down for maintenance" page. A: The "offline.html" page won't work if the user was already navigating within the site, or if he's accessing the site from a bookmark/external link to a specific page. The solution I use is to create a second web site with the same address (IP or host header(s)), but have it disabled by default. When the website is down, a script deactivates the "real" web site and enables the "maintenance" website instead. When it comes back online, another script switches back to the "real" web site. The "maintenance" web site is located in a different root directory, with a single page with the message (and any required images/css files). To have the same message shown on any page, the "maintenance" web site is set up with a 404 error handler that will redirect any request to the same "website is down for maintenance" page. A: What happens now when the site is down and someone tries to hit it? Does ADO.NET throw a specific exception you could catch and then redirect to the "website down" page? You could add a "Global.asax" file to the project, and in its code-behind add an "Application_Error" event handler. It would fire whenever an exception is thrown and goes uncaught, from anywhere in your web app. For example, in C#: protected void Application_Error(object sender, EventArgs e) { Exception ex = Server.GetLastError().GetBaseException(); if(ex is SqlException) { Server.ClearError(); Server.Transfer("~/offline.aspx"); } } You could also check the Number property on the exception, though I'm not sure which number(s) would indicate it was unable to connect to the database server. You could test this while it's down, find the SQL error number and look it up online to see if it's specifically what you really want to be checking for. EDIT: I see what you're saying, petebob. A: I would suggest doing it in Application_PreRequestHandlerExecute instead of after an error occurs. Generally, it'd be best not to enter normal processing if you know your database isn't available. 
I typically use something like below: void Application_PreRequestHandlerExecute(Object sender, EventArgs e) { string sPage = Request.ServerVariables["SCRIPT_NAME"]; if (!sPage.EndsWith("Maintenance.aspx", StringComparison.OrdinalIgnoreCase)) { //test the database connection //if it fails then redirect the user to Maintenance.aspx string connStr = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString; SqlConnection conn = new SqlConnection(connStr); try { conn.Open(); } catch(Exception ex) { Session["DBException"] = ex; Response.Redirect("Maintenance.aspx"); } finally { conn.Close(); } } } A: Thanks for the replies so far. I should point out I'm not the one that does the maintenance, nor do I have access all the time to IIS. Also, I prefer options where I do nothing as, like all programmers, I am a bit lazy. I know one way is to check a flag on every page but I'm hoping to avoid it. Could I not do something with the global.asax page? In fact, I think posting has engaged my brain: I think I could put in Application_BeginRequest a bit of code to check the SQL state then redirect: HttpContext context = HttpContext.Current; if (!isOnline()) { context.Response.ClearContent(); context.Response.Write("<script language='javascript'>" + "top.location='" + Request.ApplicationPath + "/public/Offline.aspx';</scr" + "ipt>"); } Or something like that; it may not be perfect, and it's not tested yet as I'm not at work. Comments appreciated. A: A slightly more elegant version of the DB check on every page would be to do the check in the Global.asax file or to create a master page that all the other pages inherit from. The suggestion of having an online site and an offline site is really good, but only really applicable if you have a limited number of sites to manage on the server. EDIT: Damn, the other answers with these suggestions came up after I loaded the page. I need to remember to refresh before replying :) A: James's code forgets to close the connection; it should probably be: try { conn.Open(); } catch(Exception ex) { Session["DBException"] = ex; Response.Redirect("Maintenance.aspx"); } finally { conn.Close(); }
Disable (Politely) a website when the sql server is offline
I work at a college and have been developing an ASP.NET site with many, many reports about students, attendance stats... The basis for the data is an MSSQL server DB which is the back end to our student management system. This has a regular maintenance period on Thursday mornings for an unknown length of time (dependent on what has to be done). Most of the staff are aware of this but the less regular users seem to be forever ringing me up. What is the easiest way to disable the site during maintenance? Obviously I can just try a DB query to test if it is up, but I am unsure of the best way to, for instance, redirect all users to a "The website is down for maintenance" message, bearing in mind they could have started a session prior to the website going down. Hopefully, something can be implemented globally rather than per page.
[ "Drop an html file called \"app_offline.htm\" into the root of your virtual directory. Simple as that.\nScott Guthrie on the subject and friendly errors.\n", "You could display a message to people who have logged in saying \"the site will be down for maintenance in xxx minutes\" then run a service to log everyone out after xxx minutes. Then set a flag somewhere that every page can access, and at the top of every page(or just the template page) you test if that flag is set, if it is, send a redirect header to a site is down for maintenance page.\n", "The \"offline.html\" page won't work if the user was already navigating within the site, or if he's accessing the site from a bookmark/external link to a specific page.\nThe solution I use is to create a second web site with the same address (IP or host header(s)), but have it disabled by default. When the website is down, a script deactivates the \"real\" web site and enables the \"maintenance\" website instead. When it comes back online, another script switches back to the \"real\" web site.\nThe \"maintenance\" web site is located in a different root directory, with a single page with the message (and any required images/css files)\nTo have the same message shown on any page, the \"maintenance\" web site is set up with a 404 error handler that will redirect any request to the same \"website is down for maintenance\" page.\n", "What happens now when the site is down and someone tries to hit it? Does ADO.NET throw a specific exception you could catch and then redirect to the \"website down\" page?\nYou could add a \"Global.asax\" file to the project, and in its code-behind add an \"Application_Error\" event handler. It would fire whenever an exception is thrown and goes uncaught, from anywhere in your web app. For example, in C#:\nprotected void Application_Error(object sender, EventArgs e)\n{\n Exception e = Server.GetLastError().GetBaseException();\n if(e is SqlException)\n { \n Server.ClearError();\n Server.Transfer(\"~/offline.aspx\");\n }\n} \n\nYou could also check the Number property on the exception, though I'm not sure which number(s) would indicate it was unable to connect to the database server. You could test this while it's down, find the SQL error number and look it up online to see if it's specifically what you really want to be checking for.\nEDIT: I see what you're saying, petebob.\n", "I would suggest doing it in Application_PreRequestHandlerExecute instead of after an error occurs. Generally, it'd be best not to enter normal processing if you know your database isn't available. I typically use something like below\nvoid Application_PreRequestHandlerExecute(Object sender, EventArgs e)\n{\n string sPage = Request.ServerVariables[\"SCRIPT_NAME\"];\n if (!sPage.EndsWith(\"Maintenance.aspx\", StringComparison.OrdinalIgnoreCase))\n {\n //test the database connection\n //if it fails then redirect the user to Maintenance.aspx\n string connStr = ConfigurationManager.ConnectionString[\"ConnectionString\"].ConnectionString;\n SqlConnection conn = new SqlConnection(connStr);\n try\n {\n conn.Open();\n }\n catch(Exception ex)\n {\n Session[\"DBException\"] = ex;\n Response.Redirect(\"Maintenance.aspx\");\n }\n finally\n {\n conn.Close();\n }\n }\n}\n\n", "Thanks for the replies so far I should point out I'm not the one that does the maintenance nor does I have access all the time to IIS. 
Also, I prefer options where I do nothing as, like all programmers, I am a bit lazy.\nI know one way is to check a flag on every page but I'm hoping to avoid it. Could I not do something with the global.asax page? In fact, I think posting has engaged my brain:\nI think I could put in Application_BeginRequest a bit of code to check the SQL state then redirect:\nHttpContext context = HttpContext.Current;\n if (!isOnline())\n {\n context.Response.ClearContent();\n context.Response.Write(\"<script language='javascript'>\" + \n\"top.location='\" + Request.ApplicationPath + \"/public/Offline.aspx';</scr\" + \"ipt>\");\n } \n\nOr something like that; it may not be perfect, and it's not tested yet as I'm not at work. Comments appreciated.\n", "A slightly more elegant version of the DB check on every page would be to do the check in the Global.asax file or to create a master page that all the other pages inherit from.\nThe suggestion of having an online site and an offline site is really good, but only really applicable if you have a limited number of sites to manage on the server.\nEDIT: Damn, the other answers with these suggestions came up after I loaded the page. I need to remember to refresh before replying :)\n", "James's code forgets to close the connection; it should probably be:\ntry\n{\n conn.Open();\n}\ncatch(Exception ex)\n{\n Session[\"DBException\"] = ex;\n Response.Redirect(\"Maintenance.aspx\");\n}\nfinally\n{\n conn.Close();\n}\n\n" ]
[ 12, 1, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "asp.net", "maintenance_mode", "sql" ]
stackoverflow_0000088775_asp.net_maintenance_mode_sql.txt
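A small variation on the connection probe shown above, with a using block so the connection is disposed even when Open() throws (it assumes the same web.config connection-string name as the answer does):

string connStr = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
try
{
    // Opening and immediately disposing is enough to prove the server is reachable.
    using (SqlConnection conn = new SqlConnection(connStr))
    {
        conn.Open();
    }
}
catch (SqlException ex)
{
    Session["DBException"] = ex;
    Response.Redirect("Maintenance.aspx");
}

Catching SqlException rather than Exception keeps unrelated failures from sending every user to the maintenance page.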
Q: How to program call divert settings on Windows Mobile? Does anyone know how to get/set the call divert settings in code running on Windows Mobile 5/6? I am new to Windows Mobile development and wonder if there is any way to do it using C# and .NET CF? A: I assume you mean call forwarding? In general terms, the Telephony API (TAPI) is used for programmatically controlling the phone interface. Call forwarding is specifically handled by TSPI_lineForward. Microsoft does not offer any built-in or SDK tools for managed developers to use TAPI, and the structures TAPI uses are cumbersome and difficult to P/Invoke. There are some 3rd-party libraries that do provide some level of TAPI interaction that you might also investigate. A: Thank you very much for your help. I do mean call forwarding and what I would like to do is to have a simple application, perhaps with only 2 big buttons. When pressed, one should forward the incoming calls to my work phone and the other should forward them to my home phone. Being a (desktop application) developer myself, of course I would like to have created my own solution for it. I once tried the TAPI wrapper provided by Microsoft and it just wouldn't work when I tried to 'dial' the GSM codes in code... Perhaps I should spend more time studying TAPI on mobile devices.
How to program call divert settings on Windows Mobile?
Does anyone know how to get/set the call divert settings in code running on Windows Mobile 5/6? I am new to Windows Mobile development and wonder if there is any way to do it using C# and .NET CF?
[ "I assume you mean call forwarding? In general terms, the Telephony API (TAPI) is used for programmatically controlling the phone interface. Call forwarding is specifically handled by TSPI_lineForward.\nMicrosoft does not offer any built-in or SDK tools for managed developers to use TAPI, and the structures TAPI uses are cumbersome and difficult to P/Invoke. There are a some 3rd-party libraries that do provide some level of TAPI interaction that you might also investigate.\n", "Thank you very much for your help. I do mean call forwarding and what I would like to do is to have a simple application, perhaps with only 2 big buttons. When pressed, one should forward the incoming calls to my work phone and the other should forward them to my home phone. Being a (desktop application) developer myself of course I would like to have created my own solution for it. I once tried the TAPI wrapper provided by Microsoft to try to dial the GSM codes and it just won't work in codes when I tried to 'dial' the GSM codes... Perhaps I should spend more time studying the TAPI on mobile devices.\n" ]
[ 2, 0 ]
[]
[]
[ "windows_mobile" ]
stackoverflow_0000080654_windows_mobile.txt
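For the two-button app described above, one hedged shortcut that avoids raw TAPI is letting the phone dial the GSM call-forwarding MMI codes; this sketch assumes the managed Microsoft.WindowsMobile.Telephony API from the WM5 SDK and a network/operator that honours the codes (neither is guaranteed):

using Microsoft.WindowsMobile.Telephony;

public static class Divert
{
    // **21*<number># asks the network to forward all incoming calls;
    // ##21# cancels the divert. Operator support varies.
    public static void ForwardAllCallsTo(string number)
    {
        Phone phone = new Phone();
        phone.Talk("**21*" + number + "#");
    }

    public static void CancelForwarding()
    {
        Phone phone = new Phone();
        phone.Talk("##21#");
    }
}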
Q: Bash One Liner: copy template_*.txt to foo_*.txt? Say I have three files (template_*.txt): template_x.txt template_y.txt template_z.txt I want to copy them to three new files (foo_*.txt). foo_x.txt foo_y.txt foo_z.txt Is there some simple way to do that with one command, e.g. cp --enableAwesomeness template_*.txt foo_*.txt A: for f in template_*.txt; do cp $f foo_${f#template_}; done A: [01:22 PM] matt@Lunchbox:~/tmp/ba$ ls template_x.txt template_y.txt template_z.txt [01:22 PM] matt@Lunchbox:~/tmp/ba$ for i in template_*.txt ; do mv $i foo${i:8}; done [01:22 PM] matt@Lunchbox:~/tmp/ba$ ls foo_x.txt foo_y.txt foo_z.txt A: My preferred way: for f in template_*.txt do cp $f ${f/template/foo} done The "I-don't-remember-the-substitution-syntax" way: for i in x y z do cp template_$i foo_$i done A: This should work: for file in template_*.txt ; do cp $file `echo $file | sed 's/template_\(.*\)/foo_\1/'` ; done A: I don't know of anything in bash or on cp, but there are simple ways to do this sort of thing using (for example) a perl script: ($op = shift) || die "Usage: rename perlexpr [filenames]\n"; for (@ARGV) { $was = $_; eval $op; die $@ if $@; rename($was,$_) unless $was eq $_; } Then: rename s/template/foo/ *.txt A: for i in template_*.txt; do cp -v "$i" "`echo $i | sed 's%^template_%foo_%'`"; done Probably breaks if your filenames have funky characters in them. Remove the '-v' when (if) you get confidence that it works reliably. A: The command mmv (available in Debian or Fink or easy to compile yourself) was created precisely for this task. With the plain Bash solution, I always have to look up the documentation about variable expansion. But mmv is much simpler to use, quite close to "awesomeness"! ;-) Your example would be: mcp "template_*.txt" "foo_#1.txt" mmv can handle more complex patterns as well and it has some sanity checks, for example, it will make sure none of the files in the destination set appear in the source set (so you can't accidentally overwrite files). A: Yet another way to do it: $ ls template_*.txt | sed -e 's/^template\(.*\)$/cp template\1 foo\1/' | ksh -sx I've always been impressed with the ImageMagick convert program that does what you expect with image formats: $ convert rose.jpg rose.png It has a sister program that allows batch conversions: $ mogrify -format png *.jpg Obviously these are limited to image conversions, but they have interesting command line interfaces.
Bash One Liner: copy template_*.txt to foo_*.txt?
Say I have three files (template_*.txt): template_x.txt template_y.txt template_z.txt I want to copy them to three new files (foo_*.txt). foo_x.txt foo_y.txt foo_z.txt Is there some simple way to do that with one command, e.g. cp --enableAwesomeness template_*.txt foo_*.txt
[ "\nfor f in template_*.txt; do cp $f foo_${f#template_}; done\n\n", "[01:22 PM] matt@Lunchbox:~/tmp/ba$\nls\ntemplate_x.txt template_y.txt template_z.txt\n\n[01:22 PM] matt@Lunchbox:~/tmp/ba$\nfor i in template_*.txt ; do mv $i foo${i:8}; done\n\n[01:22 PM] matt@Lunchbox:~/tmp/ba$\nls\nfoo_x.txt foo_y.txt foo_z.txt\n\n", "My preferred way:\nfor f in template_*.txt\ndo\n cp $f ${f/template/foo}\ndone\n\nThe \"I-don't-remember-the-substitution-syntax\" way:\nfor i in x y z\ndo\n cp template_$i foo_$\ndone\n\n", "This should work:\nfor file in template_*.txt ; do cp $file `echo $file | sed 's/template_\\(.*\\)/foo_\\1/'` ; done\n\n", "I don't know of anything in bash or on cp, but there are simple ways to do this sort of thing using (for example) a perl script:\n($op = shift) || die \"Usage: rename perlexpr [filenames]\\n\";\n\nfor (@ARGV) {\n $was = $_;\n eval $op;\n die $@ if $@;\n rename($was,$_) unless $was eq $_;\n}\n\nThen:\nrename s/template/foo/ *.txt\n\n", "for i in template_*.txt; do cp -v \"$i\" \"`echo $i | sed 's%^template_%foo_%'`\"; done\n\nProbably breaks if your filenames have funky characters in them. Remove the '-v' when (if) you get confidence that it works reliably.\n", "The command mmv (available in Debian or Fink or easy to compile yourself) was created precisely for this task. With the plain Bash solution, I always have to look up the documentation about variable expansion. But mmv is much simpler to use, quite close to \"awesomeness\"! ;-)\nYour example would be:\nmcp \"template_*.txt\" \"foo_#1.txt\"\n\nmmv can handle more complex patterns as well and it has some sanity checks, for example, it will make sure none of the files in the destination set appear in the source set (so you can't accidentally overwrite files).\n", "Yet another way to do it:\n$ ls template_*.txt | sed -e 's/^template\\(.*\\)$/cp template\\1 foo\\1/' | ksh -sx\n\nI've always been impressed with the ImageMagick convert program that does what you expect with image formats:\n$ convert rose.jpg rose.png\n\nIt has a sister program that allows batch conversions:\n$ mogrify -format png *.jpg\n\nObviously these are limited to image conversions, but they have interesting command line interfaces.\n" ]
[ 11, 3, 3, 2, 1, 1, 1, 0 ]
[]
[]
[ "bash" ]
stackoverflow_0000026433_bash.txt
Q: Rebind Access combo box I have an Access 2007 form that is searchable by a combobox. When I add a new record, I need to update the combobox to include the newly added item. I assume that something needs to be done in the AfterInsert event of the form but I can't figure out what. How can I rebind the combobox after inserting so that the new item appears in the list? A: The easiest way to guarantee that the combobox is always up-to-date is to just requery the combobox once it gets the focus. Even if the recordset is then updated somewhere else, your combobox is always up-to-date. A simple TheCombobox.Requery in the OnFocus event should be enough. A: There are two possible answers here that are efficient: use the Form's AfterInsert event to Requery the combo box (as well as the OnDeleteConfirm event). This will be sufficient if the combo box does not display data that the user can update and that needs to be updated if the underlying record is updated. if updates to the data need to be reflected in the combo box, then it would make sense to add a requery in the AfterUpdate events of the controls that are used to edit the data displayed in the combo box. For example, if your combo box lists the names of the people in the table, you'll want to use method #2, and in the AfterUpdate event of Me!txtFirstName and Me!txtLastName, requery the combo box. Since you're doing the same operation in four places, you'll want to write a subroutine to do the requery. So, the sub would look something like this: Private Sub RequerySearchCombo() If Me.Dirty Then Me.Dirty = False Me!MyCombo.Requery End Sub The reason to make sure you requery only when there is actually an update to the data displayed in the combo box is because if you're populating the combo box with the list of the whole table, the requery can take a very long time if you have 10s of 1,000s of records. Another alternative that saves all the requeries would be to have a blank rowsource for the combo box, and populate it only after 1 or 2 characters have been typed, and filter the results that the combo displays based on the typed characters. For that, you'd use the combo box's OnChange event: Private Sub MyCombo_Change() Dim strSQL As String If Len(Me!MyCombo.Text) = 2 Then strSQL = "SELECT MyID, LastName & ', ' & FirstName FROM MyTable " strSQL = strSQL & "WHERE LastName LIKE " & Chr(34) & Me!MyCombo.Text & "*" & Chr(34) Me!MyCombo.Rowsource = strSQL End If End Sub The code above assumes that you're searching for a person's name in a combo box that displays "LastName, FirstName". There's another important caveat: if you're searching a form bound to a full table (or a SQL statement that returns all the records in the table) and using Bookmark navigation to locate the records, this method will not scale very well, as it requires pulling the entire index for the searched fields across the wire. In the case of my imaginary combo box above, you'd be using FindFirst to navigate to the record with the corresponding MyID value, so it's the index for MyID that would have to be pulled (though only as many index pages as necessary to satisfy the search would actually get pulled). This is not an issue for tables with a few thousand records, but beyond about 15-20K, it can be a network bottleneck. In that case, instead of navigating via bookmark, you'd use your combo box to filter the result set down to the single record. This is, of course, extremely efficient, regardless of whether you're using a Jet back end or a server back end. 
It's highly desirable to start incorporating these kinds of efficiencies into your application as soon as possible. If you do so, it makes it much easier to upsize to a server back end, or makes it pretty painless if you should hit that tipping point with a mass of new data that makes the old method too inefficient to be user-friendly. A: I assume your combobox is a control on a form, not a combobox control in a commandBar. This combobox has a property called RowSource, which can be either a value list (husband;wife;son;girl) or a SQL SELECT instruction (SELECT relationDescription FROM Table_relationType). I assume also that your form recordset has something to do with your combobox recordset. What you'll have to do is, once your form recordset is properly updated (afterUpdate event I think), to reinitialise the RowSource property of the combobox control if the RowSource is an SQL instruction: myComboBoxControl.RowSource = _ "SELECT relationDescription FROM Table_relationType" or if it is a value list myComboBoxControl.RowSource = myComboBoxControl.RowSource & ";nephew" But overall I find your request very strange. Do you have a reflexive (parent-child) relationship on your table? A: I would normally use the NotInList event to add data to a combo with Response = acDataErrAdded To update the combo. The Access 2007 Developers Reference has all the details, including sample code: http://msdn.microsoft.com/en-us/library/bb214329.aspx A: Requery the combo box in the form's after update event and the delete event. Your combo box will be up to date whenever the user makes changes to the recordset, whether it's a new record, a change, or a deletion. Unless users must have everybody else's changes as soon as they're made, don't requery the combo box every time it gets the focus because not only will the user have to wait (which is noticeable with large recordsets), it's unnecessary if the recordset hasn't changed. But if that's the case, the whole form needs to be requeried as soon as anybody else makes a change, not just the combo box. This would be a highly unusual scenario. After update: Private Sub Form_AfterUpdate() On Error GoTo Proc_Err Me.cboSearch.Requery Exit Sub Proc_Err: MsgBox Err.Number & vbCrLf & vbCrLf & Err.Description Err.Clear End Sub After delete: Private Sub Form_Delete(Cancel As Integer) On Error GoTo Proc_Err Me.cboSearch.Requery Exit Sub Proc_Err: MsgBox Err.Number & vbCrLf & vbCrLf & Err.Description Err.Clear End Sub
Rebind Access combo box
I have an Access 2007 form that is searchable by a combobox. When I add a new record, I need to update the combobox to include the newly added item. I assume that something needs to be done in the AfterInsert event of the form but I can't figure out what. How can I rebind the combobox after inserting so that the new item appears in the list?
[ "The easiest way is to guarantee that the combobox is always up-to-date is to just requery the combobox once it gets the focus. Even if the recordset is then updated somewhere else, your combobox is always up-to-date. A simple TheCombobox.Requery in the OnFocus event should be enough.\n", "There are two possible answers here that are efficient:\n\nuse the Form's AfterInsert event to Requery the combo box (as well as the OnDeleteConfirm event). This will be sufficient if the combo box does not display data that the user can update and that needs to be updated if the underlying record is updated.\nif updates to the data need to be reflected in the combo box, then it would make sense to add a requery in the AfterUpdate events of the controls that are used to edit the data displayed in the combo box.\n\nFor example, if your combo box lists the names of the people in the table, you'll want to use method #2, and in the AfterUpdate event of Me!txtFirstName and Me!txtLastName, requery the combo box. Since you're doing the same operation in four places, you'll want to write a subroutine to do the requery. So, the sub would look something like this:\n Private Sub RequerySearchCombo()\n If Me.Dirty Then Me.Dirty = False\n Me!MyCombo.Requery\n End Sub\n\nThe reason to make sure you requery only when there is actually an update to the data displayed in the combo box is because if you're populating the combo box with the list of the whole table, the requery can take a very long time if you have 10s of 1,000s of records.\nAnother alternative that saves all the requeries would be to have a blank rowsource for the combo box, and populate it only after 1 or 2 characters have been typed, and filter the results that the combo displays based on the typed characters. For that, you'd use the combo box's OnChange event:\nPrivate Sub MyCombo_Change()\n Dim strSQL As String\n\n If Len(Me!MyCombo.Text) = 2 Then\n strSQL = \"SELECT MyID, LastName & ', ' & FirstName FROM MyTable \"\n strSQL = strSQL & \"WHERE LastName LIKE \" & Chr(34) & Me!MyCombo.Text & Chr(34) & \"*\"\n Me!MyCombo.Rowsource = strSQL \n End If\nEnd Sub\n\nThe code above assumes that you're searching for a person's name in a combo box that displays \"LastName, FirstName\".\nThere's another important caveat: if you're searching a form bound to a full table (or a SQL statement that returns all the records in the table) and using Bookmark navigation to locate the records, this method will not scale very well, as it requires pulling the entire index for the searched fields across the wire. In the case of my imaginary combo box above, you'd be using FindFirst to navigate to the record with the corresponding MyID value, so it's the index for MyID that would have to be pulled (though only as many index pages as necessary to satisfy the search would actually get pulled). This is not an issue for tables with a few thousand records, but beyond about 15-20K, it can be a network bottleneck.\nIn that case, instead of navigating via bookmark, you'd use your combo box to filter the result set down to the single record. This is, of course, extremely efficient, regardless of whether you're using a Jet back end or a server back end. It's highly desirable to start incorporating these kinds of efficiencies into your application as soon as possible. 
If you do so, it makes it much easier to upsize to a server back end, or makes it pretty painless if you should hit that tipping point with a mass of new data that makes the old method too inefficient to be user-friendly.\n", "I assume your combobox is a control on a form, not a combobox control in a commandBar. This combobox has a property called rowsource, that can be either a value list (husband;wife;son;girl) or a SQL SELECT instruction (SELECT relationDescription FROM Table_relationType). \nI assume also that your form recordset has something to do with your combobox recordset. What you'll have to do is, once your form recordset is properly updated (afterUpdate event I think), to reinitialise the rowsource property of the combobox control \nif the recordsource is an SQL instruction:\nmyComboBoxControl.recordsource = _\n \"SELECT relationDescription FROM Table_relationType\"\n\nor if it is a value list\nmyComboBoxControl.recordsource = myComboBoxControl.recordsource & \";nephew\"\n\nBut over all I find your request very strange. Do you have a reflexive (parent-child) relationship on your table? \n", "I would normally use the NotInList event to add data to a combo with \n Response = acDataErrAdded\n\nTo update the combo.\nThe Access 2007 Developers Reference has all the details, including sample code:\nhttp://msdn.microsoft.com/en-us/library/bb214329.aspx\n", "Requery the combo box in the form's after update event and the delete event. Your combo box will be up to date whenever the user makes changes to the recordset, whether it's a new record, and change, or a deletion.\nUnless users must have everybody else's changes as soon as they're made, don't requery the combo box every time it gets the focus because not only will the user have to wait (which is noticable with large recordsets), it's unnecessary if the recordset hasn't changed. But if that's the case, the whole form needs to be requeried as soon as anybody else makes a change, not just the combo box. This would be a highly unusual scenario.\nAfter update:\nPrivate Sub Form_AfterUpdate() \n On Error GoTo Proc_Err \n\n Me.cboSearch.Requery \n\n Exit Sub \nProc_Err: \n MsgBox Err.Number & vbCrLf & vbCrLf & Err.Description\n Err.Clear \nEnd Sub\n\nAfter delete:\nPrivate Sub Form_Delete(Cancel As Integer) \n On Error GoTo Proc_Err \n\n Me.cboSearch.Requery \n\n Exit Sub \nProc_Err: \n MsgBox Err.Number & vbCrLf & vbCrLf & Err.Description\n Err.Clear \nEnd Sub\n\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "ms_access" ]
stackoverflow_0000080832_ms_access.txt
Q: What happens to your time slice if you get pre-empted in vxWorks? If you have round robin enabled in Vxworks and your task gets preempted by a higher priority task, what happens to the remaining time slice? A: Your task will resume execution and finish the remainder of the time slice. Note that you will have some jitter that occurs for one time tick, since time slicing has a granularity of 1 clock tick. For example: You have round robin enabled with a 10 clock tick time slice. One clock tick is 10 ms. You expect 100 ms per time slice. You get pre-empted at 5 ms (the middle of your 1st tick). You should run for 95ms more, but VxWorks considers that you still have 10 ticks to go. If the task gets the cpu back at 11ms, you will execute 99ms more. If the task gets the cpu back at 19ms, you will execute 91ms more. Every time you get pre-empted, your task might execute +/- 1 tick in absolute time.
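To make the tick-granularity arithmetic concrete, here is a small sketch — written in C# purely for illustration (VxWorks itself is programmed in C), with constants mirroring the 10 ms tick / 10-tick slice example in the answer above:

// Replays the tick-granularity arithmetic from the answer above. Round-robin
// accounting is in ticks, not milliseconds, so a task pre-empted mid-tick is
// still owed a whole number of ticks when it resumes.
class TimeSliceJitter
{
    const int MsPerTick = 10;      // one clock tick = 10 ms
    const int TicksPerSlice = 10;  // 10-tick slice = 100 ms nominal

    // Remaining run time for a task pre-empted at preemptMs (mid-tick)
    // that gets the CPU back at resumeMs.
    static int RemainingMs(int preemptMs, int resumeMs)
    {
        int ticksUsed = preemptMs / MsPerTick;             // ticks that expired before pre-emption
        int ticksLeft = TicksPerSlice - ticksUsed;         // ticks still owed
        int nextTickBoundary = ((resumeMs / MsPerTick) + 1) * MsPerTick;
        return (nextTickBoundary - resumeMs) + (ticksLeft - 1) * MsPerTick;
    }

    static void Main()
    {
        System.Console.WriteLine(RemainingMs(5, 11)); // 99, as in the answer
        System.Console.WriteLine(RemainingMs(5, 19)); // 91, as in the answer
    }
}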
What happens to your time slice if you get pre-empted in vxWorks?
If you have round robin enabled in Vxworks and your task gets preempted by a higher priority task, what happens to the remaining time slice?
[ "Your task will resume execution and finish the remainder of the time slice. \nNote that you will have some jitter that occurs for one time tick, since time slicing has a granularity of 1 clock tick.\nFor example:\nYou have round robin enabled with a 10 clock tick time slice. One clock tick is 10 ms. You expect 100 ms per time slice.\nYou get pre-empted at 5 ms (the middle of your 1st tick). You should run for 95ms more, but VxWorks considers that you still have 10 ticks to go.\nIf the task gets the cpu back at 11ms, you will execute 99ms more.\nIf the task gets the cpu back at 19ms, you will execute 91ms more.\nEvery time you get pre-empted, your task might execute +/- 1 tick in absolute time.\n" ]
[ 3 ]
[]
[]
[ "vxworks" ]
stackoverflow_0000089071_vxworks.txt
Q: Mathematical analysis of a sound sample (as an array of numbers) I need to find the frequency of a sample, stored (in vb) as an array of byte. Sample is a sine wave (known frequency, so I can check), but the numbers are a bit odd, and my maths-foo is weak. Full range of values 0-255. 99% of numbers are in range 235 to 245, but there are some outliers down to 0 and 1, and up to 255 in the remaining 1%. How do I normalise this to remove outliers (calculating the 235-245 interval as it may change with different samples), and how do I then calculate zero-crossings to get the frequency? Apologies if this description is rubbish! A: The FFT is probably the best answer, but if you really want to do it by your method, try this: To normalize, first make a histogram to count how many occurrences of each value from 0 to 255. Then throw out X percent of the values from each end with something like: for (i=lower=0;i< N*(X/100); lower++) i+=count[lower]; //repeat in other direction for upper Now normalize with A[i] = 255*(A[i]-lower)/(upper-lower)-128 Throw away results outside the -128..127 range. Now you can count zero crossings. To make sure you are not fooled by noise, you might want to keep track of the slope over the last several points, and only count crossings when the average slope is going the right way. A: The standard method to attack this problem is to consider one block of data, hopefully at least twice the actual frequency (taking more data isn't bad, so it's good to overestimate a bit), then take the FFT and guess that the frequency corresponds to the largest number in the resulting FFT spectrum. By the way, very similar problems have been asked here before - you could search for those answers as well. A: Use the Fourier transform, it's much less sensitive to noise than counting zero crossings Edit: @WaveyDavey I found an F# library to do an FFT: From here As it turns out, the best free implementation that I've found for F# users so far is still the fantastic FFTW library. Their site has a precompiled Windows DLL. I've written minimal bindings that allow thread-safe access to FFTW from F#, with both guru and simple interfaces. Performance is excellent, 32-bit Windows XP Pro is only up to 35% slower than 64-bit Linux. Now I'm sure you can call an F# lib from VB.net, C# etc, that should be in their docs A: If I understood well from your description, what you have is a signal which is a combination of a sine plus a constant plus some random glitches. Say, like x[n] = A*sin(f*n + phi) + B + N[n] where N[n] is the "glitch" noise you want to get rid of. If the glitches are one-sample long, you can remove them using a median filter which has to be bigger than the glitch length, on both sides of the glitch. Glitches of length 1 mean you will have enough with a median of length 3. y[n] = median3(x[n]) The median is computed like this: take the samples of x you want to filter (x[n-1],x[n],x[n+1]), sort them, and your output is the middle one. Now that the noise signal is away, get rid of the constant signal. I understand the buffer is of a limited and known length, so you can just compute the mean of the whole buffer. Subtract it. Now you have your single sine signal. You can now compute the fundamental frequency by counting zero crossings. Count the amount of samples above 0 in which the former sample was below 0. The period is the total amount of samples of your buffer divided by this, and the frequency is the inverse (1/x) of the period. 
A: Although I would go with the majority and say that it seems like what you want is an FFT solution (the FFT algorithm is pretty quick), if FFT is not the answer for whatever reason you may want to try fitting a sine curve to the data using a fitting program and reading off the fitted frequency. Using Fityk, you can load the data, and fit to a*sin(b*x-c) where 2*pi/b will give you the frequency after fitting. Fityk can be used from a GUI, from a command-line for scripting and has a C++ API so could be included in your programs directly. A: I googled for "basic fft". Visual Basic FFT Your question screams FFT, but be careful: using FFT without understanding even a little bit about DSP can lead to results that you don't understand or don't know where they come from. A: get the Frequency Analyzer at http://www.relisoft.com/Freeware/index.htm and run it and look at the code.
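For reference, a rough C# sketch of the histogram-trim normalisation plus zero-crossing count from the first answer. The sample rate and trim percentage are assumed placeholder values, not taken from the question:

// Trim outliers via a histogram, re-centre the signal, then estimate
// the frequency from upward zero crossings (one per cycle).
static double EstimateFrequency(byte[] samples, double sampleRate, double trimPercent)
{
    var count = new int[256];                       // histogram of byte values
    foreach (byte b in samples) count[b]++;

    // Find cut-offs that drop trimPercent/2 of the samples from each end.
    int toDrop = (int)(samples.Length * trimPercent / 200.0);
    int lower = 0, upper = 255, acc = 0;
    while (lower < 255 && acc + count[lower] <= toDrop) acc += count[lower++];
    acc = 0;
    while (upper > 0 && acc + count[upper] <= toDrop) acc += count[upper--];

    double mid = (lower + upper) / 2.0;             // re-centre around zero

    int crossings = 0;
    double? prev = null;
    foreach (byte b in samples)
    {
        if (b < lower || b > upper) continue;       // skip the glitches
        double v = b - mid;
        if (prev.HasValue && prev.Value < 0 && v >= 0) crossings++;
        prev = v;
    }

    return crossings / (samples.Length / sampleRate);   // cycles per second
}

Called with something like EstimateFrequency(buffer, 8000.0, 1.0) for an 8 kHz recording and a 1% trim — both numbers are guesses to be replaced with your actual values.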
Mathematical analysis of a sound sample (as an array of numbers)
I need to find the frequency of a sample, stored (in vb) as an array of byte. Sample is a sine wave (known frequency, so I can check), but the numbers are a bit odd, and my maths-foo is weak. Full range of values 0-255. 99% of numbers are in range 235 to 245, but there are some outliers down to 0 and 1, and up to 255 in the remaining 1%. How do I normalise this to remove outliers (calculating the 235-245 interval as it may change with different samples), and how do I then calculate zero-crossings to get the frequency? Apologies if this description is rubbish!
[ "The FFT is probably the best answer, but if you really want to do it by your method, try this:\nTo normalize, first make a histogram to count how many occurrances of each value from 0 to 255. Then throw out X percent of the values from each end with something like:\nfor (i=lower=0;i< N*(X/100); lower++)\n i+=count[lower];\n//repeat in other direction for upper\n\nNow normalize with \nA[i] = 255*(A[i]-lower)/(upper-lower)-128\n\nThrow away results outside the -128..127 range.\nNow you can count zero crossings. To make sure you are not fooled by noise, you might want to keep track of the slope over the last several points, and only count crossings when the average slope is going the right way.\n", "The standard method to attack this problem is to consider one block of data, hopefully at least twice the actual frequency (taking more data isn't bad, so it's good to overestimate a bit), then take the FFT and guess that the frequency corresponds to the largest number in the resulting FFT spectrum.\nBy the way, very similar problems have been asked here before - you could search for those answers as well.\n", "Use the Fourier transform, it's much more noise insensitive than counting zero crossings\nEdit: @WaveyDavey\nI found an F# library to do an FFT: From here\n\nAs it turns out, the best free\nimplementation that I've found for F#\nusers so far is still the fantastic\nFFTW library. Their site has a\nprecompiled Windows DLL. I've written\nminimal bindings that allow\nthread-safe access to FFTW from F#,\nwith both guru and simple interfaces.\nPerformance is excellent, 32-bit\nWindows XP Pro is only up to 35%\nslower than 64-bit Linux.\n\nNow I'm sure you can call F# lib from VB.net, C# etc, that should be in their docs\n", "If I understood well from your description, what you have is a signal which is a combination of a sine plus a constant plus some random glitches. Say, like\nx[n] = A*sin(f*n + phi) + B + N[n]\n\nwhere N[n] is the \"glitch\" noise you want to get rid of.\nIf the glitches are one-sample long, you can remove them using a median filter which has to be bigger than the glitch length. On both sides of the glitch. Glitches of length 1, mean you will have enough with a median of 3 samples of length.\ny[n] = median3(x[n])\n\nThe median is computed so: Take the samples of x you want to filter (x[n-1],x[n],x[n+1]), sort them, and your output is the middle one. \nNow that the noise signal is away, get rid of the constant signal. I understand the buffer is of a limited and known length, so you can just compute the mean of the whole buffer. Substract it.\nNow you have your single sinus signal. You can now compute the fundamental frequency by counting zero crossings. Count the amount of samples above 0 in which the former sample was below 0. The period is the total amount of samples of your buffer divided by this, and the frequency is the oposite (1/x) of the period.\n", "Although I would go with the majority and say that it seems like what you want is an fft solution (fft algorithm is pretty quick), if fft is not the answer for whatever reason you may want to try fitting a sine curve to the data using a fitting program and reading off the fitted frequency.\nUsing Fityk, you can load the data, and fit to a*sin(b*x-c) where 2*pi/b will give you the frequency after fitting.\nFityk can be used from a gui, from a command-line for scripting and has a C++ API so could be included in your programs directly.\n", "I googled for \"basic fft\". 
Visual Basic FFT Your question screams FFT, but be careful, using FFT without understanding even a little bit about DSP can lead results that you don't understand or don't know where they come from.\n", "get the Frequency Analyzer at http://www.relisoft.com/Freeware/index.htm and run it and look at the code.\n" ]
[ 7, 5, 3, 2, 1, 0, 0 ]
[]
[]
[ "audio", "audio_analysis", "vb.net" ]
stackoverflow_0000087262_audio_audio_analysis_vb.net.txt
Q: How do I generate a histogram for a given probability distribution (for functional testing a server)? I am trying to automate functional testing of a server using a realistic frequency distribution of requests. (sort of load testing, sort of simulation) I've chosen the Weibull distribution as it "sort of" matches the distribution I've observed (ramps up quickly, drops off quickly but not instantly) I use this distribution to generate the number of requests that should be sent each day between a given start and end date I've hacked together an algorithm in Python that sort of works but it feels kludgy: how_many_days = (end_date - start_date).days freqs = defaultdict(int) for x in xrange(how_many_responses): freqs[int(how_many_days * weibullvariate(0.5, 2))] += 1 timeline = [] day = start_date for i,freq in sorted(freqs.iteritems()): timeline.append((day, freq)) day += timedelta(days=1) return timeline What better ways are there to do this? A: Why don't you try The Grinder 3 to load test your server, it comes with all this and more prebuilt, and it supports Python as a scripting language A: Slightly longer but probably more readable rework of your last four lines: samples = [0 for i in xrange(how_many_days + 1)] for s in xrange(how_many_responses): samples[min(int(how_many_days * weibullvariate(0.5, 2)), how_many_days)] += 1 histogram = zip(timeline, samples) print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d,c in histogram) This always drops the samples within the date range, but you get a corresponding bump at the end of the timeline from all of the samples that are above the [0, 1] range. A: This is quick and probably not that accurate, but if you calculate the PDF yourself, then at least you make it easier to lay several smaller/larger ones on a single timeline. dev is the std deviation in the Gaussian noise, which controls the roughness. Note that this is not the 'right' way to generate what you want, but it's easy. import math from datetime import datetime, timedelta, date from random import gauss how_many_responses = 1000 start_date = date(2008, 5, 1) end_date = date(2008, 6, 1) num_days = (end_date - start_date).days + 1 timeline = [start_date + timedelta(i) for i in xrange(num_days)] def weibull(x, k, l): return (k / l) * (x / l)**(k-1) * math.e**(-(x/l)**k) dev = 0.1 samples = [i * 1.25/(num_days-1) for i in range(num_days)] probs = [weibull(i, 2, 0.5) for i in samples] noise = [gauss(0, dev) for i in samples] simdata = [max(0., e + n) for (e, n) in zip(probs, noise)] events = [int(p * (how_many_responses / sum(probs))) for p in simdata] histogram = zip(timeline, events) print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d,c in histogram) A: Instead of giving the number of requests as a fixed value, why not use a scaling factor instead? At the moment, you're treating requests as a limited quantity, and randomising the days on which those requests fall. It would seem more reasonable to treat your requests-per-day as independent. from datetime import * from random import * timeline = [] scaling = 10 start_date = date(2008, 5, 1) end_date = date(2008, 6, 1) num_days = (end_date - start_date).days + 1 days = [start_date + timedelta(i) for i in range(num_days)] requests = [int(scaling * weibullvariate(0.5, 2)) for i in range(num_days)] timeline = zip(days, requests) timeline A: I rewrote the code above to be shorter (but maybe it's too obfuscated now?) 
timeline = (start_date + timedelta(days=days) for days in count(0)) how_many_days = (end_date - start_date).days pick_a_day = lambda _:int(how_many_days * weibullvariate(0.5, 2)) days = sorted(imap(pick_a_day, xrange(how_many_responses))) histogram = zip(timeline, (len(list(responses)) for day, responses in groupby(days))) print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d,c in histogram) A: Another solution is to use Rpy, which puts all of the power of R (including lots of tools for distributions), easily into Python.
How do I generate a histogram for a given probability distribution (for functional testing a server)?
I am trying to automate functional testing of a server using a realistic frequency distribution of requests. (sort of load testing, sort of simulation) I've chosen the Weibull distribution as it "sort of" matches the distribution I've observed (ramps up quickly, drops off quickly but not instantly) I use this distribution to generate the number of requests that should be sent each day between a given start and end date I've hacked together an algorithm in Python that sort of works but it feels kludgy: how_many_days = (end_date - start_date).days freqs = defaultdict(int) for x in xrange(how_many_responses): freqs[int(how_many_days * weibullvariate(0.5, 2))] += 1 timeline = [] day = start_date for i,freq in sorted(freqs.iteritems()): timeline.append((day, freq)) day += timedelta(days=1) return timeline What better ways are there to do this?
[ "Why don't you try The Grinder 3 to load test your server, it comes with all this and more prebuilt, and it supports python as a scripting language\n", "Slightly longer but probably more readable rework of your last four lines:\nsamples = [0 for i in xrange(how_many_days + 1)]\nfor s in xrange(how_many_responses):\n samples[min(int(how_many_days * weibullvariate(0.5, 2)), how_many_days)] += 1\nhistogram = zip(timeline, samples)\nprint '\\n'.join((d.strftime('%Y-%m-%d ') + \"*\" * c) for d,c in histogram)\n\nThis always drops the samples within the date range, but you get a corresponding bump at the end of the timeline from all of the samples that are above the [0, 1] range.\n", "This is quick and probably not that accurate, but if you calculate the PDF yourself, then at least you make it easier to lay several smaller/larger ones on a single timeline. dev is the std deviation in the Guassian noise, which controls the roughness. Note that this is not the 'right' way to generate what you want, but it's easy.\nimport math\nfrom datetime import datetime, timedelta, date\nfrom random import gauss\n\nhow_many_responses = 1000\nstart_date = date(2008, 5, 1)\nend_date = date(2008, 6, 1)\nnum_days = (end_date - start_date).days + 1\ntimeline = [start_date + timedelta(i) for i in xrange(num_days)]\n\ndef weibull(x, k, l):\n return (k / l) * (x / l)**(k-1) * math.e**(-(x/l)**k)\n\ndev = 0.1\nsamples = [i * 1.25/(num_days-1) for i in range(num_days)]\nprobs = [weibull(i, 2, 0.5) for i in samples]\nnoise = [gauss(0, dev) for i in samples]\nsimdata = [max(0., e + n) for (e, n) in zip(probs, noise)]\nevents = [int(p * (how_many_responses / sum(probs))) for p in simdata]\n\nhistogram = zip(timeline, events)\n\nprint '\\n'.join((d.strftime('%Y-%m-%d ') + \"*\" * c) for d,c in histogram)\n\n", "Instead of giving the number of requests as a fixed value, why not use a scaling factor instead? At the moment, you're treating requests as a limited quantity, and randomising the days on which those requests fall. It would seem more reasonable to treat your requests-per-day as independent.\nfrom datetime import *\nfrom random import *\n\ntimeline = []\nscaling = 10\nstart_date = date(2008, 5, 1)\nend_date = date(2008, 6, 1)\n\nnum_days = (end_date - start_date).days + 1\ndays = [start_date + timedelta(i) for i in range(num_days)]\nrequests = [int(scaling * weibullvariate(0.5, 2)) for i in range(num_days)]\ntimeline = zip(days, requests)\ntimeline\n\n", "I rewrote the code above to be shorter (but maybe it's too obfuscated now?)\ntimeline = (start_date + timedelta(days=days) for days in count(0))\nhow_many_days = (end_date - start_date).days\npick_a_day = lambda _:int(how_many_days * weibullvariate(0.5, 2))\ndays = sorted(imap(pick_a_day, xrange(how_many_responses)))\nhistogram = zip(timeline, (len(list(responses)) for day, responses in groupby(days)))\nprint '\\n'.join((d.strftime('%Y-%m-%d ') + \"*\" * c) for d,c in histogram)\n\n", "Another solution is to use Rpy, which puts all of the power of R (including lots of tools for distributions), easily into Python. \n" ]
[ 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "simulation", "statistics", "stress_testing" ]
stackoverflow_0000053786_python_simulation_statistics_stress_testing.txt
Q: Anyone using the Entity Framework *Well*? Has anyone actually shipped an Entity Framework project that does O/R mapping into conceptual classes that are quite different from the tables in the datastore? I mean collapse junction (M:M) tables into other entities to form Conceptual classes that exist in the business domain but are organized as multiple tables in the datastore. All the examples that I see on the MSDN have little use of inheritance, collapsing junction tables into other entities, or collapsing lookup tables into entities. I'd love to hear of or see examples of the below which support all the CRUD operations you would typically expect to do on a business object: Vehicle table and a Color table. A Color can appear in many Vehicles (1:M). They form the conceptual class UsedCar which has the property Color. Doctor, DoctorPatients, and Patients tables (form a many to many). Doctors have many Patients, Patients can have many Doctors (M:M). Map out the two conceptual classes Doctor (which has a Patients collection) and Patients (which has a Doctors collection). Anyone seen/done this with CSDL AND SSDL in the Entity Framework? The CSDL is no good if it doesn't actually map to anything! A: I attempted to use the Entity Framework on an existing project (~60 tables, 3 with inheritance) just to see what it was all about. My experience boiled down to: The designer surface is kludgy. The mapping isn’t intuitive and someone must have thought that having several tool windows open at the same time is acceptable. It took a long time to manually create an object and map the right fields – then it was still odd talking to it from the code. While having something handling the database communication is essential, I feel that handing the control over to EF was far more of a fight than doing it manually. Sometimes the designer just doesn’t load until you restart Visual Studio. I’m sure it’s just a bug but restarting VS is annoying. All your work ends up in a single file, I’d hate to merge multiple developer editions. The resultant SQL (watched via the Profiler) wasn’t very good. I didn’t really delve into why, but you’d be pressed to write something worse on a first attempt. A: Entity Framework - Vote of no confidence That's all I have to say... A: You mean like this? 
<edmx:ConceptualModels> <Schema xmlns="http://schemas.microsoft.com/ado/2006/04/edm" Namespace="Model1" Alias="Self"> <EntityContainer Name="Model1Container" > <EntitySet Name="ColorSet" EntityType="Model1.Color" /> <EntitySet Name="DoctorSet" EntityType="Model1.Doctor" /> <EntitySet Name="PatientSet" EntityType="Model1.Patient" /> <EntitySet Name="UsedCarSet" EntityType="Model1.UsedCar" /> <AssociationSet Name="Vehicle_Color" Association="Model1.Vehicle_Color"> <End Role="Colors" EntitySet="ColorSet" /> <End Role="Vehicles" EntitySet="UsedCarSet" /></AssociationSet> <AssociationSet Name="DoctorPatient" Association="Model1.DoctorPatient"> <End Role="Doctor" EntitySet="DoctorSet" /> <End Role="Patient" EntitySet="PatientSet" /></AssociationSet> </EntityContainer> <EntityType Name="Color"> <Key> <PropertyRef Name="ColorID" /></Key> <Property Name="ColorID" Type="Int32" Nullable="false" /> <NavigationProperty Name="Vehicles" Relationship="Model1.Vehicle_Color" FromRole="Colors" ToRole="Vehicles" /></EntityType> <EntityType Name="Doctor"> <Key> <PropertyRef Name="DoctorID" /></Key> <Property Name="DoctorID" Type="Int32" Nullable="false" /> <NavigationProperty Name="Patients" Relationship="Model1.DoctorPatient" FromRole="Doctor" ToRole="Patient" /></EntityType> <EntityType Name="Patient"> <Key> <PropertyRef Name="PatientID" /></Key> <Property Name="PatientID" Type="Int32" Nullable="false" /> <NavigationProperty Name="Doctors" Relationship="Model1.DoctorPatient" FromRole="Patient" ToRole="Doctor" /> </EntityType> <EntityType Name="UsedCar"> <Key> <PropertyRef Name="VehicleID" /></Key> <Property Name="VehicleID" Type="Int32" Nullable="false" /> <NavigationProperty Name="Color" Relationship="Model1.Vehicle_Color" FromRole="Vehicles" ToRole="Colors" /></EntityType> <Association Name="Vehicle_Color"> <End Type="Model1.Color" Role="Colors" Multiplicity="1" /> <End Type="Model1.UsedCar" Role="Vehicles" Multiplicity="*" /></Association> <Association Name="DoctorPatient"> <End Type="Model1.Doctor" Role="Doctor" Multiplicity="*" /> <End Type="Model1.Patient" Role="Patient" Multiplicity="*" /></Association> </Schema> </edmx:ConceptualModels>
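For what it's worth, a hedged sketch of consuming the conceptual model above from EF v1 code, assuming the generated ObjectContext is named Model1Container as in the CSDL (entity and set names come from the XML; the generated-API details are an assumption):

// Read: eager-load the M:M collection that the DoctorPatient junction
// table was collapsed into.
using (var ctx = new Model1Container())
{
    foreach (var d in ctx.DoctorSet.Include("Patients"))
        Console.WriteLine("Doctor {0} has {1} patients", d.DoctorID, d.Patients.Count);

    // Create: the junction row is written implicitly when the
    // relationship is added and SaveChanges is called.
    var doc = new Doctor { DoctorID = 1 };
    var pat = new Patient { PatientID = 2 };
    doc.Patients.Add(pat);
    ctx.AddToDoctorSet(doc);
    ctx.SaveChanges();
}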
Anyone using the Entity Framework *Well*?
Has anyone actually shipped an Entity Framework project that does O/R mapping into conceptual classes that are quite different from the tables in the datastore? I mean collapse junction (M:M) tables into other entities to form Conceptual classes that exist in the business domain but are organized as multiple tables in the datastore. All the examples that I see on the MSDN have little use of inheritance, collapsing junction tables into other entities, or collapsing lookup tables into entities. I'd love to hear of or see examples of the below which support all the CRUD operations you would typically expect to do on a business object: Vehicle table and a Color table. A Color can appear in many Vehicles (1:M). They form the conceptual class UsedCar which has the property Color. Doctor, DoctorPatients, and Patients tables (form a many to many). Doctors have many Patients, Patients can have many Doctors (M:M). Map out the two conceptual classes Doctor (which has a Patients collection) and Patients (which has a Doctors collection). Anyone seen/done this with CSDL AND SSDL in the Entity Framework? The CSDL is no good if it doesn't actually map to anything!
[ "I attempted to use the Entity Framework on an existing project (~60 tables, 3 with inheritance) just to see what it was all about. My experience boiled down to:\nThe designer surface is kludgy. The mapping isn’t intuitive and someone must have thought that having several tool windows open at the same time is acceptable. It took a long time to manually create an object and map the right fields – then it was still odd talking to it from the code. While having something handling the database communication is essential, I feel that handing the control over to EF was far more of a fight than doing it manually.\nSometimes the designer just doesn’t load until you restart Visual Studio. I’m sure it’s just a bug but restarting VS is annoying.\nAll your work ends up in a single file, I’d hate to merge multiple developer editions.\nThe resultant SQL (watched via the Profiler) wasn’t very good. I didn’t really delve into looking why, but you’d be pressed to write something worse on a first attempt.\n", "Entity Framework - Vote of no confidence\nThat's all I have to say...\n", "You mean like this?\n<edmx:ConceptualModels>\n <Schema xmlns=\"http://schemas.microsoft.com/ado/2006/04/edm\" Namespace=\"Model1\" Alias=\"Self\">\n <EntityContainer Name=\"Model1Container\" >\n <EntitySet Name=\"ColorSet\" EntityType=\"Model1.Color\" />\n <EntitySet Name=\"DoctorSet\" EntityType=\"Model1.Doctor\" />\n <EntitySet Name=\"PatientSet\" EntityType=\"Model1.Patient\" />\n <EntitySet Name=\"UsedCarSet\" EntityType=\"Model1.UsedCar\" />\n <AssociationSet Name=\"Vehicle_Color\" Association=\"Model1.Vehicle_Color\">\n <End Role=\"Colors\" EntitySet=\"ColorSet\" />\n <End Role=\"Vehicles\" EntitySet=\"UsedCarSet\" /></AssociationSet>\n <AssociationSet Name=\"DoctorPatient\" Association=\"Model1.DoctorPatient\">\n <End Role=\"Doctor\" EntitySet=\"DoctorSet\" />\n <End Role=\"Patient\" EntitySet=\"PatientSet\" /></AssociationSet>\n </EntityContainer>\n <EntityType Name=\"Color\">\n <Key>\n <PropertyRef Name=\"ColorID\" /></Key>\n <Property Name=\"ColorID\" Type=\"Int32\" Nullable=\"false\" />\n <NavigationProperty Name=\"Vehicles\" Relationship=\"Model1.Vehicle_Color\" FromRole=\"Colors\" ToRole=\"Vehicles\" /></EntityType>\n <EntityType Name=\"Doctor\">\n <Key>\n <PropertyRef Name=\"DoctorID\" /></Key>\n <Property Name=\"DoctorID\" Type=\"Int32\" Nullable=\"false\" />\n <NavigationProperty Name=\"Patients\" Relationship=\"Model1.DoctorPatient\" FromRole=\"Doctor\" ToRole=\"Patient\" /></EntityType>\n <EntityType Name=\"Patient\">\n <Key>\n <PropertyRef Name=\"PatientID\" /></Key>\n <Property Name=\"PatientID\" Type=\"Int32\" Nullable=\"false\" />\n <NavigationProperty Name=\"Doctors\" Relationship=\"Model1.DoctorPatient\" FromRole=\"Patient\" ToRole=\"Doctor\" />\n </EntityType>\n <EntityType Name=\"UsedCar\">\n <Key>\n <PropertyRef Name=\"VehicleID\" /></Key>\n <Property Name=\"VehicleID\" Type=\"Int32\" Nullable=\"false\" />\n <NavigationProperty Name=\"Color\" Relationship=\"Model1.Vehicle_Color\" FromRole=\"Vehicles\" ToRole=\"Colors\" /></EntityType>\n <Association Name=\"Vehicle_Color\">\n <End Type=\"Model1.Color\" Role=\"Colors\" Multiplicity=\"1\" />\n <End Type=\"Model1.UsedCar\" Role=\"Vehicles\" Multiplicity=\"*\" /></Association>\n <Association Name=\"DoctorPatient\">\n <End Type=\"Model1.Doctor\" Role=\"Doctor\" Multiplicity=\"*\" />\n <End Type=\"Model1.Patient\" Role=\"Patient\" Multiplicity=\"*\" /></Association>\n </Schema>\n</edmx:ConceptualModels>\n\n" ]
[ 5, 3, 2 ]
[]
[]
[ ".net", "ado.net", "entity_framework", "orm" ]
stackoverflow_0000057718_.net_ado.net_entity_framework_orm.txt
Q: What's the best method to enable or disable a feature in a .net desktop application It can be either at compile time or at run-time using a config file. Is there a more elegant way than simple (and many) if statements? I am especially targeting sets of UI controls that come with a particular feature. A: Unless your program must squeeze out 100% performance, do it with a config file. It will keep your code cleaner. If one option changes many parts of code, don't write many conditionals, write one conditional that picks which class you delegate to. For instance if a preference picks TCP versus UDP, have your conditional instantiate a TcpProvider or UdpProvider which the rest of your code uses with minimal muss or fuss. A: Compiler directives aren't as flexible, but they are appropriate in some circumstances. For instance, by default when you compile in DEBUG mode in VS.NET, there is a 'DEBUG' symbol defined...so you can do void SomeMethod() { #if DEBUG //do something here #else //do something else #endif } this will result in only one of those blocks being compiled depending on whether the DEBUG symbol is defined. Also, you can define additional symbols in Project Properties -> Build -> Conditional compilation symbols. Or, via the command line compiler using the /define: switch A: Perhaps I am assuming too much, but: switch (setting) { case "development": dostuff; break; case "production": dootherstuff; break; default: dothebeststuff; break; }
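A minimal sketch of the config-file-plus-delegation idea from the first answer. The names here (ITransport, TransportMode) are illustrative, not from the question:

using System.Configuration;   // reference System.Configuration.dll

interface ITransport { void Send(byte[] data); }
class TcpTransport : ITransport { public void Send(byte[] data) { /* ... */ } }
class UdpTransport : ITransport { public void Send(byte[] data) { /* ... */ } }

static class TransportFactory
{
    // App.config: <appSettings><add key="TransportMode" value="udp"/></appSettings>
    public static ITransport Create()
    {
        // Read the flag once and pick the implementation; the rest of the
        // code depends only on ITransport, so there is a single conditional.
        string mode = ConfigurationManager.AppSettings["TransportMode"];
        return mode == "udp" ? (ITransport)new UdpTransport() : new TcpTransport();
    }
}

Sets of UI controls tied to a feature can be toggled the same way in one place, e.g. featurePanel.Visible = (ConfigurationManager.AppSettings["FeatureX"] == "on").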
What's the best method to enable or disable a feature in a .net desktop application
It can be either at compile time or at run-time using a config file. Is there a more elegant way than simple (and many) if statements? I am especially targeting sets of UI controls that come with a particular feature.
[ "Unless your program must squeeze out 100% performance, do it with a config file. It will keep your code cleaner.\nIf one option changes many parts of code, don't write many conditionals, write one conditional that picks which class you delegate to. For instance if a preference picks TCP versus UDP, have your conditional instantiate a TcpProvider or UdpProvider which the rest of your code uses with minimal muss or fuss.\n", "Compiler directives aren't as flexible, but they are appropriate in some circumstances.\nFor instance, by default when you compile in DEBUG mode in VS.NET, there is a 'DEBUG' symbol defined...so you can do\nvoid SomeMethod()\n{\n #if DEBUG\n //do something here\n #else\n //do something else\n #endif\n\n}\n\nthis will result in only one of those blocks being compiled depending if the DEBUG symbol is defined.\nAlso, you can define additional symbols in Project Properties -> Build -> Conditional compilation symbols.\nOr, via the command line compiler using the /define: switch\n", "Perhaps I am assuming too much, but:\nswitch (setting) {\n case \"development\": \n dostuff;\n break\n case \"production\":\n dootherstuff;\n break;\n default:\n dothebeststuff;\n break;\n}\n\n" ]
[ 3, 2, 0 ]
[]
[]
[ ".net", "user_controls" ]
stackoverflow_0000088082_.net_user_controls.txt
Q: When to create Interface Builder plug-in for custom view? When do you recommend integrating a custom view into Interface Builder with a plug-in? When skimming through Apple's Interface Builder Plug-In Programming Guide I found: Are your custom objects going to be used by only one application? Do your custom objects rely on state information found only in your application? Would it be problematic to encapsulate your custom views in a standalone library or framework? If you answered yes to any of the preceding questions, your objects may not be good candidates for a plug-in. That answers some of my questions, but I would still like your thoughts on when it's a good idea. What are the benefits and how big of a time investment is it? A: It's perfectly reasonable to push the view and controller classes that your application uses out into a separate framework — embedded in your application wrapper — for which you also produce an Interface Builder plug-in. Among other reasons, classes that are commonly used in your application can then be configured at their point of use in Interface Builder, rather than in scattered -awakeFromNib implementations. It's also the only way you can have your objects expose bindings that can be set up in Interface Builder. It's a bit of coding, but for view and controller classes that are used in more than one place, and which require additional set-up before they're actually used, you'll probably save a bunch of time overall. And your experience developing with your own controller and view classes will be like developing with Cocoa's. A: I think the Apple guidelines sum it up nicely. If you're writing a control that will be used in multiple applications and is completely generic, then creating a custom object is a good idea. You'll be able to visualize the look and set properties directly from Interface Builder. If your control is limited to one application, or is tightly coupled with your data, then moving it into a custom object really won't buy you much. It's not difficult to create a custom view, there are a lot of easy to follow guides out there.
When to create Interface Builder plug-in for custom view?
When do you recommend integrating a custom view into Interface Builder with a plug-in? When skimming through Apple's Interface Builder Plug-In Programming Guide I found: Are your custom objects going to be used by only one application? Do your custom objects rely on state information found only in your application? Would it be problematic to encapsulate your custom views in a standalone library or framework? If you answered yes to any of the preceding questions, your objects may not be good candidates for a plug-in. That answers some of my questions, but I would still like your thoughts on when it's a good idea. What are the benefits and how big of a time investment is it?
[ "It's perfectly reasonable to push the view and controller classes that your application uses out into a separate framework — embedded in your application wrapper — for which you also produce an Interface Builder plug-in.\nAmong other reasons, classes that are commonly used in your application can then be configured at their point of use in Interface Builder, rather than in scattered -awakeFromNib implementations. It's also the only way you can have your objects expose bindings that can be set up in Interface Builder.\nIt's a bit of coding, but for view and controller classes that are used in more than one place, and which require additional set-up before they're actually used, you'll probably save a bunch of time overall. And your experience developing with your own controller and view classes will be like developing with Cocoa's.\n", "I think the Apple guidelines sum it up nicely.\nIf you're writing a control that will be used in multiple applications and is completely generic, then creating a custom object is a good idea. You'll be able to visualize the look and set properties directly from Interface Builder.\nIf your control is limited to one application, or is tightly coupled with your data, then moving it into a custom object really won't buy you much.\nIt's not difficult to create a custom view, there are a lot of easy to follow guides out there.\n" ]
[ 9, 2 ]
[]
[]
[ "cocoa", "interface_builder", "macos", "objective_c" ]
stackoverflow_0000049442_cocoa_interface_builder_macos_objective_c.txt
Q: C#: Why does Settings PropertyValues have 0 items? Assuming there are 5 items in the settings file (MySetting1 to MySetting5), why does PropertyValues have 0 items while Properties has the correct number? Console.WriteLine( Properties.Settings.Default.PropertyValues.Count); // Displays 0 Console.WriteLine( Properties.Settings.Default.Properties.Count); // Displays 5 A: It appears that PropertyValues refers to the number of PropertyValues that have been set. The default values you specify aren't considered set and won't be stored to the user config if you call Save(). Console.WriteLine(Settings.Default.PropertyValues.Count.ToString()); Console.ReadLine(); Settings.Default.Setting = "abc"; Console.WriteLine(Settings.Default.PropertyValues.Count.ToString()); Console.ReadLine(); results in the following output: 0 1
C#: Why does Settings PropertyValues have 0 items?
Assuming there are 5 items in the settings file (MySetting1 to MySetting5), why does PropertyValues have 0 items while Properties has the correct number? Console.WriteLine( Properties.Settings.Default.PropertyValues.Count); // Displays 0 Console.WriteLine( Properties.Settings.Default.Properties.Count); // Displays 5
[ "It appears that PropertyValues refers to the number of PropertyValues that have been set. The default values you specify aren't considered set and won't be stored to the user config if you sall Save().\nConsole.WriteLine(Settings.Default.PropertyValues.Count.ToString());\nConsole.ReadLine();\nSettings.Default.Setting = \"abc\";\nConsole.WriteLine(Settings.Default.PropertyValues.Count.ToString());\nConsole.ReadLine();\n\nresults in the following output:\n0\n1\n" ]
[ 6 ]
[]
[]
[ "c#", "visual_studio", "visual_studio_2005" ]
stackoverflow_0000089149_c#_visual_studio_visual_studio_2005.txt
Q: Asynchronous file IO in .Net I'm building a toy database in C# to learn more about compiler, optimizer, and indexing technology. I want to maintain maximum parallelism between (at least read) requests for bringing pages into the buffer pool, but I am confused about how best to accomplish this in .NET. Here are some options and the problems I've come across with each: Use System.IO.FileStream and the BeginRead method But, the position in the file isn't an argument to BeginRead, it is a property of the FileStream (set via the Seek method), so I can only issue one request at a time and have to lock the stream for the duration. (Or do I? The documentation is unclear on what would happen if I held the lock only between the Seek and BeginRead calls but released it before calling EndRead. Does anyone know?) I know how to do this, I'm just not sure it is the best way. There seems to be another way, centered around the System.Threading.Overlapped structure and P/Invoke to the ReadFileEx function in kernel32.dll. Unfortunately, there is a dearth of samples, especially in managed languages. This route (if it can be made to work at all) apparently also involves the ThreadPool.BindHandle method and the IO completion threads in the thread pool. I get the impression that this is the sanctioned way of dealing with this scenario under Windows, but I don't understand it and I can't find an entry point to the documentation that is helpful to the uninitiated. Something else? In a comment, jacob suggests creating a new FileStream for each read in flight. Read the whole file into memory. This would work if the database was small. The codebase is small, and there are plenty of other inefficiencies, but the database itself isn't. I also want to be sure I am doing all the bookkeeping needed to deal with a large database (which turns out to be a huge part of the complexity: paging, external sorting, ...) and I'm worried it might be too easy to accidentally cheat. Edit Clarification of why I'm suspicious of solution 1: holding a single lock all the way from BeginRead to EndRead means I need to block anyone who wants to initiate a read just because another read is in progress. That feels wrong, because the thread initiating the new read might be able (in general) to do some more work before the results become available. (Actually, just writing this has led me to think up a new solution, which I put as a new answer.) A: I'm not sure I see why option 1 wouldn't work for you. Keep in mind that you can't have two different threads trying to use the same FileStream at the same time - doing so will definitely cause you problems. BeginRead/EndRead is meant to let your code continue executing while the potentially expensive IO operation takes place, not to enable some sort of multi-threaded access to a file. So I would suggest that you seek and then do a BeginRead. A: What we did was to write a small layer around I/O completion ports, ReadFile, and GetQueuedCompletionStatus in C++/CLI, and then call back into C# when the operation completed. We chose this route over BeginRead and the C# async operation pattern to provide more control over the buffers used to read from the file (or socket). This was a pretty big performance gain over the purely managed approach which allocates new byte[] on the heap with each read. Plus, there are a lot more complete C++ examples of using IO Completion ports out on the interwebs A: What if you loaded the resource (file data or whatever) into memory first and then shared it across threads? 
Since it is a small db, you won't have as many issues to deal with. A: Use approach #1, but When a request comes in, take lock A. Use it to protect a queue of pending read requests. Add it to the queue and return some new async result. If this results in the first addition to the queue, call step 2 before returning. Release lock A before returning. When a read completes (or called by step 1), take lock A. Use it to protect popping a read request from the queue. Take lock B. Use it to protect the Seek -> BeginRead -> EndRead sequence. Release lock B. Update the async result created by step 1 for this read operation. (Since a read operation completed, call this again.) This solves the problem of not blocking any thread that begins a read just because another read is in progress, but still sequences reads so that the file stream's current position doesn't get messed up.
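A rough sketch of approach #1 combined with the queue idea from the last answer — serialising the Seek/BeginRead pair behind a queue so the stream position can't be corrupted, while callers never block. Class and member names are invented for illustration, and error handling is elided:

using System;
using System.Collections.Generic;
using System.IO;

class SerializedReader
{
    readonly object queueLock = new object();            // "lock A"
    readonly Queue<ReadRequest> pending = new Queue<ReadRequest>();
    readonly FileStream stream;
    bool readInFlight;                                   // plays the role of "lock B"

    class ReadRequest
    {
        public long Position;
        public byte[] Buffer;
        public Action<int> Completed;                    // receives bytes read
    }

    public SerializedReader(FileStream stream) { this.stream = stream; }

    public void QueueRead(long position, byte[] buffer, Action<int> completed)
    {
        var req = new ReadRequest { Position = position, Buffer = buffer, Completed = completed };
        lock (queueLock)
        {
            pending.Enqueue(req);
            if (readInFlight) return;                    // in-flight read's callback drains the queue
            readInFlight = true;
        }
        StartNext();
    }

    void StartNext()
    {
        ReadRequest req;
        lock (queueLock)
        {
            if (pending.Count == 0) { readInFlight = false; return; }
            req = pending.Dequeue();
        }
        // Only one Seek -> BeginRead -> EndRead sequence runs at a time,
        // so the stream's position can't be moved under an active read.
        stream.Seek(req.Position, SeekOrigin.Begin);
        stream.BeginRead(req.Buffer, 0, req.Buffer.Length, ar =>
        {
            req.Completed(stream.EndRead(ar));
            StartNext();                                 // issue the next queued read, if any
        }, null);
    }
}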
Asynchronous file IO in .Net
I'm building a toy database in C# to learn more about compiler, optimizer, and indexing technology. I want to maintain maximum parallelism between (at least read) requests for bringing pages into the buffer pool, but I am confused about how best to accomplish this in .NET. Here are some options and the problems I've come across with each: Use System.IO.FileStream and the BeginRead method But, the position in the file isn't an argument to BeginRead, it is a property of the FileStream (set via the Seek method), so I can only issue one request at a time and have to lock the stream for the duration. (Or do I? The documentation is unclear on what would happen if I held the lock only between the Seek and BeginRead calls but released it before calling EndRead. Does anyone know?) I know how to do this, I'm just not sure it is the best way. There seems to be another way, centered around the System.Threading.Overlapped structure and P/Invoke to the ReadFileEx function in kernel32.dll. Unfortunately, there is a dearth of samples, especially in managed languages. This route (if it can be made to work at all) apparently also involves the ThreadPool.BindHandle method and the IO completion threads in the thread pool. I get the impression that this is the sanctioned way of dealing with this scenario under Windows, but I don't understand it and I can't find an entry point to the documentation that is helpful to the uninitiated. Something else? In a comment, jacob suggests creating a new FileStream for each read in flight. Read the whole file into memory. This would work if the database was small. The codebase is small, and there are plenty of other inefficiencies, but the database itself isn't. I also want to be sure I am doing all the bookkeeping needed to deal with a large database (which turns out to be a huge part of the complexity: paging, external sorting, ...) and I'm worried it might be too easy to accidentally cheat. Edit Clarification of why I'm suspicious of solution 1: holding a single lock all the way from BeginRead to EndRead means I need to block anyone who wants to initiate a read just because another read is in progress. That feels wrong, because the thread initiating the new read might be able (in general) to do some more work before the results become available. (Actually, just writing this has led me to think up a new solution, which I put as a new answer.)
[ "I'm not sure I see why option 1 wouldn't work for you. Keep in mind that you can't have two different threads trying to use the same FileStream at the same time - doing so will definitely cause you problems. BeginRead/EndRead is meant to let your code continue executing while the potentially expensive IO operation takes places, not to enable some sort of multi-threaded access to a file.\nSo I would suggest that you seek and then do a beginread.\n", "What we did was to write a small layer around I/O completion ports, ReadFile, and GetQueuedCompletion status in C++/CLI, and then call back into C# when the operation completed. We chose this route over BeginRead and the c# async operation pattern to provide more control over the buffers used to read from the file (or socket). This was a pretty big performance gain over the purely managed approach which allocates new byte[] on the heap with each read.\nPlus, there are alot more complete C++ examples of using IO Completion ports out on the interwebs\n", "What if you loaded the resource (file data or whatever) into memory first and then shared it across threads? Since it is a small db. - you won't have as many issues to deal with.\n", "Use approach #1, but\n\nWhen a request comes in, take lock A. Use it to protect a queue of pending read requests. Add it to the queue and return some new async result. If this results in the first addition to the queue, call step 2 before returning. Release lock A before returning.\nWhen a read completes (or called by step 1), take lock A. Use it to protect popping a read request from the queue. Take lock B. Use it to protect the Seek -> BeginRead -> EndRead sequence. Release lock B. Update the async result created by step 1 for this read operation. (Since a read operation completed, call this again.)\n\nThis solves the problem of not blocking any thread that begins a read just because another read is in progress, but still sequences reads so that the file stream's current position doesn't get messed up.\n" ]
[ 5, 3, 1, 0 ]
[]
[]
[ ".net", "asynchronous", "file_io", "winapi", "windows" ]
stackoverflow_0000088971_.net_asynchronous_file_io_winapi_windows.txt
Q: How do you handle different Java IDEs and svn? How do you ensure that you can check out the code into Eclipse or NetBeans and work there with it? Edit: If you're not checking in IDE-related files, you have to reconfigure the build path, includes and all this stuff each time you check out the project. I don't know if ant (especially an ant buildfile which is created/exported from Eclipse) will work with another IDE seamlessly. A: We actually maintain a Netbeans and an Eclipse project for our code in SVN right now with no troubles at all. The Netbeans files don't step on the Eclipse files. We have our projects structured like this: sample-project + bin + launches + lib + logs + nbproject + src + java .classpath .project build.xml The biggest points seem to be: Prohibit any absolute paths in the project files for either IDE. Set the project files to output the class files to the same directory. svn:ignore the private directory in the .nbproject directory. svn:ignore the directory used for class file output from the IDEs and any other runtime generated directories like the logs directory above. Have people using both consistently so that differences get resolved quickly. Also maintain a build system independent of the IDEs such as cruisecontrol. Use UTF-8 and correct any encoding issues immediately. We are developing on Fedora 9 32-bit and 64-bit, Vista, and WindowsXP and about half of the developers use one IDE or the other. A few use both and switch back and forth regularly. A: The smart ass answer is "by doing so" - unless you are working with multiple IDEs you don't know if you are really prepared for working with multiple IDEs. Honest. :) I have always seen multiple platforms as more cumbersome, as they may use different encoding standards (e.g. Windows may default to ISO-8859-1, Linux to UTF-8) - for me encoding has caused way more issues than IDEs. Some more pointers: You might want to go with Maven (http://maven.apache.org), let it generate IDE specific files and never commit them to source control. In order to be sure that you are generating the correct artefacts, you should have a dedicated server build your deliverables (e.g. cruisecontrol), either with the help of ant, maven or any other tool. These deliverables are the ones that are tested outside of development machines. Great way to make people aware that there is another world outside their own machine. Prohibit any machine specific path to be contained in any IDE specific file found in source control. Always reference external libraries by logical path names, preferably containing their version (if you don't use maven) A: The best thing is probably to not commit any IDE related file (such as Eclipse's .project), that way everyone can check out the project and do his thing as he wants. That being said, I guess most IDEs have their own config file scheme, so maybe you can commit it all without having any conflict, but it feels messy imo. A: For the most part I'd agree with seldaek, but I'm also inclined to say that you should at least give a file that says what the dependencies are, what Java version to use to compile, etc, and anything extra that a NetBeans/Eclipse developer might need to compile in their IDE. We currently only use Eclipse and so we commit all the Eclipse .classpath .project files to svn which I think is the better solution because then everyone is able to reproduce errors and what-not easily instead of faffing about with IDE specifics. 
A: I'm of the philosophy that the build should be done with a "lowest common denominator" approach. What goes into source control is what is required to do the build. While I develop exclusively with Eclipse, my build is with ant at the command line. With respect to source control, I only check in files that are essential to the build from the command line. No Eclipse files. When I set up a new development machine (seems like twice a year), it takes a little effort to get Eclipse to import the project from an ant build file but nothing scary. (In theory, this should work the same for other IDEs, no? Surely they must be able to import from ant?) I've also documented how to set up a bare-minimum build environment. A: I use maven, and check in just the pom & source. After checking out a project, I run mvn eclipse:eclipse I tell svn to ignore the generated .project, etc. A: Here's what I do: Only maintain in source control your ant build script and associated classpath. Classpath could either be explicit in the ant script, a property file or managed by Ivy. Write an ant target to generate the Eclipse .classpath file from the ant classpath Netbeans will use your build script and classpath, just configure it to do so through a free form project. This way you get IDE independent build scripts and happy developers :) There's a blog on the Netbeans site on how to do 3. but I can't find it right now. I've put some notes on how to do the above on my site - link text (quick and ugly though, sorry) Note that if you're using Ivy (a good idea) and Eclipse you might be tempted to use the Eclipse Ivy plugin. I've used it and found it to be horribly buggy and unreliable. Better to use 2. above.
How do you handle different Java IDEs and svn?
How do you ensure that you can check out the code into Eclipse or NetBeans and work with it there? Edit: If you are not checking in IDE-related files, you have to reconfigure the build path, includes and all this stuff each time you check out the project. I don't know if ant (especially an ant buildfile which is created/exported from eclipse) will work with another IDE seamlessly.
[ "We actually maintain a Netbeans and an Eclipse project for our code in SVN right now with no troubles at all. The Netbeans files don't step on the Eclipse files. We have our projects structured like this:\nsample-project \n+ bin\n+ launches \n+ lib \n+ logs\n+ nbproject \n+ src \n + java\n.classpath\n.project\nbuild.xml\n\nThe biggest points seem to be:\n\nProhibit any absolute paths in the\nproject files for either IDE. \nSet the project files to output the\nclass files to the same directory.\nsvn:ignore the private\ndirectory in the .nbproject\ndirectory.\nsvn:ignore the directory used for\nclass file output from the IDEs and any other runtime generated directories like the logs directory above.\nHave people using both consistently\nso that differences get resolved\nquickly.\nAlso maintain a build system\nindependent of the IDEs such as\ncruisecontrol.\nUse UTF-8 and correct any encoding issues\nimmediately.\n\nWe are developing on Fedora 9 32-bit and 64-bit, Vista, and WindowsXP and about half of the developers use one IDE or the other. A few use both and switch back and forth regularly.\n", "The smart ass answer is \"by doing so\" - unless you aren't working with multiple IDEs you don't know if you are really prepared for working with multiple IDEs. Honest. :)\nI always have seen multiple platforms as more cumbersome, as they may use different encoding standards (e.g. Windows may default to ISO-8859-1, Linux to UTF-8) - for me encoding has caused way more issues than IDEs.\nSome more pointers:\n\nYou might want to go with Maven (http://maven.apache.org), let it generate IDE specific files and never commit them to source control.\nIn order to be sure that you are generating the correct artefacts, you should have a dedicated server build your deliverables (e.g. cruisecontrol), either with the help of ant, maven or any other tool. These deliverables are the ones that are tested outside of development machines. Great way to make people aware that there is another world outside their own machine.\nProhibit any machine specific path to be contained in any IDE specific file found in source control. Always reference external libraries by logical path names, preferable containing their version (if you don't use maven)\n\n", "The best thing is probably to not commit any IDE related file (such as Eclipse's .project), that way everyone can checkout the project and do his thing as he wants. \nThat being said, I guess most IDEs have their own config file scheme, so maybe you can commit it all without having any conflict, but it feels messy imo.\n", "For the most part I'd agree with seldaek, but I'm also inclined to say that you should at least give a file that says what the dependencies are, what Java version to use to compile, etc, and anything extra that a NetBeans/Eclipse developer might need to compile in their IDE.\nWe currently only use Eclipse and so we commit all the Eclipse .classpath .project files to svn which I think is the better solution because then everyone is able too reproduce errors and what-not easily instead of faffing about with IDE specifics.\n", "I'm of the philosophy that the build should be done with a \"lowest common denominator\" approach. What goes into source control is what is required to do the build. While I develop exclusively in with Eclipse, my build is with ant at the command line. \nWith respect to source control, I only check in files that are essential to the build from the command line. No Eclipse files. 
When I set up a new development machine (seems like twice a year), it takes a little effort to get Eclipse to import the project from an ant build file but nothing scary. (In theory, this should work the same for other IDEs, no? Surely they must be able to import from ant?)\nI've also documented how to set up a bare minimum build environment. \n", "I use maven, and check in just the pom & source.\nAfter checking out a project, I run mvn eclipse:eclipse\nI tell svn to ignore the generated .project, etc.\n", "Here's what I do:\n\nOnly maintain in source control your ant build script and associated classpath. Classpath could either be explicit in the ant script, a property file or managed by ivy.\nwrite an ant target to generate the Eclipse .classpath file from the ant classpath\nNetbeans will use your build script and classpath, just configure it to do so through a free form project.\n\nThis way you get IDE independent build scripts and happy developers :)\nThere's a blog on the netbeans site on how to do 3. but I can't find it right now. I've put some notes on how to do the above on my site - link text (quick and ugly though, sorry)\nNote that if you're using Ivy (a good idea) and eclipse you might be tempted to use the eclipse ivy plugin. I've used it and found it to be horribly buggy and unreliable. Better to use 2. above.\n" ]
[ 9, 7, 1, 0, 0, 0, 0 ]
[]
[]
[ "collaboration", "eclipse", "ide", "java", "svn" ]
stackoverflow_0000081567_collaboration_eclipse_ide_java_svn.txt
Q: CVS Checkout to a directory How do I check out a specific directory from CVS and omit the tree leading up to that directory? Ex. I'd like to check out to this directory C:/WebHost/MyWebApp/www My CVS Project directory structure is MyWebApp/Trunk/www How do I omit the Trunk and MyWebApp directories? A: Use cvs -d/cvsroot checkout -d directory project/path/directory. The first -d can be omitted if you set the root with the environment. This is called "shortening the path" and can be avoided with the -N option to checkout. A: CVS is 'tied' to the repository by files in the .CVS folder. Each folder is 'tied' individually. This means you can just check out the full thing (or if you already have the full thing), then cut/paste the www directory out to somewhere else, and it will remain linked to the correct CVS location. A: [Oops, deleted some wrong crap.] yeah, co -d www is what you want. You can also set up modules in the repository, which will let you check out just www as if it were a top-level directory, but you have to do it for every such directory.
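Applied to the layout in the question, the first answer's command would look something like this (the :pserver: root is an assumed placeholder - substitute the real CVSROOT):

cd C:/WebHost/MyWebApp
cvs -d :pserver:user@cvshost:/cvsroot checkout -d www MyWebApp/Trunk/www

This creates C:/WebHost/MyWebApp/www directly, without materializing the MyWebApp/Trunk tree on disk.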
CVS Checkout to a directory
How do I check out a specific directory from CVS and omit the tree leading up to that directory? Ex. I'd like to check out to this directory C:/WebHost/MyWebApp/www My CVS Project directory structure is MyWebApp/Trunk/www How do I omit the Trunk and MyWebApp directories?
[ "Use cvs -d/cvsroot checkout -d directory project/path/directory. The first -d can be omitted if you set the root with the environment. This is called \"shortening the path\" and can be avoided with the -N option to checkout.\n", "CVS is 'tied' to the repository by files in the .CVS folder. Each folder is 'tied' individually.\nThis means you can just check out the full thing (or if you already have the full thing), then cut/paste the www directory out to somewhere else, and it will remain linked to the correct CVS location.\n", "[Oops, deleted some wrong crap.] yeah, co -d www is what you want.\nYou can also set up modules in the repository, which will let you check out just www as if it were a top-level directory, but you have to do it for every such directory.\n" ]
[ 27, 3, 2 ]
[]
[]
[ "cvs", "tortoisecvs", "vcs_checkout" ]
stackoverflow_0000089181_cvs_tortoisecvs_vcs_checkout.txt
Q: Is there a tool to convert a .vim colour definition file to use in VS.NET 2008 If you go to a site such as: http://www.cs.cmu.edu/~maverick/VimColorSchemeTest/index-c.html It has a bunch of example colour themes for VI. Does anyone know of a tool that would take those files and convert them into .vssettings files to use in Visual Studio? If not, how about some good docs on either of the formats. A: Here is a Visual Studio theme generator which may help. http://frickinsweet.com/tools/Theme.mvc.aspx
Is there a tool to convert a .vim colour definition file to use in VS.NET 2008
If you go to a site such as: http://www.cs.cmu.edu/~maverick/VimColorSchemeTest/index-c.html It has a bunch of example colour themes for VI. Does anyone know of a tool that would take those files and convert them into .vssettings files to use in Visual Studio? If not, how about some good docs on either of the formats.
[ "Here is a Visual Studio theme generator which may help.\nhttp://frickinsweet.com/tools/Theme.mvc.aspx\n" ]
[ 1 ]
[]
[]
[ "syntax_highlighting", "vim", "visual_studio" ]
stackoverflow_0000089226_syntax_highlighting_vim_visual_studio.txt
Q: How to hide complete volume? Using Windows Server 2003 in a multi-user environment (via Remote Desktop, using it as an application server), how to mount a (preferably encrypted) volume in a way that won't show up on any other user's desktop? Tried and failed approaches: tweaking user rights - display of the mounted volume cannot be changed. Bestcrypt / truecrypt. Both of them display the volume for a local administrator A: You're going to be hard-pressed to find a solution for your exact problem. Drive mount points aren't stored on the user level (afaik). There are a couple of workarounds that you can use that aren't guaranteed to be secure: hide access to certain drive letters based on group policy. Not very secure, easy to work around. Don't mount a separate volume: use NTFS encryption and simply set security permissions on certain folders. Is there any particular reason it has to be an entire drive? If you're trying to avoid allowing the local admin to have rights to a local drive, you're pretty much out of luck unless you use a third-party-probably-going-to-fail-horribly solution. You can jury-rig something with Group Policy to disallow local admin access, but it's going to be hard and error prone. If your desired goal is to have separate folders (or volumes) that other users cannot access, store the files on a remote server. That way local administrators on the application server cannot arbitrarily access other people's folders. (Unless they have Domain Admin or Enterprise Admin rights) You can set up a single big network drive and have different user folders on it, each encrypted using NTFS/other solution and only have read/write rights for that single user. A: There's a key in the Registry that's used to hide mapped drives. If you want to stop any combination of drives appearing in My Computer Add the Binary Value of 'NoDrives' in the registry at "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" Here is the table of all the values (Note that you can add up values to hide multiple drives, also the value is of binary type but must be entered in hexadecimal, so if you add up a few drives, get ready for a little hex math. ) : A 1 00 00 00 B 2 00 00 00 C 4 00 00 00 D 8 00 00 00 E 16 00 00 00 F 32 00 00 00 G 64 00 00 00 H 128 00 00 00 I 00 1 00 00 J 00 2 00 00 K 00 4 00 00 L 00 8 00 00 M 00 16 00 00 N 00 32 00 00 O 00 64 00 00 P 00 128 00 00 Q 00 00 1 00 R 00 00 2 00 S 00 00 4 00 T 00 00 8 00 U 00 00 16 00 V 00 00 32 00 W 00 00 64 00 X 00 00 128 00 Y 00 00 00 1 Z 00 00 00 2 A: Even if the drive letters are hidden - the volumes are still accessible unless you change ACLs on the filesystem itself - why is this so unpalatable? A: NTFS supports mounting volumes inside directories. Example - instead of mounting an external drive as D:, you can mount it under C:\mountedVolumes\externalHardDrive You can then use ACL's on the parent folder (mountedVolumes) to prevent users other than yourself from accessing it. If they can't get into the folder, they can't get into the drive, or see that it's there. It just looks like a folder they can't open. Note: This assumes that you have administrative rights (at least for when you first set this up), and that other people don't (so they can't just take ownership of mountedVolumes and go into the drive anyway)
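The "little hex math" for the NoDrives value is just one bit per drive letter (A = bit 0 ... Z = bit 25), stored as a 4-byte little-endian value; a tiny illustrative C# snippet:

using System;
class NoDrivesMask
{
    static void Main()
    {
        uint mask = 0;
        foreach (char drive in "DE") // e.g. hide D: and E:
            mask |= 1u << (drive - 'A');
        // Prints 0x00000018, i.e. the bytes 18 00 00 00 in the table above
        Console.WriteLine("0x{0:X8}", mask);
    }
}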
How to hide complete volume?
Using Windows Server 2003 in a multi-user environment (via Remote Desktop, using it as an application server), how to mount a (preferably encrypted) volume in a way that won't show up on any other user's desktop? Tried and failed approaches: tweaking user rights - display of the mounted volume cannot be changed. Bestcrypt / truecrypt. Both of them display the volume for a local administrator
[ "You're going to be hard-pressed to find a solution for your exact problem. Drive mount points aren't stored on the user level (afaik). There are a couple of workarounds that you can use that aren't guaranteed to be secure:\n\nhide access to certain drive letters based on group policy. Not very secure, easy to workaround.\nDon't mount a seperate volume: use NTFS encryption and simply set security permissions on certain folders.\n\nIs there any particualr reason it has to be an entire drive? If you're trying to avoid allowing the local-admin having rights to a local drive, you're pretty much out of luck unless you use a third-party-probably-going-to-fail-horribly solution. You can jury-rig something with Group Policy to disallow local admin access, but it's going to be hard and error prone.\nIf your desired goal is to have separate folders (or volumes) that other users cannot access, store the files on a remote server. That way local administrators on the application server cannot arbitrarily access other peoples folders. (Unless they have Domain Admin or Enterprise Admin rights) You can set up a single big network drive and have different user folders on it, each encrypted using NTFS/other solution and only have read/write rights for that single user.\n", "There's a key in the Registry that's used to hide mapped drives.\nIf you want to stop any combination of drives appearing in My Computer\nAdd the Binary Value of 'NoDrives' in the registry at \n\n\"HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\"\n\nHere is the table of all the values (Note that you can add up values to hide multiple drives, also the value is binary type but must be entered in hexadecimal, so if you add up a few drives, get ready for a little hex math. ) :\nA 1 00 00 00\nB 2 00 00 00\nC 4 00 00 00\nD 8 00 00 00\nE 16 00 00 00\nF 32 00 00 00\nG 64 00 00 00\nH 128 00 00 00\nI 00 1 00 00\nJ 00 2 00 00\nK 00 4 00 00\nL 00 8 00 00\nM 00 16 00 00\nN 00 32 00 00\nO 00 64 00 00\nP 00 128 00 00\nQ 00 00 1 00\nR 00 00 2 00\nS 00 00 4 00\nT 00 00 8 00\nU 00 00 16 00\nV 00 00 32 00\nW 00 00 64 00\nX 00 00 128 00\nY 00 00 00 1\nZ 00 00 00 2\n\n", "Even if the drive letters are hidden - the volumes are still accessible unless you change ACLs on the filesystem itself - why is this so unpalatable?\n", "NTFS supports mounting volumes inside directories.\nExample - instead of mounting an external drive as D:, you can mount it under C:\\mountedVolumes\\externalHardDrive\nYou can then use ACL's on the parent folder (mountedVolumes) to prevent users other than yourself from accessing it. If they can't get into the folder, they can't get into the drive, or see that it's there. It just looks like a folder they can't open.\nNote: This assumes that you have administrative rights (at least for when you first set this up), and that other people don't (so they can't just take ownership of mountedVolumes and go into the drive anyway)\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "sysadmin", "windows", "windows_server_2003" ]
stackoverflow_0000089233_sysadmin_windows_windows_server_2003.txt
Q: Windows wallpaper: not just BMPs? I've read in a couple of places that the desktop wallpaper can be set to an HTML document. Has anyone had any success changing it programmatically? The following snippet of VB6 helps me set things up for BMPs but when I try to use it for HTML, I get a nice blue background and nothing else. Dim reg As New StdRegistry Public Function CurrentWallpaper() As String CurrentWallpaper = reg.ValueEx(HKEY_CURRENT_USER, "Control Panel\Desktop", "Wallpaper", REG_SZ, "") End Function Public Sub SetWallpaper(cFilename As Variant) reg.ClassKey = HKEY_CURRENT_USER reg.SectionKey = "Control Panel\Desktop" reg.ValueKey = "Wallpaper" reg.ValueType = REG_SZ reg.Default = "" reg.Value = cFilename End Sub Public Sub RefreshDesktop() Dim oShell As Object Set oShell = CreateObject("WScript.Shell") oShell.Run "%windir%\System32\RUNDLL32.EXE user32.dll,UpdatePerUserSystemParameters", 1, True End Sub Perhaps there's some other setting that's required. Any ideas? A: I think you need to make sure "Active Desktop" is turned on. You might try setting HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\ForceActiveDesktopOn to 1 (found here). I haven't tried it, so no guarantees. A: Okay, I've discovered the answer to my question, thanks to egl1044 on Experts Exchange. Essentially, one must talk to the IActiveDesktop object. A good implementation of that, in VB6, can be found at VB6 - JPEGs as wallpapers (without conversion). A: I'm not sure if there's an official API for this, but if you have your heart set on it you could use Sysinternals' Process Monitor and see what registry keys get touched when you set an HTML desktop background via the UI. Then you'd just need to repeat those edits in your code. However, an API call would be far preferable in terms of backward/forward compatibility. A: Getting closer: http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/w2rkbook/gp.mspx?mfr=true But it turns out that I was getting sidetracked in Policy space. What I really wanted was to set the desktop in the userspace and let the Policy settings stand. Some helpful stuff was found here: http://blogs.msdn.com/coding4fun/archive/2006/10/31/912569.aspx. This isn't the final solution, however. The control of HTML desktops is still out of reach. Seems that HTML settings are stored in HKCU\Software\Microsoft\Internet Explorer\Desktop\General. However, just storing them here doesn't seem to be enough. I still need to find the mechanism that lets Windows know which set of registry values to use. A: I recommend only the BMP format. Do not use Active Desktop, because your PC will work slowly after that.
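For ordinary image wallpapers (as opposed to HTML ones), the documented route is the SystemParametersInfo API rather than writing the registry directly; a minimal C# sketch of the same call the VB6 code is working around (the path is an example, and on Windows 2003/XP only BMP files are accepted):

using System.Runtime.InteropServices;

class Wallpaper
{
    const int SPI_SETDESKWALLPAPER = 0x0014;
    const int SPIF_UPDATEINIFILE = 0x01;
    const int SPIF_SENDWININICHANGE = 0x02;

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int SystemParametersInfo(int uAction, int uParam, string lpvParam, int fuWinIni);

    static void Main()
    {
        // Sets and persists the wallpaper, then broadcasts the change.
        SystemParametersInfo(SPI_SETDESKWALLPAPER, 0, @"C:\wallpaper.bmp",
            SPIF_UPDATEINIFILE | SPIF_SENDWININICHANGE);
    }
}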
Windows wallpaper: not just BMPs?
I've read in a couple of places that the desktop wallpaper can be set to an HTML document. Has anyone had any success changing it programmatically? The following snippet of VB6 helps me set things up for BMPs but when I try to use it for HTML, I get a nice blue background and nothing else. Dim reg As New StdRegistry Public Function CurrentWallpaper() As String CurrentWallpaper = reg.ValueEx(HKEY_CURRENT_USER, "Control Panel\Desktop", "Wallpaper", REG_SZ, "") End Function Public Sub SetWallpaper(cFilename As Variant) reg.ClassKey = HKEY_CURRENT_USER reg.SectionKey = "Control Panel\Desktop" reg.ValueKey = "Wallpaper" reg.ValueType = REG_SZ reg.Default = "" reg.Value = cFilename End Sub Public Sub RefreshDesktop() Dim oShell As Object Set oShell = CreateObject("WScript.Shell") oShell.Run "%windir%\System32\RUNDLL32.EXE user32.dll,UpdatePerUserSystemParameters", 1, True End Sub Perhaps there's some other setting that's required. Any ideas?
[ "I think you need to make sure \"Active Desktop\" is turned on.\nYou might try setting HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\ForceActiveDesktopOn to 1 (found here).\nI haven't tried it, so no guarantees.\n", "Okay, I've discovered the answer to my question, thanks to egl1044 on Experts Exchange. Essentially, one must talk to the IActiveDesktop object. A good implementation of that, in VB6, can be found at VB6 - JPEGs as wallpapers (without conversion).\n", "I'm not sure if there's an official API for this, but if you have your heart set on it you could use Sysinternal's Process Monitor and see what registry keys get touched when you set an HTML desktop background via the UI. Then you'd just need to repeat those edits in your code. However, an API call would be far preferable in terms of backward/forward compatibility.\n", "Getting closer: http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/w2rkbook/gp.mspx?mfr=true\n\nBut it turns out that I was getting sidetracked in Policy space. What I really wanted was to set the desktop in the userspace and let the Policy settings stand. Some helpful stuff was found here: http://blogs.msdn.com/coding4fun/archive/2006/10/31/912569.aspx. \nThis isn't the final solution, however. The control of HTML desktops is still out of reach.\n\nSeems that HTML settings are stored in HKCU\\Software\\Microsoft\\Internet Explorer\\Desktop\\General. However, just storing them here doesn't seem to be enough. I still need to find the mechanism that lets Windows know which set of registry values to use.\n", "I recomend only BMP format. Do not use ActiveDesctop, because you PC will work slowly after that.\n" ]
[ 2, 2, 1, 0, 0 ]
[]
[]
[ "desktop_wallpaper", "registry", "vb6" ]
stackoverflow_0000080307_desktop_wallpaper_registry_vb6.txt
Q: Why I get an "Canvas does not allow drawing" while drawing in TeeChart ActiveX 5 component? I'm using Steema's TeeChart ActiveX 5 component for an application in .NET C#. I do some drawings using the methods Line(), Rectangle() and Circle() through the "Canvas" property of the component. My code for drawing is called on every on every OnBeforeDrawSeries() and OnAfterDraw() events of the component. When there is only a few drawings, it works ok. But when the amount of drawing increases and after a certain number of redraws, I get an MessageBox with an error "Canvas does not allow drawing" and the application quits. I believe this is somehow due to "overloading" the component with drawing calls. Am I using this functionality the wrong way, or can I consider this a BUG in the component? A: I would consider this a bug because I have a similar problem (not with Canvas) with this component and the way it manages the memory. On some machine with small amount of RAM, when we create a lot of graph and display them, we will receive a message box with this message "Not enough storage available to process this command". Once this box appears, it is impossible to close this box because if you click OK, the message box is displayed again and again. So, you need to kill the application to get ride of it. I think the bug is related to the drawing process because when we close the message box, the component tries to repaint the region where the message box was displayed and the error happens again. First, you know that TeeChart ActiveX is now at version 8. Maybe this version resolve this issue. I would suggest also to try the .NET version of TeeChart. From my own experience, TeeChart .NET does not have any memory problem since the memory is managed by the .NET framework.
Why I get an "Canvas does not allow drawing" while drawing in TeeChart ActiveX 5 component?
I'm using Steema's TeeChart ActiveX 5 component for an application in .NET C#. I do some drawings using the methods Line(), Rectangle() and Circle() through the "Canvas" property of the component. My code for drawing is called on every OnBeforeDrawSeries() and OnAfterDraw() event of the component. When there are only a few drawings, it works OK. But when the amount of drawing increases and after a certain number of redraws, I get a MessageBox with the error "Canvas does not allow drawing" and the application quits. I believe this is somehow due to "overloading" the component with drawing calls. Am I using this functionality the wrong way, or can I consider this a BUG in the component?
[ "I would consider this a bug because I have a similar problem (not with Canvas) with this component and the way it manages the memory.\nOn some machine with small amount of RAM, when we create a lot of graph and display them, we will receive a message box with this message \"Not enough storage available to process this command\". Once this box appears, it is impossible to close this box because if you click OK, the message box is displayed again and again. So, you need to kill the application to get ride of it. I think the bug is related to the drawing process because when we close the message box, the component tries to repaint the region where the message box was displayed and the error happens again.\nFirst, you know that TeeChart ActiveX is now at version 8. Maybe this version resolve this issue.\nI would suggest also to try the .NET version of TeeChart. From my own experience, TeeChart .NET does not have any memory problem since the memory is managed by the .NET framework.\n" ]
[ 1 ]
[]
[]
[ ".net", "c#", "system.drawing", "teechart" ]
stackoverflow_0000085936_.net_c#_system.drawing_teechart.txt
Q: What's a good way to trim the GUI of an ASP.NET website? I've been trimming the UI of our website by doing the following in the onload event of that control: btnDelete.Visible = user.IsInRole("can delete"); This has become very tedious because there are so many controls to check again and again. As soon as I get it all working, designers request to change the UI and then it starts all over. Any suggestions? A: One simple suggestion would be to group controls into panels based on access rights A: Something I have done before has been to create a custom page class (Actually, I do this part on every project) that each ASP.NET Page inherits. This page class contains an IsAdmin property. I then subclass the commonly used controls that may or may not be visible between modes into custom controls, and add code to check the Page's IsAdmin property. All this is maybe an hour of work, but if you build pages using these controls, they manage their mode automatically. Another fun timesaving tip is if you need to flip the page in and out of readonly mode. I added a property to the main base class, and then added a custom control that renders a textbox in one mode, and a label in the other. Again, a little bit of time on the components, but then you can create a readonly version of the page in 2 lines of code...Very worth it. A: You may be thinking of the situation in the wrong way. Instead of thinking of individual controls, think of it in terms of business roles and what they have the ability to do. This goes along with grouping controls into panels for access rights. For example, maybe only managers have the ability to delete and do other things, and you have a role for managers that you check. This way if there are changes, you can just move users into different roles. Business rules should not change drastically. There will always be tweaking as new positions gain more responsibility, but thinking of it in this way should minimize the number of changes to be made. A: A quick and dirty option is using the asp:loginview controls, which can be wired up to user roles. Not as elegant as the custom page class option suggested by Jonathan, and can be a bit of a performance hit if they are all over the page.
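A rough sketch of the panel-grouping idea from the first answer, so each role check happens once per panel rather than once per control (the panel and role names here are made up):

protected void Page_Load(object sender, EventArgs e)
{
    // Every delete-related control lives inside pnlDelete, so one check covers them all.
    pnlDelete.Visible = User.IsInRole("can delete");
    pnlAdmin.Visible = User.IsInRole("admin");
}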
What's a good way to trim the GUI of an ASP.NET website?
I've been trimming the UI of our website by doing the following in the onload event of that control: btnDelete.Visible = user.IsInRole("can delete"); This has become very tedious because there are so many controls to check again and again. As soon as I get it all working, designers request to change the UI and then it starts all over. Any suggestions?
[ "One simple suggestion would be to group controls into panels based on access rights\n", "Something I have done before has been to create a custom page class (Actually, I do this part on every project) that each ASP.NET Page inherits.\nThis page class contains an IsAdmin property.\nI then subclass the commonly used controls that may or may not be visible between modes into custom controls, and add code to check the Pages IsAdmin property.\nAll this is maybe an hour of work, but if you build pages using these controls, they manage their mode automatically.\nAnother fun timesaving tip is if you need to flip the page in and out of readonly mode. I added a property to the main base class, and then added a custom control that renders a textbox in one mode, and a label in the other.\nAgain, a little bit of time on the components, but then you can create a readonly version of the page in 2 lines of code...Very worth it.\n", "You may be thinking of the situation in the wrong way. Instead of thinking of individual controls, think of it in terms of business roles and what they have the ability to do. This goes along with grouping controls into panels for access rights. For example, maybe only managers have the ability to delete and do other things, and you have a role for managers that you check. This way if there are changes, you can just move users into different roles. Business rules should not change drastically. There will always be tweaking as new positions gain more responsibility, but thinking of it in this way should minimize the number of changes to be made.\n", "A quick and dirty option is using the asp:loginview controls, which can be wired up to user roles.\nNot as elegant as the custom page class option suggested by Jonathan, and can be a bit of a performance hit if they are all over the page.\n" ]
[ 3, 2, 2, 1 ]
[]
[]
[ "asp.net", "security", "user_interface" ]
stackoverflow_0000089285_asp.net_security_user_interface.txt
Q: .NET NumericTextBox Does anyone know why Microsoft does not ship a numeric text box with its .NET framework e.g. a text box which would ensure that the characters entered are always a valid number? It's something which is commonly used across applications of different flavours and indeed something which most GUI libraries (well, those that I know) deliver in some way. While it's not that difficult to write your own, it's not trivial either. So, I'm interested in finding out if anyone can rationalise this omission. edit: Thanks for the suggestions. Whilst masked text boxes and numeric up-downs have their place; I am interested in a control that looks like a text box but automatically performs validation on key press that the input corresponds to a valid number. In my (admittedly limited) experience, this is something which is used quite a bit (we don't always want the static constraints imposed by masked text boxes, just as we don't always want the up-down controls at the side). There are lots of implementations with varying degrees of quality of this on the net and indeed there's even an example of this on the MSDN. edit2: Thanks guys, so it sounds like the numeric up-down is the .NET control to use for numeric input only (and the reason why we don't actually have an explicit numeric text box control). It would have been great if it automatically disallowed the input of non-numeric characters (on keypress, on paste etc) but I guess it's good enough that it performs the validation when the control loses focus. And, one could do the on keypress, on paste validation if one were really keen... A: You could use a MaskedTextBox A: I second Garry Shutler's recommendation of using NumericUpDown. You might not like the up-down-controls, but that is the standard look of a numeric entry control in Windows, and you should think twice about using a different look. If you end up coding your own implementation (or finding one on the web), there are some pitfalls to look out for. Remember that there are many ways for a value to get into a control besides keypresses. The one in your link on MSDN does not even override pasting, so you can easily ctrl-V a non-numeric string into the control. A: There is the NumericUpDown control which is made specifically for the input of numbers and can be used like a TextBox. A: Starting with WinForms 2.0, you have a MaskedTextBox. You can set the mask to whatever you want, i.e. for numbers use the mask all 0s. A: Some of the .NET Framework controls oddly do not expose all the features of the underlying Windows control that they wrap. In this case, for some reason the ES_NUMBER style has not been implemented. You could possibly handle the HandleCreated event (or override OnHandleCreated, as TextBox isn't sealed) and call SetWindowLong to set the ES_NUMBER style on the underlying Edit control. ES_NUMBER is defined as 0x2000 in WinUser.h. A: You can also derive the TextBox class and grab the keypad event and ensure nothing other than numbers is written. If it were a Web page, the same would have been done to an html text box using Javascript. A: Microsoft leave it to 3rd parties to fill in the gaps regarding missing controls in the toolbox. I imagine time and cost would feature in their rationale. In this case, however, I think that the FilteredTextBox provides the functionality you describe. 
A: Based on the second edit: The Windows Forms FAQ tells you how to restrict characters in a textbox in question 26.12: 26.12 How can I restrict the characters that my textbox can accept? You can handle the textbox's KeyPress event and if the char passed in is not acceptable, mark the event's argument as showing the character has been handled. Below is a derived TextBox that only accepts digits (and control characters such as backspace, ...). Even though the snippet uses a derived textbox, it is not necessary as you can just add the handler to its parent form. See the FAQ for the code example.
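The KeyPress approach described in that last answer comes down to a couple of lines; a minimal WinForms sketch (the handler name is arbitrary, and pasting still needs separate handling, as noted above):

// Attach this to the TextBox's KeyPress event.
private void numericTextBox_KeyPress(object sender, KeyPressEventArgs e)
{
    // Allow digits and control characters (backspace etc.); swallow everything else.
    e.Handled = !char.IsDigit(e.KeyChar) && !char.IsControl(e.KeyChar);
}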
.NET NumericTextBox
Does anyone know why Microsoft does not ship a numeric text box with its .NET framework e.g. a text box which would ensure that the characters entered are always a valid number? It's something which is commonly used across applications of different flavours and indeed something which most GUI libraries (well, those that I know) deliver in some way. While it's not that difficult to write your own, it's not trivial either. So, I'm interested in finding out if anyone can rationalise this omission. edit: Thanks for the suggestions. Whilst masked text boxes and numeric up-downs have their place; I am interested in a control that looks like a text box but automatically performs validation on key press that the input corresponds to a valid number. In my (admittedly limited) experience, this is something which is used quite a bit (we don't always want the static constraints imposed by masked text boxes, just as we don't always want the up-down controls at the side). There are lots of implementations with varying degrees of quality of this on the net and indeed there's even an example of this on the MSDN. edit2: Thanks guys, so it sounds like the numeric up-down is the .NET control to use for numeric input only (and the reason why we don't actually have an explicit numeric text box control). It would have been great if it automatically disallowed the input of non-numeric characters (on keypress, on paste etc) but I guess it's good enough that it performs the validation when the control loses focus. And, one could do the on keypress, on paste validation if one were really keen...
[ "You could use a MaskedTextBox\n", "I second Garry Shutlers recommendation of using NumericUpDown. You might not like the up-down-controls, but that is the standard look of a numeric entry control in Windows, and you should think twice about using a different look.\nIf you end up coding your own implementation (or finding one on the web), there are some pitfalls to look out for. Remember that there are many ways for a value to get into a control besides keypresses. The one in your link on MSDN does not even override pasting, so you can easily ctrl-V a non-numeric string into the control.\n", "There is the NumericUpDown control which is made specifically for the input of numbers and can be used like a TextBox.\n", "Starting with WinForms 2.0, you have a MaskedTextBox. You can set the mask to whatever you want, i.e. for numbers use the mask all 0s.\n", "Some of the .NET Framework controls oddly do not expose all the features of the underlying Windows control that they wrap. In this case, for some reason the ES_NUMBER style has not been implemented.\nYou could possibly handle the HandleCreated event (or override OnHandleCreated, as TextBox isn't sealed) and call SetWindowLong to set the ES_NUMBER style on the underlying Edit control. ES_NUMBER is defined as 0x2000 in WinUser.h.\n", "You can also, derive the TextBox class and grab the keypad event and ensure nothing other than numbers is written. \nIf it were a Web page, the same would have been done to an html text box using Javascript. \n", "Microsoft leave it to 3rd parties to fill in the gaps regarding missing controls in the toolbox. I imagine time and cost would feature in their rationale.\nIn this case, however, I think that the FilteredTextBox provides the functionality you describe.\n", "Based on the second edit:\nThe Windows Forms FAQ tells you how to restrict characters in a textbox in question 26.12:\n26.12 How can I restrict the characters that my textbox can accept?\nYou can handle the textbox's KeyPress event and if the char passed in is not acceptable, mark the events argument as showing the character has been handled. Below is a derived TextBox that only accepts digits (and control characters such as backspace, ...). Even though the snippet uses a derived textbox, it is not necessary as you can just add the handler to its parent form.\nSee the FAQ for the code example.\n" ]
[ 4, 4, 3, 3, 1, 0, 0, 0 ]
[]
[]
[ ".net", "numerical", "textbox" ]
stackoverflow_0000081104_.net_numerical_textbox.txt
Q: How many DataTable objects should I use in my C# app? I'm an experienced programmer in a legacy (yet object oriented) development tool and making the switch to C#/.Net. I'm writing a small single user app using SQL server CE 3.5. I've read the conceptual DataSet and related doc and my code works. Now I want to make sure that I'm doing it "right", get some feedback from experienced .Net/SQL Server coders, the kind you don't get from reading the doc. I've noticed that I have code like this in a few places: var myTableDataTable = new MyDataSet.MyTableDataTable(); myTableTableAdapter.Fill(myTableDataTable); ... // other code In a single user app, would you typically just do this once when the app starts, instantiate a DataTable object for each table and then store a ref to it so you only ever use that single object, which is already filled with data? This way you would only ever read the data from the db once instead of potentially multiple times. Or is the overhead of this so small that it just doesn't matter (plus could be counterproductive with large tables)? A: For CE, it's probably a non-issue. If you were pushing this app to thousands of users and they were all hitting a centralized DB, you might want to spend some time on optimization. In a single-user instance DB like CE, unless you've got data that says you need to optimize, I wouldn't spend any time worrying about it. Premature optimization, etc. A: The way to decide varies based on two main things: 1. Is the data going to be accessed constantly? 2. Is there a lot of data? If you are constantly using the data in the tables, then load them on first use. If you only occasionally use the data, fill the table when you need it and then discard it. For example, if you have 10 gui screens and only use myTableDataTable on 1 of them, read it in only on that screen. A: The choice really doesn't depend on C# itself. It comes down to a balance between: How often do you use the data in your code? Does the data ever change (and do you care if it does)? What's the relative (time) cost of getting the data again, compared to everything else your code does? How much value do you put on performance, versus developer effort/time (for this particular application)? As a general rule: for production applications, where the data doesn't change often, I would probably create the DataTable once and then hold onto the reference as you mention. I would also consider putting the data in a typed collection/list/dictionary, instead of the generic DataTable class, if nothing else because it's easier to let the compiler catch my typing mistakes. For a simple utility you run for yourself that "starts, does its thing and ends", it's probably not worth the effort. You are asking about Windows CE. In that particular case, I would most likely do the query only once and hold onto the results. Mobile OSs have extra constraints in batteries and space that desktop software doesn't have. Basically, a mobile OS makes bullet #4 much more important. Every time you add another retrieval call from SQL, you make calls to external libraries more often, which means you are probably running longer, allocating and releasing more memory more often (which adds fragmentation), and possibly causing the database to be re-read from Flash memory. It's most likely a lot better to hold onto the data once you have it, assuming that you can (see bullet #2). A: It's easier to figure out the answer to this question when you think about datasets as being a "session" of data. 
You fill the datasets; you work with them; and then you put the data back or discard it when you're done. So you need to ask questions like this: How current does the data need to be? Do you always need to have the very very latest, or will the database not change that frequently? What are you using the data for? If you're just using it for reports, then you can easily fill a dataset, run your report, then throw the dataset away, and next time just make a new one. That'll give you more current data anyway. Just how much data are we talking about? You've said you're working with a relatively small dataset, so there's not a major memory impact if you load it all in memory and hold it there forever. Since you say it's a single-user app without a lot of data, I think you're safe loading everything in at the beginning, using it in your datasets, and then updating on close. The main thing you need to be concerned with in this scenario is: What if the app exits abnormally, due to a crash, power outage, etc.? Will the user lose all his work? But as it happens, datasets are extremely easy to serialize, so you can fairly easily implement a "save every so often" procedure to serialize the dataset contents to disk so the user won't lose a lot of work.
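A rough sketch of the fill-once-and-autosave pattern described in that last answer (the file name and interval are arbitrary, and the Timer assumes a WinForms message loop):

// Fill once at startup and hold the reference for the life of the app.
var myTableDataTable = new MyDataSet.MyTableDataTable();
myTableTableAdapter.Fill(myTableDataTable);

// Serialize every few minutes so a crash or power outage loses little work.
System.Windows.Forms.Timer autosave = new System.Windows.Forms.Timer();
autosave.Interval = 5 * 60 * 1000;
autosave.Tick += delegate { myTableDataTable.WriteXml("autosave.xml"); };
autosave.Start();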
How many DataTable objects should I use in my C# app?
I'm an experienced programmer in a legacy (yet object oriented) development tool and making the switch to C#/.Net. I'm writing a small single user app using SQL server CE 3.5. I've read the conceptual DataSet and related doc and my code works. Now I want to make sure that I'm doing it "right", get some feedback from experienced .Net/SQL Server coders, the kind you don't get from reading the doc. I've noticed that I have code like this in a few places: var myTableDataTable = new MyDataSet.MyTableDataTable(); myTableTableAdapter.Fill(myTableDataTable); ... // other code In a single user app, would you typically just do this once when the app starts, instantiate a DataTable object for each table and then store a ref to it so you only ever use that single object, which is already filled with data? This way you would only ever read the data from the db once instead of potentially multiple times. Or is the overhead of this so small that it just doesn't matter (plus could be counterproductive with large tables)?
[ "For CE, it's probably a non issue. If you were pushing this app to thousands of users and they were all hitting a centralized DB, you might want to spend some time on optimization. In a single-user instance DB like CE, unless you've got data that says you need to optimize, I wouldn't spend any time worrying about it. Premature optimization, etc.\n", "The way to decide varys between 2 main few things\n1. Is the data going to be accesses constantly\n2. Is there a lot of data\nIf you are constanty using the data in the tables, then load them on first use.\nIf you only occasionally use the data, fill the table when you need it and then discard it.\nFor example, if you have 10 gui screens and only use myTableDataTable on 1 of them, read it in only on that screen.\n", "The choice really doesn't depend on C# itself. It comes down to a balance between:\n\nHow often do you use the data in your code?\nDoes the data ever change (and do you care if it does)?\nWhat's the relative (time) cost of getting the data again, compared to everything else your code does?\nHow much value do you put on performance, versus developer effort/time (for this particular application)?\n\nAs a general rule: for production applications, where the data doesn't change often, I would probably create the DataTable once and then hold onto the reference as you mention. I would also consider putting the data in a typed collection/list/dictionary, instead of the generic DataTable class, if nothing else because it's easier to let the compiler catch my typing mistakes.\nFor a simple utility you run for yourself that \"starts, does its thing and ends\", it's probably not worth the effort.\nYou are asking about Windows CE. In that particular care, I would most likely do the query only once and hold onto the results. Mobile OSs have extra constraints in batteries and space that desktop software doesn't have. Basically, a mobile OS makes bullet #4 much more important.\nEverytime you add another retrieval call from SQL, you make calls to external libraries more often, which means you are probably running longer, allocating and releasing more memory more often (which adds fragmentation), and possibly causing the database to be re-read from Flash memory. it's most likely a lot better to hold onto the data once you have it, assuming that you can (see bullet #2).\n", "It's easier to figure out the answer to this question when you think about datasets as being a \"session\" of data. You fill the datasets; you work with them; and then you put the data back or discard it when you're done. So you need to ask questions like this:\n\nHow current does the data need to be? Do you always need to have the very very latest, or will the database not change that frequently?\nWhat are you using the data for? If you're just using it for reports, then you can easily fill a dataset, run your report, then throw the dataset away, and next time just make a new one. That'll give you more current data anyway.\nJust how much data are we talking about? You've said you're working with a relatively small dataset, so there's not a major memory impact if you load it all in memory and hold it there forever.\n\nSince you say it's a single-user app without a lot of data, I think you're safe loading everything in at the beginning, using it in your datasets, and then updating on close.\nThe main thing you need to be concerned with in this scenario is: What if the app exits abnormally, due to a crash, power outage, etc.? Will the user lose all his work? 
But as it happens, datasets are extremely easy to serialize, so you can fairly easily implement a \"save every so often\" procedure to serialize the dataset contents to disk so the user won't lose a lot of work.\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ ".net", "c#", "dataset", "datatable", "sql_server" ]
stackoverflow_0000087514_.net_c#_dataset_datatable_sql_server.txt
Q: How can I kill a process, using VBScript, started by a particular user I have multiple users running attachemate on a Windows 2003 server. I want to kill attachemate.exe started by user_1 without killing attachemate.exe started by user_2. I want to use VBScript. A: You could use this to find out who the process owner is, then once you have that you can use Win32_Process to kill the process by the process ID. MSDN Win32_Process class details MSDN Terminating a process with Win32_Process There is surely a cleaner way to do this, but here's what I came up with. NOTE: This doesn't deal with multiple processes of the same name of course, but I figure you can work that part out with an array to hold them or something like that. :) strComputer = "." strOwner = "A111111" strProcess = "'notepad.exe'" ' Connect to WMI service and Win32_Process filtering by name' Set objWMIService = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" _ & strComputer & "\root\cimv2") Set colProcessbyName = objWMIService.ExecQuery("Select * from Win32_Process Where Name = " _ & strProcess) ' Get the process ID for the process started by the user in question' For Each objProcess in colProcessbyName colProperties = objProcess.GetOwner(strUsername,strUserDomain) if strUsername = strOwner then strProcessID = objProcess.ProcessId end if next ' We have the process ID for the app in question for the user, now we kill it' Set colProcessList = objWMIService.ExecQuery("Select * from Win32_Process where ProcessId =" & strProcessID) For Each objProcess in colProcessList objProcess.Terminate() Next A: Shell out to pskill from http://sysinternals.com/ Commandline: pskill -u user_1 attachemate.exe
How can I kill a process, using VBScript, started by a particular user
I have multiple users running attachemate on a Windows 2003 server. I want to kill attachemate.exe started by user_1 without killing attachemate.exe started by user_2. I want to use VBScript.
[ "You could use this to find out who the process owner is, then once you have that you can use Win32_Process to kill the process by the process ID.\nMSDN Win32_Process class details\nMSDN Terminating a process with Win32_Process\nThere is surely a cleaner way to do this, but here's what I came up with. NOTE: This doesn't deal with multiple processes of the same name of course, but I figure you can work that part out with an array to hold them or something like that. :)\nstrComputer = \".\"\nstrOwner = \"A111111\"\nstrProcess = \"'notepad.exe'\"\n\n' Connect to WMI service and Win32_Process filtering by name'\nSet objWMIService = GetObject(\"winmgmts:{impersonationLevel=impersonate}!\\\\\" _\n & strComputer & \"\\root\\cimv2\")\nSet colProcessbyName = objWMIService.ExecQuery(\"Select * from Win32_Process Where Name = \" _\n & strProcess)\n\n' Get the process ID for the process started by the user in question'\nFor Each objProcess in colProcessbyName\n colProperties = objProcess.GetOwner(strUsername,strUserDomain)\n if strUsername = strOwner then\n strProcessID = objProcess.ProcessId\n end if\nnext\n\n' We have the process ID for the app in question for the user, now we kill it'\nSet colProcessList = objWMIService.ExecQuery(\"Select * from Win32_Process where ProcessId =\" & strProcessID)\nFor Each objProcess in colProcess\n objProcess.Terminate()\nNext\n\n", "Shell out to pskill from http://sysinternals.com/\nCommandline: pskill -u user_1 attachemate.exe\n" ]
[ 5, 2 ]
[]
[]
[ "kill", "vbscript", "windows_server_2003" ]
stackoverflow_0000076275_kill_vbscript_windows_server_2003.txt
Q: SQLBindParameter to prepare for SQLPutData using C++ and SQL Native Client I'm trying to use SQLBindParameter to prepare my driver for input via SQLPutData. The field in the database is a TEXT field. My function is crafted based on MS's example here: http://msdn.microsoft.com/en-us/library/ms713824(VS.85).aspx. I've set up the environment, made the connection, and prepared my statement successfully but when I call SQLBindParam (using code below) it consistently fails reporting: [Microsoft][SQL Native Client]Invalid precision value int col_num = 1; SQLINTEGER length = very_long_string.length( ); retcode = SQLBindParameter( StatementHandle, col_num, SQL_PARAM_INPUT, SQL_C_BINARY, SQL_LONGVARBINARY, NULL, NULL, (SQLPOINTER) col_num, NULL, &length ); The above relies on the driver in use returning "N" for the SQL_NEED_LONG_DATA_LEN information type in SQLGetInfo. My driver returns "Y". How do I bind so that I can use SQLPutData? A: Though it doesn't look just like the documentation's example code, I found the following solution to work for what I'm trying to accomplish. Thanks gbjbaanb for making me retest my input combinations to SQLBindParameter. SQLINTEGER length; RETCODE retcode = SQLBindParameter( StatementHandle, col_num, // position of the parameter in the query SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR, data_length, // size of our data NULL, // decimal precision: not used for our data types &my_string, // SQLParamData will return this value later to indicate what data it's looking for so let's pass in the address of our std::string data_length, &length ); // it needs a length buffer // length in the following operation must still exist when SQLExecDirect or SQLExecute is called // in my code, I used a pointer on the heap for this. length = SQL_LEN_DATA_AT_EXEC( data_length ); After a statement is executed, you can use SQLParamData to determine what data SQL wants you to send it as follows: std::string* my_string; // set string pointer to value given to SQLBindParameter retcode = SQLParamData( StatementHandle, (SQLPOINTER*) &my_string ); Finally, use SQLPutData to send the contents of your string to SQL: // send data in chunks until everything is sent SQLINTEGER len; for ( int i(0); i < my_string->length( ); i += CHUNK_SIZE ) { std::string substr = my_string->substr( i, CHUNK_SIZE ); len = substr.length( ); retcode = SQLPutData( StatementHandle, (SQLPOINTER) substr.c_str( ), len ); } A: you're passing NULL as the buffer length; this is an in/out param that should be the size of the col_num parameter. Also, you should pass a value for the ColumnSize or DecimalDigits parameters. http://msdn.microsoft.com/en-us/library/ms710963(VS.85).aspx
SQLBindParameter to prepare for SQLPutData using C++ and SQL Native Client
I'm trying to use SQLBindParameter to prepare my driver for input via SQLPutData. The field in the database is a TEXT field. My function is crafted based on MS's example here: http://msdn.microsoft.com/en-us/library/ms713824(VS.85).aspx. I've set up the environment, made the connection, and prepared my statement successfully but when I call SQLBindParam (using code below) it consistently fails reporting: [Microsoft][SQL Native Client]Invalid precision value int col_num = 1; SQLINTEGER length = very_long_string.length( ); retcode = SQLBindParameter( StatementHandle, col_num, SQL_PARAM_INPUT, SQL_C_BINARY, SQL_LONGVARBINARY, NULL, NULL, (SQLPOINTER) col_num, NULL, &length ); The above relies on the driver in use returning "N" for the SQL_NEED_LONG_DATA_LEN information type in SQLGetInfo. My driver returns "Y". How do I bind so that I can use SQLPutData?
[ "Though it doesn't look just like the documentation's example code, I found the following solution to work for what I'm trying to accomplish. Thanks gbjbaanb for making me retest my input combinations to SQLBindParameter.\n SQLINTEGER length;\n RETCODE retcode = SQLBindParameter( StatementHandle,\n col_num, // position of the parameter in the query\n SQL_PARAM_INPUT,\n SQL_C_CHAR,\n SQL_VARCHAR,\n data_length, // size of our data\n NULL, // decimal precision: not used our data types\n &my_string, // SQLParamData will return this value later to indicate what data it's looking for so let's pass in the address of our std::string\n data_length,\n &length ); // it needs a length buffer\n\n // length in the following operation must still exist when SQLExecDirect or SQLExecute is called\n // in my code, I used a pointer on the heap for this.\n length = SQL_LEN_DATA_AT_EXEC( data_length ); \n\nAfter a statement is executed, you can use SQLParamData to determine what data SQL wants you to send it as follows:\n std::string* my_string;\n // set string pointer to value given to SQLBindParameter\n retcode = SQLParamData( StatementHandle, (SQLPOINTER*) &my_string ); \n\nFinally, use SQLPutData to send the contents of your string to SQL:\n // send data in chunks until everything is sent\n SQLINTEGER len;\n for ( int i(0); i < my_string->length( ); i += CHUNK_SIZE )\n {\n std::string substr = my_string->substr( i, CHUNK_SIZE );\n\n len = substr.length( );\n\n retcode = SQLPutData( StatementHandle, (SQLPOINTER) substr.c_str( ), len );\n }\n\n", "you're passing NULL as the buffer length, this is an in/out param that shoudl be the size of the col_num parameter. Also, you should pass a value for the ColumnSize or DecimalDigits parameters. \nhttp://msdn.microsoft.com/en-us/library/ms710963(VS.85).aspx\n" ]
[ 3, 1 ]
[]
[]
[ "c++", "sql", "sql_server", "sqlncli" ]
stackoverflow_0000084064_c++_sql_sql_server_sqlncli.txt
Q: naming columns in excel with Complex sql I'm trying to run this SQL using Excel's Get External Data feature. It works, but when I try to rename the sub-queries or anything for that matter it removes it. I tried AS, AS with the name in single quotes, AS with the name in double quotes, and the same with a space. What is the right way to do that? Relevant SQL: SELECT list_name, app_name, (SELECT fname + ' ' + lname FROM dbo.d_agent_define map WHERE map.agent_id = tac.agent_id) as agent_login, input, CONVERT(varchar,DATEADD(ss,TAC_BEG_tstamp,'01/01/1970')) FROM dbo.maps_report_list list JOIN dbo.report_tac_agent tac ON (tac.list_id = list.list_id) WHERE input = 'SYS_ERR' AND app_name = 'CHARLOTT' AND convert(VARCHAR,DATEADD(ss,day_tstamp,'01/01/1970'),101) = '09/10/2008' AND list_name LIKE 'NRBAD%' ORDER BY agent_login,CONVERT(VARCHAR,DATEADD(ss,TAC_BEG_tstamp,'01/01/1970')) A: You could get rid of your dbo.d_agent_define subquery and just add in a join to the agent define table. Would this code work? select list_name, app_name, map.fname + ' ' + map.lname as agent_login, input, convert(varchar,dateadd(ss,TAC_BEG_tstamp,'01/01/1970')) as tac_seconds from dbo.maps_report_list list join dbo.report_tac_agent tac on (tac.list_id = list.list_id) join dbo.d_agent_define map on (map.agent_id = tac.agent_id) where input = 'SYS_ERR' and app_name = 'CHARLOTT' and convert(varchar,dateadd(ss,day_tstamp,'01/01/1970'),101) = '09/10/2008' and list_name LIKE 'NRBAD%' order by agent_login,convert(varchar,dateadd(ss,TAC_BEG_tstamp,'01/01/1970')) Note that I named your dateadd column because it did not have a name. I also tried to keep your convention of how you do a join. There are a few things that I would do different with this query to make it more readable, but I only focused on getting rid of the subquery problem. I did not do this, but I would recommend that you qualify all of your columns with the table from which you are getting them. A: To remove the sub query in the SELECT statement I suggest the following: SELECT list_name, app_name, map.fname + ' ' + map.lname as agent_login, input, convert(varchar,dateadd(ss, TAC_BEG_tstamp, '01/01/1970')) FROM dbo.maps_report_list inner join (dbo.report_tac_agent as tac inner join dbo.d_agent_define as map ON (tac.agent_id=map.agent_id)) ON list.list_id = tac.list_id WHERE input = 'SYS_ERR' and app_name = 'CHARLOTT' and convert(varchar,dateadd(ss,day_tstamp,'01/01/1970'),101) = '09/10/2008' and list_name LIKE 'NRBAD%' order by agent_login,convert(varchar,dateadd(ss,TAC_BEG_tstamp,'01/01/1970')) I used parentheses to create the inner join between dbo.report_tac_agent and dbo.d_agent_define first. This is now a set of join data. The combination of those tables is then joined to your list table, which I am assuming is the driving table here. If I am understanding what you are trying to do with your sub select, this should work for you. As stated by the other poster you should use table names on your columns (e.g. map.fname), it just makes things easy to understand. I didn't in my example because I am not 100% sure which columns go with which tables. Please let me know if this doesn't do it for you and how the data it returns is wrong. That will make it easier to solve if needed.
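For the aliasing itself, T-SQL accepts an AS clause after each expression, with square brackets when the alias contains spaces (whether Excel's query editor preserves the alias is a separate issue); a small illustration:

SELECT fname + ' ' + lname AS agent_login,
       CONVERT(varchar, DATEADD(ss, TAC_BEG_tstamp, '01/01/1970')) AS [TAC Begin Time]
FROM dbo.d_agent_define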
naming columns in excel with Complex sql
I’m trying to run this SQL using get external. It works, but when I try to rename the sub-queries or anything for that matter it removes it. I tried as, as and the name in '', as then the name in "", and the same with space. What is the right way to do that? Relevant SQL: SELECT list_name, app_name, (SELECT fname + ' ' + lname FROM dbo.d_agent_define map WHERE map.agent_id = tac.agent_id) as agent_login, input, CONVERT(varchar,DATEADD(ss,TAC_BEG_tstamp,'01/01/1970')) FROM dbo.maps_report_list list JOIN dbo.report_tac_agent tac ON (tac.list_id = list.list_id) WHERE input = 'SYS_ERR' AND app_name = 'CHARLOTT' AND convert(VARCHAR,DATEADD(ss,day_tstamp,'01/01/1970'),101) = '09/10/2008' AND list_name LIKE 'NRBAD%' ORDER BY agent_login,CONVERT(VARCHAR,DATEADD(ss,TAC_BEG_tstamp,'01/01/1970'))
[ "You could get rid of your dbo.d_agent_define subquery and just add in a join to the agent define table.\nWould this code work?\nselect list_name, app_name, \nmap.fname + ' ' + map.lname as agent_login, \ninput, \nconvert(varchar,dateadd(ss,TAC_BEG_tstamp,'01/01/1970')) as tac_seconds\nfrom dbo.maps_report_list list \njoin dbo.report_tac_agent tac \non (tac.list_id = list.list_id) \njoin dbo.d_agent_define map\non (map.agent_id = tac.agent_id)\nwhere input = 'SYS_ERR' \nand app_name = 'CHARLOTT' \nand convert(varchar,dateadd(ss,day_tstamp,'01/01/1970'),101) = '09/10/2008' \nand list_name LIKE 'NRBAD%' \norder by agent_login,convert(varchar,dateadd(ss,TAC_BEG_tstamp,'01/01/1970'))\n\nNote that I named your dateadd column because it did not have a name. I also tried to keep your convention of how you do a join. There are a few things that I would do different with this query to make it more readable, but I only focused on getting rid of the subquery problem.\nI did not do this, but I would recommend that you qualify all of your columns with the table from which you are getting them.\n", "To remove the sub query in the SELECT statement I suggest the following:\nSELECT list_name, app_name, map.fname + ' ' + map.lname as agent_login, input, convert(varchar,dateadd(ss, TAC_BEG_tstamp, '01/01/1970))\nFROM dbo.maps_report_list inner join\n (dbo.report_tac_agent as tac inner join dbo.d_agent_define as map ON (tac.agent_id=map.agent_id)) ON list.list_id = tac.list_id\nWHERE input = 'SYS_ERR' and app_name = 'CHARLOTT' and convert(varchar,dateadd(ss,day_tstamp,'01/01/1970'),101) = '09/10/2008' \n and list_name LIKE 'NRBAD%' order by agent_login,convert(varchar,dateadd(ss,TAC_BEG_tstamp,'01/01/1970'))\n\nI used parentheses to create the inner join between dbo.report_tac_agent and dbo.d_agent_define first. This is now a set of join data.\nThe combination of those tables are then joined to your list table, which I am assuming is the driving table here. If I am understand what you are trying to do with your sub select, this should work for you. \nAs stated by the other poster you should use table names on your columns (e.g. map.fname), it just makes things easy to understand. I didn't in my example because I am note 100% sure which columns go with which tables. Please let me know if this doesn't do it for you and how the data it returns is wrong. That will make it easier to solve in needed.\n" ]
[ 1, 0 ]
[]
[]
[ "excel", "sql" ]
stackoverflow_0000089246_excel_sql.txt
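One detail worth making explicit from the record above: the likely culprit in MS Query (Excel's Get External Data) is an expression with no name at all, so every computed column should carry an explicit alias. A minimal sketch of the pattern, reusing the question's own tables and columns (untested against the asker's schema; if the alias still disappears, the tool is probably rewriting the statement rather than SQL Server rejecting it):

    SELECT map.fname + ' ' + map.lname AS agent_login,
           CONVERT(varchar, DATEADD(ss, tac.TAC_BEG_tstamp, '01/01/1970')) AS tac_begin_time
    FROM dbo.report_tac_agent tac
    JOIN dbo.d_agent_define map ON map.agent_id = tac.agent_id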
Q: Creating a custom menu in .NET WinForms Using .NET 2.0 with WinForms, I'd like to create a custom, multi-columned menu (similar to the Word 2007 look&feel, but without the ribbon). My approach was creating a control, and using a left/right docked toolstrip, I have constructed a similar look&feel of a menu. However, there are a few shortcomings of this solution, such as that the control can only be placed and displayed within the form; if the form is too small, some areas of the control won't be displayed; the control also has to be manually shown/hidden. Thus, I'm looking for a way to display this control outside of the boundaries of the application. Creating a new form would result in title-bar deactivating on display, so that's also out. Alternatively, any other approach to create a customized menu would be very welcome. Edit: I don't want to use any commercial products for this, and since it's about a simple menu customization, it's not related to Microsoft's ribbon "research" in any way. A: Unless you are in the business of providing .net components, you should be looking to buy it off the shelf. It's a lot of work getting such a control right; there are already vendors providing this kind of UI. e.g. ComponentOne If you are trying to build this component as a product, you should look at the link below. Apparently Microsoft has a 'royalty-free' license around the Office UI to protect their R&D investments. As of now you need to tell them that you are using something similar to the Office UI. More on that here A: The MenuStrip class has a Renderer property. You can assign your own ToolStripRenderer derived class to customize the painting. It's a fair amount of work.
Creating a custom menu in .NET WinForms
Using .NET 2.0 with WinForms, I'd like to create a custom, multi-columned menu (similar to the Word 2007 look&feel, but without the ribbon). My approach was creating a control, and using a left/right docked toolstrip, I have constructed a similar look&feel of a menu. However, there are a few shortcomings of this solution, such as that the control can only be placed and displayed within the form; if the form is too small, some areas of the control won't be displayed; the control also has to be manually shown/hidden. Thus, I'm looking for a way to display this control outside of the boundaries of the application. Creating a new form would result in title-bar deactivating on display, so that's also out. Alternatively, any other approach to create a customized menu would be very welcome. Edit: I don't want to use any commercial products for this, and since it's about a simple menu customization, it's not related to Microsoft's ribbon "research" in any way.
[ "\nunless you are in the business of providing .net components, you should be looking to buy it off the shelf. Its a lot of work getting such a control right - There are already vendors providing this kind of UI. e.g. ComponentOne\nif you are trying to build this component as a product, you should look at the link below. Apparently Microsoft has a 'royalty-free' license around the Office UI to protect their R&D investments. As of now you need to tell them that you are using something similar to the Office UI. More of that here\n\n", "The MenuStrip class has a Renderer property. You can assign your own ToolStripRenderer derived class to customize the painting. It's a fair amount of work.\n" ]
[ 2, 1 ]
[]
[]
[ ".net", "c#", "menu", "winforms" ]
stackoverflow_0000071149_.net_c#_menu_winforms.txt
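A minimal sketch of the ToolStripRenderer route from the last answer in the record above; the renderer class, the color choice and the control name are illustrative, not part of the original answer:

    using System.Drawing;
    using System.Windows.Forms;

    // Derive from ToolStripProfessionalRenderer and override only the pieces to repaint.
    class CustomMenuRenderer : ToolStripProfessionalRenderer
    {
        protected override void OnRenderMenuItemBackground(ToolStripItemRenderEventArgs e)
        {
            if (e.Item.Selected)
                e.Graphics.FillRectangle(Brushes.LightSteelBlue,
                                         new Rectangle(Point.Empty, e.Item.Size));
            else
                base.OnRenderMenuItemBackground(e); // keep the stock look elsewhere
        }
    }

Hooking it up is one line, e.g. in the form's constructor: menuStrip1.Renderer = new CustomMenuRenderer();. The "fair amount of work" is repeating this for every OnRender* override the custom look touches.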
Q: How best to make the selected date of an ASP.NET Calendar control available to JavaScript? How best to make the selected date of an ASP.NET Calendar control available to JavaScript? Most controls are pretty simple, but the calendar requires more than just a simple document.getElementById().value. A: When you click on a date with the calendar, ASP does a postback; you could always put the SelectedDate value of the calendar control into a hidden field on the page during the OnLoad event of the page or the SelectionChanged event of the Calendar control. A: This might help you. It uses YUI, but you can probably port some of that functionality over to another library or custom code it. It should get you started though. http://www.codeproject.com/KB/aspnet/aspnet-yahoouicalendar.aspx A: You might find the MS AJAX Calendar control extender useful; you can get the date just by document.getElementById('<%= DateTextBox.ClientID%>').value; DateTextBox is an asp:TextBox control that will be extended with the AJAX calendar. A: I'm using Page.ClientScript.RegisterClientScriptBlock() to put a small script on the page that just declares a variable with the desired value. I was hoping for something a little less... clunky.
How best to make the selected date of an ASP.NET Calendar control available to JavaScript?
How best to make the selected date of an ASP.NET Calendar control available to JavaScript? Most controls are pretty simple, but the calendar requires more than just a simple document.getElementById().value.
[ "When you click on a date with the calendar, ASP does a postback, you could always put the SelectedDate value of the calendar control into a hidden field on the page during the OnLoad event of the page or the SelectionChanged event of the Calendar control.\n", "This might help you. It uses YUI, but you can probably port some of that functionality over to another library or custom code it. It should get you started though.\nhttp://www.codeproject.com/KB/aspnet/aspnet-yahoouicalendar.aspx\n", "You might find useful the MS AJAX Calendar control extender, you can get the date just by \ndocument.getElementById('<%= DateTextBox.ClientID%>').value;\nDateTextBox is an asp:TextBox control that will be extended with the AJAX calendar.\n", "I'm using Page.ClientScript.RegisterClientScriptBlock() to put a small script on the page that just declare a variable with the desired value. I was hoping for some a little less... clunky.\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "asp.net", "calendar", "javascript" ]
stackoverflow_0000087871_asp.net_calendar_javascript.txt
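A sketch of the hidden-field approach from the first answer in the record above (the control names are assumptions): copy the selection into a HiddenField on the server, then read it from script.

    // code-behind, wired to the Calendar's SelectionChanged event
    protected void Calendar1_SelectionChanged(object sender, EventArgs e)
    {
        // any unambiguous format works; ISO keeps client-side parsing trivial
        SelectedDateField.Value = Calendar1.SelectedDate.ToString("yyyy-MM-dd");
    }

On the client, document.getElementById('<%= SelectedDateField.ClientID %>').value then returns the date string without touching the calendar's rendered markup.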
Q: Zend Framework - ErrorHandler does not seem to be working as expected This is my first experience using the Zend Framework. I am attempting to follow the Quick Start tutorial. Everything was working as expected until I reached the section on the Error Controller and View. When I navigate to a page that does not exist, instead of receiving the error page I get the Fatal Error screen dump (in all its glory): Fatal error: Uncaught exception 'Zend_Controller_Dispatcher_Exception' with message 'Invalid controller specified (error)' in /home/.fantasia/bcnewman/foo.com/library/Zend/Controller/Dispatcher/Standard.php:249 Stack trace: #0 /home/.fantasia/bcnewman/foo.com/library/Zend/Controller/Front.php(946): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http)) #1 /home/.fantasia/bcnewman/foo.com/public/index.php(42): Zend_Controller_Front->dispatch() #2 {main} thrown in /home/.fantasia/bcnewman/foo.com/library/Zend/Controller/Dispatcher/Standard.php on line 249 I do not believe this is caused by a syntax error on my part (I copied and pasted the example file's content from the tutorial) and I believe I have the application directory structure correct: ./application ./application/controllers ./application/controllers/IndexController.php ./application/controllers/ErrorHandler.php ./application/views ./application/views/scripts ./application/views/scripts/index ./application/views/scripts/index/index.phtml ./application/views/scripts/error ./application/views/scripts/error/error.phtml ./application/bootstrap.php ./public ./public/index.php And finally, the IndexController and index.phtml view does work. A: You have ErrorHandler.php. It should be ErrorController.php. Controllers all need to be named following the format of NameController.php. Since you don't have it named properly, the dispatcher cannot find it. A: Assuming that you have the ErrorController plugin loaded into your front controller, make sure that in your bootstrap you do not have the following set: $frontController->throwExceptions(true); If this is set, then exceptions will always be thrown, regardless of whether or not you have an error controller set.
Zend Framework - ErrorHandler does not seem to be working as expected
This is my first experience using the Zend Framework. I am attempting to follow the Quick Start tutorial. Everything was working as expected until I reached the section on the Error Controller and View. When I navigate to a page that does not exist, instead of receiving the error page I get the Fatal Error screen dump (in all its glory): Fatal error: Uncaught exception 'Zend_Controller_Dispatcher_Exception' with message 'Invalid controller specified (error)' in /home/.fantasia/bcnewman/foo.com/library/Zend/Controller/Dispatcher/Standard.php:249 Stack trace: #0 /home/.fantasia/bcnewman/foo.com/library/Zend/Controller/Front.php(946): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http)) #1 /home/.fantasia/bcnewman/foo.com/public/index.php(42): Zend_Controller_Front->dispatch() #2 {main} thrown in /home/.fantasia/bcnewman/foo.com/library/Zend/Controller/Dispatcher/Standard.php on line 249 I do not believe this is caused by a syntax error on my part (I copied and pasted the example file's content from the tutorial) and I believe I have the application directory structure correct: ./application ./application/controllers ./application/controllers/IndexController.php ./application/controllers/ErrorHandler.php ./application/views ./application/views/scripts ./application/views/scripts/index ./application/views/scripts/index/index.phtml ./application/views/scripts/error ./application/views/scripts/error/error.phtml ./application/bootstrap.php ./public ./public/index.php And finally, the IndexController and index.phtml view does work.
[ "You have ErrorHandler.php. It should be ErrorController.php. Controllers all need to be named following the format of NameController.php. Since you don't have it named properly the dispatcher cannot find it.\n", "Assuming that you have the ErrorController plugin loaded into your front controller, make sure that in your bootstrap that you do not have the following set:\n$frontController->throwExceptions(true);\n\nIf this is set then Exceptions will always be thrown, regardless of whether or not you have an error controller set.\n" ]
[ 4, 2 ]
[]
[]
[ "php", "zend_framework" ]
stackoverflow_0000088918_php_zend_framework.txt
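To make the accepted fix in the record above concrete, a minimal ZF 1.x error controller sketch; the file must be named ErrorController.php and the class ErrorController or the dispatcher cannot find it (the body is the stock quick-start pattern, trimmed):

    <?php
    // application/controllers/ErrorController.php
    class ErrorController extends Zend_Controller_Action
    {
        public function errorAction()
        {
            // the ErrorHandler plugin hands the exception over in 'error_handler'
            $errors = $this->_getParam('error_handler');
            $this->view->message = $errors->exception->getMessage();
        }
    }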
Q: What happens to the time slice if you disable preemption in vxWorks? If you have round robin scheduling enabled in VxWorks, and you use taskLock() to disable preemption, what happens when your timeslice expires? A: When preemption is disabled via taskLock, the timeslice counter will not increment. Your timeslice will never expire until preemption is re-enabled.
What happens to the time slice if you disable preemption in vxWorks?
If you have round robin scheduling enabled in VxWorks, and you use taskLock() to disable preemption, what happens when your timeslice expires?
[ "When preemption is disabled via taskLock, the timeslice counter will not increment. Your timeslice will never expire until preemption is re-enabled.\n" ]
[ 1 ]
[]
[]
[ "vxworks" ]
stackoverflow_0000089575_vxworks.txt
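A short C sketch of the behavior the answer above describes, against the VxWorks task API:

    #include <vxWorks.h>
    #include <taskLib.h>

    void criticalWork(void)
    {
        taskLock();      /* preemption off: the round-robin slice counter stops here */
        /* ... work that must not be preempted ... */
        taskUnlock();    /* preemption back on: time slicing resumes */
    }

Blocking inside the locked region still causes a reschedule, so the section between the two calls should avoid blocking calls.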
Q: Provisioning Issue using CrmDeploymentService I've been working on provisioning an organization for quite a few days, and had faced a few issues which I was successful in resolving. Let me explain the issues I faced: the MSCrmServices is a process that is running under the Network Service. When I call the 'Execute' method on the service from a console application, all actions performed run under the context of the 'Network Service' account. The Network Service account does not have enough rights to create an organization, so many problems occur during the action. Registry access not allowed. Not the correct SQL Server rights. Not enough AD rights. ... Impersonation doesn't work; the service uses the process account to perform the actions. The only thing that works is to run the CRMAppPool identity as an administrator which has the deployment administrator rights (added through the Deployment Manager tool). But these issues in CRM deployment don't seem to go away for me :(. Now I have a new issue: after changing the pool identity to the system administrator, the deployment service gives an error saying Unauthorized!!!! and when I check the log it says: Process: w3wp |Organization:00000000-0000-0000-0000-000000000000 |Thread: 1 |Category: Exception |User: 00000000-0000-0000-0000-000000000000 |Level: Error | CrmException..ctor at CrmException..ctor(String message, Exception innerException, Int32 errorCode, Boolean isFlowControlException, Boolean enableTrace) at CrmException..ctor(String message, Int32 errorCode) at CrmObjectNotFoundException..ctor(BusinessEntityMoniker moniker) at BusinessProcessObject.DoRetrievePublishableSingle(BusinessEntityMoniker moniker, EntityExpression entityExpression, Boolean includeUnpublished, ExecutionContext context) at BusinessProcessObject.RetrieveUnpublished(BusinessEntityMoniker moniker, EntityExpression entityExpression, ExecutionContext context) at OrganizationUIService.RetrieveUnpublished(BusinessEntityMoniker moniker, EntityExpression entityExpression, ExecutionContext context) at OrganizationUIService.RetrieveOldFormXml(BusinessEntityMoniker moniker, ExecutionContext context) at OrganizationUIService.ExtractAndSaveFormLabels(IBusinessEntity entity, ExecutionContext context) at OrganizationUIService.Create(IBusinessEntity entity, ExecutionContext context) at ImportFormXmlHandler.createOrgUI(OrganizationUIService orgUIService, XmlNode formNode) at ImportFormXmlHandler.ImportItem() at ImportHandler.Import() at ImportHandler.Import() at RootImportHandler.RunImport() at ImportXml.RunImport() at NewOrgUtility.OrganizationImportDefaultData(Guid organizationId, Version existingDatabaseVersion, String importFile) at NewOrgUtility.OrganizationImportDefaultData(Guid organizationId, String importFile) at NewOrgUtility.ConfigureOrganization(String organizationId, String organizationName, String userAccountName, String userFirstName, String userLastName, String userEmail, String languageCode, String privilegedUserGroup, String sqlAccessGroup, String userGroup, String reportingGroup, String privilegedReportingGroup, Boolean grantNetworkServiceAccess, Boolean autoGroupManagement, String importFileLocation, Boolean sqmOption) at CreateOrganizationInstaller.Create(Guid organizationId, String organizationUniqueName, String organizationFriendlyName, String baseCurrencyCode, String baseCurrencyName, String baseCurrencySymbol, String initialUserDomainName, String initialUserFirstName, String initialUserLastName, String sqlServerName, Uri reportServerUrl, String privilegedUserGroupName, String sqlAccessGroupName, String userGroupName, String reportingGroupName, String privilegedReportingGroupName, String applicationPath, String languageId, Boolean sqmOption, String organizationCollation, MultipleTenancy multipleTenancy) at CreateOrganizationInstaller.Create(ICreateOrganizationInfo organizationInfo) at OrganizationService.Create(DeploymentEntity entity) at CreateRequest.Process() at CrmDeploymentService.Execute(DeploymentServiceRequest request) at RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner) at RuntimeMethodHandle.InvokeMethodFast(Object target, Object[] arguments, Signature sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner) at RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) at LogicalMethodInfo.Invoke(Object target, Object[] values) at WebServiceHandler.Invoke() at WebServiceHandler.CoreProcessRequest() at SyncSessionlessHandler.ProcessRequest(HttpContext context) at CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) at ApplicationStepManager.ResumeSteps(Exception error) at HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) at HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) at HttpRuntime.ProcessRequestNoDemand(HttpWorkerRequest wr) at ISAPIRuntime.ProcessRequest(IntPtr ecb, Int32 iWRType) Any ideas on this? Has anyone come across such an issue? I've been trying to resolve this issue, but no luck so far. A: Edit: Actually you're not alone. http://www.eggheadcafe.com/software/aspnet/31450420/crmdeploymentservice-crm.aspx Hope that helps.
Provisioning Issue using CrmDeploymentService
I've been working on provisioning an organization for quite a few days, and had faced a few issues which I was successful in resolving. Let me explain the issues I faced: the MSCrmServices is a process that is running under the Network Service. When I call the 'Execute' method on the service from a console application, all actions performed run under the context of the 'Network Service' account. The Network Service account does not have enough rights to create an organization, so many problems occur during the action. Registry access not allowed. Not the correct SQL Server rights. Not enough AD rights. ... Impersonation doesn't work; the service uses the process account to perform the actions. The only thing that works is to run the CRMAppPool identity as an administrator which has the deployment administrator rights (added through the Deployment Manager tool). But these issues in CRM deployment don't seem to go away for me :(. Now I have a new issue: after changing the pool identity to the system administrator, the deployment service gives an error saying Unauthorized!!!! and when I check the log it says: Process: w3wp |Organization:00000000-0000-0000-0000-000000000000 |Thread: 1 |Category: Exception |User: 00000000-0000-0000-0000-000000000000 |Level: Error | CrmException..ctor at CrmException..ctor(String message, Exception innerException, Int32 errorCode, Boolean isFlowControlException, Boolean enableTrace) at CrmException..ctor(String message, Int32 errorCode) at CrmObjectNotFoundException..ctor(BusinessEntityMoniker moniker) at BusinessProcessObject.DoRetrievePublishableSingle(BusinessEntityMoniker moniker, EntityExpression entityExpression, Boolean includeUnpublished, ExecutionContext context) at BusinessProcessObject.RetrieveUnpublished(BusinessEntityMoniker moniker, EntityExpression entityExpression, ExecutionContext context) at OrganizationUIService.RetrieveUnpublished(BusinessEntityMoniker moniker, EntityExpression entityExpression, ExecutionContext context) at OrganizationUIService.RetrieveOldFormXml(BusinessEntityMoniker moniker, ExecutionContext context) at OrganizationUIService.ExtractAndSaveFormLabels(IBusinessEntity entity, ExecutionContext context) at OrganizationUIService.Create(IBusinessEntity entity, ExecutionContext context) at ImportFormXmlHandler.createOrgUI(OrganizationUIService orgUIService, XmlNode formNode) at ImportFormXmlHandler.ImportItem() at ImportHandler.Import() at ImportHandler.Import() at RootImportHandler.RunImport() at ImportXml.RunImport() at NewOrgUtility.OrganizationImportDefaultData(Guid organizationId, Version existingDatabaseVersion, String importFile) at NewOrgUtility.OrganizationImportDefaultData(Guid organizationId, String importFile) at NewOrgUtility.ConfigureOrganization(String organizationId, String organizationName, String userAccountName, String userFirstName, String userLastName, String userEmail, String languageCode, String privilegedUserGroup, String sqlAccessGroup, String userGroup, String reportingGroup, String privilegedReportingGroup, Boolean grantNetworkServiceAccess, Boolean autoGroupManagement, String importFileLocation, Boolean sqmOption) at CreateOrganizationInstaller.Create(Guid organizationId, String organizationUniqueName, String organizationFriendlyName, String baseCurrencyCode, String baseCurrencyName, String baseCurrencySymbol, String initialUserDomainName, String initialUserFirstName, String initialUserLastName, String sqlServerName, Uri reportServerUrl, String privilegedUserGroupName, String sqlAccessGroupName, String userGroupName, String reportingGroupName, String privilegedReportingGroupName, String applicationPath, String languageId, Boolean sqmOption, String organizationCollation, MultipleTenancy multipleTenancy) at CreateOrganizationInstaller.Create(ICreateOrganizationInfo organizationInfo) at OrganizationService.Create(DeploymentEntity entity) at CreateRequest.Process() at CrmDeploymentService.Execute(DeploymentServiceRequest request) at RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner) at RuntimeMethodHandle.InvokeMethodFast(Object target, Object[] arguments, Signature sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner) at RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) at LogicalMethodInfo.Invoke(Object target, Object[] values) at WebServiceHandler.Invoke() at WebServiceHandler.CoreProcessRequest() at SyncSessionlessHandler.ProcessRequest(HttpContext context) at CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) at ApplicationStepManager.ResumeSteps(Exception error) at HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) at HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) at HttpRuntime.ProcessRequestNoDemand(HttpWorkerRequest wr) at ISAPIRuntime.ProcessRequest(IntPtr ecb, Int32 iWRType) Any ideas on this? Has anyone come across such an issue? I've been trying to resolve this issue, but no luck so far.
[ "Edit: Actually you're not alone.\nhttp://www.eggheadcafe.com/software/aspnet/31450420/crmdeploymentservice-crm.aspx\nHope that helps.\n" ]
[ 1 ]
[]
[]
[ "deployment", "dynamics_crm_4" ]
stackoverflow_0000080550_deployment_dynamics_crm_4.txt
Q: How do I convert a Ruby string with brackets to an array? I would like to convert the following string into an array/nested array: str = "[[this, is],[a, nested],[array]]" newarray = # this is what I need help with! newarray.inspect # => [['this','is'],['a','nested'],['array']] A: You'll get what you want with YAML. But there is a little problem with your string. YAML expects that there's a space after the comma. So we need this str = "[[this, is], [a, nested], [array]]" Code: require 'yaml' str = "[[this, is],[a, nested],[array]]" ### transform your string into a valid YAML string str.gsub!(/(\,)(\S)/, "\\1 \\2") YAML::load(str) # => [["this", "is"], ["a", "nested"], ["array"]] A: You could also treat it as almost-JSON. If the strings really are only letters, like in your example, then this will work: JSON.parse(yourarray.gsub(/([a-z]+)/,'"\1"')) If they could have arbitrary characters (other than [ ] , ), you'd need a little more: JSON.parse("[[this, is],[a, nested],[array]]".gsub(/, /,",").gsub(/([^\[\]\,]+)/,'"\1"')) A: For a laugh: ary = eval("[[this, is],[a, nested],[array]]".gsub(/(\w+?)/, "'\\1'") ) => [["this", "is"], ["a", "nested"], ["array"]] Disclaimer: You definitely shouldn't do this as eval is a terrible idea, but it is fast and has the useful side effect of throwing an exception if your nested arrays aren't valid A: Looks like a basic parsing task. Generally the approach you are going to want to take is to create a recursive function with the following general algorithm base case (input doesn't begin with '[') return the input recursive case: split the input on ',' (you will need to find commas only at this level) for each sub string call this method again with the sub string return array containing the results from this recursive method The only slightly tricky part here is splitting the input on a single ','. You could write a separate function for this that would scan through the string and keep a count of the open brackets minus closed brackets seen so far. Then only split on commas when the count is equal to zero. A: Make a recursive function that takes the string and an integer offset, and "reads" out an array. That is, have it return an array or string (that it has read) and an integer offset pointing after the array. For example: s = "[[this, is],[a, nested],[array]]" yourFunc(s, 1) # returns ['this', 'is'] and 11. yourFunc(s, 2) # returns 'this' and 6. Then you can call it with another function that provides an offset of 0, and makes sure that the finishing offset is the length of the string.
How do I convert a Ruby string with brackets to an array?
I would like to convert the following string into an array/nested array: str = "[[this, is],[a, nested],[array]]" newarray = # this is what I need help with! newarray.inspect # => [['this','is'],['a','nested'],['array']]
[ "You'll get what you want with YAML.\nBut there is a little problem with your string. YAML expects that there's a space behind the comma. So we need this\nstr = \"[[this, is], [a, nested], [array]]\"\n\nCode:\nrequire 'yaml'\nstr = \"[[this, is],[a, nested],[array]]\"\n### transform your string in a valid YAML-String\nstr.gsub!(/(\\,)(\\S)/, \"\\\\1 \\\\2\")\nYAML::load(str)\n# => [[\"this\", \"is\"], [\"a\", \"nested\"], [\"array\"]]\n\n", "You could also treat it as almost-JSON. If the strings really are only letters, like in your example, then this will work:\nJSON.parse(yourarray.gsub(/([a-z]+)/,'\"\\1\"'))\n\nIf they could have arbitrary characters (other than [ ] , ), you'd need a little more:\nJSON.parse(\"[[this, is],[a, nested],[array]]\".gsub(/, /,\",\").gsub(/([^\\[\\]\\,]+)/,'\"\\1\"'))\n\n", "For a laugh:\n ary = eval(\"[[this, is],[a, nested],[array]]\".gsub(/(\\w+?)/, \"'\\\\1'\") )\n => [[\"this\", \"is\"], [\"a\", \"nested\"], [\"array\"]]\n\nDisclaimer: You definitely shouldn't do this as eval is a terrible idea, but it is fast and has the useful side effect of throwing an exception if your nested arrays aren't valid\n", "Looks like a basic parsing task. Generally the approach you are going to want to take is to create a recursive function with the following general algorithm\nbase case (input doesn't begin with '[') return the input\nrecursive case:\n split the input on ',' (you will need to find commas only at this level)\n for each sub string call this method again with the sub string\n return array containing the results from this recursive method\n\nThe only slighlty tricky part here is splitting the input on a single ','. You could write a separate function for this that would scan through the string and keep a count of the openbrackets - closedbrakets seen so far. Then only split on commas when the count is equal to zero.\n", "Make a recursive function that takes the string and an integer offset, and \"reads\" out an array. That is, have it return an array or string (that it has read) and an integer offset pointing after the array. For example:\ns = \"[[this, is],[a, nested],[array]]\"\n\nyourFunc(s, 1) # returns ['this', 'is'] and 11.\nyourFunc(s, 2) # returns 'this' and 6.\n\nThen you can call it with another function that provides an offset of 0, and makes sure that the finishing offset is the length of the string.\n" ]
[ 11, 4, 3, 0, 0 ]
[]
[]
[ "arrays", "ruby" ]
stackoverflow_0000038409_arrays_ruby.txt
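For completeness, a sketch of the recursive parser that the fourth answer in the record above outlines but does not implement; it assumes the input contains only word characters, commas and brackets, as in the question:

    # recursive descent over the bracketed string, splitting on depth-0 commas
    def parse(str)
      return str unless str.start_with?('[')
      inner, depth, parts, buf = str[1..-2], 0, [], ''
      inner.each_char do |c|
        depth += 1 if c == '['
        depth -= 1 if c == ']'
        if c == ',' && depth.zero?
          parts << buf.strip   # a complete depth-0 element
          buf = ''
        else
          buf << c
        end
      end
      parts << buf.strip
      parts.map { |p| parse(p) }   # recurse into each element
    end

    parse("[[this, is],[a, nested],[array]]")
    # => [["this", "is"], ["a", "nested"], ["array"]]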
Q: Using the same App_Code classes across websites Let's say you have a solution with two website projects, Website A and Website B. Now inside Website A's App_Code folder, there is a Class X defined in a ClassX.cs file. What do you do if Website B also needs access to ClassX.cs? Is there any way to share this file across App_Code folders? Assume that moving the file to a common library is out of the question. A: Please please don't use these unholy website projects. Use Web Application projects instead, pack your shared classes into a library project and reference it from all your Web Applications. A: Pack your shared classes into a Library (a DLL) and from each site right-click on add reference and select the library that you have created. A: With the restriction of "Assume that moving the file to a common library is out of the question." the only way you could do this is to use NTFS junction points to essentially create a symlink to have the same .cs file in both folders. This is a terrible option though (for versioning reasons)...moving it to a common library is the best option. Here's the Wikipedia entry on NTFS junction points http://en.wikipedia.org/wiki/NTFS_junction_point and here's a tool for creating them http://technet.microsoft.com/en-us/sysinternals/bb896768.aspx A: I don't believe that there is a way without moving ClassX into a new code library project. .NET requires all an assembly's dependencies to exist in the same folder as the assembly itself, or in the GAC, to be automatically detected. You could try loading the assembly manually via the Reflection classes, although it's a bit hacky. The best solution, if you have the time available and the inclination to undertake it, would be to go with JRoppert's solution of moving it to a web application project. You could then use web references (which work about as nicely as regular references inside VS) to refer to ClassX. HTH
Using the same App_Code classes across websites
Let's say you have a solution with two website projects, Website A and Website B. Now inside Website A's App_Code folder, there is a Class X defined in a ClassX.cs file. What do you do if Website B also needs access to ClassX.cs? Is there any way to share this file across App_Code folders? Assume that moving the file to a common library is out of the question.
[ "Please please don't use these unholy website projects. Use Web Application projects instead, pack your shared classes into a library project and reference it from all your Web Applications.\n", "Pack your shared classes into a Library (a DLL) and from each site right-click on add reference and select the library that you have created.\n", "With the restriction of \"Assume that moving the file to a common library is out of the question.\" the only way you could do this is to use NTFS junction points to essentially create a symlink to have the same .cs file in both folders.\nThis is a terrible option though (for versioning reasons)...moving it to a common library is the best option.\nHere's the Wikipedia entry on NTFS junction points\nhttp://en.wikipedia.org/wiki/NTFS_junction_point\nand here's a tool for creating them\nhttp://technet.microsoft.com/en-us/sysinternals/bb896768.aspx\n", "I don't believe that there is a way without moving ClassX into a new code library project. .NET requires all an assembly's dependencies to exist in the same folder as the assembly itself, or in the GAC, to be automatically detected. \nYou could try loading the assembly manually via the Reflection classes, although it's a bit hacky. \nThe best solution, if you have the time available and the inclination to undertake it, would be to go with JRoppert's solution of moving it to a web application project. You could then use web references (which work about as nicely as regular references inside VS) to refer to ClassX. \nHTH\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "app_code", "asp.net" ]
stackoverflow_0000088252_app_code_asp.net.txt
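A sketch of the class-library route the answers in the record above converge on (project and namespace names are illustrative): ClassX moves out of App_Code into its own Class Library project, and each website adds a reference to the compiled assembly.

    // SharedLib/ClassX.cs, compiled into SharedLib.dll and then referenced
    // from both Website A and Website B via Add Reference > Browse
    namespace SharedLib
    {
        public class ClassX
        {
            public string Describe()
            {
                return "one implementation, shared by every site that references it";
            }
        }
    }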
Q: Free or open source IBM 3151 or aixterm emulators? Does anyone know of any free or open source terminal emulators that will emulate an IBM 3151 terminal or an HFT terminal (aixterm)? We have some offshore contractors that need access to some of our systems that require 3151 or HFT emulation, but are having issues transferring licenses of Hummingbird HostExplorer to India. For that matter, if we could save on US Hummingbird licenses it would be beneficial as well. Thanks! A: I doubt you'll find an open source or free emulator for this terminal type. While IBM has contributed to open source communities, they are also very interested in protecting their intellectual property. Hummingbird licenses are certainly expensive. We ran into issues with that when I worked for IBM! That said, I never needed a specific terminal type in order to access AIX systems, as we used OpenSSH (comes with AIX 5L). Is there some reason why you can't provide SSH access to these systems to your contractors?
Free or open source IBM 3151 or aixterm emulators?
Does anyone know of any free or open source terminal emulators that will emulate an IBM 3151 terminal or an HFT terminal (aixterm)? We have some offshore contractors that need access to some of our systems that require 3151 or HFT emulation, but are having issues transferring licenses of Hummingbird HostExplorer to India. For that matter, if we could save on US Hummingbird licenses it would be beneficial as well. Thanks!
[ "I doubt you'll find an open source or free emulator for this terminal type. While IBM has contributed to open source communities, they are also very interested in protecting their intellectual property. Hummingbird licenses are certainly expensive. We ran into issues with that when I worked for IBM!\nThat said, I never needed a specific terminal type in order to access AIX systems, as we used OpenSSH (comes with AIX 5L). Is there some reason why you can't provide SSH access to these systems to your contractors?\n" ]
[ 0 ]
[]
[]
[ "aix", "terminal_emulator" ]
stackoverflow_0000089444_aix_terminal_emulator.txt
Q: How to back up a LIF-formatted disk? I have several old 3.5in floppy disks that I would like to back up. My attempts to create an image of the disks have failed. I tried using the UNIX utility dd_rescue, but when the kernel tries to open (/dev/fd0) I get a kernel error, floppy0: probe failed... I would like an image because some of the floppies are using the LIF file system format. Does anyone have any ideas as to what I should do? HP (now Agilent) made some tools that could read and write files on a LIF-formatted disk. I could use these tools to copy and convert the files to the local disk, but not without possibly losing some data in the process. In other words, converting from LIF to some other format and back to LIF will lose some information. I just want to back up the raw bytes on the disk and not be concerned with the type of file system. A: I think you'll find the best resource here. Also, if you're going to use raw dd, LIF format has 77 cylinders vs 80 for a normal floppy. A: dd if=/dev/floppy0 of=animage.bin conv=noerror
How to back up a LIF-formatted disk?
I have several old 3.5in floppy disks that I would like to back up. My attempts to create an image of the disks have failed. I tried using the UNIX utility dd_rescue, but when the kernel tries to open (/dev/fd0) I get a kernel error, floppy0: probe failed... I would like an image because some of the floppies are using the LIF file system format. Does anyone have any ideas as to what I should do? HP (now Agilent) made some tools that could read and write files on a LIF-formatted disk. I could use these tools to copy and convert the files to the local disk, but not without possibly losing some data in the process. In other words, converting from LIF to some other format and back to LIF will lose some information. I just want to back up the raw bytes on the disk and not be concerned with the type of file system.
[ "I think you'll find the best resource here.\nAlso, if you're going to use raw dd, LIF format has 77 cylinders vs 80 for a normal floppy.\n", "dd if=/dev/floppy0 of=animage.bin conv=noerror\n\n" ]
[ 2, 0 ]
[]
[]
[ "dd", "disk", "filesystems", "linux" ]
stackoverflow_0000087758_dd_disk_filesystems_linux.txt
Q: Bypass GeneratedValue in Hibernate Is it possible to bypass @GeneratedValue for an ID in Hibernate? We have a case where, most of the time, we want the ID to be set using GeneratedValue, but in certain cases we would like to set the ID manually. Is this possible to do? A: I know you can do this in the JPA spec, so you should be able to in Hibernate (using JPA+ annotations). If you just fill in the ID field of the new persistent model you're creating, then when you "Merge" that model into the EntityManager, it will use the ID you've set. This does have ramifications, though. You've just used up that ID, but the sequence specified by the GeneratedValue annotation doesn't know that. Unless you're specifying an unused ID that's LESS than the current sequence value, you're going to get a problem once the sequence catches up to the value you just used. So, maybe I can see where you might want the user to be able to specify an ID, but then you need to catch the possible Exception (duplicate ID) that may come in the future.
Bypass GeneratedValue in Hibernate
Is it possible to bypass @GeneratedValue for an ID in Hibernate? We have a case where, most of the time, we want the ID to be set using GeneratedValue, but in certain cases we would like to set the ID manually. Is this possible to do?
[ "I know you can do this in the JPA spec, so you should be able to in Hibernate (using JPA+ annotations).\nIf you just fill in the ID field of the new persistent model you're creating, then when you \"Merge\" that model into the EntityManager, it will use the ID you've set.\nThis does have ramifications, though. You've just used up that ID, but the sequence specified by the GeneratedValue annotation doesn't know that. Unless you're specifying an ununsed ID that's LESS than the current sequence value, you're going to get a problem once the sequence catches up to the value you just used.\nSo, maybe I can see where you might want the user to be able to specify an ID, but then you need to catch the possible Exception (duplicate ID) that may come in the future.\n" ]
[ 5 ]
[]
[]
[ "hibernate", "java", "jboss" ]
stackoverflow_0000089439_hibernate_java_jboss.txt
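A sketch of what the answer above describes, with a hypothetical entity; the caveat about colliding with the generator applies to the hard-coded id:

    import javax.persistence.*;

    @Entity
    public class Ticket {
        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE)
        private Long id;

        public void setId(Long id) { this.id = id; }
    }

    class Demo {
        static void save(EntityManager em) {
            Ticket auto = new Ticket();
            em.persist(auto);          // usual case: the generator assigns the id

            Ticket manual = new Ticket();
            manual.setId(42L);         // special case: pre-set an id the sequence has not reached
            em.merge(manual);          // merge takes the pre-set id, per the answer above
        }
    }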
Q: How Does The Debugging Option -g Change the Binary Executable? When writing C/C++ code, in order to debug the binary executable the debug option must be enabled on the compiler/linker. In the case of GCC, the option is -g. When the debug option is enabled, how does this affect the binary executable? What additional data is stored in the file that allows the debugger to function as it does? A: -g tells the compiler to store symbol table information in the executable. Among other things, this includes: symbol names type info for symbols files and line numbers where the symbols came from Debuggers use this information to output meaningful names for symbols and to associate instructions with particular lines in the source. For some compilers, supplying -g will disable certain optimizations. For example, icc sets the default optimization level to -O0 with -g unless you explicitly indicate -O[123]. Also, even if you do supply -O[123], optimizations that prevent stack tracing will still be disabled (e.g. stripping frame pointers from stack frames. This has only a minor effect on performance). With some compilers, -g will disable optimizations that can confuse where symbols came from (instruction reordering, loop unrolling, inlining etc). If you want to debug with optimization, you can use -g3 with gcc to get around some of this. Extra debug info will be included about macros, expansions, and functions that may have been inlined. This can allow debuggers and performance tools to map optimized code to the original source, but it's best effort. Some optimizations really mangle the code. For more info, take a look at DWARF, the debugging format originally designed to go along with ELF (the binary format for Linux and other OS's). A: A symbol table is added to the executable which maps function/variable names to data locations, so that debuggers can report back meaningful information, rather than just pointers. This doesn't affect the speed of your program, and you can remove the symbol table with the 'strip' command. A: In addition to the debugging and symbol information Google DWARF (A Developer joke on ELF) By default most compiler optimizations are turned off when debugging is enabled. So the code is the pure translation of the source into Machine Code rather than the result of many highly specialized transformations that are applied to release binaries. But the most important difference (in my opinion) Memory in Debug builds is usually initialized to some compiler specific values to facilitate debugging. In release builds memory is not initialized unless explicitly done so by the application code. Check your compiler documentation for more information: But an example for DevStudio is: 0xCDCDCDCD Allocated in heap, but not initialized 0xDDDDDDDD Released heap memory. 0xFDFDFDFD "NoMansLand" fences automatically placed at boundary of heap memory. Should never be overwritten. If you do overwrite one, you're probably walking off the end of an array. 0xCCCCCCCC Allocated on stack, but not initialized A: -g adds debugging information in the executable, such as the names of variables, the names of functions, and line numbers. This allows a debugger, such as gdb, to step through code line by line, set breakpoints, and inspect the values of variables. Because of this additional information, using -g increases the size of the executable. Also, gcc allows you to use -g together with -O flags, which turn on optimization. Debugging an optimized executable can be very tricky, because variables may be optimized away, or instructions may be executed in a different order. Generally, it is a good idea to turn off optimization when using -g, even though it results in much slower code. A: Just as a matter of interest, you can crack open a hex editor and take a look at an executable produced with -g and one without. You can see the symbols and things that are added. It may change the assembly (-S) too, but I'm not sure. A: There is some overlap with this question which covers the issue from the other side. A: Some operating systems (like z/OS) produce a "side file" that contains the debug symbols. This helps avoid bloating the executable with extra information.
How Does The Debugging Option -g Change the Binary Executable?
When writing C/C++ code, in order to debug the binary executable the debug option must be enabled on the compiler/linker. In the case of GCC, the option is -g. When the debug option is enabled, how does this affect the binary executable? What additional data is stored in the file that allows the debugger to function as it does?
[ "-g tells the compiler to store symbol table information in the executable. Among other things, this includes:\n\nsymbol names\ntype info for symbols\nfiles and line numbers where the symbols came from\n\nDebuggers use this information to output meaningful names for symbols and to associate instructions with particular lines in the source.\nFor some compilers, supplying -g will disable certain optimizations. For example, icc sets the default optimization level to -O0 with -g unless you explicitly indicate -O[123]. Also, even if you do supply -O[123], optimizations that prevent stack tracing will still be disabled (e.g. stripping frame pointers from stack frames. This has only a minor effect on performance).\nWith some compilers, -g will disable optimizations that can confuse where symbols came from (instruction reordering, loop unrolling, inlining etc). If you want to debug with optimization, you can use -g3 with gcc to get around some of this. Extra debug info will be included about macros, expansions, and functions that may have been inlined. This can allow debuggers and performance tools to map optimized code to the original source, but it's best effort. Some optimizations really mangle the code.\nFor more info, take a look at DWARF, the debugging format originally designed to go along with ELF (the binary format for Linux and other OS's).\n", "A symbol table is added to the executable which maps function/variable names to data locations, so that debuggers can report back meaningful information, rather than just pointers. This doesn't effect the speed of your program, and you can remove the symbol table with the 'strip' command.\n", "In addition to the debugging and symbol information\nGoogle DWARF (A Developer joke on ELF)\nBy default most compiler optimizations are turned off when debugging is enabled.\nSo the code is the pure translation of the source into Machine Code rather than the result of many highly specialized transformations that are applied to release binaries.\nBut the most important difference (in my opinion)\nMemory in Debug builds is usually initialized to some compiler specific values to facilitate debugging. In release builds memory is not initialized unless explicitly done so by the application code.\nCheck your compiler documentation for more information:\nBut an example for DevStudio is:\n\n\n0xCDCDCDCD Allocated in heap, but not initialized\n0xDDDDDDDD Released heap memory.\n0xFDFDFDFD \"NoMansLand\" fences automatically placed at boundary of heap memory. Should never be overwritten. If you do overwrite one, you're probably walking off the end of an array.\n0xCCCCCCCC Allocated on stack, but not initialized\n\n\n", "-g adds debugging information in the executable, such as the names of variables, the names of functions, and line numbers. This allows a debugger, such as gdb to step through code line by line, set breakpoints, and inspect the values of variables. Because of this additional information using -g increases the size of the executable. \nAlso, gcc allows to use -g together with -O flags, which turn on optimization. Debugging an optimized executable can be very tricky, because variables may be optimized away, or instructions may be executed in a different order. Generally, it is a good idea to turn off optimization when using -g, even though it results in much slower code.\n", "Just as a matter of interest, you can crack open a hexeditor and take a look at an executable produced with -g and one without. You can see the symbols and things that are added. 
It may change the assembly (-S) too, but I'm not sure.\n", "There is some overlap with this question which covers the issue from the other side.\n", "Some operating systems (like z/OS) produce a \"side file\" that contains the debug symbols. This helps avoid bloating the executable with extra information.\n" ]
[ 81, 10, 8, 7, 4, 3, 3 ]
[]
[]
[ "debugging", "gcc", "gdb" ]
stackoverflow_0000089603_debugging_gcc_gdb.txt
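A quick way to see the symbol-table claims from the record above for yourself (standard gcc/binutils commands; the file names are placeholders):

    gcc -g -O0 demo.c -o demo_debug               # with DWARF debug info
    gcc -O2 demo.c -o demo_release                # without
    ls -l demo_debug demo_release                 # the -g binary is noticeably larger
    readelf --debug-dump=info demo_debug | head   # inspect the stored DWARF data
    strip demo_debug                              # drops the symbol table, as one answer notes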
Q: Can I display the list of all the system objects (semaphores, queues...) in VxWorks? I would like to know what semaphores, messageQueues, etc... are active in my vxWorks 6.x system. I have access to this information via the debugger, but I would like access to it from the shell. Is there a way? A: VxWorks 6.x provides a function called classShow() which will list all the objects of a specific class (e.g. semaphores, message queues, tasks, ...). The following call will give you a list of objects for a given class: classShow(objClassIdGet(classId), 1) The classId types are: 1 windSemClass, /* Wind native semaphore */ 2 windSemPxClass, /* POSIX semaphore */ 3 windMsgQClass, /* Wind native message queue */ 4 windMqPxClass, /* POSIX message queue */ 5 windRtpClass, /* real time process */ 6 windTaskClass, /* task */ 7 windWdClass, /* watchdog */ 8 windFdClass, /* file descriptor */ 9 windPgPoolClass, /* page pool */ 10 windPgMgrClass, /* page manager */ 11 windGrpClass, /* group */ 12 windVmContextClass, /* virtual memory context */ 13 windTrgClass, /* trigger */ 14 windMemPartClass, /* memory partition */ 15 windI2oClass, /* I2O */ 16 windDmsClass, /* device management system */ 17 windSetClass, /* Set */ 18 windIsrClass, /* ISR object */ 19 windTimerClass, /* Timer services */ 20 windSdClass, /* Shared data region */ 21 windPxTraceClass, /* POSIX trace */
Can I display the list of all the system objects (semaphores, queues...) in VxWorks?
I would like to know what semaphores, messageQueues, etc... are active in my vxWorks 6.x system. I have access to this information via the debugger, but I would like access to it from the shell. Is there a way?
[ "VxWorks 6.x provides a function called classShow() which will list all the objects of a specific class (e.g. semaphores, message queues, tasks, ...).\nThe following call will give you a list of objects for a given class:\n\nclassShow(objClassIdGet(classId), 1) \n\nThe classId types are:\n 1 windSemClass, /* Wind native semaphore */\n 2 windSemPxClass, /* POSIX semaphore */\n 3 windMsgQClass, /* Wind native message queue */\n 4 windMqPxClass, /* POSIX message queue */\n 5 windRtpClass, /* real time process */\n 6 windTaskClass, /* task */\n 7 windWdClass, /* watchdog */\n 8 windFdClass, /* file descriptor */\n 9 windPgPoolClass, /* page pool */\n 10 windPgMgrClass, /* page manager */\n 11 windGrpClass, /* group */\n 12 windVmContextClass, /* virtual memory context */\n 13 windTrgClass, /* trigger */\n 14 windMemPartClass, /* memory partition */\n 15 windI2oClass, /* I2O */\n 16 windDmsClass, /* device management system */\n 17 windSetClass, /* Set */\n 18 windIsrClass, /* ISR object */\n 19 windTimerClass, /* Timer services */\n 20 windSdClass, /* Shared data region */\n 21 windPxTraceClass, /* POSIX trace */\n\n" ]
[ 6 ]
[]
[]
[ "vxworks" ]
stackoverflow_0000089740_vxworks.txt
Q: Any problems running SharpDevelop 3.0 and Visual Studio 2008 side by side? I have been asked to lend a hand on a hobby project that a couple of friends are working on. They are using SharpDevelop 3.0 (Beta 2 I think, but it might be Beta 1). Is there any hassle for me to install and use this IDE, given that I have Visual Studio 2008 installed? A: I've had no problems at all; in fact, some of the tools in sharpdevelop (like the vb.net -> c# converter) are very nice to have. In addition, there are some good libraries included with sharpdevelop that are also handy (like sharpziplib for zip files) I actually have VS2005, VS2008, SharpDevelop and VisualStudio 6 installed at the moment, and there are more compatibility problems with MS's tools than with #develop. A: They behave very well together; I have had SharpDevelop installed with 2003, 2005 and 2008. No issues at all. A: I haven't had SharpDevelop installed for a while, but when I did, the only problem I ran into was that I couldn't easily share the solution file. If you don't mind having two different solutions, there should be no problems.
Any problems running SharpDevelop 3.0 and Visual Studio 2008 side by side?
I have been asked to lend a hand on a hobby project that a couple of friends are working on. They are using SharpDevelop 3.0 (Beta 2 I think, but it might be Beta 1). Is there any hassle for me to install and use this IDE, given that I have Visual Studio 2008 installed?
[ "I've had no problems at all, in fact some of the tools in sharpdevelop (like the vb.net -> c# converter) are very nice to have.\nIn addition, there are some good libraries included with sharpdevelop that are also handy (like sharpziplib for zip files)\nI actually have VS2005, VS2008, SharpDevelop and VisualStudio 6 installed at the moment, and there's more compat problems with MS's tools than with #develop.\n", "They behave very well together, I have had SharpDevelop installed with 2003, 2005 and 2008. No issues at all.\n", "I haven't had SharpDevelop installed for a while but when I did the only problem I ran in to was that I couldn't easily share the solution file. If you don't mind having two different solutions there should be no problems.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "c#", "sharpdevelop", "visual_studio" ]
stackoverflow_0000088571_c#_sharpdevelop_visual_studio.txt
Q: Queuing actions (not effects) to execute after an amount of time. What I'd like to know is if there is a nice way to queue jQuery functions to execute after a set amount of time. This wouldn't pause the execution of other functions, just the ones following in the chain. Perhaps an example of what I'd envisage it would look like would illustrate: $('#alert') .show() .wait(5000) // <-- this bit .hide() ; I know that it's possible by using timeouts, but it seems like that's a messy way to do it, especially compared to the above example (if it were real). So, is something like this already built-in to jQuery, and if not, what is the best way to emulate it? A: You can't do that, and you probably don't want to. While it certainly looks pretty, there is no mechanism in Javascript that will allow you to do that without just looping in "wait" until the time has passed. You could certainly do that, but you risk seriously degrading the browser performance, and if your timeout is longer than a handful of seconds, browsers will show a warning to the user that your javascript seems to be stuck. The correct way to do this is with timeouts: var el = $('#alert'); el.show() setTimeout(function() { el.hide() }, 5000); Your other option would be to extend jquery to essentially add an effect for actions you want to delay: jQuery.fn.extend({ delayedHide: function(time) { var self = this; setTimeout(function() { self.hide(); }, time); } }); $('#alert') .show() .delayedHide(5000) ; You could also extend jquery with a method similar to setTimeout: jQuery.fn.extend({ delayThis: function(fn, time, args) { var self = this; setTimeout(function() { jQuery.fn[fn].apply(self, args); }, time); } }); $('#alert') .show() .delayThis('hide', 5000) ; or to call with args pass arguments in an array: $('#alert') .show() .delayThis('css', 5000, [ 'display', 'none' ]) ; A: The jQuery FxQueues plug-in does just what you need: $('#element').fadeOut({ speed: 'fast', preDelay: 5000 });
Queuing actions (not effects) to execute after an amount of time.
What I'd like to know is if there is a nice way to queue jQuery functions to execute after a set amount of time. This wouldn't pause the execution of other functions, just the ones following in the chain. Perhaps an example of what I'd envisage it would look like would illustrate: $('#alert') .show() .wait(5000) // <-- this bit .hide() ; I know that it's possible by using timeouts, but it seems like that's a messy way to do it, especially compared to the above example (if it were real). So, is something like this already built-in to jQuery, and if not, what is the best way to emulate it?
[ "You can't do that, and you probably don't want to. While it certainly looks pretty, there is no mechanism in Javascript that will allow you to do that without just looping in \"wait\" until the time has passed. You could certainly do that but you risk seriously degrading the browser performance and if your timeout is longer than a handful of seconds browsers will show a warning to the user that your javascript seems to be stuck.\nThe correct way to do this is with timeouts:\nvar el = $('#alert');\nel.show()\nsetTimeout(function() { el.hide() }, 5000);\n\nYour other option would be to extend jquery to essentially add an effect for actions you want to delay:\njQuery.fn.extend({\n delayedHide: function(time) {\n var self = this;\n setTimeout(function() { self.hide(); }, time);\n }\n});\n\n$('#alert')\n .show()\n .delayedHide(5000)\n;\n\nYou could also extend jquery with a method similar to setTimeout:\njQuery.fn.extend({\n delayThis: function(fn, time, args) {\n var self = this;\n setTimeout(function() { jQuery.fn[fn].apply(self, args); }, time);\n }\n});\n\n$('#alert')\n .show()\n .delayThis('hide', 5000)\n;\n\nor to call with args pass arguments in an array:\n$('#alert')\n .show()\n .delayThis('css', 5000, [ 'display', 'none' ])\n;\n\n", "The jQuery FxQueues plug-in does just what you need:\n$('#element').fadeOut({\n speed: 'fast',\n preDelay: 5000\n});\n\n" ]
[ 8, 1 ]
[]
[]
[ "javascript", "jquery" ]
stackoverflow_0000089051_javascript_jquery.txt
Q: Hierarchical recordset in MS Access How can I get a hierarchical recordset in MS Access through a SELECT statement? A: DAO doesn't support hierarchical recordsets. You may be able to use ADO in Access, but I'm not certain. A: ADO 2.0 supports MSDataShape - an OLEDB provider. Check out data shaping at http://microsoft.apress.com/asptodayarchive/72268/data-shaping-with-ado-part-1
Hierarchical recordset in MS Access
How can I get a hierarchical recordset in MS Access through a SELECT statement?
[ "DAO doesn't support hierarchical recordsets. You may be able to use ADO in Access, but I'm not certain. \n", "ADO 2.0 supports MSDataShape - an OLEDB provider.\nCheck out data shaping at http://microsoft.apress.com/asptodayarchive/72268/data-shaping-with-ado-part-1\n" ]
[ 1, 1 ]
[]
[]
[ "ms_access", "sql" ]
stackoverflow_0000089752_ms_access_sql.txt
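As a concrete illustration of the MSDataShape answer above, here is a minimal VBA/ADO sketch; the Customers/Orders table names, column names, and the .mdb path are hypothetical stand-ins, not anything from the original thread:

Dim cn As ADODB.Connection, rs As ADODB.Recordset, child As ADODB.Recordset
Set cn = New ADODB.Connection
cn.Provider = "MSDataShape"                         ' the data shaping provider
cn.Open "Data Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\db.mdb"
Set rs = New ADODB.Recordset
rs.Open "SHAPE {SELECT CustomerID, CompanyName FROM Customers} " & _
        "APPEND ({SELECT OrderID, CustomerID FROM Orders} " & _
        "RELATE CustomerID TO CustomerID) AS chapOrders", cn
Do While Not rs.EOF                                 ' walk the parent rows
    Set child = rs("chapOrders").Value              ' each parent carries a child recordset
    Do While Not child.EOF
        Debug.Print rs("CompanyName"), child("OrderID")
        child.MoveNext
    Loop
    rs.MoveNext
Loop

The chapter column (chapOrders) is what gives the SELECT its hierarchy, which is what the question asks for.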
Q: Nightly importable or attachable copies of production database We would like to be able to make a nightly copy/backup/snapshot of a production database so that we can import it into the dev environment. We don't want to log ship to the dev environment because it needs to be something we can reset whenever we like to the last taken copy of the production database. We need to be able to clear certain logging and/or otherwise useless or heavy tables that would just bloat the copy. We prefer the attach/detach method as opposed to something like the SQL Server Publishing Wizard because of how much faster an attach is than an import. I should mention we only have SQL Server Standard, so some features won't be available. What's the best way to do this? A: MSDN I'd say use those procedures inside a SQL Agent job (use master.xp_cmdshell to perform the copy). A: You might want to put the big huge tables on their own partition and have this partition belong to a different file group. You would then back up and restore the main file group. You might want to also consider doing incremental backups. Say, a full backup every weekend and an incremental every night. I haven't done file group backups, so I don't know if these work well together. A: I'm guessing that you are already doing regular backups of your production database? If you aren't, stop reading this reply and go set it up right now. I'd recommend that you write a script that automatically runs, say once a day, that: Drops your current test database. Restores your current production backup to your test environment. You can write a simple script to do this and execute it using the isql.exe command line tool.
Nightly importable or attachable copies of production database
We would like to be able to make a nightly copy/backup/snapshot of a production database so that we can import it into the dev environment. We don't want to log ship to the dev environment because it needs to be something we can reset whenever we like to the last taken copy of the production database. We need to be able to clear certain logging and/or otherwise useless or heavy tables that would just bloat the copy. We prefer the attach/detach method as opposed to something like the SQL Server Publishing Wizard because of how much faster an attach is than an import. I should mention we only have SQL Server Standard, so some features won't be available. What's the best way to do this?
[ "MSDN\nI'd say use those procedures inside a SQL Agent job (use master.xp_cmdshell to perform the copy).\n", "You might want to put the big huge tables on their own partition and have this partition belong to a different file group. You would then back up and restore the main file group.\nYou might want to also consider doing incremental backups. Say, a full backup every weekend and an incremental every night. I haven't done file group backups, so I don't know if these work well together.\n", "I'm guessing that you are already doing regular backups of your production database? If you aren't, stop reading this reply and go set it up right now.\nI'd recommend that you write a script that automatically runs, say once a day, that:\n\nDrops your current test database.\nRestores your current production backup to your test environment.\n\nYou can write a simple script to do this and execute it using the isql.exe command line tool.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "snapshot", "sql_server" ]
stackoverflow_0000089777_snapshot_sql_server.txt
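A minimal T-SQL sketch of the drop-and-restore approach suggested above; the database names, logical file names, and paths are hypothetical placeholders:

BACKUP DATABASE ProdDb TO DISK = 'D:\Backups\ProdDb.bak' WITH INIT;

RESTORE DATABASE DevDb
    FROM DISK = 'D:\Backups\ProdDb.bak'
    WITH REPLACE,
         MOVE 'ProdDb_Data' TO 'D:\Data\DevDb.mdf',
         MOVE 'ProdDb_Log'  TO 'D:\Data\DevDb.ldf';

-- clear the heavy logging tables after the restore so the dev copy stays lean
TRUNCATE TABLE DevDb.dbo.AuditLog;

Scheduled from a SQL Agent job, this gives a dev copy that can be reset at will without log shipping, and it works on SQL Server Standard.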
Q: How do you find the difference between 2 strings in PHP? I have 2 strings that I'd like to compare, and return the positions of the different characters in the second string. For example, if I have "The brown fox jumps over the lazy dog" "The quick brown fox jumped over the lazy dog" I want it to highlight "quick" and "ed". What's the best way to go about this in PHP? A: This might do the trick: PHP Inline Diff Text_Diff A: The algorithm you're looking for is the "longest common substring problem". From there it is easy to determine the differences. See Wikipedia: http://en.wikipedia.org/wiki/Diff#Algorithm A: This is going to give you a headache unless you define your problem more clearly to start! Let's assume that str1 is "Amanda and Amy", and str2 is "Amanda and Amylase Amy". Is your function to return "lase Amy" or "Amylase "? Properly defining your problem is the first step towards a solution!
How do you find the difference between 2 strings in PHP?
I have 2 strings that I'd like to compare, and return the positions of the different characters in the second string. For example, if I have "The brown fox jumps over the lazy dog" "The quick brown fox jumped over the lazy dog" I want it to highlight "quick" and "ed". What's the best way to go about this in PHP?
[ "This might do the trick:\nPHP Inline Diff\nText_Diff\n", "The algorithm you're looking for is the \"longest common substring problem\". From there it is easy to determine the differences. See Wikipedia:\nhttp://en.wikipedia.org/wiki/Diff#Algorithm\n", "This is going to give you a headache unless you define your problem more clearly to start!\nLet's assume that str1 is \"Amanda and Amy\", and str2 is \"Amanda and Amylase Amy\".\nIs your function to return \"lase Amy\" or \"Amylase \"?\nProperly defining your problem is the first step towards a solution!\n" ]
[ 3, 2, 2 ]
[]
[]
[ "comparison", "php", "string" ]
stackoverflow_0000089799_comparison_php_string.txt
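For a rough feel of the diff idea discussed above, here is a small PHP sketch that highlights words appearing only in the second string; note it is word-level rather than the character-level highlighting ("ed") the question ultimately wants, and the function name is made up for illustration:

<?php
// highlight words of $b that have no match left in $a
function diff_words($a, $b) {
    $aw = explode(' ', $a);
    $out = array();
    foreach (explode(' ', $b) as $word) {
        $pos = array_search($word, $aw);
        if ($pos === false) {
            $out[] = '<b>' . $word . '</b>';  // only in the second string
        } else {
            unset($aw[$pos]);                 // consume each match once
            $out[] = $word;
        }
    }
    return implode(' ', $out);
}

echo diff_words('The brown fox jumps over the lazy dog',
                'The quick brown fox jumped over the lazy dog');
// The <b>quick</b> brown fox <b>jumped</b> over the lazy dog
?>

A character-accurate version needs a real LCS/diff implementation such as the Text_Diff package the first answer points to.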
Q: Define an interface in C++ that needs to be implemented in C# and C++ I have an interface that I have defined in C++ which now needs to be implemented in C#. What is the best way to go about this? I don't want to use COM at all in my interface definition. The way I have solved this right now is to have two interface definitions, one in C++ and one in C#. I then expose the C# interfaces as a COM server. This way my application, which is written in C++, can call into C#. Is there any way I can avoid having to define my implementation in C++ as well as C#? A: If you are willing to use C++/CLI for your managed code instead of C#, then you can just consume the native C++ interface definition directly via the header file. How easy this will be will depend on exactly what is in your interface - simplest case is something that you could use from C. Take a look at Marcus Heege's Expert C++/CLI: .NET for Visual C++ Programmers, for a lot of helpful information on mixing native and managed C++ in .NET. A: Swig is a great tool for wrapping C++ classes in other languages like C#. A: Why don't you want to use COM? This would have been my suggestion. COM interop has worked really well for me, and I've used COM objects and interfaces in C# (simply reference the COM object and the runtime callable wrapper gets created automatically). Similarly marking a C# class as "Register for COM interop" has worked the other way around. A: Write the interface in C++ and use macros to make it look like a standard cpp header file on UNIX and like an IDL file on windows (If this does not work out, you can always write a python/ruby script to generate the IDL from the C++ header file). Compile the IDL to generate a typelib. Use TypeLib Importer to generate the interface definitions for C# and implement the interfaces there. A: The other approach is to use a 'flat', C-style API. You might as well use extern "C" to prevent accidental overloading. Use a DEF file to explicitly name the exported functions, so they're definitely not decorated in any way (C++ functions are 'decorated' with an encoding of the parameter types in the export table). On x86, beware of calling conventions. It's probably best to explicitly declare the use of __stdcall or __cdecl. Because P/Invoke is primarily used to invoke Windows APIs, it defaults to StdCall, but C and C++ default to cdecl, because that supports varargs. I recently wrapped the COM interface IRapiStream in a flat C interface as .NET seemed to be trying to convert the IStream into a storage, which failed with the error STG_E_UNIMPLEMENTEDFUNCTION. A: You don't mention which version of .NET you're using, but something that's worked for me when using Visual Studio .NET 2003 is to provide a thin C# wrapper around the pimpled implementation of the real C++ class: public __gc class MyClass_Net { public: MyClass_Net() :native_ptr_(new MyClass()) { } ~MyClass_Net() { delete native_ptr_; } private: MyClass __nogc *native_ptr_; }; Obviously, one would prefer to use a Boost shared_ptr there, but I could never get them to play nicely with V.NET 2003... Methods simply forward to the underlying C++ methods through the pointer. Method arguments may have to be converted. For example, to call a C++ method which takes a string, the C# method should probably take a System.String (System::String in Managed C++). You'd have to use System::Runtime::InteropServices::Marshal::StringToHGlobalAnsi() to do that.
One nice thing about this approach is that, because Managed C++ is a .NET language, you get to expose accessors as properties (__property). You can even expose attributes, very much like in C#.
Define an interface in C++ that needs to be implemented in C# and C++
I have an interface that I have defined in C++ which now needs to be implemented in C#. What is the best way to go about this? I don't want to use COM at all in my interface definition. The way I have solved this right now is to have two interface definitions, one in C++ and one in C#. I then expose the C# interfaces as a COM server. This way my application, which is written in C++, can call into C#. Is there any way I can avoid having to define my implementation in C++ as well as C#?
[ "If you are willing to use C++/CLI for your managed code instead of C#, then you can just consume the native C++ interface definition directly via the header file. How easy this will be will depend on exactly what is in your interface - simplest case is something that you could use from C.\nTake a look at Marcus Heege's Expert C++/CLI: .NET for Visual C++ Programmers, for a lot of helpful information on mixing native and managed C++ in .NET.\n", "Swig is a great tool for wrapping C++ classes in other languages like C#.\n", "Why don't you want to use COM? \nThis would have been my suggestion. COM interop has worked really well for me, and I've used COM objects and interfaces in C# (simply reference the COM object and the runtime callable wrapper gets created automatically). Similarly marking a C# class as \"Register for COM interop\" has worked the other way around.\n", "Write the interface in C++ and use macros to make it look like a standard cpp header file on UNIX and like an IDL file on windows (If this does not work out, you can always write a python/ruby script to generate the IDL from the C++ header file).\nCompile the IDL to generate a typelib. Use TypeLib Importer to generate the interface definitions for C# and implement the interfaces there.\n", "The other approach is to use a 'flat', C-style API. You might as well use extern \"C\" to prevent accidental overloading. Use a DEF file to explicitly name the exported functions, so they're definitely not decorated in any way (C++ functions are 'decorated' with an encoding of the parameter types in the export table).\nOn x86, beware of calling conventions. It's probably best to explicitly declare the use of __stdcall or __cdecl. Because P/Invoke is primarily used to invoke Windows APIs, it defaults to StdCall, but C and C++ default to cdecl, because that supports varargs.\nI recently wrapped the COM interface IRapiStream in a flat C interface as .NET seemed to be trying to convert the IStream into a storage, which failed with the error STG_E_UNIMPLEMENTEDFUNCTION.\n", "You don't mention which version of .NET you're using, but something that's worked for me when using Visual Studio .NET 2003 is to provide a thin C# wrapper around the pimpled implementation of the real C++ class:\npublic __gc class MyClass_Net {\npublic:\n MyClass_Net()\n :native_ptr_(new MyClass())\n {\n }\n ~MyClass_Net()\n {\n delete native_ptr_;\n }\n\nprivate:\n MyClass __nogc *native_ptr_;\n};\n\nObviously, one would prefer to use a Boost shared_ptr there, but I could never get them to play nicely with V.NET 2003...\nMethods simply forward to the underlying C++ methods through the pointer. Method arguments may have to be converted. For example, to call a C++ method which takes a string, the C# method should probably take a System.String (System::String in Managed C++). You'd have to use System::Runtime::InteropServices::Marshal::StringToHGlobalAnsi() to do that.\nOne nice thing about this approach is that, because Managed C++ is a .NET language, you get to expose accessors as properties (__property). You can even expose attributes, very much like in C#.\n" ]
[ 4, 2, 1, 1, 1, 0 ]
[]
[]
[ ".net", "c#", "c++", "interface", "interop" ]
stackoverflow_0000064645_.net_c#_c++_interface_interop.txt
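To flesh out the 'flat, C-style API' answer above, here is a minimal sketch of both sides of the boundary; the DLL name and function are invented for illustration. The native export (listed in the DEF file as GetWidgetCount so the name stays undecorated, as that answer advises):

extern "C" __declspec(dllexport) int __stdcall GetWidgetCount()
{
    return 42;  // stand-in for whatever the C++ implementation computes
}

and the matching C# P/Invoke declaration:

[DllImport("NativeWidgets.dll", CallingConvention = CallingConvention.StdCall)]
static extern int GetWidgetCount();

Keeping __stdcall on the native side in agreement with StdCall on the managed side avoids exactly the x86 calling-convention pitfall the answer warns about.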
Q: How to get all file attributes including author, title, mp3 tags, etc, in one sweep I would like to write all meta data (including advanced summary properties) for my files in a windows folder to a csv file. Is there a way to collect all the attributes? I see mp3 files have a different set of attributes compared to jpg files. (c#) This can also be a script (vb, perl) Update: by looking at libextractor (thank you) I can see this can be achieved by writing different plugins for different types of files. I gather this meta data is not a simple collection... A: In Perl, you can use MP3::Tag or MP3::Info A: If you can cope w/ VB.Net: http://www.codeproject.com/KB/vb/mp3id3v1.aspx If you can cope w/ C++/.Net: http://www.codeproject.com/KB/audio-video/mp3fileinfo.aspx For either (assuming the C++) is compiled to .Net, you can use Reflector to disassemble the binary and convert it to C#. Check w/ the respective authors about their licenses first (usually Code Project articles are under an open license like CPOL). A: In a library? Try libextractor if your software is GPL. A: Ok, after the clarification edits, I would suggest looking at the introspection available in .Net. I will warn you however that I think you will get more satisfying results if you forgo introspection and define the specific properties that you want for the file types that you expect to see. Since scripting is valid, then if this were my problem to solve I would use Powershell since the .net introspection is baked in. A: It may not be worth it to add all of the data from a jpeg file (exif data). I would hand pick what attributes I wanted from those files.
How to get all file attributes including author, title, mp3 tags, etc, in one sweep
I would like to write all meta data (including advanced summary properties) for my files in a windows folder to a csv file. Is there a way to collect all the attributes? I see mp3 files have a different set of attributes compared to jpg files. (c#) This can also be a script (vb, perl) Update: by looking at libextractor (thank you) I can see this can be achieved by writing different plugins for different types of files. I gather this meta data is not a simple collection...
[ "In Perl, you can use MP3::Tag or MP3::Info\n", "If you can cope w/ VB.Net: http://www.codeproject.com/KB/vb/mp3id3v1.aspx\nIf you can cope w/ C++/.Net: http://www.codeproject.com/KB/audio-video/mp3fileinfo.aspx\nFor either (assuming the C++) is compiled to .Net, you can use Reflector to disassemble the binary and convert it to C#. Check w/ the respective authors about their licenses first (usually Code Project articles are under an open license like CPOL).\n", "In a library? Try libextractor if your software is GPL.\n", "Ok, after the clarification edits, I would suggest looking at the introspection available in .Net. I will warn you however that I think you will get more satisfying results if you forgo introspection and define the specific properties that you want for the file types that you expect to see.\nSince scripting is valid, then if this were my problem to solve I would use Powershell since the .net introspection is baked in.\n", "It may not be worth it to add all of the data from a jpeg file (exif data). I would hand pick what attributes I wanted from those files.\n" ]
[ 4, 2, 1, 1, 0 ]
[]
[]
[ ".net", "c#", "filesystems", "perl", "vb.net" ]
stackoverflow_0000088181_.net_c#_filesystems_perl_vb.net.txt
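Since the first answer above only names the modules, here is a small Perl sketch pulling ID3 data with MP3::Tag and emitting one CSV row; the file name is a placeholder:

#!/usr/bin/perl
use strict;
use warnings;
use MP3::Tag;

my $mp3 = MP3::Tag->new('song.mp3');
# autoinfo() merges ID3v1/ID3v2 data; list context returns title, track, artist, album
my ($title, $track, $artist, $album) = $mp3->autoinfo();
print join(',', $artist, $album, $track, $title), "\n";

Other file types (jpg, doc, ...) each need their own extractor, which matches the questioner's observation that this metadata is not one simple collection.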
Q: How Does Relational Theory Apply in Ways I can Care About while Learning it? So I'm taking the Discrete Math course from MIT's OpenCourseWare and I'm wondering... I see the connection between relations and graphs but not enough to "own" it. I've implemented a simple state machine in SQL as well so I grok graphs pretty well, just not the more rigorous study of how relations and sets completely apply. Should I just be following the Yegge train of thought where I just glance over the stuff that I'm not grokking readily and come back when I've learned more? I'd like to be able to better analyze the graph structures I create on a day to day basis (sounds fun) and I want to make sure I'm not passing up valuable information right now. (EDIT: I'd like to get a better idea of how the different set and relation properties relate to things like graph theory and how basic graph theory relates to sets/relations.) Any good resources where I could learn more about this? I'm using the 5th edition of Discrete Mathematics and Its Applications by Rosen in case it matters. Thanks! A: wow, 4 hours and no answer; i had a similar experience in school but just learned the stuff and figured out what it was good for later. it turns out to be very useful, so let's see if this helps - a database is formally defined as a set of relations, but it is also a graph; each table is a node, each column is a node connected to the table, each row is a node connected to the table, each field is a node connected to the row, relationships between tables interconnect nodes, foreign-key relationships interconnect rows, query constraints (where clauses) and joins interconnect nodes and sets of nodes, and so on. An SQL query may be visualized as traversing the graph formed by the database relations and values and performing operations on each node. Under the hood that is what the query execution planner does, it breaks down the query into a set of fundamental operations and arranges them in a graph that is most efficient. Updates to your database may also be thought of as graph operations, e.g. updating the quantity in an order line item row propagates the change to the total in the order row, which propagates the change to the TotalSales in the Customer row, and so on. many common problems devolve into graph-traversal problems. Ever used Google Maps to get directions to some place?
How Does Relational Theory Apply in Ways I can Care About while Learning it?
So I'm taking the Discrete Math course from MIT's OpenCourseWare and I'm wondering... I see the connection between relations and graphs but not enough to "own" it. I've implemented a simple state machine in SQL as well so I grok graphs pretty well, just not the more rigorous study of how relations and sets completely apply. Should I just be following the Yegge train of thought where I just glance over the stuff that I'm not grokking readily and come back when I've learned more? I'd like to be able to better analyze the graph structures I create on a day to day basis (sounds fun) and I want to make sure I'm not passing up valuable information right now. (EDIT: I'd like to get a better idea of how the different set and relation properties relate to things like graph theory and how basic graph theory relates to sets/relations.) Any good resources where I could learn more about this? I'm using the 5th edition of Discrete Mathematics and Its Applications by Rosen in case it matters. Thanks!
[ "wow, 4 hours and no answer; i had a similar experience in school but just learned the stuff and figured out what it was good for later. it turns out to be very useful, so let's see if this helps -\na database is formally defined as a set of relations, but it is also a graph; each table is a node, each column is a node connected to the table, each row is a node connected to the table, each field is a node connected to the row, relationships between tables interconnect nodes, foreign-key relationships interconnect rows, query constraints (where clauses) and joins interconnect nodes and sets of nodes, and so on.\nAn SQL query may be visualized as traversing the graph formed by the database relations and values and performing operations on each node. Under the hood that is what the query execution planner does, it breaks down the query into a set of fundamental operations and arranges them in a graph that is most efficient.\nUpdates to your database may also be thought of as graph operations, e.g. updating the quantity in an order line item row propagates the change to the total in the order row, which propagates the change to the TotalSales in the Customer row, and so on.\nmany common problems devolve into graph-traversal problems. Ever used Google Maps to get directions to some place?\n" ]
[ 3 ]
[]
[]
[ "discrete_mathematics", "finite_automata", "graph_theory", "relational" ]
stackoverflow_0000088703_discrete_mathematics_finite_automata_graph_theory_relational.txt
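As a tiny concrete version of the propagation example in the answer above (all table and column names are invented for illustration):

UPDATE OrderLineItems SET Quantity = 3 WHERE LineItemId = 7;

UPDATE Orders
SET Total = (SELECT SUM(Quantity * UnitPrice)
             FROM OrderLineItems
             WHERE OrderLineItems.OrderId = Orders.OrderId)
WHERE OrderId = (SELECT OrderId FROM OrderLineItems WHERE LineItemId = 7);

Each row touched here is a node, and the foreign keys the updates follow are edges, which is the graph reading of a relational schema the answer describes.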
Q: Similarity between line strings I have a number of tracks recorded by a GPS, which more formally can be described as a number of line strings. Now, some of the recorded tracks might be recordings of the same route, but because of inaccuracies in the GPS system, the fact that the recordings were made on separate occasions and that they might have been recorded travelling at different speeds, they won't match up perfectly, but still look close enough when viewed on a map by a human to determine that it's actually the same route that has been recorded. I want to find an algorithm that calculates the similarity between two line strings. I have come up with some home grown methods to do this, but would like to know if this is a problem that already has good algorithms to solve it. How would you calculate the similarity, given that similar means represents the same path on a map? Edit: For those unsure of what I'm talking about, please look at this link for a definition of what a line string is: http://msdn.microsoft.com/en-us/library/bb895372.aspx - I'm not asking about character strings. A: Compute the Fréchet distance on each pair of tracks. The distance can be used to gauge the similarity of your tracks. Math alert: Fréchet was a pioneer in the field of metric space which is relevant to your problem. A: I would add a buffer around the first line based on the estimated probable error, and then determine if the second line fits entirely within the buffer. A: To determine "same route," create the minimal set of normalized path vectors, calculate the total power differences and compare the total to a quality measure. Normalize the GPS waypoints on total path length, walk the vectors of the paths together, creating a new set of path vectors for each path based upon the shortest vector at each waypoint, calculate the total power differences between endpoints of each vector in the normalized paths weighting for vector length, and compare against a quality measure. Tune the power of the differences (start with, say, squared differences) and the quality measure (say as a percent of the total power differences) visually. This algorithm produces a continuous quality measure of the path match as well as a binary result (Are the paths the same?) Paul Tomblin said: I would add a buffer around the first line based on the estimated probable error, and then determine if the second line fits entirely within the buffer. You could modify the algorithm as the normalized vector endpoints are compared. You could determine if any endpoint difference was above a certain size (implementing Paul's buffer idea) or perhaps, if the endpoints were outside the "buffer," use that fact to ignore that endpoint difference, allowing a comparison ignoring side trips. A: You could walk along each point (Pa) of LineString A and measure the distance from Pa to the nearest line-segment of LineString B, averaging each of these distances. This is not a quick or perfect method, but should be able to give you a useful number and is pretty quick to implement. Do the line strings start and finish at similar points, or are they of very different extents? A: If you consider a single line string to be a sequence of [x,y] points (or [x,y,z] points), then you could compute the similarity between each pair of line strings using the Needleman-Wunsch algorithm. As described in the referenced Wikipedia article, the Needleman-Wunsch algorithm requires a "similarity matrix" which defines the distance between a pair of points.
However, it would be easy to use a function instead of a matrix. In your case you could simply use the 2D Euclidean distance function (or a 3D Euclidean function if your points have elevation) to provide the distance between each pair of points.
Similarity between line strings
I have a number of tracks recorded by a GPS, which more formally can be described as a number of line strings. Now, some of the recorded tracks might be recordings of the same route, but because of inaccuracies in the GPS system, the fact that the recordings were made on separate occasions and that they might have been recorded travelling at different speeds, they won't match up perfectly, but still look close enough when viewed on a map by a human to determine that it's actually the same route that has been recorded. I want to find an algorithm that calculates the similarity between two line strings. I have come up with some home grown methods to do this, but would like to know if this is a problem that already has good algorithms to solve it. How would you calculate the similarity, given that similar means represents the same path on a map? Edit: For those unsure of what I'm talking about, please look at this link for a definition of what a line string is: http://msdn.microsoft.com/en-us/library/bb895372.aspx - I'm not asking about character strings.
[ "Compute the Fréchet distance on each pair of tracks. The distance can be used to gauge the similarity of your tracks.\nMath alert: Fréchet was a pioneer in the field of metric space which is relevant to your problem.\n", "I would add a buffer around the first line based on the estimated probable error, and then determine if the second line fits entirely within the buffer.\n", "To determine \"same route,\" create the minimal set of normalized path vectors, calculate the total power differences and compare the total to a quality measure.\n\nNormalize the GPS waypoints on total path length,\nwalk the vectors of the paths together, creating a new set of path vectors for each path based upon the shortest vector at each waypoint,\ncalculate the total power differences between endpoints of each vector in the normalized paths weighting for vector length, and\ncompare against a quality measure.\n\nTune the power of the differences (start with, say, squared differences) and the quality measure (say as a percent of the total power differences) visually. This algorithm produces a continuous quality measure of the path match as well as a binary result (Are the paths the same?)\n\nPaul Tomblin said: I would add a buffer\n around the first line based on the\n estimated probable error, and then\n determine if the second line fits\n entirely within the buffer.\n\nYou could modify the algorithm as the normalized vector endpoints are compared. You could determine if any endpoint difference was above a certain size (implementing Paul's buffer idea) or perhaps, if the endpoints were outside the \"buffer,\" use that fact to ignore that endpoint difference, allowing a comparison ignoring side trips.\n", "You could walk along each point (Pa) of LineString A and measure the distance from Pa to the nearest line-segment of LineString B, averaging each of these distances. \nThis is not a quick or perfect method, but should be able to give you a useful number and is pretty quick to implement.\nDo the line strings start and finish at similar points, or are they of very different extents?\n", "If you consider a single line string to be a sequence of [x,y] points (or [x,y,z] points), then you could compute the similarity between each pair of line strings using the Needleman-Wunsch algorithm. As described in the referenced Wikipedia article, the Needleman-Wunsch algorithm requires a \"similarity matrix\" which defines the distance between a pair of points. However, it would be easy to use a function instead of a matrix. In your case you could simply use the 2D Euclidean distance function (or a 3D Euclidean function if your points have elevation) to provide the distance between each pair of points. \n" ]
[ 12, 3, 2, 1, 1 ]
[ "I actually side with the person (Aaron F) who said that you might be interested in the Levenshtein distance problem (and cited this). His answer seems to me to be the best so far.\nMore specifically, Levenshtein distance (also called edit distance), does not measure strictly the character-by-character distance, but also allows you to perform insertions and deletions. The best algorithm for this distance measure can be computed in quadratic time (pretty slow if your strings are long), but the computational biologists have pretty good heuristics for this, that might be of interest to you on their own. Check out BLAST and FASTA.\nIn your problem, it seems that you are dealing with differences between strings of numbers, and you care about the numbers. If you give more information, I might be able to direct you to the right variant of BLAST/FASTA/etc for your purposes. In any case, you might consider adapting BLAST and FASTA for your needs. They're quite simple.\n1: http://en.wikipedia.org/wiki/Levenshtein_distance, http://www.nist.gov/dads/HTML/Levenshtein.html\n" ]
[ -2 ]
[ "algorithm", "gis", "sql_server" ]
stackoverflow_0000062496_algorithm_gis_sql_server.txt
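The top answer names the Fréchet distance without showing it; below is a compact Python sketch of the discrete variant (Eiter-Mannila) for tracks given as lists of (x, y) points (Python chosen purely for illustration):

from functools import lru_cache
from math import hypot

def frechet(p, q):
    # discrete Frechet distance between polylines p and q
    @lru_cache(maxsize=None)
    def c(i, j):
        d = hypot(p[i][0] - q[j][0], p[i][1] - q[j][1])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(p) - 1, len(q) - 1)

print(frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0

Two recordings of the same route should score on the order of the GPS error, while genuinely different routes score much higher; long tracks want the iterative dynamic-programming form to avoid deep recursion.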
Q: Can I make Perl ithreads in Windows run concurrently? I have a Perl script that I'm attempting to set up using Perl Threads (use threads). When I run simple tests everything works, but when I do my actual script (which has the threads running multiple SQLPlus sessions), each SQLPlus session runs in order (i.e., thread 1's sqlplus runs steps 1-5, then thread 2's sqlplus runs steps 6-11, etc.). I thought I understood that threads would do concurrent processing, but something's amiss. Any ideas, or should I be doing some other Perl magic? A: A few possible explanations: Are you running this script on a multi-core processor or multi-processor machine? If you only have one CPU only one thread can use it at any time. Are there transactions or locks involved with steps 1-6 that would prevent it from being done concurrently? Are you certain you are using multiple connections to the database and not sharing a single one between threads? A: Actually, you have no way of guaranteeing in which order threads will execute. So the behavior (if not what you expect) is not really wrong. I suspect you have some kind of synchronization going on here. Possibly SQL*Plus only lets itself be called once? Some programs do that... Other possibilities: thread creation and process creation (you are creating subprocesses for SQL*Plus, aren't you?) take longer than running the thread, so thread 1 is finished before thread 2 even starts You are using transactions in your SQL scripts that force synchronization of database updates. A: Check your database settings. You may find that it is set up in a conservative manner. That would cause even minor reads to block all access to that information. You may also need to call threads::yield.
Can I make Perl ithreads in Windows run concurrently?
I have a Perl script that I'm attempting to set up using Perl Threads (use threads). When I run simple tests everything works, but when I do my actual script (which has the threads running multiple SQLPlus sessions), each SQLPlus session runs in order (i.e., thread 1's sqlplus runs steps 1-5, then thread 2's sqlplus runs steps 6-11, etc.). I thought I understood that threads would do concurrent processing, but something's amiss. Any ideas, or should I be doing some other Perl magic?
[ "A few possible explanations:\n\nAre you running this script on a multi-core processor or multi-processor machine? If you only have one CPU only one thread can use it at any time.\nAre there transactions or locks involved with steps 1-6 that would prevent it from being done concurrently?\nAre you certain you are using multiple connections to the database and not sharing a single one between threads?\n\n", "Actually, you have no way of guaranteeing in which order threads will execute. So the behavior (if not what you expect) is not really wrong.\nI suspect you have some kind of synchronization going on here. Possibly SQL*Plus only lets itself be called once? Some programs do that...\nOther possibilities:\n\nthread creation and process creation (you are creating subprocesses for SQL*Plus, aren't you?) take longer than running the thread, so thread 1 is finished before thread 2 even starts\nYou are using transactions in your SQL scripts that force synchronization of database updates.\n\n", "Check your database settings. You may find that it is set up in a conservative manner. That would cause even minor reads to block all access to that information.\nYou may also need to call threads::yield.\n" ]
[ 4, 2, 1 ]
[]
[]
[ "concurrency", "multithreading", "perl" ]
stackoverflow_0000086220_concurrency_multithreading_perl.txt
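Against the answers above, a bare-bones Perl sketch that gives each ithread its own sqlplus process, and therefore its own database session; the connect string and script names are placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use threads;

my @workers = map {
    my $script = $_;
    # each thread launches an independent SQL*Plus subprocess
    threads->create(sub { return scalar `sqlplus -S user/pw \@$script` });
} ('steps_1_to_5.sql', 'steps_6_to_11.sql');

print $_->join() for @workers;

If the steps still serialize with separate sessions like this, the answers' other suspects (one shared connection, or locks/transactions between the scripts) are the likely cause.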
Q: How to avoid conflict when not using ID in URLs I often see (rewritten) URLs without an ID in them, like on some wordpress installations. What is the best way of achieving this? Example: site.com/product/some-product-name/ Maybe keep an array of page names and IDs in cache, to avoid a DB query on every page request? How to avoid conflicts, and what are other issues with using URLs without IDs? A: Using an ID presents the same conundrum, really--you're just checking for a different value in your database. The "some-product-name" part of your URL above is also something unique. Some people call them slugs (Wordpress, also permalinks). So instead of querying the database for a row that has the particular ID, you're querying the database for a row that has a particular slug. You don't need to know the ID to retrieve the record. A: As long as product names are unique it shouldn't be an issue. It won't take any longer (at least not significantly) to look up a product by unique name than numeric ID as long as the column is indexed. A: Wordpress has a field in the wp_posts table for the slug. When you create the post, it creates a slug from the post title (if that's how you have it configured), replacing spaces with dashes (or I think you can set it to underscores). It also takes out the apostrophes, commas, or whatnot. I believe it also limits the overall length of the slug, too. So, in short, it isn't dynamically decoding the URL into the post's title--there's a field in the table that matches the URL version of the post name directly. A: As you may or may not know, the URLs are being re-written with Apache's mod_rewrite module. As mentioned here, Wordpress is, in the background, assigning a slug after sanitizing the title or post name. But, to answer your question, what you're describing is Wordpress' "Pretty Permalinks" feature and you can learn more about it in the Wordpress codex. Newer versions of Wordpress do the re-writing internally (no .htaccess editing, wp_rewrite instead). Which is why you'll see the same ruleset for any permalink structure. Though, if you do some digging you can find the old rewrite rules. For example: RewriteRule ^([0-9]{4})/([0-9]{1,2})/([0-9]{1,2})/?$ /index.php?year=$1&monthnum=$2&day=$3 [QSA,L] Will take a URL like /2008/01/01/ and direct it to /index.php?year=2008&monthnum=01&day=01 (and load a date category). But, as mentioned, a page like product-name exists only because Wordpress already sanitized the post title and stored it as a field in the database.
How to avoid conflict when not using ID in URLs
I often see (rewritten) URLs without an ID in them, like on some wordpress installations. What is the best way of achieving this? Example: site.com/product/some-product-name/ Maybe keep an array of page names and IDs in cache, to avoid a DB query on every page request? How to avoid conflicts, and what are other issues with using URLs without IDs?
[ "Using an ID presents the same conundrum, really--you're just checking for a different value in your database. The \"some-product-name\" part of your URL above is also something unique. Some people call them slugs (Wordpress, also permalinks). So instead of querying the database for a row that has the particular ID, you're querying the database for a row that has a particular slug. You don't need to know the ID to retrieve the record.\n", "As long as product names are unique it shouldn't be an issue. It won't take any longer (at least not significantly) to look up a product by unique name than numeric ID as long as the column is indexed.\n", "Wordpress has a field in the wp_posts table for the slug. When you create the post, it creates a slug from the post title (if that's how you have it configured), replacing spaces with dashes (or I think you can set it to underscores). It also takes out the apostrophes, commas, or whatnot. I believe it also limits the overall length of the slug, too.\nSo, in short, it isn't dynamically decoding the URL into the post's title--there's a field in the table that matches the URL version of the post name directly.\n", "As you may or may not know, the URLs are being re-written with Apache's mod_rewrite module. As mentioned here, Wordpress is, in the background, assigning a slug after sanitizing the title or post name.\nBut, to answer your question, what you're describing is Wordpress' \"Pretty Permalinks\" feature and you can learn more about it in the Wordpress codex. Newer versions of Wordpress do the re-writing internally (no .htaccess editing, wp_rewrite instead). Which is why you'll see the same ruleset for any permalink structure.\nThough, if you do some digging you can find the old rewrite rules. For example:\nRewriteRule ^([0-9]{4})/([0-9]{1,2})/([0-9]{1,2})/?$ /index.php?year=$1&monthnum=$2&day=$3 [QSA,L]\n\nWill take a URL like /2008/01/01/ and direct it to /index.php?year=2008&monthnum=01&day=01 (and load a date category).\nBut, as mentioned, a page like product-name exists only because Wordpress already sanitized the post title and stored it as a field in the database.\n" ]
[ 3, 1, 1, 1 ]
[]
[]
[ "url", "url_rewriting" ]
stackoverflow_0000013213_url_url_rewriting.txt
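To make the slug-sanitizing step the answers describe concrete, here is a rough PHP sketch along the lines of what Wordpress does when building a permalink (the function name is invented):

<?php
function make_slug($title) {
    $slug = strtolower(trim($title));
    $slug = preg_replace('/[^a-z0-9]+/', '-', $slug); // collapse spaces/punctuation
    return trim($slug, '-');
}

echo make_slug("Some Product's Name!");  // some-product-s-name
?>

Uniqueness still has to be enforced where the slug is stored, e.g. a unique index on the column plus appending -2, -3, ... on collision, which is how the conflict the question's title worries about is normally avoided.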
Q: Date Component Manipulation Is it possible to manipulate the components, such as year, month, day of a date in VBA? I would like a function that, given a day, a month, and a year, returns the corresponding date. A: DateSerial(YEAR, MONTH, DAY) would be what you are looking for. DateSerial(2008, 8, 19) returns 8/19/2008 A: There are several date functions in VBA - check this site DateSerial(YEAR, MONTH, DAY) A: You want DateSerial: Dim someDate As Date someDate = DateSerial(year, month, day)
Date Component Manipulation
Is it possible to manipulate the components, such as year, month, day of a date in VBA? I would like a function that, given a day, a month, and a year, returns the corresponding date.
[ "DateSerial(YEAR, MONTH, DAY)\n\nwould be what you are looking for.\nDateSerial(2008, 8, 19) returns 8/19/2008\n", "There are several date functions in VBA - check this site\nDateSerial(YEAR, MONTH, DAY)\n", "You want DateSerial:\nDim someDate As Date\nsomeDate = DateSerial(year, month, day)\n\n" ]
[ 6, 2, 2 ]
[]
[]
[ "date", "datetime", "excel", "vba" ]
stackoverflow_0000089873_date_datetime_excel_vba.txt
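A quick round trip through the answers above, building a date from components and reading the components back with VBA's built-ins:

Dim d As Date
d = DateSerial(2008, 9, 17)
Debug.Print d                          ' 9/17/2008 (display is locale-dependent)
Debug.Print Year(d), Month(d), Day(d)  ' 2008  9  17

DateSerial also normalizes out-of-range components, so DateSerial(2008, 14, 1) quietly rolls over to February 1, 2009.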
Q: Using both 1.1 and 2.0 frameworks on Windows 2003 x64 So, much to my annoyance I discovered (after lots of research) that when running the 1.1 and 2.0 .NET frameworks on a 64-bit 2003 install, it removes the asp.net tab from the IIS properties. I've tried the registry hacks, I've tried registering 32-bit versions of both frameworks, and no luck. My only workaround is running the excellent ASP.NET switcher from Dennis Bauer. Does anyone else have any insight? A: Also, you might try running the 32-bit version of MMC. IIRC, MMC can only load extensions that are the same bit-ness as itself, and the .Net 2.0 extension is 32-bit only. That said, the tool you linked in your question is very useful for working around this issue as well.
Using both 1.1 and 2.0 frameworks on Windows 2003 x64
So, much to my annoyance I discovered (after lots of research) that when running the 1.1 and 2.0 .NET frameworks on a 64-bit 2003 install, it removes the asp.net tab from the IIS properties. I've tried the registry hacks, I've tried registering 32-bit versions of both frameworks, and no luck. My only workaround is running the excellent ASP.NET switcher from Dennis Bauer. Does anyone else have any insight?
[ "Also, you might try running the 32-bit version of MMC. IIRC, MMC can only load extensions that are the same bit-ness as itself, and the .Net 2.0 extension is 32-bit only.\nThat said, the tool you linked in your question is very useful for working around this issue as well.\n" ]
[ 1 ]
[]
[]
[ "asp.net", "iis", "windows_server_2003" ]
stackoverflow_0000087398_asp.net_iis_windows_server_2003.txt
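For reference alongside the switcher tool, a sketch of the commonly documented IIS 6 x64 workaround (paths assume default install locations):

:: switch IIS 6 to 32-bit worker processes so 32-bit ASP.NET extensions can load
cscript %SYSTEMDRIVE%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/AppPools/Enable32BitAppOnWin64 1

:: re-register the 32-bit ASP.NET 2.0 ISAPI extension
%SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i

Note this flips the whole server to 32-bit application pools on IIS 6, so it only helps if every hosted site can live with that.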
Q: How do you get a list of all the installed fonts? Specifically in .NET, but I'm leaving it open. A: MSDN: Enumerating Installed Fonts A: http://msdn.microsoft.com/en-us/library/0yf5t4e8.aspx This should help. A: I believe what you are looking for is InstalledFontCollection. (What were the chances that the ONE piece of code that required .net would be relevant to anything here! It boggles the mind!)
How do you get a list of all the installed fonts?
Specifically in .NET, but I'm leaving it open.
[ "MSDN:\nEnumerating Installed Fonts\n", "http://msdn.microsoft.com/en-us/library/0yf5t4e8.aspx\nThis should help.\n", "I believe what you are looking for is InstalledFontCollection.\n(What were the chances that the ONE piece of code that required .net would be relevant to anything here! It boggles the mind!)\n" ]
[ 11, 3, 1 ]
[]
[]
[ ".net", "fonts" ]
stackoverflow_0000089886_.net_fonts.txt
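Expanding the InstalledFontCollection pointer from the answers above into a small runnable C# program:

using System;
using System.Drawing.Text;

class ListFonts
{
    static void Main()
    {
        // InstalledFontCollection enumerates the font families GDI+ can see
        using (InstalledFontCollection fonts = new InstalledFontCollection())
        {
            foreach (System.Drawing.FontFamily family in fonts.Families)
                Console.WriteLine(family.Name);
        }
    }
}

(Requires a reference to System.Drawing.)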
Q: Copyright and Fair Use in Distributable Software At what length of text and/or length of audio snippet does a piece of commercially distributable software pass the threshold of fair use and violate the included work's copyright? Does attribution absolve the developer from infringement? An example would be a quote from a novel used on a start-up screen. A: Unfortunately, there is no cut and dried answer. Determining what is fair use involves a very subjective and fact-dependent four point test. You're never really going to know for sure if a borderline use is permissible or not unless you end up in court and a judge decides. The four factors are: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. Each of these has a specific legal meaning based on previous precedents (which may or may not correspond to what most people would think of as the plain language meaning). If you're doing anything that could get you sued, talk to a lawyer. Software is even more complicated, since not all code is copyrightable to begin with. A: Also, keep in mind that laws vary from country to country, and since most software is distributed anywhere in the world over the web... well, it's a huge headache. It's unfortunate because the threat of lawsuit has a chilling effect on interesting, innovative work.
Copyright and Fair Use in Distributable Software
At what length of text and/or length of audio snippet does a piece of commercially distributable software pass the threshold of fair use and violate the included work's copyright? Does attribution absolve the developer from infringement? An example would be a quote from a novel used on a start-up screen.
[ "Unfortunately, there is no cut and dried answer. Determining what is fair use involves a very subjective and fact-dependent four point test. You're never really going to know for sure if a borderline use is permissible or not unless you end up in court and a judge decides.\nThe four factors are:\n\nthe purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;\nthe nature of the copyrighted work;\nthe amount and substantiality of the portion used in relation to the copyrighted work as a whole; and\nthe effect of the use upon the potential market for or value of the copyrighted work.\n\nEach of these has a specific legal meaning based on previous precedents (which may or may not correspond to what most people would think of as the plain language meaning). If you're doing anything that could get you sued, talk to a lawyer.\nSoftware is even more complicated, since not all code is copyrightable to begin with. \n", "Also, keep in mind that laws vary from country to country, and since most software is distributed anywhere in the world over the web... well, it's a huge headache. It's unfortunate because the threat of lawsuit has a chilling effect on interesting, innovative work.\n" ]
[ 8, 0 ]
[]
[]
[ "language_agnostic" ]
stackoverflow_0000089005_language_agnostic.txt
Q: How can I specify the maximum amount of heap an RTP can use in VxWorks? We are creating a Real-Time Process in VxWorks 6.x, and we would like to limit the amount of memory which can be allocated to the heap. How do we do this? A: When creating a RTP via rtpSpawn(), you can specify an environment variable which controls how the heap behaves. There are 3 environment variables: HEAP_INITIAL_SIZE - How much heap to allocate initially (defaults to 64K) HEAP_MAX_SIZE - Maximum heap to allocate (defaults to no limit) HEAP_INCR_SIZE - memory increment when adding to RTP heap (defaults to 1 virtual page) The following code shows how to use the environment variables: char * envp[] = {"HEAP_INITIAL_SIZE=0x20000", "HEAP_MAX_SIZE=0x100000", NULL); rtpSpawn ("myrtp.vxe", NULL, envp, 100, 0x10000, 0, 0); A: This can be done through the use of the HEAP_MAX_SIZE environment variable. If it is set, it limits the ability of the heap to grow beyond that size. It does not, however, limit the initial heap size. See page 31
How can I specify the maximum amount of heap an RTP can use in VxWorks?
We are creating a Real-Time Process in VxWorks 6.x, and we would like to limit the amount of memory which can be allocated to the heap. How do we do this?
[ "When creating a RTP via rtpSpawn(), you can specify an environment variable which controls how the heap behaves.\nThere are 3 environment variables:\n\nHEAP_INITIAL_SIZE - How much heap to allocate initially (defaults to 64K) \nHEAP_MAX_SIZE - Maximum heap to allocate (defaults to no limit)\nHEAP_INCR_SIZE - memory increment when adding to RTP heap (defaults to 1 virtual page)\n\nThe following code shows how to use the environment variables:\n\n char * envp[] = {\"HEAP_INITIAL_SIZE=0x20000\", \"HEAP_MAX_SIZE=0x100000\", NULL);\n rtpSpawn (\"myrtp.vxe\", NULL, envp, 100, 0x10000, 0, 0);\n\n\n", "This can be done through the use of the HEAP_MAX_SIZE environment variable. If it is set, it limits the ability of the heap to grow beyond that size. It does not, however, limit the initial heap size.\nSee page 31\n" ]
[ 3, 0 ]
[]
[]
[ "vxworks" ]
stackoverflow_0000089866_vxworks.txt
Q: Does LINQ-to-SQL Support Composable Queries? Speaking as a non-C# savvy programmer, I'm curious as to the evaluation semantics of LINQ queries like the following: var people = from p in Person where p.age < 18 select p var otherPeople = from p in people where p.firstName equals "Daniel" select p Assuming that Person is an ADO entity which defines the age and firstName fields, what would this do from a database standpoint? Specifically, would the people query be run to produce an in-memory structure, which would then be queried by the otherPeople query? Or would the construction of otherPeople merely pull the data regarding the query from people and then produce a new database-peered query? So, if I iterated over both of these queries, how many SQL statements would be executed? A: They are composable. This is possible because LINQ queries are actually expressions (code as data), which LINQ providers like LINQ-to-SQL can evaluate and generate corresponding SQL. Because LINQ queries are lazily evaluated (e.g. won't get executed until you iterate over the elements), the code you showed won't actually touch the database. Not until you iterate over otherPeople or people will SQL get generated and executed. A: var people = from p in Person where p.age < 18 select p Translates to: SELECT [t0].[PersonId], [t0].[Age], [t0].[FirstName] FROM [dbo].[Person] AS [t0] WHERE [t0].[Age] < @p0 where @p0 gets sent through as 18 var otherPeople = from p in people where p.firstName equals "Daniel" select p Translates to: SELECT [t0].[PersonId], [t0].[Age], [t0].[FirstName] FROM [dbo].[Person] AS [t0] WHERE [t0].[FirstName] = @p0 where @p0 gets sent through as "Daniel" var morePeople = from p1 in people from p2 in otherPeople where p1.PersonId == p2.PersonId select p1; Translates to: SELECT [t0].[PersonId], [t0].[Age], [t0].[FirstName] FROM [dbo].[Person] AS [t0], [dbo].[Person] AS [t1] WHERE ([t0].[PersonId] = [t1].[PersonId]) AND ([t0].[Age] < @p0) AND ([t1].[FirstName] = @p1) where @p0 is 18, @p1 is "Daniel" When in doubt, call the ToString() on your IQueryable or give a TextWriter to the DataContext's Log property. A: Yes, the resulting query is composed. It includes the full where clause. Turn on SQL profiling and try it to see for yourself. Linq does this through expression trees. The first linq statement produces an expression tree; it doesn't execute the query. The second linq statement builds on the expression tree created by the first. The statement is only executed when you enumerate the resulting collection. A: people and otherPeople contain objects of type IQueryable<Person>. If you iterate over both, separatly, it will run two queries. If you only iterate over otherPeople, it will run the expected query, with two where clauses. If you do .ToList() on people and use the returned List<Person> in the second query instead of people, it becomes LINQ-to-Objects and no SQL is executed. This behavior is referred to as deferred execution. Meaning no query is done until it is needed. Before execution they are just expression trees that get manipulated to formulate the final query. A: Both these queries will be executes when you'll try to access final results. You can try to view original SQL generated from DataContext object properties.
Does LINQ-to-SQL Support Composable Queries?
Speaking as a non-C# savvy programmer, I'm curious as to the evaluation semantics of LINQ queries like the following: var people = from p in Person where p.age < 18 select p var otherPeople = from p in people where p.firstName equals "Daniel" select p Assuming that Person is an ADO entity which defines the age and firstName fields, what would this do from a database standpoint? Specifically, would the people query be run to produce an in-memory structure, which would then be queried by the otherPeople query? Or would the construction of otherPeople merely pull the data regarding the query from people and then produce a new database-peered query? So, if I iterated over both of these queries, how many SQL statements would be executed?
[ "They are composable. This is possible because LINQ queries are actually expressions (code as data), which LINQ providers like LINQ-to-SQL can evaluate and generate corresponding SQL.\nBecause LINQ queries are lazily evaluated (e.g. won't get executed until you iterate over the elements), the code you showed won't actually touch the database. Not until you iterate over otherPeople or people will SQL get generated and executed.\n", "var people = from p in Person\n where p.age < 18\n select p\n\nTranslates to:\nSELECT [t0].[PersonId], [t0].[Age], [t0].[FirstName]\nFROM [dbo].[Person] AS [t0]\nWHERE [t0].[Age] < @p0\n\nwhere @p0 gets sent through as 18\nvar otherPeople = from p in people\n where p.firstName equals \"Daniel\"\n select p\n\nTranslates to:\nSELECT [t0].[PersonId], [t0].[Age], [t0].[FirstName]\nFROM [dbo].[Person] AS [t0]\nWHERE [t0].[FirstName] = @p0\n\nwhere @p0 gets sent through as \"Daniel\"\nvar morePeople = from p1 in people\n from p2 in otherPeople\n where p1.PersonId == p2.PersonId\n select p1;\n\nTranslates to:\nSELECT [t0].[PersonId], [t0].[Age], [t0].[FirstName]\nFROM [dbo].[Person] AS [t0], [dbo].[Person] AS [t1]\nWHERE ([t0].[PersonId] = [t1].[PersonId]) AND ([t0].[Age] < @p0) AND ([t1].[FirstName] = @p1)\n\nwhere @p0 is 18, @p1 is \"Daniel\"\nWhen in doubt, call the ToString() on your IQueryable or give a TextWriter to the DataContext's Log property.\n", "Yes, the resulting query is composed. It includes the full where clause. Turn on SQL profiling and try it to see for yourself.\nLinq does this through expression trees. The first linq statement produces an expression tree; it doesn't execute the query. The second linq statement builds on the expression tree created by the first. The statement is only executed when you enumerate the resulting collection.\n", "people and otherPeople contain objects of type IQueryable<Person>.\nIf you iterate over both, separatly, it will run two queries.\nIf you only iterate over otherPeople, it will run the expected query, with two where clauses.\nIf you do .ToList() on people and use the returned List<Person> in the second query instead of people, it becomes LINQ-to-Objects and no SQL is executed.\nThis behavior is referred to as deferred execution. Meaning no query is done until it is needed. Before execution they are just expression trees that get manipulated to formulate the final query.\n", "Both these queries will be executes when you'll try to access final results. You can try to view original SQL generated from DataContext object properties.\n" ]
[ 12, 4, 3, 1, 0 ]
[]
[]
[ "code_reuse", "linq", "linq_to_sql", "sql" ]
stackoverflow_0000089193_code_reuse_linq_linq_to_sql_sql.txt
Q: What time should I build to production? My users use the site pretty equally 24/7. Is there a meme for build timing? International audience, single cluster of servers on Eastern time, but gets hit well into the morning, by international clients. 1 db, several web servers, so if no db, simple, whenever. But when the site has to come down, when would you, as a programmer, be least mad to see SO be down for, say, 15 minutes? A: If there's truly no good time from the users' perspective, then I'd suggest doing it when your team has the most time to recover from any build-related disaster. A: Here's what I have done and it's worked well for me: Get a site traffic analysis tool which will graph hourly user load Select low-point in graph for doing updates A: If you're small, then yeah, find when your lowest usage period is, and do it then (for us personally, usually around 1AM-3AM PST is the lowest dip...but it never drops to 0 of course). Once you start growing to having a larger userbase, if you want people to take you seriously you'll need to design your application such that you can upgrade without downtime. This is not simple, and it often involves having multiple servers. I've spent ages trying to get our application to this point, the best I've come up with so far is for a couple hours run both the old version and new version at the same time. Users logged in at the time of the switchover stay on the old version, until they log out. Next time they come in they go to the new version. Any users coming on after the switchover get sent straight to the new version. It's still not foolproof, but it's pretty good. A: What kind of an application is it? Most sites that I use tend to update around 2AM or 3AM. A: Use a second site, and hotswap as needed. A: The issue with hot-swapping is that the database would still be shared, and breaking changes would bring the stand-in down as well. A: I guess you have to ask your clients. In any case, there's the wee hours of the morning. If you're talking about a locally available website, I do not think users will mind if they get an "under maintenance" notice at 2 am in their time zone. A: Depends on your location: 4AM East Coast/1AM West Coast is typically the lightest time. A: Pick a few times that you'd like to do it and offer them as choices to the decider-types. Whatever you do, put up a "down for routine maintenance" page while you deploy. A: Check the time of least usage Clone/copy/update latest production code to another directory If there exists any database migrations to be done, perform any that are required, and non-conflicting with the old code base At time of least usage, move symlink to point to latest code A: First use an analysis tool to try and determine your typically "light" traffic times. Depending on the site and your location in the world in comparison to most of your users, it could be 4am, it could be 1pm, who knows. Then, once you have a good timeframe nailed down, make sure to have your deployment process as automated as possible, so that it happens quickly to minimize the downtime of your site.
What time should I build to production?
My users use the site pretty equally 24/7. Is there a meme for build timing? International audience, single cluster of servers on Eastern time, but gets hit well into the morning, by international clients. 1 db, several web servers, so if no db, simple, whenever. But when the site has to come down, when would you, as a programmer, be least mad to see SO be down for, say, 15 minutes?
[ "If there's truly no good time from the users' perspective, then I'd suggest doing it when your team has the most time to recover from any build-related disaster.\n", "Here's what I have done and its worked well for me:\n\nGet a site traffic analysis tool\nwhich will graph hourly user load\nSelect low-point in graph for doing\n updates\n\n", "If you're small, then yeah, find when your lowest usage period is, and do it then (for us personally, usually around 1AM-3AM PST is the lowest dip...but it never drops to 0 of course). Once you start growing to having a larger userbase, if you want people to take you seriously you'll need to design your application such that you can upgrade without downtime. This is not simple, and it often involves having multiple servers. \nI've spent ages trying to get our application to this point, the best I've come up with so far is for a couple hours run both the old version and new version at the same time. Users logged in at the time of the switchover stay on the old version, until they log out. Next time they come in they go to the new version. Any users coming on after the switchover get sent straight to the new version. It's still not foolproof, but it's pretty good.\n", "What kind of an application is it? Most sites that I use tend to update around 2AM or 3AM.\n", "Use a second site, and hotswap as needed.\n", "The issue with hot-swapping, is database would still be shared, and breaking changes would bring stand in down as well.\n", "I guess you have to ask your clients.\nIn any case, there's the wee hours of the morning. If you're talking about a locally available website, I do not think users will mind if they get an \"under maintenance\" notice at 2 am in their time zone.\n", "Depends on your location: 4AM East Coast/1AM West Coast is typlically the lightest time.\n", "Pick a few times that you'd like to do it and offer them as choices to the decider-types. Whatever you do, put up a \"down for routine maintenance\" page while you deploy.\n", "\nCheck the time of least usage\nClone/copy/update latest production code to another directory\nIf there exists any database migrations to be done, perform any that are required, and non conflicting with the old code base\nAt time of least usage, move symlink to point to latest code\n\n", "First use an analysis tool to try and determine your typically \"light\" traffic times. Depending on the site and your location in the world in comparison to most of your users, it could be 4am, it could be 1pm, who knows. Then, once you have a good timeframe nailed down, make sure to have your deployment process as automated as possible, so that it happens quickly to minimize the downtime of your site.\n" ]
[ 9, 4, 2, 1, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "deployment", "timing" ]
stackoverflow_0000089920_deployment_timing.txt
Q: Need Lightweight .NET SMTP implementation (assembly or source) I am writing a small application that will receive messages to process over SMTP port 25. I am looking for a .NET assembly that I can incorporate that will listen to port 25 and talk SMTP. I envision that when a message arrives some event is triggered where I can read the message and process it. Essentially I need to "act" like an SMTP server, but apart from receiving the message I don't need any of the other functionality you would find in a full-blown SMTP server. Let me know if you need more clarification. A: Have you looked at this: CodeProject: SMTP and POP3 Mail Server?
Need Lightweight .NET SMTP implementation (assembly or source)
I am writing a small application that will receive messages to process over SMTP port 25. I am looking for a .NET assembly that I can incorporate that will listen to port 25 and talk SMTP. I envision that when a message arrives some event is triggered where I can read the message and process it. Essentially I need to "act" like an SMTP server, but apart from receiving the message I don't need any of the other functionality you would find in a full-blown SMTP server. Let me know if you need more clarification.
[ "Have you looked at this: CodeProject: SMTP and POP3 Mail Server?\n" ]
[ 2 ]
[]
[]
[ ".net", "c#", "smtp" ]
stackoverflow_0000090037_.net_c#_smtp.txt
Q: What are the pros and cons of the various Python implementations? I am relatively new to Python, and I have always used the standard cpython (v2.5) implementation. I've been wondering about the other implementations though, particularly Jython and IronPython. What makes them better? What makes them worse? What other implementations are there? I guess what I'm looking for is a summary and list of pros and cons for each implementation. A: Jython and IronPython are useful if you have an overriding need to interface with existing libraries written for a different platform, like if you have 100,000 lines of Java and you just want to write a 20-line Python script. Not particularly useful for anything else, in my opinion, because they are perpetually a few versions behind CPython due to community inertia. Stackless is interesting because it has support for green threads, continuations, etc. Sort of an Erlang-lite. PyPy is an experimental interpreter/compiler that may one day supplant CPython, but for now is more of a testbed for new ideas. A: An additional benefit for Jython, at least for some, is it lacks the GIL (the Global Interpreter Lock) and uses Java's native threads. This means that you can run pure Python code in parallel, something not possible with the GIL. A: All of the implementations are listed here: https://wiki.python.org/moin/PythonImplementations CPython is the "reference implementation" and is developed by Guido and the core developers. A: Pros: Access to the libraries available for JVM or CLR. Cons: Both naturally lag behind CPython in terms of features. A: IronPython and Jython use the runtime environment for .NET or Java and with that comes Just In Time compilation and a garbage collector different from the original CPython. They might also be faster than CPython thanks to the JIT, but I don't know that for sure. A downside in using Jython or IronPython is that you cannot use native C modules; they can only be used in CPython. A: PyPy is a Python implementation written in RPython, which is a Python subset. RPython can be translated to run on a VM or, unlike standard Python, RPython can be statically compiled.
What are the pros and cons of the various Python implementations?
I am relatively new to Python, and I have always used the standard cpython (v2.5) implementation. I've been wondering about the other implementations though, particularly Jython and IronPython. What makes them better? What makes them worse? What other implementations are there? I guess what I'm looking for is a summary and list of pros and cons for each implementation.
[ "Jython and IronPython are useful if you have an overriding need to interface with existing libraries written in a different platform, like if you have 100,000 lines of Java and you just want to write a 20-line Python script. Not particularly useful for anything else, in my opinion, because they are perpetually a few versions behind CPython due to community inertia.\nStackless is interesting because it has support for green threads, continuations, etc. Sort of an Erlang-lite.\nPyPy is an experimental interpreter/compiler that may one day supplant CPython, but for now is more of a testbed for new ideas.\n", "An additional benefit for Jython, at least for some, is it lacks the GIL (the Global Interpreter Lock) and uses Java's native threads. This means that you can run pure Python code in parallel, something not possible with the GIL.\n", "All of the implementations are listed here:\nhttps://wiki.python.org/moin/PythonImplementations\nCPython is the \"reference implementation\" and developed by Guido and the core developers.\n", "Pros: Access to the libraries available for JVM or CLR.\nCons: Both naturally lag behind CPython in terms of features.\n", "IronPython and Jython use the runtime environment for .NET or Java and with that comes Just In Time compilation and a garbage collector different from the original CPython. They might be also faster than CPython thanks to the JIT, but I don't know that for sure.\nA downside in using Jython or IronPython is that you cannot use native C modules, they can be only used in CPython.\n", "PyPy is a Python implementation written in RPython wich is a Python subset. \nRPython can be translated to run on a VM or, unlike standard Python, RPython can be statically compiled.\n" ]
[ 15, 6, 3, 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000086134_python.txt
Q: How do I use .Net Generics to inherit a template parameter? I want to be able to do this. MyInterface interface = new ServiceProxyHelper<ProxyType>(); Here's the object structure MyTypeThatImplementsMyInterface : MyInterface Will this work? public class ProxyType : MyInterface {} public class ServiceProxyHelper<ProxyType> : IDisposable, MyInterface {} A: I think this is what you're trying to do: public class ServiceProxyHelper<T> where T : MyInterface { ... }
How do I use .Net Generics to inherit a template parameter?
I want to be able to do this. MyInterface interface = new ServiceProxyHelper<ProxyType>(); Here's the object structure MyTypeThatImplementsMyInterface : MyInterface Will this work? public class ProxyType : MyInterface {} public class ServiceProxyHelper<ProxyType> : IDisposable, MyInterface {}
[ "I think this is what you're trying to do:\npublic class ServiceProxyHelper<T> where T : MyInterface { ... }\n\n" ]
[ 3 ]
[]
[]
[ ".net", "c#", "generics" ]
stackoverflow_0000090117_.net_c#_generics.txt
Q: How to reference a custom field in SQL I am using mssql and am having trouble using a subquery. The real query is quite complicated, but it has the same structure as this: select customerName, customerId, ( select count(*) from Purchases where Purchases.customerId=customerData.customerId ) as numberTransactions from customerData And what I want to do is order the table by the number of transactions, but when I use order by numberTransactions It tells me there is no such field. Is it possible to do this? Should I be using some sort of special keyword, such as this, or self? A: use the field number, in this case: order by 3 A: Sometimes you have to wrestle with SQL's syntax (expected scope of clauses) SELECT * FROM ( select customerName, customerId, ( select count(*) from Purchases where Purchases.customerId=customerData.customerId ) as numberTransactions from customerData ) as sub order by sub.numberTransactions Also, a solution using JOIN is correct. Look at the query plan, SQL Server should give identical plans for both solutions. A: Do an inner join. It's much easier and more readable. select customerName, customerID, count(*) as numberTransactions from customerdata c inner join purchases p on c.customerID = p.customerID group by customerName,customerID order by numberTransactions EDIT: Hey Nathan, You realize you can inner join this whole table as a sub right? Select T.*, T2.* From T inner join (select customerName, customerID, count(*) as numberTransactions from customerdata c inner join purchases p on c.customerID = p.customerID group by customerName,customerID ) T2 on T.CustomerID = T2.CustomerID order by T2.numberTransactions Or if that's no good you can construct your queries using temporary tables (#T1 etc) A: There are better ways to get your result but just from your example query this will work on SQL2000 or better. If you wrap your alias in single ticks 'numberTransactions' and then call ORDER BY 'numberTransactions' select customerName, customerId, ( select count(*) from Purchases where Purchases.customerId=customerData.customerId ) as 'numberTransactions' from customerData ORDER BY 'numberTransactions' A: The same thing could be achieved by using GROUP BY and a JOIN, and you'll be rid of the subquery. This might be faster too. A: I think you can do this in SQL2005, but not SQL2000.
How to reference a custom field in SQL
I am using mssql and am having trouble using a subquery. The real query is quite complicated, but it has the same structure as this: select customerName, customerId, ( select count(*) from Purchases where Purchases.customerId=customerData.customerId ) as numberTransactions from customerData And what I want to do is order the table by the number of transactions, but when I use order by numberTransactions It tells me there is no such field. Is it possible to do this? Should I be using some sort of special keyword, such as this, or self?
[ "use the field number, in this case:\norder by 3\n\n", "Sometimes you have to wrestle with SQL's syntax (expected scope of clauses)\nSELECT *\nFROM\n(\nselect\n customerName,\n customerId,\n (\n select count(*)\n from Purchases\n where Purchases.customerId=customerData.customerId\n ) as numberTransactions\nfrom customerData\n) as sub\norder by sub.numberTransactions\n\nAlso, a solution using JOIN is correct. Look at the query plan, SQL Server should give identical plans for both solutions.\n", "Do an inner join. It's much easier and more readable.\nselect \ncustomerName,\ncustomerID,\ncount(*) as numberTransactions\nfrom\n customerdata c inner join purchases p on c.customerID = p.customerID\ngroup by customerName,customerID\norder by numberTransactions\nEDIT: Hey Nathan,\nYou realize you can inner join this whole table as a sub right?\nSelect T.*, T2.*\nFrom T inner join \n(select \ncustomerName,\ncustomerID,\ncount(*) as numberTransactions\nfrom\n customerdata c inner join purchases p on c.customerID = p.customerID\ngroup by customerName,customerID\n) T2 on T.CustomerID = T2.CustomerID\norder by T2.numberTransactions\n\nOr if that's no good you can construct your queries using temporary tables (#T1 etc)\n", "There are better ways to get your result but just from your example query this will work on SQL2000 or better.\nIf you wrap your alias in single ticks 'numberTransactions' and then call ORDER BY 'numberTransactions'\nselect\n customerName, \n customerId,\n (\n select count(*) \n from Purchases \n where Purchases.customerId=customerData.customerId\n ) as 'numberTransactions'\nfrom customerData\nORDER BY 'numberTransactions'\n\n", "The same thing could be achieved by using GROUP BY and a JOIN, and you'll be rid of the subquery. This might be faster too.\n", "I think you can do this in SQL2005, but not SQL2000. \n" ]
[ 9, 8, 4, 2, 0, 0 ]
[ "You need to duplicate your logic. SQL Server isn't very smart at columns that you've named but aren't part of the dataset in your FROM statement.\nSo use\nselect \n customerName, \n customerId,\n (\n select count(*) \n from Purchases p\n where p.customerId = c.customerId\n ) as numberTransactions\nfrom customerData c\norder by (select count(*) from purchases p where p.customerID = c.customerid)\n\nAlso, use aliases, they make your code easier to read and maintain. ;)\n" ]
[ -1 ]
[ "sql", "sql_server" ]
stackoverflow_0000089820_sql_sql_server.txt
Q: Inlining C++ code Is there any difference to the following code: class Foo { inline int SomeFunc() { return 42; } int AnotherFunc() { return 42; } }; Will both functions gets inlined? Does inline actually make any difference? Are there any rules on when you should or shouldn't inline code? I often use the AnotherFunc syntax (accessors for example) but I rarely specify inline directly. A: The inline keyword is essentially a hint to the compiler. Using inline doesn't guarantee that your function will be inlined, nor does omitting it guarantee that it won't. You are just letting the compiler know that it might be a good idea to try harder to inline that particular function. A: Both forms should be inlined in the exact same way. Inline is implicit for function bodies defined in a class definition. A: Sutter's Guru of the Week #33 answers some of your questions and more. http://www.gotw.ca/gotw/033.htm A: class Foo { inline int SomeFunc() { return 42; } int AnotherFunc() { return 42; } }; It is correct that both ways are guaranteed to compile the same. However, it is preferable to do neither of these ways. According to the C++ FAQ you should declare it normally inside the class definition, and then define it outside the class definition, inside the header, with the explicit inline keyword. As the FAQ describes, this is because you want to separate the declaration and definition for the readability of others (declaration is equivalent to "what" and definition "how"). Does inline actually make any difference? Yes, if the compiler grants the inline request, it is vastly different. Think of inlined code as a macro. Everywhere it is called, the function call is replaced with the actual code in the function definition. This can result in code bloat if you inline large functions, but the compiler typically protects you from this by not granting an inline request if the function is too big. Are there any rules on when you should or shouldn't inline code? I don't know of any hard+fast rules, but a guideline is to only inline code if it is called often and it is relatively small. Setters and getters are commonly inlined. If it is in an especially performance intensive area of the code, inlining should be considered. Always remember you are trading execution speed for executable size with inlining. A: VC++ supports __forceinline and __declspec(noinline) directives if you think you know better than the compiler. Hint: you probably don't! A: Inline is a compiler hint and does not force the compiler to inline the code (at least in C++). So the short answer is it's compiler and probably context dependent what will happen in your example. Most good compilers would probably inline both especially due to the obvious optimization of a constant return from both functions. In general inline is not something you should worry about. It brings the performance benefit of not having to execute machine instructions to generate a stack frame and return control flow. But in all but the most specialized cases I would argue that is trivial. Inline is important in two cases. One if you are in a real-time environment and not responding fast enough. Two is if code profiling showed a significant bottleneck in a really tight loop (i.e. a subroutine called over and over) then inlining could help. Specific applications and architectures may also lead you to inlining as an optimization. A: I have found some C++ compilers (I.e. 
SunStudio) complain if the inline is omitted as in int AnotherFunc() { return 42; } So I would recommend always using the inline keyword in this case. And don't forget to remove the inline keyword if you later implement the method as an actual function call; this will really mess up linking (in SunStudio 11 and 12 and Borland C++ Builder). I would suggest making minimal use of inline code because when stepping through code with a debugger, it will 'step into' the inline code even when using the 'step over' command, which can be rather annoying. A: Note that outside of a class, inline does something more useful in the code: by forcing (well, sort of) the C++ compiler to generate the code inline at each call to the function, it prevents multiple definitions of the same symbol (the function signature) in different translation units. So if you inline a non-member function in a header file, and include that in multiple cpp files you don't have the linker yelling at you. If the function is too big for you to suggest inline-ing, do it the C way: declare in header, define in cpp. This has little to do with whether the code is really inlined: it allows the style of implementation in header, as is common for short member functions. (I imagine the compiler will be smart if it needs a non-inline rendering of the function, as it is for template functions, but...) A: Also to add to what Greg said, when performing optimization (i.e. inline-ing) the compiler consults not only the keywords in the code but also other command-line arguments that specify how the compiler should optimize the code.
Inlining C++ code
Is there any difference in the following code: class Foo { inline int SomeFunc() { return 42; } int AnotherFunc() { return 42; } }; Will both functions get inlined? Does inline actually make any difference? Are there any rules on when you should or shouldn't inline code? I often use the AnotherFunc syntax (accessors for example) but I rarely specify inline directly.
[ "The inline keyword is essentially a hint to the compiler. Using inline doesn't guarantee that your function will be inlined, nor does omitting it guarantee that it won't. You are just letting the compiler know that it might be a good idea to try harder to inline that particular function.\n", "Both forms should be inlined in the exact same way. Inline is implicit for function bodies defined in a class definition.\n", "Sutter's Guru of the Week #33 answers some of your questions and more.\nhttp://www.gotw.ca/gotw/033.htm\n", "class Foo \n{\n inline int SomeFunc() { return 42; }\n int AnotherFunc() { return 42; }\n};\n\nIt is correct that both ways are guaranteed to compile the same. However, it is preferable to do neither of these ways. According to the C++ FAQ you should declare it normally inside the class definition, and then define it outside the class definition, inside the header, with the explicit inline keyword. As the FAQ describes, this is because you want to separate the declaration and definition for the readability of others (declaration is equivalent to \"what\" and definition \"how\").\n\nDoes inline actually make any difference?\n\nYes, if the compiler grants the inline request, it is vastly different. Think of inlined code as a macro. Everywhere it is called, the function call is replaced with the actual code in the function definition. This can result in code bloat if you inline large functions, but the compiler typically protects you from this by not granting an inline request if the function is too big.\n\nAre there any rules on when you should or shouldn't inline code?\n\nI don't know of any hard+fast rules, but a guideline is to only inline code if it is called often and it is relatively small. Setters and getters are commonly inlined. If it is in an especially performance intensive area of the code, inlining should be considered. Always remember you are trading execution speed for executable size with inlining.\n", "VC++ supports __forceinline and __declspec(noinline) directives if you think you know better than the compiler. Hint: you probably don't!\n", "Inline is a compiler hint and does not force the compiler to inline the code (at least in C++). So the short answer is it's compiler and probably context dependent what will happen in your example. Most good compilers would probably inline both especially due to the obvious optimization of a constant return from both functions.\nIn general inline is not something you should worry about. It brings the performance benefit of not having to execute machine instructions to generate a stack frame and return control flow. But in all but the most specialized cases I would argue that is trivial.\nInline is important in two cases. One if you are in a real-time environment and not responding fast enough. Two is if code profiling showed a significant bottleneck in a really tight loop (i.e. a subroutine called over and over) then inlining could help.\nSpecific applications and architectures may also lead you to inlining as an optimization.\n", "I have found some C++ compilers (I.e. SunStudio) complain if the inline is omitted as in \nint AnotherFunc() { return 42; }\n\nSo I would recommend always using the inline keyword in this case. 
And don't forget to remove the inline keyword if you later implement the method as an actual function call; this will really mess up linking (in SunStudio 11 and 12 and Borland C++ Builder).\nI would suggest making minimal use of inline code because when stepping through code with a debugger, it will 'step into' the inline code even when using the 'step over' command, which can be rather annoying.\n", "Note that outside of a class, inline does something more useful in the code: by forcing (well, sort of) the C++ compiler to generate the code inline at each call to the function, it prevents multiple definitions of the same symbol (the function signature) in different translation units. \nSo if you inline a non-member function in a header file, and include that in multiple cpp files you don't have the linker yelling at you. If the function is too big for you to suggest inline-ing, do it the C way: declare in header, define in cpp.\nThis has little to do with whether the code is really inlined: it allows the style of implementation in header, as is common for short member functions.\n(I imagine the compiler will be smart if it needs a non-inline rendering of the function, as it is for template functions, but...)\n", "Also to add to what Greg said, when performing optimization (i.e. inline-ing) the compiler consults not only the keywords in the code but also other command-line arguments that specify how the compiler should optimize the code.\n" ]
[ 26, 16, 6, 3, 2, 2, 1, 0, 0 ]
[]
[]
[ "c++", "inline_functions" ]
stackoverflow_0000086561_c++_inline_functions.txt
Q: How does has_one :through work? I have three models: class ReleaseItem < ActiveRecord::Base has_many :pack_release_items has_one :pack, :through => :pack_release_items end class Pack < ActiveRecord::Base has_many :pack_release_items has_many :release_items, :through=>:pack_release_items end class PackReleaseItem < ActiveRecord::Base belongs_to :pack belongs_to :release_item end The problem is that, during execution, if I add a pack to a release_item it is not aware that the pack is a pack. For instance: Loading development environment (Rails 2.1.0) >> item = ReleaseItem.new(:filename=>'MAESTRO.TXT') => #<ReleaseItem id: nil, filename: "MAESTRO.TXT", created_by: nil, title: nil, sauce_author: nil, sauce_group: nil, sauce_comment: nil, filedate: nil, filesize: nil, created_at: nil, updated_at: nil, content: nil> >> pack = Pack.new(:filename=>'legion01.zip', :year=>1998) => #<Pack id: nil, filename: "legion01.zip", created_by: nil, filesize: nil, items: nil, year: 1998, month: nil, filedate: nil, created_at: nil, updated_at: nil> >> item.pack = pack => #<Pack id: nil, filename: "legion01.zip", created_by: nil, filesize: nil, items: nil, year: 1998, month: nil, filedate: nil, created_at: nil, updated_at: nil> >> item.pack.filename NoMethodError: undefined method `filename' for #<Class:0x2196318> from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/base.rb:1667:in `method_missing_without_paginate' from /usr/local/lib/ruby/gems/1.8/gems/mislav-will_paginate-2.3.3/lib/will_paginate/finder.rb:164:in `method_missing' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_collection.rb:285:in `send' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_collection.rb:285:in `method_missing_without_paginate' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/base.rb:1852:in `with_scope' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_proxy.rb:168:in `send' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_proxy.rb:168:in `with_scope' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_collection.rb:281:in `method_missing_without_paginate' from /usr/local/lib/ruby/gems/1.8/gems/mislav-will_paginate-2.3.3/lib/will_paginate/finder.rb:164:in `method_missing' from (irb):5 >> It seems that I should have access to item.pack, but it is unaware that the pack is a Pack item. A: It appears that your usage of has_one :through is correct. The problem you're seeing has to do with saving objects. For an association to work, the object that is being referenced needs to have an id to populate the model_id field for the object. In this case, PackReleaseItems have a pack_id and a release_item_id field that need to be filled for the association to work correctly. Try saving before accessing objects through an association. A: Your problem is in how you're associating the ReleaseItem and the Pack. has_many :through and has_one :through both work through an object that also acts as a join table, in this case PackReleaseItem. 
Since this is not just a join table (if it were, you should just use has_many without :through), properly creating the association requires creating the join object, like so: >> item.pack_release_items.create :pack => pack What you're doing with your item.pack = pack call is simply associating the objects in memory. When you go to look it up again, it looks "through" the pack_release_items, which is empty. A: You want to save or create (instead of new) the item and pack. Otherwise, the database has not assigned id's for the association.
How does has_one :through work?
I have three models: class ReleaseItem < ActiveRecord::Base has_many :pack_release_items has_one :pack, :through => :pack_release_items end class Pack < ActiveRecord::Base has_many :pack_release_items has_many :release_items, :through=>:pack_release_items end class PackReleaseItem < ActiveRecord::Base belongs_to :pack belongs_to :release_item end The problem is that, during execution, if I add a pack to a release_item it is not aware that the pack is a pack. For instance: Loading development environment (Rails 2.1.0) >> item = ReleaseItem.new(:filename=>'MAESTRO.TXT') => #<ReleaseItem id: nil, filename: "MAESTRO.TXT", created_by: nil, title: nil, sauce_author: nil, sauce_group: nil, sauce_comment: nil, filedate: nil, filesize: nil, created_at: nil, updated_at: nil, content: nil> >> pack = Pack.new(:filename=>'legion01.zip', :year=>1998) => #<Pack id: nil, filename: "legion01.zip", created_by: nil, filesize: nil, items: nil, year: 1998, month: nil, filedate: nil, created_at: nil, updated_at: nil> >> item.pack = pack => #<Pack id: nil, filename: "legion01.zip", created_by: nil, filesize: nil, items: nil, year: 1998, month: nil, filedate: nil, created_at: nil, updated_at: nil> >> item.pack.filename NoMethodError: undefined method `filename' for #<Class:0x2196318> from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/base.rb:1667:in `method_missing_without_paginate' from /usr/local/lib/ruby/gems/1.8/gems/mislav-will_paginate-2.3.3/lib/will_paginate/finder.rb:164:in `method_missing' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_collection.rb:285:in `send' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_collection.rb:285:in `method_missing_without_paginate' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/base.rb:1852:in `with_scope' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_proxy.rb:168:in `send' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_proxy.rb:168:in `with_scope' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/associations/association_collection.rb:281:in `method_missing_without_paginate' from /usr/local/lib/ruby/gems/1.8/gems/mislav-will_paginate-2.3.3/lib/will_paginate/finder.rb:164:in `method_missing' from (irb):5 >> It seems that I should have access to item.pack, but it is unaware that the pack is a Pack item.
[ "It appears that your usage of has_one :through is correct. The problem you're seeing has to do with saving objects. For an association to work, the object that is being referenced needs to have an id to populate the model_id field for the object. In this case, PackReleaseItems have a pack_id and a release_item_id field that need to be filled for the association to work correctly. Try saving before accessing objects through an association.\n", "Your problem is in how you're associating the ReleaseItem and the Pack.\nhas_many :through and has_one :through both work through an object that also acts as a join table, in this case PackReleaseItem. Since this is not just a join table (if it were, you should just use has_many without :through), properly creating the association requires creating the join object, like so:\n>> item.pack_release_items.create :pack => pack\n\nWhat you're doing with your item.pack = pack call is simply associating the objects in memory. When you go to look it up again, it looks \"through\" the pack_release_items, which is empty.\n", "You want to save or create (instead of new) the item and pack. Otherwise, the database has not assigned id's for the association.\n" ]
[ 6, 2, 1 ]
[]
[]
[ "activerecord", "ruby", "ruby_on_rails" ]
stackoverflow_0000089908_activerecord_ruby_ruby_on_rails.txt
Q: .NET Testing Naming Conventions What are the best conventions of naming testing-assemblies in .NET (or any other language or platform)? What I'm mainly split between are these options (please provide others!): Company.Website - the project Company.Website.Tests or Company.Website Company.WebsiteTests The problem with the first solution is that it looks like .Tests are a sub-namespace to the site, while they really are more parallel in my mind. What happens when a new sub-namespace comes into play, like Company.Website.Controls, where should I put the tests for that namespace, for instance? Maybe it should even be: Tests.Company.Website and Tests.Company.Website.Controls, and so on. A: I will go with * Company.Website - the project * Company.Website.Tests The short reason and answer is simple, testing and project are linked in code, therefore it should share namespace. If you want splitting of code and testing in a solution you have that option anyway. e.g. you can set up a solution with -Code Folder Company.Website -Tests Folder Company.Website.Tests A: I personally would go with Company.Tests.Website That way you have a common tests namespace and projects inside it, following the same structure as the actual project. A: I actually have an alternate parallel root. Tests.Company.Website It works nicely for disambiguating things when you have new sub namespaces. A: I'm a big fan of structuring the test namespace like this: Company.Tests.Website.xxx Company.Tests.Website.Controls Like you, I think of the tests as a parallel namespace structure to the main code and this provides you with that. It also has the advantage that, since the namespace still starts with your company name you shouldn't have any naming collisions with 3rd party libraries A: We follow an embedded approach: Company.Namespace.Test Company.Namespace.Data.Test This way the tests are close to the code that is being tested, without having to toggle back and forth between projects or hunt down references to ensure there is a test covering a particular method. We also don't have to maintain two separate, but identical, hierarchies. We can also test distinct parts of the code as we enhance and develop. Seems a little weird at first, but over the long term it has worked really well for us. A: I too prefer "Tests" prefixing the actual name of the assembly so that its easy to see all of my unit test assemblies listed alphabetically together when I mass-select them to pull into NUNit or whatever test harness you are using. So if Website were the name of my solution (and assemblies), I suggest - Tests.Website.dll to go along with the actual code assembly Website.Dll A: I usually name test projects Project-Tests for brevity in Solution Explorer, and I use Company.Namespace.Tests for namespaces. A: I prefer to go with: Company.Website.Tests I don't care about any sub-namespaces like Company.Website.Controls, all of the tests go into the same namespace: Company.Website.Tests. You don't want your test namespaces to HAVE to be in parrallel with the rest of your code because it just makes refactoring namespaces take twice as long. A: I prefer Company.Website.Spec and usually have one test project per solution A: With MVC starting to become a reality in the .net web development world, I would start thinking along those lines. 
Remember that M, V and C are distinct components, so: Company.Namespace.Website Company.Namespace.Website.Core Company.Namespace.Website.Core.Tests Company.Namespace.Website.Model Company.Namespace.Website.Model.Tests Website is your lightweight view. Core contains controllers, helpers, the view interfaces, etc. Core.Tests are your tests for said Core. Model is for your data model. The cool thing here is that your model tests can automate your database-specific tests. This may be overkill for some people, but I find that it allows me to separate concerns fairly easily.
.NET Testing Naming Conventions
What are the best conventions of naming testing-assemblies in .NET (or any other language or platform)? What I'm mainly split between are these options (please provide others!): Company.Website - the project Company.Website.Tests or Company.Website Company.WebsiteTests The problem with the first solution is that it looks like .Tests are a sub-namespace to the site, while they really are more parallel in my mind. What happens when a new sub-namespace comes into play, like Company.Website.Controls, where should I put the tests for that namespace, for instance? Maybe it should even be: Tests.Company.Website and Tests.Company.Website.Controls, and so on.
[ "I will go with \n* Company.Website - the project\n* Company.Website.Tests\n\nThe short reason and answer is simple, testing and project are linked in code, therefore it should share namespace.\nIf you want splitting of code and testing in a solution you have that option anyway. e.g. you can set up a solution with \n-Code Folder\n\nCompany.Website\n\n-Tests Folder\n\nCompany.Website.Tests\n\n", "I personally would go with\nCompany.Tests.Website\nThat way you have a common tests namespace and projects inside it, following the same structure as the actual project.\n", "I actually have an alternate parallel root.\nTests.Company.Website\nIt works nicely for disambiguating things when you have new sub namespaces.\n", "I'm a big fan of structuring the test namespace like this:\nCompany.Tests.Website.xxx\nCompany.Tests.Website.Controls\nLike you, I think of the tests as a parallel namespace structure to the main code and this provides you with that. It also has the advantage that, since the namespace still starts with your company name you shouldn't have any naming collisions with 3rd party libraries\n", "We follow an embedded approach:\nCompany.Namespace.Test\nCompany.Namespace.Data.Test\n\nThis way the tests are close to the code that is being tested, without having to toggle back and forth between projects or hunt down references to ensure there is a test covering a particular method. We also don't have to maintain two separate, but identical, hierarchies.\nWe can also test distinct parts of the code as we enhance and develop.\nSeems a little weird at first, but over the long term it has worked really well for us.\n", "I too prefer \"Tests\" prefixing the actual name of the assembly so that its easy to see all of my unit test assemblies listed alphabetically together when I mass-select them to pull into NUNit or whatever test harness you are using.\nSo if Website were the name of my solution (and assemblies), I suggest -\nTests.Website.dll to go along with the actual code assembly Website.Dll\n", "I usually name test projects Project-Tests for brevity in Solution Explorer, and I use Company.Namespace.Tests for namespaces.\n", "I prefer to go with:\nCompany.Website.Tests\nI don't care about any sub-namespaces like Company.Website.Controls, all of the tests go into the same namespace: Company.Website.Tests. You don't want your test namespaces to HAVE to be in parrallel with the rest of your code because it just makes refactoring namespaces take twice as long.\n", "I prefer Company.Website.Spec and usually have one test project per solution\n", "With MVC starting to become a reality in the .net web development world, I would start thinking along those lines. Remember that M, V and C are distinct components, so:\n\nCompany.Namespace.Website\nCompany.Namespace.Website.Core\nCompany.Namspance.Website.Core.Tests\nCompany.Namespace.Website.Model\nCompany.Namespace.Website.Model.Tests\n\nWebsite is your lightweight view. \nCore contains controllers, helpers, the view interfaces, etc. Core.Tests are your tests for said Core.\nModel is for your data model. The cool thing here is that your model tests can automate your database specific tests.\nThis may be overkill for some people, but I find that it allows me to separate concerns fairly easily.\n" ]
[ 30, 13, 7, 5, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ ".net", "conventions", "naming", "testing", "unit_testing" ]
stackoverflow_0000084717_.net_conventions_naming_testing_unit_testing.txt
Q: Enforce unique rows in MySQL I have a table in MySQL that has 3 fields and I want to enforce uniqueness among two of the fields. Here is the table DDL: CREATE TABLE `CLIENT_NAMES` ( `ID` int(11) NOT NULL auto_increment, `CLIENT_NAME` varchar(500) NOT NULL, `OWNER_ID` int(11) NOT NULL, PRIMARY KEY (`ID`), ) ENGINE=InnoDB DEFAULT CHARSET=utf8; The ID field is a surrogate key (this table is being loaded with ETL). The CLIENT_NAME is a field that contains names of clients The OWNER_ID is an id indicates a clients owner. I thought I could enforce this with a unique index on CLIENT_NAME and OWNER_ID, ALTER TABLE `DW`.`CLIENT_NAMES` ADD UNIQUE INDEX enforce_unique_idx(`CLIENT_NAME`, `OWNER_ID`); but MySQL gives me an error: Error executing SQL commands to update table. Specified key was too long; max key length is 765 bytes (error 1071) Anyone else have any ideas? A: MySQL cannot enforce uniqueness on keys that are longer than 765 bytes (and apparently 500 UTF8 characters can surpass this limit). Does CLIENT_NAME really need to be 500 characters long? Seems a bit excessive. Add a new (shorter) column that is hash(CLIENT_NAME). Get MySQL to enforce uniqueness on that hash instead. A: Have you looked at CONSTRAINT ... UNIQUE? A: Something seems a bit odd about this table; I would actually think about refactoring it. What do ID and OWNER_ID refer to, and what is the relationship between them? Would it make sense to have CREATE TABLE `CLIENTS` ( `ID` int(11) NOT NULL auto_increment, `CLIENT_NAME` varchar(500) NOT NULL, # other client fields - address, phone, whatever PRIMARY KEY (`ID`), ) ENGINE=InnoDB DEFAULT CHARSET=utf8; CREATE TABLE `CLIENTS_OWNERS` ( `CLIENT_ID` int(11) NOT NULL, `OWNER_ID` int(11) NOT NULL, PRIMARY KEY (`CLIENT_ID`,`OWNER_ID`), ) ENGINE=InnoDB DEFAULT CHARSET=utf8; I would really avoid adding a unique key like that on a 500 character string. It's much more efficient to enforce uniqueness on two ints, plus an id in a table should really refer to something that needs an id; in your version, the ID field seems to identify just the client/owner relationship, which really doesn't need a separate id, since it's just a mapping.
Enforce unique rows in MySQL
I have a table in MySQL that has 3 fields and I want to enforce uniqueness among two of the fields. Here is the table DDL: CREATE TABLE `CLIENT_NAMES` ( `ID` int(11) NOT NULL auto_increment, `CLIENT_NAME` varchar(500) NOT NULL, `OWNER_ID` int(11) NOT NULL, PRIMARY KEY (`ID`), ) ENGINE=InnoDB DEFAULT CHARSET=utf8; The ID field is a surrogate key (this table is being loaded with ETL). The CLIENT_NAME is a field that contains names of clients The OWNER_ID is an id indicates a clients owner. I thought I could enforce this with a unique index on CLIENT_NAME and OWNER_ID, ALTER TABLE `DW`.`CLIENT_NAMES` ADD UNIQUE INDEX enforce_unique_idx(`CLIENT_NAME`, `OWNER_ID`); but MySQL gives me an error: Error executing SQL commands to update table. Specified key was too long; max key length is 765 bytes (error 1071) Anyone else have any ideas?
[ "MySQL cannot enforce uniqueness on keys that are longer than 765 bytes (and apparently 500 UTF8 characters can surpass this limit).\n\nDoes CLIENT_NAME really need to be 500 characters long? Seems a bit excessive.\nAdd a new (shorter) column that is hash(CLIENT_NAME). Get MySQL to enforce uniqueness on that hash instead.\n\n", "Have you looked at CONSTRAINT ... UNIQUE?\n", "Something seems a bit odd about this table; I would actually think about refactoring it. What do ID and OWNER_ID refer to, and what is the relationship between them? \nWould it make sense to have \nCREATE TABLE `CLIENTS` (\n`ID` int(11) NOT NULL auto_increment,\n`CLIENT_NAME` varchar(500) NOT NULL,\n# other client fields - address, phone, whatever\nPRIMARY KEY (`ID`),\n) ENGINE=InnoDB DEFAULT CHARSET=utf8;\n\nCREATE TABLE `CLIENTS_OWNERS` (\n`CLIENT_ID` int(11) NOT NULL,\n`OWNER_ID` int(11) NOT NULL,\nPRIMARY KEY (`CLIENT_ID`,`OWNER_ID`),\n) ENGINE=InnoDB DEFAULT CHARSET=utf8;\n\nI would really avoid adding a unique key like that on a 500 character string. It's much more efficient to enforce uniqueness on two ints, plus an id in a table should really refer to something that needs an id; in your version, the ID field seems to identify just the client/owner relationship, which really doesn't need a separate id, since it's just a mapping.\n" ]
[ 9, 0, 0 ]
[ "Here. For the UTF8 charset, MySQL may use up to 3 bytes per character. CLIENT_NAME is 3 x 500 = 1500 bytes. Shorten CLIENT_NAME to 250.\nlater: +1 to creating a hash of the name and using that as the key.\n" ]
[ -1 ]
[ "indexing", "mysql", "mysql_error_1071" ]
stackoverflow_0000090092_indexing_mysql_mysql_error_1071.txt
Q: How do I create a sql dependency on a table in sql server 2000 and asp.net 2.0? I need to create sql dependency on a table in sql server 2000 in my asp.net 2.0 pages. What are the required actions and what is the best way? Thanks.. A: Microsoft has a great tutorial on this which basically explains that you need to enable it using the aspnet_regsql.exe utility or the SqlCacheDependencyAdmin class
How do I create a sql dependency on a table in sql server 2000 and asp.net 2.0?
I need to create sql dependency on a table in sql server 2000 in my asp.net 2.0 pages. What are the required actions and what is the best way? Thanks..
[ "Microsoft has a great tutorial on this which basically explains that you need to enable it using the aspnet_regsql.exe utility or the SqlCacheDependencyAdmin class\n" ]
[ 1 ]
[]
[]
[ "asp.net", "sql_server", "sqldependency" ]
stackoverflow_0000090172_asp.net_sql_server_sqldependency.txt
Q: How can I programmatically determine the capabilities of an optical drive in Win32 I'm trying to create a deployment tool that will install software based on the hardware found on a system. I'd like the tool to be able to determine if the optical drive is a writer (to determine if burning software should be installed) or can read DVDs (to determine if a player should be installed). I tried using the following code strComputer = "." Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2") Set colItems = objWMIService.ExecQuery("Select * from Win32_CDROMDrive") For Each objItem in colItems Wscript.Echo "MediaType: " & objItem.MediaType Next but it always responds with CD-ROM A: You can use WMI to enumerate what Windows knows about a drive; get the Win32_DiskDrive instance from which you should be able to grab the Win32_PhysicalMedia information for the physical media the drive uses; the MediaType property to get what media it uses (CD, CDRW, DVD, DVDRW, etc, etc). A: Platform SDK - IDiscMaster::EnumDiscRecorders (XP / 2003) DirectX and DirectShow have extensive interfaces to work with DVD Else enumerate disk drives and try firing a DeviceIoControl code that supports extracting the type info. Good luck
How can I programmatically determine the capabilities of an optical drive in Win32
I'm trying to create a deployment tool that will install software based on the hardware found on a system. I'd like the tool to be able to determine if the optical drive is a writer (to determine if burning software should be installed) or can read DVDs (to determine if a player should be installed). I tried using the following code strComputer = "." Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2") Set colItems = objWMIService.ExecQuery("Select * from Win32_CDROMDrive") For Each objItem in colItems Wscript.Echo "MediaType: " & objItem.MediaType Next but it always responds with CD-ROM
[ "You can use WMI to enumerate what Windows knows about a drive; get the Win32_DiskDrive instance from which you should be able to grab the the Win32_PhysicalMedia information for the physical media the drive uses; the MediaType property to get what media it uses (CD, CDRW, DVD, DVDRW, etc, etc).\n", "Platform SDK - IDiscMaster::EnumDiscRecorders (XP / 2003)\nDirectX and DirectShow has extensive interfaces to work with DVD\nElse enumerate disk drives and try firing a DeviceIonControlCode that supports extarcting the type info. \nGood luck\n" ]
[ 1, 0 ]
[]
[]
[ "dvd", "optical_drive", "winapi" ]
stackoverflow_0000090029_dvd_optical_drive_winapi.txt
Q: Control which columns become primary keys with Microsoft Access ODBC link to Oracle When you create a Microsoft Access 2003 link to an Oracle table using Oracle's ODBC driver, you are sometimes asked to state which columns are the primary key(s). I would like to know how to change that initial assignment, or even how to get Access/ODBC to forget the assignment. In my limited testing I wonder if the assignment isn't cached by the ODBC driver itself. The columns I initially chose are not correct. Update: I never did get a full answer on this one, deleting the links then restoring them didn't work. I think it's an obscure bug. I've moved on and haven't had to worry about this oddity since. A: You must delete the link to the table and create a new one. When a table is linked all the connection info about the table's path, structure (including primary key), permissions, passwords and statistics are stored in the Access db. If any of those items change in the linked table, refreshing links won't automatically update it on the Access side because Access continues to use the previously stored info. You must delete or drop the linked table and recreate the link, storing the current connection information. Don't know for sure if this next bit also applies to odbc linked tables, but I suspect it does. For Jet tables, it's a good idea to periodically delete all links and recreate them to improve query performance, because if a linked table's statistics are made on a table with few records, once that table is filled with many more records, new statistics will tell Jet's optimizer whether using indexes or a full table scan would be the better course of action when running a query. A: Is it not possible to delete the link and then relink?
Control which columns become primary keys with Microsoft Access ODBC link to Oracle
When you create a Microsoft Access 2003 link to an Oracle table using Oracle's ODBC driver, you are sometimes asked to state which columns are the primary key(s). I would like to know how to change that initial assignment, or even how to get Access/ODBC to forget the assignment. In my limited testing I wonder if the assignment isn't cached by the ODBC driver itself. The columns I initially chose are not correct. Update: I never did get a full answer on this one, deleting the links then restoring them didn't work. I think it's an obscure bug. I've moved on and haven't had to worry about this oddity since.
[ "You must delete the link to the table and create a new one. When a table is linked all the connection info about the table's path, structure (including primary key), permissions, passwords and statistics are stored in the Access db. If any of those items change in the linked table, refreshing links won't automatically update it on the Access side because Access continues to use the previously stored info. You must delete or drop the linked table and recreate the link, storing the current connection information.\nDon't know for sure if this next bit also applies to odbc linked tables, but I suspect it does. For Jet tables, it's a good idea to periodically delete all links and recreate them to improve query performance, because if a linked table's statistics are made on a table with few records, once that table is filled with many more records, new statistics will tell Jet's optimizer whether using indexes or a full table scan would be the better course of action when running a query.\n", "It is not possible to delete the link and then relink?\n" ]
[ 2, 1 ]
[]
[]
[ "ms_access", "odbc", "oracle" ]
stackoverflow_0000087883_ms_access_odbc_oracle.txt
Q: How do you restart Rails under Mongrel, without stopping and starting Mongrel Is there a way to restart the Rails app (e.g. when you've changed a plugin/config file) while Mongrel is running. Or alternatively quickly restart Mongrel. Mongrel gives these hints that you can but how do you do it? ** Signals ready. TERM => stop. USR2 => restart. INT => stop (no restart). ** Rails signals registered. HUP => reload (without restart). It might not work well. A: You can add the -c option if the config for your app's cluster is elsewhere: mongrel_rails cluster::restart -c /path/to/config A: 1st discover the current mongrel pid path with something like: >ps axf | fgrep mongrel you will see a process line like: ruby /usr/lib64/ruby/gems/1.8/gems/swiftiply-0.6.1.1/bin/mongrel_rails start -p 3000 -a 0.0.0.0 -e development -P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid -d Take the '-P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid' part and use it like this: >mongrel_rails restart -P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid Sending USR2 to Mongrel at PID 18481...Done. I use this to recover from the dreaded "Broken pipe" to MySQL problem. A: in your rails home directory mongrel_rails cluster::restart A: For example, killall -USR2 mongrel_rails
How do you restart Rails under Mongrel, without stopping and starting Mongrel
Is there a way to restart the Rails app (e.g. when you've changed a plugin/config file) while Mongrel is running? Or, alternatively, quickly restart Mongrel? Mongrel gives these hints that you can but how do you do it? ** Signals ready. TERM => stop. USR2 => restart. INT => stop (no restart). ** Rails signals registered. HUP => reload (without restart). It might not work well.
[ "You can add the -c option if the config for your app's cluster is elsewhere:\nmongrel_rails cluster::restart -c /path/to/config\n\n", "First, discover the current Mongrel PID path with something like:\n\n>ps axf | fgrep mongrel\n\nyou will see a process line like:\nruby /usr/lib64/ruby/gems/1.8/gems/swiftiply-0.6.1.1/bin/mongrel_rails start -p 3000 -a 0.0.0.0 -e development -P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid -d\nTake the '-P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid' part and use it like this:\n\n>mongrel_rails restart -P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid\n\nSending USR2 to Mongrel at PID 18481...Done.\nI use this to recover from the dreaded \"Broken pipe\" to MySQL problem.\n", "In your Rails home directory\nmongrel_rails cluster::restart\n\n", "For example,\nkillall -USR2 mongrel_rails\n\n" ]
[ 5, 5, 4, 3 ]
[]
[]
[ "mongrel", "ruby_on_rails" ]
stackoverflow_0000074218_mongrel_ruby_on_rails.txt
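For quick reference, the PID-file dance described above collapses to a one-liner when run from the application root. This assumes the default Mongrel pid path; adjust if your -P flag points elsewhere:

kill -USR2 `cat tmp/pids/mongrel.pid`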
Q: Create a calendar event on Palm OS I've been googling for a while now, and can't figure out how to create an event in the calendar on a newer Palm OS device. Any ideas on how to do this? I'm guessing that I'll be creating a record in the calendar database, but the format of the data in that record, and which database to put it in, I don't know. A: In Palm's later devices, they moved to an extended format for the PIM applications like Contacts and Calendar. This was done to allow better mapping between the device's databases and those used by Microsoft Outlook, but it meant that the format changed from the traditional format in the original PIMs. Palm has a PIM Access SDK available from the Palm Developer Network site that includes code for accessing these database formats. The devices also support the original database using a shadow version of the DB and system libraries that translate changes back and forth to the shadows. However, the shadow DBs don't have all the data that the extended DBs have, and the conversion isn't always triggered. A: Ok, literally ten seconds after I posted this question, I got an email from the Palm Developer Network that led me right where I needed to go. Frustrating. It appears that you'll need the PIM SDK, which is available through the Palm Developer Network here.
Create a calendar event on Palm OS
I've been googling for a while now, and can't figure out how to create an event in the calendar on a newer Palm OS device. Any ideas on how to do this? I'm guessing that I'll be creating a record in the calendar database, but the format of the data in that record, and which database to put it in, I don't know.
[ "In Palm's later devices, they moved to an extended format for the PIM applications like Contacts and Calendar. This was done to allow better mapping between the device's databases and those used by Microsoft Outlook, but it meant that the format changed from the traditional format in the original PIMs.\nPalm has a PIM Access SDK available from the Palm Developer Network site that includes code for accessing these database formats. The devices also support the original database using a shadow version of the DB and system libraries that translate changes back and forth to the shadows. However, the shadow DBs don't have all the data that the extended DBs have, and the conversion isn't always triggered.\n", "Ok, literally ten seconds after I posted this question, I got an email from the Palm Developer Network that led me right where I needed to go. Frustrating. It appears that you'll need the PIM SDK, which is available through the Palm Developer Network here. \n" ]
[ 4, 1 ]
[]
[]
[ "garnet_os", "palm_os" ]
stackoverflow_0000078586_garnet_os_palm_os.txt
Q: How can I ban a whole company from my web site? For reasons I won't go into, I wish to ban an entire company from accessing my web site. Checking the remote hostname in PHP using gethostbyaddr() works, but this slows down the page load too much. Large organizations (eg. hp.com or microsoft.com) often have blocks of IP addresses. Is there any way I can get the full list, or am I stuck with the slow reverse-DNS lookup? If so, can I speed it up? Edit: Okay, now I know I can use the .htaccess file to ban a range. Now, how can I figure out what that range should be for a given organization? A: How about an .htaccess: Deny from x.x.x.x if you need to deny a range say: 192.168.0.x then you would use Deny from 192.168.0 and the same applies for hostnames: Deny from sub.domain.tld or if you want a PHP solution $ips = array('1.1.1.1', '2.2.2.2', '3.3.3.3'); if(in_array($_SERVER['REMOTE_ADDR'], $ips)){die();} For more info on the htaccess method see this page. Now to determine the range is going to be hard, most companies (unless they are big corporate) are going to have a dynamic IP just like you and me. This is a problem I have had to deal with before and the best thing is either to ban the hostname, or the entire range, for example if they are on 192.168.0.123 then ban 192.168.0.123, unfortunately you are going to get a few innocent people with either method. A: If you're practicing safe webhosting, then you have a firewall. Use it. Large companies have blocks of IP addresses, but even smaller companies rarely change their IP. So there's an easy way to do this without reducing your performance: Every month do a reverse lookup on all the IPs in your log and then put all the IPs used by that company in your firewall as deny. After a while you'll begin to see whether they have dynamic addresses or not. If they do, then you may have to do reverse lookups for each connection attempt, but unless they are a small company you shouldn't have to worry about it. A: Continue to use gethostbyaddr(), but behind a cache. You should only have to resolve it once per IP address, and then it would not be a significant performance issue. If you want, prime the cache from your server logs so returning users won't even hit the one-time slowdown. A: If your goal in doing this is to make it slightly inconvenient for people from a company to access your site, follow the advice above. But you won't be able to completely ensure you're blocking every access because they could always be going through a proxy. And if it's accessible to the rest of the public, you'll have to worry about archive.org, search engine caches, etc. Probably not the answer you're looking for, but it's accurate. A: Take a look at .htaccess if you're using apache: .htaccess tutorial A: First search for the company on whois.net. If you know they are just one domain, do a whois lookup. Otherwise, search for domains they own by keyword. You can find out the main IP ranges assigned to the company through whois queries, and then build your deny rule(s) accordingly. A: I know WikiScanner lets you search for a company or other organization, and then lists the IP address ranges belonging to them. Just as an example, here's all the IP addresses belonging to Google, at least according to WikiScanner. According to HowStuffWorks, they use something called "IP2Location". A: Do you have access to the actual server config? If so depending on the server you could do it in the configuration. See this thread for some information that may be helpful.
A: http://en.wikipedia.org/wiki/Rwhois telnet rwhois.arin.net 4321 This used to work. A: The load shouldn't be put on the webserver, you should put it on the firewall. A: Note that using the techniques above it will never be possible to completely ban the specific company from accessing your website. It will still be possible for them to use proxy servers or look at your site from home. If you absolutely want to control who has access, you should only allow authenticated and authorized users to access your site.
How can I ban a whole company from my web site?
For reasons I won't go into, I wish to ban an entire company from accessing my web site. Checking the remote hostname in PHP using gethostbyaddr() works, but this slows down the page load too much. Large organizations (eg. hp.com or microsoft.com) often have blocks of IP addresses. Is there any way I can get the full list, or am I stuck with the slow reverse-DNS lookup? If so, can I speed it up? Edit: Okay, now I know I can use the .htaccess file to ban a range. Now, how can I figure out what that range should be for a given organization?
[ "How about an .htaccess:\nDeny from x.x.x.x\n\nif you need to deny a range say: 192.168.0.x then you would use\nDeny from 192.168.0\n\nand the same applies for hostnames:\nDeny from sub.domain.tld\n\nor if you want a PHP solution\n$ips = array('1.1.1.1', '2.2.2.2', '3.3.3.3');\nif(in_array($_SERVER['REMOTE_ADDR'], $ips)){die();}\n\nFor more info on the htaccess method see this page.\nNow to determine the range is going to be hard, most companies (unless they are big corporate) are going to have a dynamic IP just like you and me.\nThis is a problem I have had to deal with before and the best thing is either to ban the hostname, or the entire range, for example if they are on 192.168.0.123 then ban 192.168.0.123, unfortunately you are going to get a few innocent people with either method.\n", "If you're practicing safe webhosting, then you have a firewall. Use it.\nLarge companies have blocks of IP addresses, but even smaller companies rarely change their IP. So there's an easy way to do this without reducing your performance:\nEvery month do a reverse lookup on all the IPs in your log and then put all the IPs used by that company in your firewall as deny.\nAfter a while you'll begin to see whether they have dynamic addresses or not. If they do, then you may have to do reverse lookups for each connection attempt, but unless they are a small company you shouldn't have to worry about it.\n", "Continue to use gethostbyaddr(), but behind a cache. You should only have to resolve it once per IP address, and then it would not be a significant performance issue. If you want, prime the cache from your server logs so returning users won't even hit the one-time slowdown.\n", "If your goal in doing this is to make it slightly inconvenient for people from a company to access your site, follow the advice above. But you won't be able to completely ensure you're blocking every access because they could always be going through a proxy. And if it's accessible to the rest of the public, you'll have to worry about archive.org, search engine caches, etc.\nProbably not the answer you're looking for, but it's accurate.\n", "Take a look at .htaccess if you're using apache: .htaccess tutorial\n", "First search for the company on whois.net. If you know they are just one domain, do a whois lookup. Otherwise, search for domains they own by keyword.\nYou can find out the main IP ranges assigned to the company through whois queries, and then build your deny rule(s) accordingly. \n", "I know WikiScanner lets you search for a company or other organization, and then lists the IP address ranges belonging to them. Just as an example, here's all the IP addresses belonging to Google, at least according to WikiScanner.\nAccording to HowStuffWorks, they use something called \"IP2Location\".\n", "Do you have access to the actual server config? If so depending on the server you could do it in the configuration.\nSee this thread for some information that may be helpful.\n", "http://en.wikipedia.org/wiki/Rwhois\ntelnet rwhois.arin.net 4321\nThis used to work.\n", "The load shouldn't be put on the webserver, you should put it on the firewall. \n", "Note that using the techniques above it will never be possible to completely ban the specific company from accessing your website. It will still be possible for them to use proxy servers or look at your site from home.\nIf you absolutely want to control who has access, you should only allow authenticated and authorized users to access your site. \n" ]
[ 11, 4, 2, 2, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ ".net", "apache", "php" ]
stackoverflow_0000089480_.net_apache_php.txt
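The cached gethostbyaddr() idea from the third answer, as a hedged PHP sketch. The banned domain suffix and cache file path are invented placeholders; persisting the cache (a flat file here, something like APC or memcached in practice) is what actually buys back the page-load time:

<?php
// Reverse-resolve each client IP at most once, then remember the result.
function is_banned_host($ip, $suffix = 'example-corp.com')
{
    $cacheFile = '/tmp/rdns_cache.php';
    $cache = file_exists($cacheFile)
        ? unserialize(file_get_contents($cacheFile)) : array();
    if (!array_key_exists($ip, $cache)) {
        $cache[$ip] = gethostbyaddr($ip);   // the slow part, done once per IP
        file_put_contents($cacheFile, serialize($cache));
    }
    $host = $cache[$ip];
    // On failure gethostbyaddr() returns the bare IP, which never matches.
    return $host !== false && substr($host, -strlen($suffix)) === $suffix;
}

if (is_banned_host($_SERVER['REMOTE_ADDR'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
?>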
Q: How to make my .NET app support different languages The application I'm writing is almost complete and I'd like people who speak different languages to use it. I'm not sure where to start, what's the difference between globalisation and culture in regards to programming? How does one take uncommon phrases such as "this application was built to do this and that" (as opposed to File, Open, Save, etc.) and turn them into, say, Spanish? Many thanks :-) A: Microsoft already has a very good tutorial A: You have different things to do to have a "globalized" application. 1) Translate every label in your forms and controls in your application You need to set the property "Localizable" to true on every form and control. This property enables the creation of resource files in each language and region. Now, with the property "Language", you can select which language you want to support. When you select a language in the combo box list, your form (or control) will be automatically switched to this language. Now, it is your job to translate every word in the control. As soon as you do a modification, Visual Studio will create a resource file for the specific language. (For example, MyForm.fr-FR.resx for French-France). 2) Import every hardcoded string in your code into a resx file Create a resource file (personally, I use StringTable.resx) and add every string to translate in this file. After that, create a resource file for every language that you want to support and translate the strings in each file. For example, if you want to support French, you create StringTable.fr.resx or StringTable.fr-FR.resx for French-France. With the ResourceManager class, you can load each string. Note: If you are using Visual Studio 2005 or 2008, you already have a resource file created by default. 3) You need to elaborate your forms and controls wisely Guidelines from Microsoft: Microsoft Guidelines 4) Dealing with Date and Numbers If your application creates data files which can be sent to other users in other regions, you need to think about it when you save your data in the file. So, always store your datetime in UTC and convert to local time only when you load the information. The same thing applies to decimal numbers, especially if they are stored in text. When you compile your application, Visual Studio will create a satellite file like MyApplication.fr.dll in a subfolder fr. To load this dll, you need to switch the language of the current thread at the startup of your application. Here is the code: CultureInfo ci = new CultureInfo("fr"); Thread.CurrentThread.CurrentUICulture = ci; A: All your queries shall be answered in the book below. The initial chapters explain all the main concepts and terminology and fancy abbreviations like i18n. Didn't get time to read it till the end.. but good till the point I read. Recommended if you are serious about doing it the right way and have the time :) http://www.amazon.com/NET-Internationalization-Developers-Applications-Development/dp/0321341384 A: For a very simple system, create an interface which defines methods like GetSaveText(), etc. and allow assemblies like this to be plugged in to your application. A: This should be a pretty good solution for anywhere from 10-1000 strings: Have a resource file for each locale. I don't know .NET but I'm sure there is some common way to do this. Then, in your resource-fetching code, load the appropriate one based on your user's browser locale setting. Ask this code to fetch you the proper string for some key.
Example file contents, if I were to implement it from scratch: resources.en: save=Save close=Close ok=OK areYouSure=Are you sure? resources.es: save=I don't know how to say anything in Spanish, oops close=... ok=... areYouSure=...
How to make my .NET app support different languages
The application I'm writing is almost complete and I'd like people who speak different languages to use it. I'm not sure where to start, what's the difference between globalisation and culture in regards to programming? How does one take uncommon phrases such as "this application was built to do this and that" (as opposed to File, Open, Save, etc.) and turn them into, say, Spanish? Many thanks :-)
[ "Microsoft already has a very good tutorial\n", "You have different things to do to have a \"globalized\" application.\n1) Translate every label in your forms and controls in your application\nYou need to set the property \"Localizable\" to true on every form and control. This property enables the creation of resource files in each language and region. Now, with the property \"Language\", you can select which language you want to support. When you select a language in the combo box list, your form (or control) will be automatically switched to this language. Now, it is your job to translate every word in the control. As soon as you do a modification, Visual Studio will create a resource file for the specific language. (For example, MyForm.fr-FR.resx for French-France).\n2) Import every hardcoded string in your code into a resx file\nCreate a resource file (personally, I use StringTable.resx) and add every string to translate in this file. After that, create a resource file for every language that you want to support and translate the strings in each file. For example, if you want to support French, you create StringTable.fr.resx or StringTable.fr-FR.resx for French-France. With the ResourceManager class, you can load each string.\nNote: If you are using Visual Studio 2005 or 2008, you already have a resource file created by default.\n3) You need to elaborate your forms and controls wisely\nGuidelines from Microsoft: Microsoft Guidelines\n4) Dealing with Date and Numbers\nIf your application creates data files which can be sent to other users in other regions, you need to think about it when you save your data in the file. So, always store your datetime in UTC and convert to local time only when you load the information. The same thing applies to decimal numbers, especially if they are stored in text.\n\nWhen you compile your application, Visual Studio will create a satellite file like MyApplication.fr.dll in a subfolder fr. To load this dll, you need to switch the language of the current thread at the startup of your application. \nHere is the code:\nCultureInfo ci = new CultureInfo(\"fr\");\nThread.CurrentThread.CurrentUICulture = ci;\n\n", "All your queries shall be answered in the book below. The initial chapters explain all the main concepts and terminology and fancy abbreviations like i18n. Didn't get time to read it till the end.. but good till the point I read. Recommended if you are serious about doing it the right way and have the time :)\nhttp://www.amazon.com/NET-Internationalization-Developers-Applications-Development/dp/0321341384\n\n", "For a very simple system, create an interface which defines methods like GetSaveText(), etc. and allow assemblies like this to be plugged in to your application.\n", "This should be a pretty good solution for anywhere from 10-1000 strings:\nHave a resource file for each locale. I don't know .NET but I'm sure there is some common way to do this. Then, in your resource-fetching code, load the appropriate one based on your user's browser locale setting. Ask this code to fetch you the proper string for some key.\nExample file contents, if I were to implement it from scratch:\nresources.en:\nsave=Save\nclose=Close\nok=OK\nareYouSure=Are you sure?\n\nresources.es:\nsave=I don't know how to say anything in Spanish, oops\nclose=...\nok=...\nareYouSure=...\n\n" ]
[ 5, 5, 2, 0, 0 ]
[]
[]
[ ".net", "culture", "globalization" ]
stackoverflow_0000090061_.net_culture_globalization.txt
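To round out step 2 of the long answer above, loading one of those StringTable entries at run time looks roughly like this. A minimal C# sketch; the base name "MyApp.StringTable" and the Demo class are placeholders for your own namespace:

using System.Globalization;
using System.Resources;
using System.Threading;

class Demo
{
    static void Main()
    {
        // Switch the UI culture first so satellite assemblies are consulted.
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("fr-FR");

        ResourceManager rm = new ResourceManager(
            "MyApp.StringTable",            // default namespace + resx base name
            typeof(Demo).Assembly);         // assembly holding the resources

        string save = rm.GetString("save"); // falls back to the neutral culture
                                            // when no fr-FR entry exists
        System.Console.WriteLine(save);
    }
}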
Q: MDB2 disconnects and forgets charset setting when reconnecting We recently debugged a strange bug. A solution was found, but the solution is not entirely satisfactory. We use IntSmarty to localize our website, and store the localized strings in a database using our own wrapper. In its destructor, IntSmarty saves any new strings that it might have, resulting in a database call. We use a Singleton instance of MDB2 to do queries against MySQL, and after connecting we used the SetCharset()-function to change the character set to UTF-8. We found that strings that were saved by IntSmarty were interpreted as ISO-8859-1 when the final inserts were made. We looked closely at the query log, and found that the MySQL connection got disconnected before IntSmarty's destructor got called. It then got reestablished, but no "SET NAMES utf8" query was issued on the new connection. The result was that the saved strings were interpreted as ISO-8859-1 by MySQL. There seem to be no options that set the default character set on MDB2. Our solution to this problem was changing the MySQL server configuration, by adding init-connect='SET NAMES utf8' to my.cnf. This solves the problem only because our character set is always the same. So, is there any way that I can prevent the connection from being torn down before all the queries have been run? Can I force the MDB2 instance to be destructed after everything else? Turning on persistent connections works, but is not a desired answer. A: From the PHP5 documentation: The destructor method will be called as soon as all references to a particular object are removed or when the object is explicitly destroyed or in any order in shutdown sequence. PHP documentation (emphasis mine) What is probably happening is that your script does not explicitly destroy the object, and so when PHP gets to the end of the script it starts cleaning up things in whatever order it feels like--which in your case, is closing the database link first. If you explicitly destroy the IntSmarty object prior to the actual end of the script, that should solve your problem.
MDB2 disconnects and forgets charset setting when reconnecting
We recently debugged a strange bug. A solution was found, but the solution is not entirely satisfactory. We use IntSmarty to localize our website, and store the localized strings in a database using our own wrapper. In its destructor, IntSmarty saves any new strings that it might have, resulting in a database call. We use a Singleton instance of MDB2 to do queries against MySQL, and after connecting we used the SetCharset()-function to change the character set to UTF-8. We found that strings that were saved by IntSmarty were interpreted as ISO-8859-1 when the final inserts were made. We looked closely at the query log, and found that the MySQL connection got disconnected before IntSmarty's destructor got called. It then got reestablished, but no "SET NAMES utf8" query was issued on the new connection. The result was that the saved strings were interpreted as ISO-8859-1 by MySQL. There seem to be no options that set the default character set on MDB2. Our solution to this problem was changing the MySQL server configuration, by adding init-connect='SET NAMES utf8' to my.cnf. This solves the problem only because our character set is always the same. So, is there any way that I can prevent the connection from being torn down before all the queries have been run? Can I force the MDB2 instance to be destructed after everything else? Turning on persistent connections works, but is not a desired answer.
[ "From the PHP5 documentation:\n\nThe destructor method will be called as soon as all references to a particular object are removed or when the object is explicitly destroyed or in any order in shutdown sequence.\nPHP documentation\n\n(emphasis mine)\nWhat is probably happening is that your script does not explicitly destroy the object, and so when PHP gets to the end of the script it starts cleaning up things in whatever order it feels like--which in your case, is closing the database link first.\nIf you explicitly destroy the IntSmarty object prior to the actual end of the script, that should solve your problem.\n" ]
[ 1 ]
[]
[]
[ "mdb2", "mysql", "pear", "php" ]
stackoverflow_0000081061_mdb2_mysql_pear_php.txt
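The fix the answer describes, as a short PHP sketch. The variable names are illustrative; the only point is the ordering of teardown:

// ... page logic that may have queued new strings inside IntSmarty ...

// Drop the last reference explicitly, so IntSmarty's destructor (and
// its INSERTs) run while the connection still has SET NAMES utf8 applied.
$intSmarty = null;      // or unset($intSmarty);

// Only now is it safe for the MDB2 singleton to go away.
$mdb2->disconnect();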
Q: Track installs of software Despite my lack of coding knowledge I managed to write a small little app in VB.NET that a lot of people are now using. Since I made it for free I have no way of knowing how popular it really is and was thinking I could make it ping some sort of online stat counter so I could figure out if I should port it to other languages. Any idea of how I could ping a URL via VB without actually opening a window or asking to receive any data? When I google a lot of terms for this I end up with examples with 50+ lines of code for what I would think should only take one line or so, similar to opening an IE window. Side Note: I would, of course, fully inform all users this was happening. A: Just a sidenote: You should inform your users that you are doing this (or not do it at all) for privacy concerns. Even if you aren't collecting any personal data it can be considered a privacy problem. For example, when programs collect usage information, they almost always have a box in the installation process asking if the user wants to participate in an "anonymous usage survey" or something similar. What if you just tracked downloads? A: Might be easier to track downloads (assuming people are getting this via HTTP) instead of installs. Otherwise, add a "register now?" feature. A: You could use something simple in the client app like Sub PingServer(Server As String, Port As Integer) Dim Temp As New System.Net.Sockets.TcpClient() Temp.Connect(Server, Port) Temp.Close() End Sub Get your webserver to listen on a particular port and count connections. Also, you really shouldn't do this without the user's knowledge, so as others have said, it would be better to count downloads, or implement a registration feature. A: I assume you are making this available via a website. So you could just ask people to give you their email address in order to get the download link for the installer. Then you can track how many people add themselves to your email list each month/week/etc. It also means you can email them all when you make a new release so that they can keep up to date with the latest and greatest. Note: Always ensure they have an unsubscribe link at the end of each email you send them. A: .NET? Create an ASMX Web Service and set it up on your web site. Then add the service reference to your app. EDIT/CLARIFICATION: Your Web Service can then store passed data into a database, instead of relying on Web Logs: Installation Id, Install Date, Number of times run, etc. A: The guys over at vbdotnetheaven.com have a simple example using the WebClient, WebRequest and HttpWebRequest classes. Here is their WebClient class example: Imports System Imports System.IO Imports System.Net Module Module1 Sub Main() ' Address of URL Dim URL As String = "http://www.c-sharpcorner.com/default.asp" ' Get HTML data Dim client As WebClient = New WebClient() Dim data As Stream = client.OpenRead(URL) Dim reader As StreamReader = New StreamReader(data) Dim str As String = "" str = reader.ReadLine() Do While Not str Is Nothing Console.WriteLine(str) str = reader.ReadLine() Loop End Sub End Module
Track installs of software
Despite my lack of coding knowledge I managed to write a small little app in VB.NET that a lot of people are now using. Since I made it for free I have no way of knowing how popular it really is and was thinking I could make it ping some sort of online stat counter so I could figure out if I should port it to other languages. Any idea of how I could ping a URL via VB without actually opening a window or asking to receive any data? When I google a lot of terms for this I end up with examples with 50+ lines of code for what I would think should only take one line or so, similar to opening an IE window. Side Note: I would, of course, fully inform all users this was happening.
[ "Just a sidenote: You should inform your users that you are doing this (or not do it at all) for privacy concerns. Even if you aren't collecting any personal data it can be considered a privacy problem. For example, when programs collect usage information, they almost always have a box in the installation process asking if the user wants to participate in an \"anonymous usage survey\" or something similar. What if you just tracked downloads?\n", "Might be easier to track downloads (assuming people are getting this via HTTP) instead of installs. Otherwise, add a \"register now?\" feature.\n", "You could use something simple in the client app like\nSub PingServer(Server As String, Port As Integer)\n Dim Temp As New System.Net.Sockets.TcpClient()\n Temp.Connect(Server, Port)\n Temp.Close()\nEnd Sub\n\nGet your webserver to listen on a particular port and count connections.\nAlso, you really shouldn't do this without the user's knowledge, so as others have said, it would be better to count downloads, or implement a registration feature.\n", "I assume you are making this available via a website. So you could just ask people to give you their email address in order to get the download link for the installer. Then you can track how many people add themselves to your email list each month/week/etc. It also means you can email them all when you make a new release so that they can keep up to date with the latest and greatest.\nNote: Always ensure they have an unsubscribe link at the end of each email you send them.\n", ".NET? Create an ASMX Web Service and set it up on your web site. Then add the service reference to your app.\nEDIT/CLARIFICATION: Your Web Service can then store passed data into a database, instead of relying on Web Logs: Installation Id, Install Date, Number of times run, etc.\n", "The guys over at vbdotnetheaven.com have a simple example using the WebClient, WebRequest and HttpWebRequest classes. Here is their WebClient class example:\nImports System\nImports System.IO\nImports System.Net\nModule Module1 \n Sub Main()\n ' Address of URL\n Dim URL As String = \"http://www.c-sharpcorner.com/default.asp\"\n ' Get HTML data\n Dim client As WebClient = New WebClient()\n Dim data As Stream = client.OpenRead(URL)\n Dim reader As StreamReader = New StreamReader(data)\n Dim str As String = \"\"\n str = reader.ReadLine()\n Do While Not str Is Nothing\n Console.WriteLine(str)\n str = reader.ReadLine()\n Loop\n End Sub\nEnd Module\n\n" ]
[ 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "vb.net" ]
stackoverflow_0000090313_vb.net.txt
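And the near-one-liner the asker was after, sketched in VB.NET. The counter URL is a made-up placeholder, and the Try block swallows failures on purpose so a dead counter can never break the app:

Try
    Dim wc As New System.Net.WebClient()
    ' Fire one GET at the stats page and throw the response away.
    wc.DownloadString("http://example.com/counter.php?app=MyApp")
Catch ex As Exception
    ' Tracking is best-effort only; ignore connectivity errors.
End Try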
Q: How can I know why one of my vxWorks tasks is pended? In vxWorks, I can issue the "i" command in the shell, and I get the list of tasks in my system along with some information like the following example: NAME ENTRY TID PRI STATUS PC SP ERRNO DELAY ---------- ------------ -------- --- ---------- -------- -------- ------- ----- tJobTask 1005a6e0 103bae00 0 PEND 100e5860 105fffa8 0 0 tExcTask 10059960 10197cbc 0 PEND 100e5860 101a0ef4 0 0 tLogTask logTask 103bed78 0 PEND 100e37cd 1063ff24 0 0 tNbioLog 1005b390 103bf210 0 PEND 100e5860 1067ff54 0 0 For the tasks that are pended, I would like to know what they are pended on. Is there a way to do this?
How can I know why one of my vxWorks tasks is pended?
In vxWorks, I can issue the "i" command in the shell, and I get the list of tasks in my system along with some information like the following example: NAME ENTRY TID PRI STATUS PC SP ERRNO DELAY ---------- ------------ -------- --- ---------- -------- -------- ------- ----- tJobTask 1005a6e0 103bae00 0 PEND 100e5860 105fffa8 0 0 tExcTask 10059960 10197cbc 0 PEND 100e5860 101a0ef4 0 0 tLogTask logTask 103bed78 0 PEND 100e37cd 1063ff24 0 0 tNbioLog 1005b390 103bf210 0 PEND 100e5860 1067ff54 0 0 For the tasks that are pended, I would like to know what they are pended on. Is there a way to do this?
[ "The \"w\" command will do exactly what you want:\n\n NAME ENTRY TID STATUS DELAY OBJ_TYPE OBJ_ID OBJ_NAME\n---------- ---------- ---------- ---------- ----- ---------- ---------- --------\ntJobTask 0x1005a6e0 0x103bae00 PEND 0 SEM_B 0x10184088 N/A \ntExcTask 0x10059960 0x10197cbc PEND 0 SEM_B 0x10183ff8 N/A \ntLogTask logTask 0x103bed78 PEND 0 MSG_Q(R) 0x103be358 N/A \ntNbioLog 0x1005b390 0x103bf210 PEND 0 SEM_B 0x103bf198 N/A \n\n" ]
[ 5 ]
[]
[]
[ "vxworks" ]
stackoverflow_0000090433_vxworks.txt
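If you'd rather get the same information from code than from the shell, something along these lines should work on targets built with the task show routines included. This is only a sketch; header and component availability vary by vxWorks version:

#include <vxWorks.h>
#include <taskLib.h>

/* Dump detailed state (including the object a PENDed task waits on)
   for a task identified by name, e.g. showTaskState("tLogTask"). */
void showTaskState(char *name)
{
    int tid = taskNameToId(name);
    if (tid != ERROR)
        taskShow(tid, 1);   /* level 1 = detailed, single-task report */
}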
Q: Does Vista do stricter checking of Interface Ids in DCOM calls? (the Stub received bad Data)? I hope everyone will pardon the length, and narrative fashion, of this question. I decided to describe the situation in some detail in my blog. I later saw Joel's invitation to this site, and I thought I'd paste it here to see if anyone has any insight into the situation. I wrote, and now support, an application that consists of a Visual Basic thick client speaking DCOM to middle tier COM+ components written in C++ using ATL. It runs in all eight of our offices. Each office hosts a back-end server that contains the COM+ application (consisting of 18 separate components) and the SQLServer. The SQLServer is typically on the same back-end server, but need not be. We recently migrated the back-end server in our largest office -- New York -- from an MSC cluster to a new virtual machine hosted on VMWare's ESX technology. Since the location of the COM+ application had moved from the old server to a new one with a different name, I had to redirect all the clients so that they activated the COM+ application on the new server. The procedure was old hat as I had done essentially the same thing for several of my smaller offices that had gone through similar infrastructure upgrades. All seemed routine and on Monday morning the entire office -- about 1,000 Windows XP workstations -- were running without incident on the new server. But then the call came from my mobile group -- there was an attorney working from home with a VPN connection that was getting a strange error after being redirected to the new server: Error on FillTreeView2 - The stub received bad data. Huh? I had never seen this error message before. Was it the new server? But all the workstations in the office were working fine. I told the mobile group to switch the attorney back to the old server (which was still up), and the error disappeared. So what was the difference? Turns out this attorney was running Vista at home. We don't run Vista in any of our offices, but we do have some attorneys that run Vista at home (certainly some in my New York office). I do as well and I've never seen this problem. To confirm that there was an issue, I fired up my Vista laptop, pointed it to the new server, and got the same error. I pointed it back to the old server, and it worked fine. Clearly there was some problem with Vista and the components on the new server -- a problem that did not seem to affect XP clients. What could it be? Next stop -- the application error log on my laptop. This yielded more information on the error: Source: Microsoft-Windows-RPC-Events Date: 9/2/2008 11:56:07 AM Event ID: 10 Level: Error Computer: DevLaptop Description: Application has failed to complete a COM call because an incorrect interface ID was passed as a parameter. The expected Interface ID was 00000555-0000-0010-8000-00aa006d2ea4, The Interface ID returned was 00000556-0000-0010-8000-00aa006d2ea4. User Action - Contact the application vendor for updated version of the application. The interface ids provided the clue I needed to unravel the mystery. The "expected" interface id identifies MDAC's Recordset interface -- specifically version 2.1 of that interface. The "returned" interface corresponds to a later version of Recordset (version 2.5 which differs from version 2.1 by the inclusion of one additional entry at the end of the vtable -- method Save). Indeed my component's interfaces expose many methods that pass Recordset as an output parameter.
So were they suddenly returning a later version of Recordset -- with a different interface id? It certainly appeared to be the case. And then I thought, why should it matter. The vtable looks the same to clients of the older interface. Indeed, I suspect that if we were talking about in-process COM, and not DCOM, this apparently innocuous impedance mismatch would have been silently ignored and would have caused no issues. Of course, when process and machine boundaries come into play, there is a proxy and a stub between the client and the server. In this case, I was using type library marshaling with the free threaded marshaller. So there were two mysteries to solve: Why was I returning a different interface in the output parameters from methods on my new server? Why did this affect only Vista clients? As my server software was hosted on servers at each of my eight offices, I decided to try pointing my Vista client at all of them in sequence to see which had problems with Vista and which didn't. Illuminating test. Some of the older servers still worked with Vista but the newer ones did not. Although some of the older servers were still running Windows 2000 while the newer ones were at 2003, that did not seem to be the issue. After comparing the dates of the component DLLs it appeared that whenever the client pointed to servers with component DLLs dated before 2003 Vista was fine. But those that had DLLs with dates after 2003 were problematic. Believe it or not, there were no (or at least no significant) changes to the code on the server components in many years. Apparently the differing dates were simply due to recompiles of my components on my development machine(s). And it appeared that one of those recompiles happened in 2003. The light bulb went on. When passing Recordsets back from server to client, my ATL C++ components refer to the interface as _Recordset. This symbol comes from the type library embedded within msado15.dll. This is the line I had in the C++ code: #import "c:\Program Files\Common Files\System\ADO\msado15.dll" no_namespace rename ( "EOF", "adoEOF" ) Don't be deceived by the 15 in msado15.dll. Apparently this DLL has not changed name in the long series of MDAC versions. When I compiled the application back in the day, the version of MDAC was 2.1. So _Recordset compiled with the 2.1 interface id and that is the interface returned by the servers running those components. All the clients use the COM+ application proxy that was generated (I believe) back in 1999. The type library that defines my interfaces includes the line: importlib("msado21.tlb"); which explains why they expect version 2.1 of Recordset in my method's output parameters. Clearly the problem was with my 2003 recompile and the fact that at that time the _Recordset symbol no longer corresponded to version 2.1. Indeed _Recordset corresponded to the 2.5 version with its distinct interface id. The solution for me was to change all references from _Recordset to Recordset21 in my C++ code. I rebuilt the components and deployed them to the new server. Voila -- the clients seemed happy again. In conclusion, there are two nagging questions that remain for me. Why does the proxy/stub infrastructure seem to behave differently with Vista clients? It appears that Vista is making stricter checks of the interface ids coming back from method parameters than is XP. How should I have coded this differently back in 1999 so that this would not have happened?
Interfaces are supposed to be immutable and when I recompiled under a newer version of MDAC, I inadvertently changed my interface because the methods now returned a different Recordset interface as an output parameter. As far as I know, the type library back then did not have a version-specific symbol -- that is, later versions of the MDAC type libraries define Recordset21, but that symbol was not available back in the 2.1 type library. A: When Microsoft got the security religion, DCOM (and the underlying RPC) got a lot of attention, and there definitely were changes made to close security holes that resulted in stricter marshaling. I'm suprised you see this in Vista but not in XP, but its possible that additional checks were added for Vista. Alternatively, its possible that optional strictness in XP was made mandatory in Vista. While I don't know enough about MDAC to know if you could have prevented this, I do know that security is one of the few areas where Microsoft is pretty willing to sacrifice backward compatibility, so it is possible you could not have done anything "better" back in 1999.
Does Vista do stricter checking of Interface Ids in DCOM calls? (the Stub received bad Data)?
I hope everyone will pardon the length, and narrative fashion, of this question. I decided to describe the situation in some detail in my blog. I later saw Joel's invitation to this site, and I thought I'd paste it here to see if anyone has any insight into the situation. I wrote, and now support, an application that consists of a Visual Basic thick client speaking DCOM to middle tier COM+ components written in C++ using ATL. It runs in all eight of our offices. Each office hosts a back-end server that contains the COM+ application (consisting of 18 separate components) and the SQLServer. The SQLServer is typically on the same back-end server, but need not be. We recently migrated the back-end server in our largest office -- New York -- from an MSC cluster to a new virtual machine hosted on VMWare's ESX technology. Since the location of the COM+ application had moved from the old server to a new one with a different name, I had to redirect all the clients so that they activated the COM+ application on the new server. The procedure was old hat as I had done essentially the same thing for several of my smaller offices that had gone through similar infrastructure upgrades. All seemed routine and on Monday morning the entire office -- about 1,000 Windows XP workstations -- were running without incident on the new server. But then the call came from my mobile group -- there was an attorney working from home with a VPN connection that was getting a strange error after being redirected to the new server: Error on FillTreeView2 - The stub received bad data. Huh? I had never seen this error message before. Was it the new server? But all the workstations in the office were working fine. I told the mobile group to switch the attorney back to the old server (which was still up), and the error disappeared. So what was the difference? Turns out this attorney was running Vista at home. We don't run Vista in any of our offices, but we do have some attorneys that run Vista at home (certainly some in my New York office). I do as well and I've never seen this problem. To confirm that there was an issue, I fired up my Vista laptop, pointed it to the new server, and got the same error. I pointed it back to the old server, and it worked fine. Clearly there was some problem with Vista and the components on the new server -- a problem that did not seem to affect XP clients. What could it be? Next stop -- the application error log on my laptop. This yielded more information on the error: Source: Microsoft-Windows-RPC-Events Date: 9/2/2008 11:56:07 AM Event ID: 10 Level: Error Computer: DevLaptop Description: Application has failed to complete a COM call because an incorrect interface ID was passed as a parameter. The expected Interface ID was 00000555-0000-0010-8000-00aa006d2ea4, The Interface ID returned was 00000556-0000-0010-8000-00aa006d2ea4. User Action - Contact the application vendor for updated version of the application. The interface ids provided the clue I needed to unravel the mystery. The "expected" interface id identifies MDAC's Recordset interface -- specifically version 2.1 of that interface. The "returned" interface corresponds to a later version of Recordset (version 2.5 which differs from version 2.1 by the inclusion of one additional entry at the end of the vtable -- method Save). Indeed my component's interfaces expose many methods that pass Recordset as an output parameter. So were they suddenly returning a later version of Recordset -- with a different interface id?
It certainly appeared to be the case. And then I thought, why should it matter. The vtable looks the same to clients of the older interface. Indeed, I suspect that if we were talking about in-process COM, and not DCOM, this apparently innocuous impedance mismatch would have been silently ignored and would have caused no issues. Of course, when process and machine boundaries come into play, there is a proxy and a stub between the client and the server. In this case, I was using type library marshaling with the free threaded marshaller. So there were two mysteries to solve: Why was I returning a different interface in the output parameters from methods on my new server? Why did this affect only Vista clients? As my server software was hosted on servers at each of my eight offices, I decided to try pointing my Vista client at all of them in sequence to see which had problems with Vista and which didn't. Illuminating test. Some of the older servers still worked with Vista but the newer ones did not. Although some of the older servers were still running Windows 2000 while the newer ones were at 2003, that did not seem to be the issue. After comparing the dates of the component DLLs it appeared that whenever the client pointed to servers with component DLLs dated before 2003 Vista was fine. But those that had DLLs with dates after 2003 were problematic. Believe it or not, there were no (or at least no significant) changes to the code on the server components in many years. Apparently the differing dates were simply due to recompiles of my components on my development machine(s). And it appeared that one of those recompiles happened in 2003. The light bulb went on. When passing Recordsets back from server to client, my ATL C++ components refer to the interface as _Recordset. This symbol comes from the type library embedded within msado15.dll. This is the line I had in the C++ code: #import "c:\Program Files\Common Files\System\ADO\msado15.dll" no_namespace rename ( "EOF", "adoEOF" ) Don't be deceived by the 15 in msado15.dll. Apparently this DLL has not changed name in the long series of MDAC versions. When I compiled the application back in the day, the version of MDAC was 2.1. So _Recordset compiled with the 2.1 interface id and that is the interface returned by the servers running those components. All the clients use the COM+ application proxy that was generated (I believe) back in 1999. The type library that defines my interfaces includes the line: importlib("msado21.tlb"); which explains why they expect version 2.1 of Recordset in my method's output parameters. Clearly the problem was with my 2003 recompile and the fact that at that time the _Recordset symbol no longer corresponded to version 2.1. Indeed _Recordset corresponded to the 2.5 version with its distinct interface id. The solution for me was to change all references from _Recordset to Recordset21 in my C++ code. I rebuilt the components and deployed them to the new server. Voila -- the clients seemed happy again. In conclusion, there are two nagging questions that remain for me. Why does the proxy/stub infrastructure seem to behave differently with Vista clients? It appears that Vista is making stricter checks of the interface ids coming back from method parameters than is XP. How should I have coded this differently back in 1999 so that this would not have happened?
Interfaces are supposed to be immutable and when I recompiled under a newer version of MDAC, I inadvertently changed my interface because the methods now returned a different Recordset interface as an output parameter. As far as I know, the type library back then did not have a version-specific symbol -- that is, later versions of the MDAC type libraries define Recordset21, but that symbol was not available back in the 2.1 type library.
[ "When Microsoft got the security religion, DCOM (and the underlying RPC) got a lot of attention, and there definitely were changes made to close security holes that resulted in stricter marshaling. I'm surprised you see this in Vista but not in XP, but it's possible that additional checks were added for Vista. Alternatively, it's possible that optional strictness in XP was made mandatory in Vista.\nWhile I don't know enough about MDAC to know if you could have prevented this, I do know that security is one of the few areas where Microsoft is pretty willing to sacrifice backward compatibility, so it is possible you could not have done anything \"better\" back in 1999.\n" ]
[ 2 ]
[]
[]
[ "dcom", "windows_vista" ]
stackoverflow_0000063720_dcom_windows_vista.txt
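For reference, the version-pinned form the fix amounts to, as a C++ sketch. The GetRows method is hypothetical, and Recordset21 is only available if it is present in your msado15.dll type library (the question confirms it was in this case):

// Import ADO as before, but bind explicitly to the version-specific
// Recordset21 symbol so a recompile against a newer MDAC cannot
// silently change the IID our methods return across the wire.
#import "c:\Program Files\Common Files\System\ADO\msado15.dll" \
    no_namespace rename("EOF", "adoEOF")

// Everywhere a Recordset crosses the DCOM boundary, use the pinned
// interface instead of the floating _Recordset:
STDMETHODIMP CMyComponent::GetRows(Recordset21** ppRS);  // hypothetical method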
Q: Opening two HTMLHelp files simultaneously in Delphi causes both help windows to hang In Delphi, the application's main help file is assigned through the TApplication.HelpFile property. All calls to the application's help system then use this property (in conjunction with CurrentHelpFile) to determine the help file to which help calls should be routed. In addition to TApplication.HelpFile, each form also has a TForm.HelpFile property which can be used to specify a different (separate) help file for help calls originating from that specific form. If an application's main help window is already open however, and a help call is made to display help from a secondary help file, both help windows hang. Neither of the help windows can now be accessed, and neither can be closed. The only way to get rid of the help windows is to close the application, which results in both help windows being automatically closed as well. Example: Application.HelpFile := 'Main Help.chm'; //assign the main help file name Application.HelpContext(0); //displays the main help window Form1.HelpFile := 'Secondary Help.chm'; //assign a different help file Application.HelpContext(0); //should display a second help window The last line of code above opens the secondary help window (but with no content) and then both help windows hang. My Question is this: Is it possible to display two HTMLHelp windows at the same time, and if so, what is the procedure to be followed? If not, is there a way to tell whether or not an application's help window is already open, and then close it programmatically before displaying a different help window? (I am using Delphi 2007 with HTMLHelp files on Windows Vista) UPDATE: 2008-09-18 Opening two help files at the same time does in fact work as expected using the code above. The problem seems to be with the actual help files I was using - not the code. I tried the same code with different help files, and it worked fine. Strangely enough, the two help files I was using each works fine on its own - it's only when you try to open both at the same time that they hang, and only if you open them from code (in Windows explorer I can open both at the same time without a problem). Anyway - the problem is definitely with the help files and not the code - so the original question is now pretty much invalid. UPDATE 2: 2008-09-18 I eventually found the cause of the hanging help windows. I will post the answer below and accept it as the correct one for future reference. I have also changed the question's title. Oops... It seems that I cannot accept my own answer... Please vote it up so it stays at the top. A: Assuming you have two help files called "Help File 1.chm" and "Help File 2.chm" and you are opening these help files from your Delphi code. To open Help File 1, the following code will work: procedure TForm1.Button1Click(Sender: TObject); begin Application.HelpFile := 'Help File 1.chm'; Application.HelpContext(0); end; To open Help File 2, the following code will work: procedure TForm1.Button1Click(Sender: TObject); begin Application.HelpFile := 'Help File 2.chm'; Application.HelpContext(0); end; But to open both files at the same time, the following code will cause both help windows to hang. procedure TForm1.Button1Click(Sender: TObject); begin Application.HelpFile := 'Help File 1.chm'; Application.HelpContext(0); Application.HelpFile := 'Help File 2.chm'; Application.HelpContext(0); end; SOLUTION: The problem is caused by the fact that there are spaces in the help file names.
Removing the spaces from the file names will fix the problem. The following code will work fine: procedure TForm1.Button1Click(Sender: TObject); begin Application.HelpFile := 'HelpFile1.chm'; Application.HelpContext(0); Application.HelpFile := 'HelpFile2.chm'; Application.HelpContext(0); end; A: I just tested that and it works, as expected, with the kind of code you tried. Compiled in D2007/XP, ran in both XP and Vista without problem. procedure TForm1.Button1Click(Sender: TObject); begin Application.HelpFile:= 'depends.chm'; Application.HelpContext(0); HelpFile:='GExperts.chm'; Application.HelpContext(0); end; Both help files open and are alive and well.... Q1: Have you checked the validity of your help files? Q2: Where did you place your code? A: Tried. Just works. A: Inexperienced with help files here, and even moreso with Vista, but I can offer you a possible workaround... Build a second application whose only job is to open a help file. You can pass the help file name as a command line argument. You can easily check from your main application whether this help application is running. This will give you full control, as you can decide whether you want to Send a message to close the help application before opening the secondary help Allow more than one instance of the help application to allow different help files to be open at the same time Allow the help to remain open after your application closes, or whether you want to send a message to it to close it You can also check whether an instance of the help application already has the requested help file open and decide whether you want to allow it to be opened a second time, or simply bring the existing instance to the foreground. As stated, this is a workaround - if it turns out to be your only option let me know if you need code examples. Otherwise I'll keep this post clean (and save myself time in the short term) and not clutter it with unnecessary source
Opening two HTMLHelp files simultaneously in Delphi causes both help windows to hang
In Delphi, the application's main help file is assigned through the TApplication.HelpFile property. All calls to the application's help system then use this property (in conjunction with CurrentHelpFile) to determine the help file to which help calls should be routed. In addition to TApplication.HelpFile, each form also has a TForm.HelpFile property which can be used to specify a different (separate) help file for help calls originating from that specific form. If an application's main help window is already open however, and a help call is made to display help from a secondary help file, both help windows hang. Neither of the help windows can now be accessed, and neither can be closed. The only way to get rid of the help windows is to close the application, which results in both help windows being automatically closed as well. Example: Application.HelpFile := 'Main Help.chm'; //assign the main help file name Application.HelpContext(0); //displays the main help window Form1.HelpFile := 'Secondary Help.chm'; //assign a different help file Application.HelpContext(0); //should display a second help window The last line of code above opens the secondary help window (but with no content) and then both help windows hang. My Question is this: Is it possible to display two HTMLHelp windows at the same time, and if so, what is the procedure to be followed? If not, is there a way to tell whether or not an application's help window is already open, and then close it programmatically before displaying a different help window? (I am using Delphi 2007 with HTMLHelp files on Windows Vista) UPDATE: 2008-09-18 Opening two help files at the same time does in fact work as expected using the code above. The problem seems to be with the actual help files I was using - not the code. I tried the same code with different help files, and it worked fine. Strangely enough, the two help files I was using each works fine on its own - it's only when you try to open both at the same time that they hang, and only if you open them from code (in Windows explorer I can open both at the same time without a problem). Anyway - the problem is definitely with the help files and not the code - so the original question is now pretty much invalid. UPDATE 2: 2008-09-18 I eventually found the cause of the hanging help windows. I will post the answer below and accept it as the correct one for future reference. I have also changed the question's title. Oops... It seems that I cannot accept my own answer... Please vote it up so it stays at the top.
[ "Assuming you have two help files called \"Help File 1.chm\" and \"Help File 2.chm\" and you are opening these help files from your Delphi code.\nTo open Help File 1, the following code will work:\nprocedure TForm1.Button1Click(Sender: TObject);\nbegin\n Application.HelpFile := 'Help File 1.chm';\n Application.HelpContext(0);\nend;\n\nTo open Help File 2, the following code will work:\nprocedure TForm1.Button1Click(Sender: TObject);\nbegin\n Application.HelpFile := 'Help File 2.chm';\n Application.HelpContext(0);\nend;\n\nBut to open both files at the same time, the following code will cause both help windows to hang.\nprocedure TForm1.Button1Click(Sender: TObject);\nbegin\n Application.HelpFile := 'Help File 1.chm';\n Application.HelpContext(0);\n\n Application.HelpFile := 'Help File 2.chm';\n Application.HelpContext(0);\nend;\n\nSOLUTION:\nThe problem is caused by the fact that there are spaces in the help file names.\nRemoving the spaces from the file names will fix the problem.\nThe following code will work fine:\nprocedure TForm1.Button1Click(Sender: TObject);\nbegin\n Application.HelpFile := 'HelpFile1.chm';\n Application.HelpContext(0);\n\n Application.HelpFile := 'HelpFile2.chm';\n Application.HelpContext(0);\nend;\n\n", "I just tested that and it works, as expected, with the kind of code you tried.\nCompiled in D2007/XP, ran in both XP and Vista without problem.\nprocedure TForm1.Button1Click(Sender: TObject);\nbegin\n Application.HelpFile:= 'depends.chm';\n Application.HelpContext(0);\n HelpFile:='GExperts.chm';\n Application.HelpContext(0);\nend;\n\nBoth help files open and are alive and well....\nQ1: Have you checked the validity of your help files?\nQ2: Where did you place your code?\n", "Tried. Just works.\n", "Inexperienced with help files here, and even moreso with Vista, but I can offer you a possible workaround...\nBuild a second application whose only job is to open a help file. You can pass the help file name as a command line argument.\nYou can easily check from your main application whether this help application is running. This will give you full control, as you can decide whether you want to\n\nSend a message to close the help application before opening the secondary help\nAllow more than one instance of the help application to allow different help files to be open at the same time\nAllow the help to remain open after your application closes, or whether you want to send a message to it to close it\n\nYou can also check whether an instance of the help application already has the requested help file open and decide whether you want to allow it to be opened a second time, or simply bring the existing instance to the foreground.\nAs stated, this is a workaround - if it turns out to be your only option let me know if you need code examples. Otherwise I'll keep this post clean (and save myself time in the short term) and not clutter it with unnecessary source\n" ]
[ 5, 1, 1, 0 ]
[]
[]
[ "chm", "delphi" ]
stackoverflow_0000081243_chm_delphi.txt
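A note on the space-in-filename fix above: if renaming the shipped .chm files is not an option, one hedged alternative is to convert the path to its short (8.3) form before handing it to the HTML Help engine. The sketch below works at the Win32 API level rather than through Delphi's TApplication wrapper, and it assumes short-name generation is enabled on the target volume (it can be disabled), so treat it as a diagnostic starting point, not a drop-in fix.

#include <windows.h>
#include <htmlhelp.h>   /* link with htmlhelp.lib */

/* Sketch: open a .chm whose path may contain spaces by passing
   the short (8.3) path to the HTML Help engine instead. */
static void ShowHelpFile(HWND owner, const char *longPath)
{
    char shortPath[MAX_PATH];
    DWORD len = GetShortPathNameA(longPath, shortPath, MAX_PATH);
    if (len > 0 && len < MAX_PATH)
        HtmlHelpA(owner, shortPath, HH_DISPLAY_TOC, 0);
    else
        HtmlHelpA(owner, longPath, HH_DISPLAY_TOC, 0);  /* fall back to the original path */
}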
Q: How to compile a DLL that does not require an external manifest file? I would like to compile a DLL under Visual Studio 2008 that depends on msvcr90.dll as a private assembly (basically I'll dump this DLL into the same directory as my application) without needing an external manifest file. I followed the steps outlined in http://msdn.microsoft.com/en-us/library/ms235291.aspx section "Deploying Visual C++ library DLLs as private assemblies" but instead of using an external manifest file (i.e. Microsoft.VC90.CRT.manifest) I'd like to embed it in the DLLs somehow. If I embed Microsoft.VC90.CRT.manifest into msvcr90.dll or the DLL loading it, and remove the external manifest file, LoadLibrary() fails. The problem is that when you embed the manifest into a DLL it actually embeds the following: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"/> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"/> </dependentAssembly> </dependency> </assembly> I think the <dependentAssembly> line is what's causing it to die if the manifest file is missing. Any ideas? A: Add the following to the preprocessor definitions: _CRT_NOFORCE_MANIFEST
How to compile a DLL that does not require an external manifest file?
I would like to compile a DLL under Visual Studio 2008 that depends on msvcr90.dll as a private assembly (basically I'll dump this DLL into the same directory as my application) without needing an external manifest file. I followed the steps outlined in http://msdn.microsoft.com/en-us/library/ms235291.aspx section "Deploying Visual C++ library DLLs as private assemblies" but instead of using an external manifest file (i.e. Microsoft.VC90.CRT.manifest) I'd like to embed it in the DLLs somehow. If I embed Microsoft.VC90.CRT.manifest into msvcr90.dll or the DLL loading it, and remove the external manifest file, LoadLibrary() fails. The problem is that when you embed the manifest into a DLL it actually embeds the following: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"/> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"/> </dependentAssembly> </dependency> </assembly> I think the <dependentAssembly> line is what's causing it to die if the manifest file is missing. Any ideas?
[ "Add the following to the preprocessor definitions:\n_CRT_NOFORCE_MANIFEST\n\n" ]
[ 1 ]
[]
[]
[ "visual_studio_2008" ]
stackoverflow_0000089994_visual_studio_2008.txt
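For context on the accepted answer: _CRT_NOFORCE_MANIFEST suppresses the linker manifest dependency that the VC9 CRT headers would otherwise inject, which is what produces the <dependentAssembly> block shown above. Because the check happens in the headers, the macro must be visible before any CRT header is included; setting it project-wide under Preprocessor Definitions is the safe route, but a per-file sketch looks like this (note you then become responsible for placing msvcr90.dll where the normal DLL search will find it, e.g. next to the application):

/* Define before ANY CRT header is pulled in, or set it in the
   project's Preprocessor Definitions so it applies everywhere. */
#define _CRT_NOFORCE_MANIFEST
#include <stdio.h>

/* With the manifest dependency suppressed, the DLL no longer demands the
   Microsoft.VC90.CRT assembly at load time; msvcr90.dll is then resolved
   through the regular DLL search order (the "private assembly" layout). */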
Q: Maven Jdepend report contains no data I'm running the jdepend maven plugin on my project and whether I run "mvn site:site" or "mvn jdepend:generate" the report that gets generated says "There are no package used." There are no errors in the maven output. Other plugins (cobertura, findbugs, etc.) run fine. My pom is configured like this: <reporting> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>jdepend-maven-plugin</artifactId> </plugin> Any ideas? A: Did you try running "mvn -U -cpu site:site" to update all the maven dependencies? Maybe this question is better asked in the Maven forum :)
Maven Jdepend report contains no data
I'm running the jdepend maven plugin on my project and whether I run "mvn site:site" or "mvn jdepend:generate" the report that gets generated says "There are no package used." There are no errors in the maven output. Other plugins (cobertura, findbugs, etc.) run fine. My pom is configured like this: <reporting> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>jdepend-maven-plugin</artifactId> </plugin> Any ideas?
[ "Did you try running \"mvn -U -cpu site:site\" to update all the maven dependencies?\nMaybe this question is better asked in the Maven forum :)\n" ]
[ 1 ]
[]
[]
[ "java", "jdepend", "maven_2", "maven_plugin" ]
stackoverflow_0000088879_java_jdepend_maven_2_maven_plugin.txt
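Two things worth trying beyond the forced update. First, JDepend analyzes compiled classes under target/classes, so running site on a clean tree will legitimately find no packages; "mvn compile site" is worth a try. Second, pin the plugin to an explicit version, since an unversioned <plugin> entry lets Maven pick a stale copy. The version below is illustrative only; check the repository for the current release:

<reporting>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>jdepend-maven-plugin</artifactId>
      <version>2.0-beta-2</version> <!-- illustrative pin; use the latest available -->
    </plugin>
  </plugins>
</reporting>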
Q: Loading .Net Fx query I have built a simple C#.Net app on a machine with only .Net Fx 1.1 present. Now when I execute this app on a machine where there is: Case 1) Only .Net Fx 2.0 is installed Case 2) Both .Net Fx 1.1 and 2.0 are installed How is it determined which .Net Framework version is loaded in the above cases? A: The behavior as I understand it is your 1.1 app will use the 1.1 Framework unless it's unavailable, in which case it will use the 2.0 Framework. This is how an application can be compiled against the 1.1 Framework but can often still work on Vista, where only the 2.0 framework is available. Some handy resources I've used in the past when looking at these issues are Thomas F. Abraham's posts here and here, this guide on installing the .Net Framework 1.1 on Vista (if you have to support some legacy applications that require it) and this post which documents running ASP.NET 1.1 and 2.0 side by side (which describes the application pool issues you will encounter if attempting to mix the framework versions of applications).
Loading .Net Fx query
I have built a simple C#.Net app on a machine with only .Net Fx 1.1 present. Now when I execute this app on a machine where there is: Case 1) Only .Net Fx 2.0 is installed Case 2) Both .Net Fx 1.1 and 2.0 are installed How is it determined which .Net Framework version is loaded in the above cases?
[ "The behavior as I understand it is your 1.1. app will use the 1.1 Framework unless it's unavailable, in which case it will use the 2.0 Framework, this is how an application can be compiled against the 1.1 Framework but can often still work on Vista, where only the 2.0 framework is available.\nSome handy resources I've used in the past when looking at these issues are Thomas F. Abraham's posts here and here, this guide on installing the .Net Framework 1.1 on Vista (if you have to support some legacy application's that require it) and this post which documents running asp.net 1.1 and 2.0 side by side (which describes the application pool issues you will encounter if attempting to mix the framework version of applications).\n" ]
[ 0 ]
[]
[]
[ ".net" ]
stackoverflow_0000090476_.net.txt
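The runtime-binding behavior described in the answer can also be steered explicitly: a config file next to the executable can list the runtimes the app supports, in order of preference, and the first installed one wins. A sketch for a 1.1-built app that should prefer 1.1 but run on 2.0 when 1.1 is absent (the file name is the exe name plus .config; "MyApp" is a placeholder):

<!-- MyApp.exe.config -->
<configuration>
  <startup>
    <!-- Listed in order of preference; the first installed runtime is used. -->
    <supportedRuntime version="v1.1.4322" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>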
Q: How well will WCF scale to a large number of client users? Does anyone have any experience with how well web services built with Microsoft's WCF will scale to a large number of users? The level I'm thinking of is in the region of 1000+ client users connecting to a collection of WCF services providing the business logic for our application, and these talking to a database - similar to a traditional 3-tier architecture. Are there any particular gotchas that have slowed down performance, or any design lessons learnt that have enabled this level of scalability? A: To ensure your WCF application can scale to the desired level I think you might need to tweak your thinking about the stats your services have to meet. You mention servicing "1000+ client users" but to gauge if your services can perform at that level you'll also need to have some estimated usage figures, which will help you calculate some simpler stats such as the number of requests per second your app needs to handle. Having just finished working on a WCF project we managed to get 400 requests per second on our test hardware, which combined with our expected usage pattern of each user making 300 requests a day indicated we could handle an average of 100,000 users a day (assuming a flat usage graph across the day). In addition, since it's fairly common to make the WCF service code stateless, it's pretty easy to scale out the actual WCF code by adding additional boxes, which means the overall performance of your system is much more likely to be limited by your business logic and persistence layer than it is by WCF. A: WCF configuration default limits, concurrency and scalability A: Probably the 4 biggest things you can start looking at first (besides just having good service code) are items related to: Bindings - some bindings and the protocols they run on are just faster than others; TCP is going to be faster than any of the HTTP bindings Instance Mode - this determines how your classes are allocated against the session callers One & Two Way Operations - if a response isn't needed back to the client, then do one-way Throttling - Max Sessions / Concurrent Calls and Instances They did design WCF to be secure by default so the defaults are very limiting.
How well will WCF scale to a large number of client users?
Does anyone have any experience with how well web services built with Microsoft's WCF will scale to a large number of users? The level I'm thinking of is in the region of 1000+ client users connecting to a collection of WCF services providing the business logic for our application, and these talking to a database - similar to a traditional 3-tier architecture. Are there any particular gotchas that have slowed down performance, or any design lessons learnt that have enabled this level of scalability?
[ "To ensure your WCF application can scale to the desired level I think you might need to tweak your thinking about the stats your services have to meet.\nYou mention servicing \"1000+ client users\" but to gauge if your services can perform at that level you'll also need to have some estimated usage figures, which will help you calculate some simpler stats such as the number of requests per second your app needs to handle.\nHaving just finished working on a WCF project we managed to get 400 requests per second on our test hardware, which combined with our expected usage pattern of each user making 300 requests a day indicated we could handle an average of 100,000 users a day (assuming a flat usage graph across the day). \nIn addition, since it's fairly common to make the WCF service code stateless, it's pretty easy to scale out the actual WCF code by adding additional boxes, which means the overall performance of your system is much more likely to be limited by your business logic and persistence layer than it is by WCF.\n", "WCF configuration default limits, concurrency and scalability\n", "Probably the 4 biggest things you can start looking at first (besides just having good service code) are items related to:\n\nBindings - some binding and they protocols they run on are just faster than others, tcp is going to be faster than any of the http bindings\nInstance Mode - this determines how your classes are allocated against the session callers\nOne & Two Way Operations - if a response isn't needed back to the client, then do one-way\nThrottling - Max Sessions / Concurant Calls and Instances\n\nThey did design WCF to be secure by default so the defaults are very limiting.\n" ]
[ 15, 3, 2 ]
[]
[]
[ "scalability", "soa", "wcf" ]
stackoverflow_0000043823_scalability_soa_wcf.txt
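To make the throttling point concrete: the WCF 3.x defaults are deliberately low (on the order of 16 concurrent calls and 10 sessions), so a 1000-user deployment usually needs them raised explicitly in the service behavior. The numbers and behavior name below are placeholders to size against your own load tests, not recommendations:

<behaviors>
  <serviceBehaviors>
    <behavior name="HighLoadBehavior"> <!-- hypothetical behavior name -->
      <serviceThrottling maxConcurrentCalls="100"
                         maxConcurrentSessions="400"
                         maxConcurrentInstances="100" />
    </behavior>
  </serviceBehaviors>
</behaviors>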
Q: Programmatically stream audio in Cocoa on the Mac How do I go about programmatically creating audio streams using Cocoa on the Mac? To make, say, a white-noise generator using the core frameworks on Mac OS X in Cocoa apps? A: One way is using the CoreAudio DefaultOutputUnit. You can configure it with parameters such as output sampling rate, resolution, and output sample format. Then you can programmatically create a raw sound wave and provide this to the output unit. Take a look at this example on your machine at /Developer/Examples/CoreAudio/SimpleSDK/DefaultOutputUnit/, which uses the default output unit to play a programmatically rendered sine wave. Using that as a starting point, you can write a routine to render anything else to output. This location at /Developer/Examples/CoreAudio/ also contains tons of other core audio examples. A: Look at Audio Queue Services.
Programmatically stream audio in Cocoa on the Mac
How do I go about programmatically creating audio streams using Cocoa on the Mac? To make, say, a white-noise generator using the core frameworks on Mac OS X in Cocoa apps?
[ "One way is using the CoreAudio DefaultOutputUnit.\nYou can configure it with parameters such as output sampling rate, resolution, and output sample format. Then you can programmatically create a raw sound wave and provide this to the output unit.\nTake a look at this example on your machine at /Developer/Examples/CoreAudio/SimpleSDK/DefaultOutputUnit/\nWhich uses the default output unit to play a programmatically rendered sine wave. Using that as a starting point and you can write a routine to render any thing else to output.\nThis location at /Developer/Examples/CoreAudio/ also contains tons of other core audio examples.\n", "Look at Audio Queue Services.\n" ]
[ 4, 2 ]
[]
[]
[ "audio", "cocoa", "macos", "stream" ]
stackoverflow_0000087695_audio_cocoa_macos_stream.txt
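To sketch the white-noise case the question asks about: once the default output unit is set up (as in the DefaultOutputUnit sample mentioned above), the render callback just fills each buffer with random samples. This assumes the unit was configured for 32-bit float output; rand() is used for brevity and is not ideal in a real-time audio thread.

#include <AudioUnit/AudioUnit.h>
#include <stdlib.h>

/* Render callback: fills every output buffer with white noise.
   Assumes the output unit's stream format is 32-bit float. */
static OSStatus RenderWhiteNoise(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        Float32 *out = (Float32 *)ioData->mBuffers[b].mData;
        for (UInt32 i = 0; i < inNumberFrames; i++)
            out[i] = ((Float32)rand() / (Float32)RAND_MAX) * 2.0f - 1.0f;  /* range -1..1 */
    }
    return noErr;
}

/* Registered roughly like this (error checking omitted):
   AURenderCallbackStruct cb = { RenderWhiteNoise, NULL };
   AudioUnitSetProperty(outputUnit, kAudioUnitProperty_SetRenderCallback,
                        kAudioUnitScope_Input, 0, &cb, sizeof(cb)); */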
Q: Is elegant, semantic CSS with ASP.Net still a pipe dream? I know Microsoft has made efforts in the direction of semantic and cross-browser compliant XHTML and CSS, but it still seems like a PitA to pull off elegant markup. I've downloaded and tweaked the CSS Friendly Adapters and all that. But I still find myself frustrated with bloated and unattractive code. Is elegant, semantic CSS with ASP.Net still a pipe dream? Or is it finally possible, and I just need more practice? A: The easiest way to generate elegant HTML and CSS is to use the MVC framework, where you have much more control over HTML generation than with Web Forms. A: See this question for more discussion, including use of MVC. This site uses ASP.NET and the markup is pretty clean. Check out the HTML/CSS on MicrosoftPDC.com (a site I'm working on) - it uses ASP.NET webforms, but we're designing with clean markup as a priority. A: As long as you use the Visual Studio designer, it's probably a pipe dream. I write all of my ASP.NET code (all markup, and CSS) by hand, simply to avoid the designer. Later versions of Visual Studio have gotten much better at not mangling your .aspx/.ascx files, but they're still far from perfect. A: A better question is: is it really worth it? I write web applications and rarely does the elegance of the resulting HTML/CSS/JavaScript add anything to the end goal. If your end goal is to have people do a "view source" on your stuff and admire it, then maybe this is important and worth all of the effort, but I doubt it. If you need the semantics, use XML for your data. I do believe in the idea of the semantic web, but my applications don't need to have anything to do with it. A: As DannySmurf said, hand building is the way to go. That said, you might look at Expression Web. At least it is pretty accurate in how it renders the pages. A: @JasonBunting - Yes, it's absolutely worth it. Semantic and cross-browser markup means that search engines have an easier (and thus higher rankings) time with your content, that browsers have an easier (and thus less error-prone) time parsing your content for display, and that future developers have an easier time maintaining your code. A: Yes - it's a pipe dream. Since working with a professional web designer on a joint project who HATED the output of ASP.net server side controls I stopped using them. I essentially had to write ASP.net apps like you would write a modern PHP app. If you have a heavy business layer then your page or UI code can be minimal. I've never looked back since. The extra time spent writing everything custom has saved me a great deal of time trying to make Visual Studio / ASP.net play nice with CSS/XHTML. A: I can't believe nobody has mentioned CSS adapters. Many of the common controls used in ASP.NET (GridView and TreeView, for example) can be processed through an adapter to change the resulting HTML that is output to the browser. If going the MVC route isn't a viable option, it is possible to write your own adapters for any of the built-in ASP.NET controls. http://www.asp.net/CssAdapters/
Is elegant, semantic CSS with ASP.Net still a pipe dream?
I know Microsoft has made efforts in the direction of semantic and cross-browser compliant XHTML and CSS, but it still seems like a PitA to pull off elegant markup. I've downloaded and tweaked the CSS Friendly Adapters and all that. But I still find myself frustrated with bloated and unattractive code. Is elegant, semantic CSS with ASP.Net still a pipe dream? Or is it finally possible, and I just need more practice?
[ "The easiest way to generate elegant HTML and CSS is to use MVC framework, where you have much more control over HTML generation than with Web Forms.\n", "See this question for more discussion, including use of MVC. This site uses ASP.NET and the markup is pretty clean. Check out the HTML/CSS on MicrosoftPDC.com (a site I'm working on) - it uses ASP.NET webforms, but we're designing with clean markup as a priority.\n", "As long as you use the Visual Studio designer, it's probably a pipe dream. I write all of my ASP.NET code (all markup, and CSS) by hand, simply to avoid the designer. Later versions of Visual Studio have gotten much better at not mangling your .aspx/.ascx files, but they're still far from perfect.\n", "A better question is: is it really worth it? I write web applications and rarely does the elegance of the resulting HTML/CSS/JavaScript add anything to the end goal. If your end goal is to have people do a \"view source\" on your stuff and admire it, then maybe this is important and worth all of the effort, but I doubt it.\nIf you need the semantics, use XML for your data. I do believe in the idea of the semantic web, but my applications don't need to have anything to do with it.\n", "As DannySmurf said, hand building is the way to go.\nThat said, you might look at Expression Web. At least it is pretty accurate in how it renders the pages.\n", "@JasonBunting - Yes, it's absolutely worth it. Semantic and cross-browser markup means that search engines have an easier (and thus higher rankings) time with your content, that browsers have an easier (and thus less error-prone) time parsing your content for display, and that future developers have an easier time maintaining your code.\n", "Yes - it's a pipe dream. Since working with a professional web designer on a joint project who HATED the output of ASP.net server side controls I stopped using them. I essentially had to write ASP.net apps like you would write a modern PHP app. If you have a heavy business layer then your page or UI code can be minimal.\nI've never looked back since. The extra time spent writing everything custom has saved me a great deal of time trying to make Visual Studio / ASP.net play nice with CSS/XHTML.\n", "i can't believe nobody has mentioned css adapters. many of the common controls used in asp.net (gridview and treeview for example) can be processed through an adapter to change the resulting html that is outputted to the browser.\nif going the mvc route isn't a viable option, it is possible to write your own adapters for any of the built in asp.net controls.\nhttp://www.asp.net/CssAdapters/\n" ]
[ 13, 2, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "asp.net", "css", "semantics", "xhtml" ]
stackoverflow_0000033223_asp.net_css_semantics_xhtml.txt
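For anyone following the CSS adapters suggestion in the last answer: adapters are wired up declaratively through a .browser file under App_Browsers, so no control markup has to change. The adapter type names below follow the CSS Friendly Adapters kit's naming and may differ in your build; treat this as a template:

<!-- App_Browsers/CSSFriendlyAdapters.browser -->
<browsers>
  <browser refID="Default">
    <controlAdapters>
      <adapter controlType="System.Web.UI.WebControls.GridView"
               adapterType="CSSFriendly.GridViewAdapter" />
      <adapter controlType="System.Web.UI.WebControls.TreeView"
               adapterType="CSSFriendly.TreeViewAdapter" />
    </controlAdapters>
  </browser>
</browsers>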
Q: In Applescript, how can I get to the Help menu Search field, like Spotlight? In OS X, in order to quickly get at menu items from the keyboard, I want to be able to type a key combination, have it run a script, and have the script focus the Search field in the Help menu. It should work just like the key combination for Spotlight, so if I run it again, it should dismiss the menu. I can run the script with Quicksilver, but how can I write the script? A: Alternatively, hit cmd-? and don't mess with the script. :-) That puts key focus in the help menu's search field. A: Here is the script I came up with. tell application "System Events" tell (first process whose frontmost is true) click menu "Help" of menu bar 1 end tell end tell
In Applescript, how can I get to the Help menu Search field, like Spotlight?
In OS X, in order to quickly get at menu items from the keyboard, I want to be able to type a key combination, have it run a script, and have the script focus the Search field in the Help menu. It should work just like the key combination for Spotlight, so if I run it again, it should dismiss the menu. I can run the script with Quicksilver, but how can I write the script?
[ "Alternatively, hit cmd-? and don't mess with the script. :-) That puts key focus in the help menu's search field.\n", "Here is the script I came up with.\ntell application \"System Events\"\n tell (first process whose frontmost is true)\n click menu \"Help\" of menu bar 1\n end tell\nend tell\n\n" ]
[ 2, 1 ]
[]
[]
[ "applescript", "macos", "menu", "search", "spotlight" ]
stackoverflow_0000069391_applescript_macos_menu_search_spotlight.txt