content: string (length 86 to 88.9k)
title: string (length 0 to 150)
question: string (length 1 to 35.8k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 30 to 130)
Q: Triangle Trigonometry (ActionScript 3) I am trying to write a formula in ActionScript 3 that will give me var "z" (please see image below) in degrees, which I will then convert to radians. I will already know the value of vars "x" and "y". Using trigonometry, how can I calculate the length of the hypotenuse and therefore the variable angle of var z? A solution in either AS3 or pseudocode would be very helpful. Thanks. A: What you need is this: var h:Number = Math.sqrt(x*x + y*y); var z:Number = Math.atan2(y, x); That should give you the angle in radians; you might need to swap x/y and possibly add or remove 90 degrees but it should do the trick! (Note that you don't even need h to get z when you're using atan2) I use multiplication instead of Math.pow() just because Math is pretty slow; you can do: var h:Number = Math.sqrt(Math.pow(x, 2) + Math.pow(y, 2)); And it should be exactly the same. A: z is equivalent to 180 - angle of yH. Or: 180 - arctan(x/y) //Degrees pi - arctan(x/y) //radians Also, if ActionScript's math libraries have it, use arctan2, which takes both the x and y and deals with signs correctly. A: The angle you want is the same as the angle opposed to the one between y and h. Let's call a the angle between y and h; the angle you want is actually 180 - a or PI - a depending on your unit (degrees or radians). Now geometry tells us that: cos(a) = y/h sin(a) = x/h tan(a) = x/y Using tan(), we get: a = arctan(x/y) As we are looking for 180 - a, you should compute: 180 - arctan(x/y) A: What @Patrick said, also the hypotenuse is sqrt(x^2 + y^2).
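For .NET readers, here is a minimal C# sketch of the accepted approach above; System.Math maps one-to-one onto the AS3 calls, and the sample x and y values are invented for illustration.

```csharp
using System;

// Minimal sketch (sample x/y values invented): hypotenuse via Pythagoras,
// angle via Atan2, then radians converted to degrees.
class TriangleDemo
{
    static void Main()
    {
        double x = 3.0, y = 4.0;                 // the two known sides

        double h = Math.Sqrt(x * x + y * y);     // hypotenuse: 5.0 here
        double zRadians = Math.Atan2(y, x);      // sign-safe in all quadrants
        double zDegrees = zRadians * 180.0 / Math.PI;

        Console.WriteLine("h = {0}, z = {1} degrees", h, zDegrees);
    }
}
```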
Triangle Trigonometry (ActionScript 3)
I am trying to write a formula in ActionScript 3 that will give me var "z" (please see image below) in degrees, which I will then convert to radians. I will already know the value of vars "x" and "y". Using trigonometry, how can I calculate the length of the hypotenuse and therefore the variable angle of var z? A solution in either AS3 or pseudocode would be very helpful. Thanks.
[ "What you need is this:\nvar h:Number = Math.sqrt(x*x + y*y);\nvar z:Number = Math.atan2(y, x);\n\nThat should give you the angle in radians, you might need to swap x/y and possibly add or remove 90 degrees but it should do the trick! (Note that you don't even need h to get z when you're using atan2)\nI use multiplication instead of Math.pow() just because Math is pretty slow, you can do:\nvar h:Number = Math.sqrt(Math.pow(x, 2) + Math.pow(y, 2));\n\nAnd it should be exactly the same.\n", "z is equivalent to 180 - angle of yH. Or: \n180 - arctan(x/y) //Degrees\npi - arctan(x/y) //radians\n\nAlso, if actionscript's math libraries have it, use arctan2, which takes both the x and y and deals with signs correctly.\n", "The angle you want is the same as the angle opposed to the one wetween y and h.\nLet's call a the angle between y and h, the angle you want is actually 180 - a or PI - a depending on your unit (degrees or radians).\nNow geometry tells us that:\ncos(a) = y/h\nsin(a) = x/h\ntan(a) = x/y\n\nUsing tan(), we get:\na = arctan(x/y)\n\nAs we are looking for 180 - a, you should compute:\n180 - arctan(x/y)\n\n", "What @Patrick said, also the hypotenuse is sqrt(x^2 + y^2).\n" ]
[ 6, 4, 1, 0 ]
[]
[]
[ "actionscript_3", "hypotenuse", "trigonometry" ]
stackoverflow_0000056118_actionscript_3_hypotenuse_trigonometry.txt
Q: How can I stop losing all my IDE window positions when pressing the start debugging button? I use Visual Studio 2008. I haven't seen this behavior before and, as far as I know, I didn't change anything in the options. When I press Start debugging all the possible windows (watch 1 - 4), data sources, properties, registers (to be honest I have not even ever seen these windows before) appear in front of the code window and stay there after I stop the debugger. Does anyone have an idea what could be causing this? (I have been using CodeRush and Refactor for quite a while now) When I close and restart Visual Studio all the windows are where they should be. PS: Previously I have seen normal switching from normal to debug mode and back with some repositioning changes. That is the way it used to work. Now it is not. It has suddenly gone mad and when going to the debug mode it sometimes shows all possible IDE windows and sometimes not. When it does it no longer returns to the previous state. I cannot find this in the options anywhere. A: Visual Studio remembers 2 sets of window layouts, normal mode and debugging mode. My solution is to arrange my normal windows exactly like I want them, then start debugging an application and once again arrange all of the windows the way I want, usually making it as similar to my normal layout as possible, then stopping the debugger and doing a File Exit so that VS saves my settings. After doing that, it recalls my 2 different layouts each time. A: I'm experiencing the same thing - whenever the debugger is running, switching focus back to the IDE immediately causes the debug panel to expand. I ended up just pinning the debug panel so that it always appears when debugging, and just changing its height as needed. A: To add to palehorse, another tip is Full Screen mode.
How can I stop losing all my IDE window positions when pressing the start debugging button?
I use Visual Studio 2008. I haven't seen this behavior before and, as far as I know, I didn't change anything in the options. When I press Start debugging all the possible windows (watch 1 - 4), data sources, properties, registers (to be honest I have not even ever seen these windows before) appear in front of the code window and stay there after I stop the debugger. Does anyone have an idea what could be causing this? (I have been using CodeRush and Refactor for quite a while now) When I close and restart Visual Studio all the windows are where they should be. PS: Previously I have seen normal switching from normal to debug mode and back with some repositioning changes. That is the way it used to work. Now it is not. It has suddenly gone mad and when going to the debug mode it sometimes shows all possible IDE windows and sometimes not. When it does it no longer returns to the previous state. I cannot find this in the options anywhere.
[ "Visual Studio remembers 2 sets of window layouts, normal mode and debugging mode. My solution is to arrange my normal windows exactly like I want them, then start debugging an application and once again arrange all of the windows the way I want, usually making it as similar to my normal layout as possible, then stopping the debugger and doing a File Exit so that VS saves my settings.\nAfter doing that, it recalls my 2 different layouts each time.\n", "I'm experiencing the same thing - whenever the debugger is running, switching focus back to the IDE immediately caused the debug panel to expand.\nI ended up just pinning the debug panel so that it always appears when debugging, and just changing its height as needed.\n", "To add to palehorse, another tip is Full Screen mode.\n" ]
[ 5, 0, 0 ]
[]
[]
[ "visual_studio", "visual_studio_2008" ]
stackoverflow_0000057345_visual_studio_visual_studio_2008.txt
Q: SMTP Mail Timeout Issue When I'm creating a user for my web application, an SMTP email (using ASP.NET's SmtpClient) is sent to the user with the automatically generated password. However, sometimes what I notice is that it times out and the new user simply won't receive the email with the password. Alright, so I'll display a message indicating that the mail did not go through but the user is created. Therefore, the sys admin has 2 options so far: Reset the password for the user and hope another SMTP mail is sent with the auto-generated password. Delete and recreate the user. I could rollback the user creation if the SMTP mail is not sent but what is the best practice to tackle this problem? I'm thinking that I should retry sending the email 3 times with a timeout period of 5 seconds each. So 15 seconds would be the worst-case scenario. Is this the way to go? A: Well, depending on your platform, if you can just hand off your mail to a local MTA, it should handle the retries and such. Your program can just queue the mail and move on, not worry about dealing with timeouts and graylists etc. If the message still can't be delivered, you could always try resending it (via a password reset feature). If that fails as well, most likely there was a mistake in the email address, and I would suggest deleting the account, causing the user to re-register. This, of course, might not be possible on some systems, depending what can be done with an unconfirmed user - that really depends on what you allow people to do before their email is validated. A: It sounds like your web app is speaking SMTP directly to your user's mail server. [Your web app is a MUA (Mail User Agent) talking to the user's MTA (Mail Transfer Agent).] Nothing says that the user's MTA must be reachable or working at the moment. You need to run your own MTA so you ensure that somebody is providing queueing, retries, etc. If you really want to bend over backwards, you could do what you're doing (only one attempt though), fallback to queueing the message and continuing to retry on a slower schedule for at least 24 hours, and expose that unfinished state to the user. The official answer on how your app is supposed to behave can be found in RFC1123 (Requirements for Internet Hosts - Application and Support): 5.3.1.1 Sending Strategy The general model of a sender-SMTP is one or more processes that periodically attempt to transmit outgoing mail. In a typical system, the program that composes a message has some method for requesting immediate attention for a new piece of outgoing mail, while mail that cannot be transmitted immediately MUST be queued and periodically retried by the sender. A mail queue entry will include not only the message itself but also the envelope information. The sender MUST delay retrying a particular destination after one attempt has failed. In general, the retry interval SHOULD be at least 30 minutes; however, more sophisticated and variable strategies will be beneficial when the sender-SMTP can determine the reason for non-delivery. Retries continue until the message is transmitted or the sender gives up; the give-up time generally needs to be at least 4-5 days. The parameters to the retry algorithm MUST be configurable. A: IMHO you should notify the user, asking him to verify the email, without retries. If the user does not verify the email and leaves the page, you better roll back the account since the user can not access it anyway. Most cases of timeout would be caused by invalid email accounts. Users either made a mistake or gave you a non-existent email address to avoid being spammed. If at all possible, do not ask for your users' emails. The number one rule of programming should be: DO NOT annoy the user. A: If you are using ASP.NET and the System.Net.Mail classes, you are probably sending the mail via the IIS instance on the web server machine (I'm not sure since you didn't specify). There's not a good way to know what's going on with your Mail Transfer Agent (IIS SMTP). It has its own retry logic, and by default, it could take a long time for the message to be delivered. How are you detecting that the mail was not delivered? What's the "timeout" coming from? You should have a background process that handles the sending of mail. If delivery to the MTA succeeds, you should assume all is well. Unless you are blacklisted for SPAM, most MTAs will keep retrying until they get through. If you actually get an error dropping the message off with your MTA, then definitely retry it, or figure out what's causing the failure and fix the bug. Honestly, this part should never fail. You might want to monitor the return address for NDR messages so you can take some sort of action when you know for sure that the email wasn't delivered. But if the user cannot yet log in to the system, there's no good way to let them know what happened. Maybe you could set a cookie with a value that you associate with the email, and put something up on the login/registration page if you were unable to deliver the mail.
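Below is a rough C# sketch of the asker's proposed retry scheme, assuming a relay at localhost; as the answers above argue, handing the message to a local MTA that owns the retry schedule is usually the more robust design.

```csharp
using System.Net.Mail;

// Sketch of "retry 3 times, 5 second timeout each" (host name is a placeholder).
class PasswordMailer
{
    public static void SendWithRetries(MailMessage message)
    {
        SmtpClient client = new SmtpClient("localhost"); // hypothetical relay
        client.Timeout = 5000; // milliseconds per attempt

        for (int attempt = 1; attempt <= 3; attempt++)
        {
            try
            {
                client.Send(message); // blocks until accepted or timed out
                return;               // handed off successfully
            }
            catch (SmtpException)
            {
                if (attempt == 3)
                    throw;            // give up; caller can roll back the user
            }
        }
    }
}
```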
SMTP Mail Timeout Issue
When I'm creating a user for my web application, an SMTP email (using ASP.NET's SmtpClient) is sent to the user with the automatically generated password. However, sometimes what I notice is that it times out and the new user simply won't receive the email with the password. Alright, so I'll display a message indicating that the mail did not go through but the user is created. Therefore, the sys admin has 2 options so far: Reset the password for the user and hope another SMTP mail is sent with the auto-generated password. Delete and recreate the user. I could rollback the user creation if the SMTP mail is not sent but what is the best practice to tackle this problem? I'm thinking that I should retry sending the email 3 times with a timeout period of 5 seconds each. So 15 seconds would be the worst-case scenario. Is this the way to go?
[ "Well, depending on your platform, if you can just hand off your mail to a local MTA, it should handle the retries and such. Your program can just queue the mail and move on, not worry about dealing with timeouts and graylists etc.\nIf the message still can't be delivered, you could always try resending it (via a password reset feature). If that fails as well, most likely there was a mistake in the email address, and I would suggest deleting the account, causing the user to re-register.\nThis, of course, might not be possible on some systems, depending what can be done with an unconfirmed user - that really depends on what you allow people to do before their email is validated.\n", "It sounds like your web app is speaking SMTP directly to your user's mail server.\n[Your web app is a MUA (Mail User Agent) talking to the user's MTA (Mail Transfer Agent).]\nNothing says that the user's MTA must be reachable or working at the moment. You need to run your own MTA so you ensure that somebody is providing queueing, retries, etc.\nIf you really want to bend over backwards, you could do what you're doing (only one attempt though), fallback to queueing the message and continuing to retry on a slower schedule for at least 24 hours, and expose that unfinished state to the user.\nThe official answer on how your app is supposed to behave can be found in RFC1123 (Requirements for Internet Hosts - Application and Support):\n\n5.3.1.1 Sending Strategy\nThe general model of a sender-SMTP is\n one or more processes that\n periodically attempt to transmit\n outgoing mail. In a typical system,\n the program that composes a message\n has some method for requesting\n immediate attention for a new piece of\n outgoing mail, while mail that cannot\n be transmitted immediately MUST be\n queued and periodically retried by the\n sender. A mail queue entry will\n include not only the message itself\n but also the envelope information.\nThe sender MUST delay retrying a\n particular destination after one\n attempt has failed. In general, the\n retry interval SHOULD be at least 30\n minutes; however, more sophisticated\n and variable strategies will be\n beneficial when the sender-SMTP can\n determine the reason for non-\n delivery.\nRetries continue until the message is\n transmitted or the sender gives up;\n the give-up time generally needs to be\n at least 4-5 days. The parameters to\n the retry algorithm MUST be\n configurable.\n\n", "IMHO you should notify the user, asking him to verify the email, without retries. \nIf the user does not verify the email and leaves the page, you better roll back the account since the user can not access it anyway. \nMost cases of timeout would be caused by invalid email accounts. Users either made a mistake or gave you a non existent email addressto avoid being spammed. \nIf at all possible, do not ask for your users emails. Yhe number one rule of programming should be: DO NOT annoy the user.\n", "If you are using ASP.NET and the System.Net.Mail classes, you are probably sending the mail via the IIS instance on the web server machine (I'm not sure since you didn't specify). There's not a good way to know what's going on with your Mail Transfer Agent (IIS SMTP). It has its own retry logic, and by default, it could take a long time for the message to be delivered.\nHow are you detecting that the mail was not delivered? What's the \"timeout\" coming from?\nYou should have a background process that handles the sending of mail. 
If delivery to the MTA succeeds, you should assume all is well. Unless you are blacklisted for SPAM, most MTAs will keep retrying until they get through. If you actually get an error dropping the message off with you MTA, then definitely retry it, or figure out what's causing the failure and fix the bug. Honestly, this part should never fail.\nYou might want to monitor the return address for NDR messages so you can take some sort of action when you know for sure when the email wasn't delivered. But if the user cannot yet log in to the system, there's no good way to let them know what happened. Maybe you could set a cookie with a value that you associate with the email, and put something up on the login/registration page if you were unable to deliver the mail.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ ".net", "asp.net", "email", "smtp" ]
stackoverflow_0000057285_.net_asp.net_email_smtp.txt
Q: Should data security be performed on the database side? We're in the process of setting up a new framework and way of doing business for our new internal apps. Our current design dictates that all security logic should be handled by our database, and all information (and I mean all) will be going in and out of the database via stored procedures. The theory is, the data access layer requests info from a stored procedure and passes over authentication to the database. The database determines the user's role/permissions and decides whether or not to perform the task (whether that be retrieving data or making an update). I guess this means fewer database transactions. One call to the database. If the security was in our data access layer, this would require 1 database call to determine if the user had proper permissions, and then 1 separate database call to perform the action. I, for one, find the SQL Management Studio completely lacking as an IDE. My main concern is we will end up having to maintain some nasty amount of business logic in our stored procedures for some very minimal performance gains. Right now, we're using LINQ for our ORM. It seems light and fast, but best of all, it's really easy to rapidly develop in. Is the maintenance cost worth the performance gain? Are we fooling ourselves into thinking there will even be a noticeable performance gain? Or are we just making a nightmare for ourselves? Our environment: Internal, non-mission critical business apps C#/ASP.NET 3.5 Windows 2003 MS SQL Server 2005 35 medium-sized web apps with approx 500 users A: Don't do that. We recently had a VERY BAD experience when the "database guru" decided to go to another company. The maintenance of all the logic in the procedures is just horrible!! Yes, you're going to have some performance improvement, but that's not worth it. In fact, performance is not even a big concern in internal applications. Invest more money in good servers. It'll pay off. A: Unfortunately there is no "one true answer". The choice you must make depends on multiple factors, like: The familiarity of the team with the given solutions (ie if a majority of them is comfortable writing SQL, it can be in the database, however if a majority of them is more comfortable with C#, it should be in the code) The "political power" of each party etc There is no decisive advantage in any direction (as you said performance gains are minimal), the one thing to keep in mind is the DRY (Don't Repeat Yourself) principle: don't reimplement the functionality twice (in the code and in the DB), because keeping them in synch will be a nightmare. Pick one solution and stick to it. A: You could do it but it's a huge pain to develop against and maintain. Take it from someone who is on a project where almost all business logic is coded in stored procedures. For security, ASP.NET has user and role management baked into it so you might be saving trips to the database but so what? In exchange it becomes far more annoying to handle and debug system and validation errors because they have to bubble up from the database. Unit testing is far more difficult since the frameworks available for unit testing sprocs are far less developed. Proper OOP and domain-driven design is all but out the window. And the performance gain is going to be tiny if any. We talked about this here. I would recommend that, if you want to save your sanity as a developer, you fight tooth and nail to keep the database as the persistence layer only. A: IMHO: Application service tier -> application logic and validation Application data tier -> data logic and security Database -> data consistency You will be bitten by the sproc approach sooner or later, I have learned this the hard way. Procs are great for one-shot operations that need a lot of performance, but the CRUD part is the data tier's job. A: It all depends on your case; it is probably better not to go the SP route and do everything the DDD way (make a Domain model in code and use that). However, if you have a database that is not only used by your application but by many then you should probably consider web services. In any way, the database should only be accessible via one layer that enforces the business rules else you are going to end up with "dirty" data and sanitizing your data afterward is a much bigger pain than writing a few business rules beforehand. A good database should have check-constraints and indexes set, so it will have some business rules whether you like it or not. And if you have to deal with millions and billions of records you will be happy to have a good DB-guy that solves the problem for you. A: Stored procedures are usually a win for security. Simplifying the relationship between your application and the database reduces the number of places where you can have errors; errors in code that interfaces business logic to the database tend to be security problems. So, your DBA isn't wrong about locking things down to stored procedures. Another benefit to locking the application down to stored procedures is that the app stack's database connection can have its privileges locked down to specific stored procedure calls and nothing else. A benefit to having a DBA involved in security logic for your application is that the different app features and roles can be partitioned in the database down to views, so that even if dynamic SQL and generic select statements are needed, the damage from an SQL vulnerability can be constrained. The flip side of this is, of course, lost flexibility. An ORM is obviously going to be faster to develop to than a constant negotiation with a DBA over stored procedure parameters. And, as the pressure on those stored procedures grows, it's more and more likely that the procedures themselves will resort to dynamic SQL, which will be just as vulnerable as app-composed SQL to attack. There's a happy middle ground here, and you should try to find it. I've worked on projects recently that were saved from pretty terrible SQL injection problems because a DBA had carefully configured the database, its connections, and its stored procedures for "least privilege", so that any one database user had access only to what they needed to know. Obviously, as you write SQL code in your app logic, be sure that you're consistently using parameterized prepared statements, that you're sanitizing your input, that you're mindful of internationalized input (there are many, many ways to say single-quote over HTTP), and that you're mindful of how your database behaves when inputs are too large for column widths. A: My opinion is that the application itself should handle authentication and authorisation. On the database side you should only handle encryption of data as needed. A: I have built stored procedure based applications in the past. In your case there may be a way to keep authentication at the database layer and have your business logic in C#. Use views to limit data (you only see the rows you have authority to). These views can be used in LINQ with the same ease as tables. You set your updates to happen with stored procedures. This allows LINQ, business logic in C#, and a common authentication layer in the database that controls access to the data.
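As a rough illustration of the last answer's view-plus-LINQ pattern, here is a hedged LINQ to SQL sketch; the view name, columns, and connection string are all hypothetical, and the row-filtering view itself (e.g. a CREATE VIEW with a WHERE clause keyed to the current database login) would be created separately by the DBA.

```csharp
using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

// Hypothetical view mapping: dbo.MyOrders is assumed to be a view that
// already filters rows by the connected login, so authorization lives
// in the database while the C# stays plain LINQ.
[Table(Name = "dbo.MyOrders")]
public class MyOrder
{
    [Column(IsPrimaryKey = true)] public int OrderId;
    [Column] public string Status;
}

class ViewDemo
{
    static void Main()
    {
        using (DataContext db = new DataContext("connection string here"))
        {
            var open = from o in db.GetTable<MyOrder>()
                       where o.Status == "Open"   // normal LINQ over the view
                       select o;

            foreach (MyOrder o in open)
                Console.WriteLine(o.OrderId);
        }
    }
}
```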
Should data security be performed on the database side?
We're in the process of setting up a new framework and way of doing business for our new internal apps. Our current design dictates that all security logic should be handled by our database, and all information (and I mean all) will be going in and out of the database via stored procedures. The theory is, the data access layer requests info from a stored procedure and passes over authentication to the database. The database determines the user's role/permissions and decides whether or not to perform the task (whether that be retrieving data or making an update). I guess this means fewer database transactions. One call to the database. If the security was in our data access layer, this would require 1 database call to determine if the user had proper permissions, and then 1 separate database call to perform the action. I, for one, find the SQL Management Studio completely lacking as an IDE. My main concern is we will end up having to maintain some nasty amount of business logic in our stored procedures for some very minimal performance gains. Right now, we're using LINQ for our ORM. It seems light and fast, but best of all, it's really easy to rapidly develop in. Is the maintenance cost worth the performance gain? Are we fooling ourselves into thinking there will even be a noticeable performance gain? Or are we just making a nightmare for ourselves? Our environment: Internal, non-mission critical business apps C#/ASP.NET 3.5 Windows 2003 MS SQL Server 2005 35 medium-sized web apps with approx 500 users
[ "Don't do that. We recently had a VERY BAD experience when the \"database guru\" decided to go to another company. The maintenance of all the logic in the procedures are just horrible!!\nYes, you're going to have some performance improvement, but that's not worth it. In fact, performance is not even a big concern in internal application. Invest more money in good servers. It'll pay off.\n", "Unfortunately there is no \"one true answer\". The choice you must make depends on multiple factors, like:\n\nThe familiarity of the team with the given solutions (ie if a majority of them is comfortable writing SQL, it can be in the database, however if a majority of them is more comfortable with C#, it should be in the code)\nThe \"political power\" of each party\netc\n\nThere is no decisive advantage in any direction (as you said performance gains are minimal), the one thing to keep in mind is the DRY (Don't Repeat Yourself) principle: don't reimplement the functionality twice (in the code and in the DB), because keeping them in synch will be a nightmare. Pick one solution and stick to it.\n", "You could do it but its a huge pain to develop against and maintain. Take it from someone who is on a project where almost all business logic is coded in stored procedures.\nFor security, ASP.NET has user and role management baked into it so you might be saving trips to the database but so what? In exchange it becomes far more annoying to handle and debug system and validation errors because they have to bubble up from the database.\nUnit testing is far more difficult since the frameworks available for unit testing sprocs are far less developed.\nProper oop and domain driven design is all but out the window.\nAnd the performance gain is going to be tiny if any. We talked about this here.\nI would recommend that if you want to save your sanity as a developer you fight tooth and nail to keep the database as the persistence layer only\n", "IMHO:\nApplication service tier -> application logic and validation\nApplication data tier -> data logic and security\nDatabase -> data consistency \nYou will be bitten by the sproc approach sooner or later, I have learned this the hard way.\nProcs are great for one shot operations that need a lot of performance, but the CRUD part is the data tiers job\n", "It all depends on your case it is probably better not to go the SP route and do everything the DDD way (make a Domain model in code and use that). \nHowever, if you have a database that is not only used by your application but by many then you should probably consider web services. In any way, the database should only be accessible via one layer that enforces the business rules else you are going to end up with \"dirty\" data and sanitizing your data afterward is a much bigger pain than writing a few business rules beforehand. A good database should have check-constraints and indexes set, so it will have some business rules whether you like it or not.\nAnd if you have to deal with millions and billions of records you will be happy to have a good DB-guy that solves the problem for you. \n", "Stored procedures are usually a win for security. Simplifying the relationship between your application and the database reduces the number of places where you can have errors; errors in code that interfaces business logic to the database tend to be security problems. 
So, your DBA isn't wrong about locking things down to stored procedures.\nAnother benefit to locking the application down to stored procedures is that the app stack's database connection can have its privileges locked down to specific stored procedure calls and nothing else. \nA benefit to having a DBA involved in security logic for your application is that the different app features and roles can be partitioned in the database down to views, so that even if dynamic SQL and generic select statements are needed, the damage from an SQL vulnerability can be constrained.\nThe flip side of this is, of course, lost flexibility. An ORM is obviously going to be faster to develop to than a constant negotiation with a DBA over stored procedure parameters. And, as the pressure on those stored procedures grows, it's more and more likely that the procedures themselves will resort to dynamic SQL, which will be just as vulnerable as app composed SQL to attack. \nThere's a happy middle ground here, and you should try to find it. I've worked on projects recently that were saved from pretty terrible SQL injection problems because a DBA had carefully configured the database, its connections, and its stored procedures for \"least privilege\", so that any one database user had access only to what they needed to know. \nObviously, as you write SQL code in your app logic, be sure that you're consistently using parameterized prepared statements, that you're sanitizing your input, that you're mindful of internationalized input (there are many, many ways to say single-quote over HTTP), and that you're mindful of how your database behaves when inputs are too large for column widths.\n", "My opinion is that the application itself should handle authentication and authorisation. On the database side you should only handle encryption of data as needed.\n", "I have built stored procedure based applications in the past. In your case there maybe a way to keep authentication at the database layer and have your business logic in C#. Use views to limit data (you only see the rows you have authority to). These views can be used in LINQ with the same ease as tables. You set your updates to happen with stored procedures.\nThis allows linq, business logic in C#, and a common authentication layer in the database that controls access to the data.\n" ]
[ 8, 3, 2, 2, 1, 1, 0, 0 ]
[]
[]
[ "security", "sql", "stored_procedures" ]
stackoverflow_0000055845_security_sql_stored_procedures.txt
Q: Best way to reduce sequences in an array of strings Please, now that I've re-written the question, and before it suffers from further fast-gun answers or premature closure by eager editors, let me point out that this is not a duplicate of this question. I know how to remove duplicates from an array. This question is about removing sequences from an array, not duplicates in the strict sense. Consider this sequence of elements in an array; [0] a [1] a [2] b [3] c [4] c [5] a [6] c [7] d [8] c [9] d In this example I want to obtain the following... [0] a [1] b [2] c [3] a [4] c [5] d Notice that duplicate elements are retained but that sequences of the same element have been reduced to a single instance of that element. Further, notice that when two lines repeat they should be reduced to one set (of two lines). [0] c [1] d [2] c [3] d ...reduces to... [0] c [1] d I'm coding in C# but algorithms in any language appreciated. A: EDIT: made some changes and new suggestions What about a sliding window... REMOVE LENGTH 2: (no other length has other matches) //the lower case letters are the matches ABCBAbabaBBCbcbcbVbvBCbcbcAB __ABCBABABABBCBCBCBVBVBCBCBCAB REMOVE LENGTH 1 (duplicate characters): //* denote that a string was removed to prevent continual contraction //of the string, unless this is what you want. ABCBA*BbC*V*BC*AB _ABCBA*BBC*V*BC*AB RESULT: ABCBA*B*C*V*BC*AB == ABCBABCVBCAB This is of course starting with length=2, increase it to L/2 and iterate down. I'm also thinking of two other approaches: digraph - Set up a stateful digraph with the data and iterate over it with the string; if a cycle is found you'll have a duplication. I'm not sure how easy it is to check for these cycles... possibly some dynamic programming, so it could be equivalent to method 2 below. I'm going to have to think about this one a while longer. distance matrix - using a Levenshtein distance matrix you might be able to detect duplication from diagonal movement (off the diagonal) with cost 0. This could indicate duplication of data. I will have to think about this more. A: Here's a C# app I wrote that solves this problem. takes aabccacdcd outputs abcacd Probably looks pretty messy, took me a bit to get my head around the dynamic pattern length bit. class Program { private static List<string> values; private const int MAX_PATTERN_LENGTH = 4; static void Main(string[] args) { values = new List<string>(); values.AddRange(new string[] { "a", "a", "b", "c", "c", "a", "c", "d", "c", "d" }); for (int i = MAX_PATTERN_LENGTH; i > 0; i--) { RemoveDuplicatesOfLength(i); } foreach (string s in values) { Console.WriteLine(s); } } private static void RemoveDuplicatesOfLength(int dupeLength) { for (int i = 0; i < values.Count; i++) { if (i + dupeLength > values.Count) break; if (i + dupeLength + dupeLength > values.Count) break; var patternA = values.GetRange(i, dupeLength); var patternB = values.GetRange(i + dupeLength, dupeLength); bool isPattern = ComparePatterns(patternA, patternB); if (isPattern) { values.RemoveRange(i, dupeLength); } } } private static bool ComparePatterns(List<string> pattern, List<string> candidate) { for (int i = 0; i < pattern.Count; i++) { if (pattern[i] != candidate[i]) return false; } return true; } } fixed the initial values to match the question's values A: I would dump them all into your favorite Set implementation. EDIT: Now that I understand the question, your original solution looks like the best way to do this. Just loop through the array once, keeping an array of flags to mark which elements to keep, plus a counter to keep track of the size of the new array. Then loop through again to copy all the keepers to a new array. A: I agree that if you can just dump the strings into a Set, then that might be the easiest solution. If you don't have access to a Set implementation for some reason, I would just sort the strings alphabetically and then go through once and remove the duplicates. How to sort them and remove duplicates from the list will depend on what language and environment you are running your code. EDIT: Oh, ick.... I see based on your clarification that you expect that patterns might occur even over separate lines. My approach won't solve your problem. Sorry. Here is a question for you. If I had the following file. a a b c c a a b c c Would you expect it to simplify to a b c
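For the simplest part of the problem, collapsing runs of a single repeated element, a minimal C# sketch follows; repeated multi-line patterns such as "c d c d" still need a sliding-window pass like the accepted answer's.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the length-1 case only: collapse runs of a repeated element.
// Multi-element patterns ("c d c d" -> "c d") still need the window pass.
class RunCollapser
{
    static List<string> CollapseRuns(IList<string> input)
    {
        List<string> result = new List<string>();
        foreach (string item in input)
        {
            // keep an element only if it differs from the last one kept
            if (result.Count == 0 || result[result.Count - 1] != item)
                result.Add(item);
        }
        return result;
    }

    static void Main()
    {
        List<string> collapsed = CollapseRuns(
            new string[] { "a", "a", "b", "c", "c", "a", "c", "d", "c", "d" });
        Console.WriteLine(string.Join(" ", collapsed.ToArray())); // a b c a c d c d
    }
}
```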
Best way to reduce sequences in an array of strings
Please, now that I've re-written the question, and before it suffers from further fast-gun answers or premature closure by eager editors, let me point out that this is not a duplicate of this question. I know how to remove duplicates from an array. This question is about removing sequences from an array, not duplicates in the strict sense. Consider this sequence of elements in an array; [0] a [1] a [2] b [3] c [4] c [5] a [6] c [7] d [8] c [9] d In this example I want to obtain the following... [0] a [1] b [2] c [3] a [4] c [5] d Notice that duplicate elements are retained but that sequences of the same element have been reduced to a single instance of that element. Further, notice that when two lines repeat they should be reduced to one set (of two lines). [0] c [1] d [2] c [3] d ...reduces to... [0] c [1] d I'm coding in C# but algorithms in any language appreciated.
[ "EDIT: made some changes and new suggestions\nWhat about a sliding window...\nREMOVE LENGTH 2: (no other length has other matches)\n//the lower case letters are the matches\nABCBAbabaBBCbcbcbVbvBCbcbcAB \n__ABCBABABABBCBCBCBVBVBCBCBCAB\n\nREMOVE LENGTH 1 (duplicate characters):\n//* denote that a string was removed to prevent continual contraction\n//of the string, unless this is what you want.\nABCBA*BbC*V*BC*AB\n_ABCBA*BBC*V*BC*AB\n\nRESULT:\nABCBA*B*C*V*BC*AB == ABCBABCVBCAB\n\nThis is of course starting with length=2, increase it to L/2 and iterate down. \nI'm also thinking of two other approaches:\n\ndigraph - Set a stateful digraph with the data and iterate over it with the string, if a cycle is found you'll have a duplication. I'm not sure how easy it is check check for these cycles... possibly some dynamic programming, so it could be equivlent to method 2 below. I'm going to have to think about this one as well longer.\ndistance matrix - using a levenstein distance matrix you might be able to detect duplication from diagonal movement (off the diagonal) with cost 0. This could indicate duplication of data. I will have to think about this more.\n\n", "Here's C# app i wrote that solves this problem.\ntakes\naabccacdcd \noutputs\nabcacd \nProbably looks pretty messy, took me a bit to get my head around the dynamic pattern length bit.\nclass Program\n{\n private static List<string> values;\n private const int MAX_PATTERN_LENGTH = 4;\n\n static void Main(string[] args)\n {\n values = new List<string>();\n values.AddRange(new string[] { \"a\", \"b\", \"c\", \"c\", \"a\", \"c\", \"d\", \"c\", \"d\" });\n\n\n for (int i = MAX_PATTERN_LENGTH; i > 0; i--)\n {\n RemoveDuplicatesOfLength(i);\n }\n\n foreach (string s in values)\n {\n Console.WriteLine(s);\n }\n }\n\n private static void RemoveDuplicatesOfLength(int dupeLength)\n {\n for (int i = 0; i < values.Count; i++)\n {\n if (i + dupeLength > values.Count)\n break;\n\n if (i + dupeLength + dupeLength > values.Count)\n break;\n\n var patternA = values.GetRange(i, dupeLength);\n var patternB = values.GetRange(i + dupeLength, dupeLength);\n\n bool isPattern = ComparePatterns(patternA, patternB);\n\n if (isPattern)\n {\n values.RemoveRange(i, dupeLength);\n }\n }\n }\n\n private static bool ComparePatterns(List<string> pattern, List<string> candidate)\n {\n for (int i = 0; i < pattern.Count; i++)\n {\n if (pattern[i] != candidate[i])\n return false;\n }\n\n return true;\n }\n}\n\nfixed the initial values to match the questions values\n", "I would dump them all into your favorite Set implementation.\nEDIT: Now that I understand the question, your original solution looks like the best way to do this. Just loop through the array once, keeping an array of flags to mark which elements to keep, plus a counter to keep track to the size of the new array. Then loop through again to copy all the keepers to a new array.\n", "I agree that if you can just dump the strings into a Set, then that might be the easiest solution. \nIf you don't have access to a Set implementation for some reason, I would just sort the strings alphabetically and then go through once and remove the duplicates. How to sort them and remove duplicates from the list will depend on what language and environment you are running your code. \nEDIT: Oh, ick.... I see based on your clarification that you expect that patterns might occur even over separate lines. My approach won't solve your problem. Sorry. Here is a question for you. 
If I had the following file.\na\na\nb\nc\nc\na\na\nb\nc\nc\nWould you expect it to simplify to \na\nb\nc\n" ]
[ 3, 2, 1, 0 ]
[]
[]
[ ".net", "algorithm", "c#" ]
stackoverflow_0000057010_.net_algorithm_c#.txt
Q: Where should interfaces "physically live"? I like the idea of having Interfaces and Implementation separate. But how separate? Are the Interface definitions in a separate .Net assembly? Do you have a single project that defines all Interfaces for a solution? Otherwise are there issues with circular dependencies of Interfaces? A: Put your domain objects and interfaces in a separate "domain" assembly. This assembly should never reference anything but the core .net assemblies. This way you get a clean separation from your domain/service model and your implementation. Edit: http://jeffreypalermo.com/blog/the-onion-architecture-part-1/ A: I wouldn't put the interfaces into a separate assembly just for the sake of it. However, if the interfaces take part in any form of IPC or extensibility architecture then it often makes sense to give them their own assembly. If you have projects that need to reference each other, then yes, you will need a separate assembly for the interfaces, but you should also carefully examine architecture to see if there is another way of resolving the circular dependency. A: I prefer keeping the most common or simple implementations of the interface in a sub-folder (and namespace) following the name of the interface. \project\ \project\IAppender.cs \project\Appender\ \project\Appender\FileAppender.cs \project\Appender\ConsoleAppender.cs If I extend this class outside the project, in a special project, I repeat the folders/namespace similarly. \specialproject\ \specialproject\Appender\ \specialproject\Appender\MemoryAppender.cs A: In the project I'm working on right now, the interfaces and related base classes go into assemblies that are logically divided among functions. The implementations of these providers and classes go inside a core assembly. The idea being that people who use our API can reference one or more of the API DLLs in a clear and logical manner. Smaller applications don't need this kind of separation. But, no matter where I keep the interfaces, I would keep them in the same namespace as any base classes.
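A small C# sketch of the "separate domain assembly" idea from the first answer, borrowing the IAppender name from the folder layout above; the project and namespace names are invented for illustration.

```csharp
// Two pieces that would normally live in two projects; names are invented.

// --- MyApp.Domain.dll: interfaces only, references nothing but core .NET ---
namespace MyApp.Domain
{
    public interface IAppender
    {
        void Write(string message);
    }
}

// --- MyApp.Core.dll: implementations, references MyApp.Domain ---
namespace MyApp.Core.Appenders
{
    using MyApp.Domain;

    public class ConsoleAppender : IAppender
    {
        public void Write(string message)
        {
            System.Console.WriteLine(message);
        }
    }
}
```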
Where should interfaces "physically live"?
I like the idea of having Interfaces and Implementation separate. But how separate? Are the Interface definitions in a separate .Net assembly? Do you have a single project that defines all Interfaces for a solution? Otherwise are there issues with circular dependencies of Interfaces?
[ "Put your domain objects and interfaces in a seperate \"domain\" assembly.\nThis assembly should never reference anything but the core .net assemblies.\nThis way you get a clean seperation from your domain/service model and your implementation.\nEdit:\nhttp://jeffreypalermo.com/blog/the-onion-architecture-part-1/\n", "I wouldn't put the interfaces into a separate assembly just for the sake of it. However, if the interfaces take part in any form of IPC or extensibility architecture then it often makes sense to give them their own assembly.\nIf you have projects that need to reference each other, then yes, you will need a separate assembly for the interfaces, but you should also carefully examine architecture to see if there is another way of resolving the circular dependency. \n", "I prefer keeping the most common or simple implementations of the interface in a sub-folder (and namespace) following the name of the interface.\n\n\\project\\\n\\project\\IAppender.cs\n\\project\\Appender\\\n\\project\\Appender\\FileAppender.cs\n\\project\\Appender\\ConsoleAppender.cs\n\nIf I extend this class outside the project. In a special project, repeat the folders/namespace similarly.\n\n\\specialproject\\\n\\specialproject\\Appender\\\n\\specialproject\\Appender\\MemoryAppender.cs\n\n", "In the project I'm working on right now, the interfaces and related base classes go into assemblies that are logically divided among functions. The implementations of these providers and classes go inside a core assembly. The idea being that people who use our API can reference more or one of the API dlls in a clear and logical manner.\nSmaller applications don't need this kind of separation. But, no matter where I keep the interfaces, I would keep them in the same namespace as any base classes.\n" ]
[ 7, 3, 1, 0 ]
[]
[]
[ ".net", "architecture" ]
stackoverflow_0000057386_.net_architecture.txt
Q: <asp:RegularExpressionValidator and RegexOptions.IgnorePatternWhitespace Is there an easy way of using the RegularExpressionValidator control while ignoring white space? I can use a custom validator control with Regex and IgnorePatternWhitespace, but it would be good to just have an option in the RegularExpressionValidator control. A: Surround your regex with (?x: ) so "a b c" becomes "(?x:a b c)" A: Remember that the regular expression validator wants to validate with JavaScript, too, so you want to make sure your expression will work with both the .Net and JavaScript regex engines. That means that using .IgnorePatternWhitespace isn't the best idea.
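A quick C# check of the inline-flag trick from the first answer; the sample pattern is invented. Note the second answer's caveat: client-side JavaScript regex has no x-mode, so this only helps where server-side validation suffices.

```csharp
using System;
using System.Text.RegularExpressions;

// (?x: ) switches on IgnorePatternWhitespace for the group, so the same
// pattern string can be pasted into a ValidationExpression attribute.
class InlineFlagDemo
{
    static void Main()
    {
        string spaced = @"(?x: \d{3} - \d{4} )"; // whitespace in the pattern is ignored
        Console.WriteLine(Regex.IsMatch("555-1234", spaced)); // True
    }
}
```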
<asp:RegularExpressionValidator and RegexOptions.IgnorePatternWhitespace
Is there an easy way of using the RegularExpressionValidator control while ignoring white space? I can use a custom validator control with Regex and IgnorePatternWhitespace, but it would be good to just have an option in the RegularExpressionValidator control.
[ "Surround your regex with\n\n(?x: )\n\nso \"a b c\" becomes \"(?x:a b c)\n", "Remember that the regular expression validator want to validate with javascript, too, so you want to make sure your expression will work with both the .Net and javascript regex engines. That means that using .IgnorePatterWhitespace isn't the best idea.\n" ]
[ 2, 0 ]
[]
[]
[ "asp.net", "regex" ]
stackoverflow_0000057322_asp.net_regex.txt
Q: Ajax and a restricted uri I would like to make an ajax call to a different server (same domain and box, just a different port.) e.g. My page is http://localhost/index.html I would like to make an ajax GET request to: http://localhost:7076/?word=foo I am getting this error: Access to restricted URI denied (NS_ERROR_DOM_BAD_URI) I know that you cannot make an ajax request to a different domain, but it seems this also includes different ports? Are there any workarounds? A: Have a certain page on your port 80 server proxy requests to the other port. For example: http://localhost/proxy?port=7076&url=%2f%3fword%3dfoo Note the url encoding on the last query string argument value. A: You could use JSONP. This is where you specify a callback with the request, the response from your ajax request gets wrapped with the callback function name. Rather than using XmlHttpRequest you insert a tag into the HTML document with the URL. Then when the response is retrieved the callback function is called, passing the data as a parameter. Check this blog post out for an example A: This is a browser restriction. All JavaScript calls must be to the same server and port as the home of the script. This will require something server-side to get around, i.e. have the process at localhost forward the request to localhost:7076. A: It sucks, but it's necessary... Basically what you're going to need to do is proxy your AJAX request through a local proxy - some server side script / page / whatever on the same domain you're on - receive the call and forward it on to the other resource server-side. There might be some IFRAME tricks you could do but I don't think they work very well...could be wrong though, been awhile.
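A rough C# sketch of the proxy idea from the first answer, written as an ASP.NET handler; the path, query parameter name, and fixed target port are assumptions, and the handler would still need to be registered under httpHandlers in web.config.

```csharp
using System.Net;
using System.Web;

// Same-origin workaround: the browser calls this handler on port 80 and the
// server forwards the query to port 7076 on the same box.
public class ProxyHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string word = context.Request.QueryString["word"];

        using (WebClient client = new WebClient())
        {
            // server-to-server request; the same-origin policy does not apply here
            string body = client.DownloadString(
                "http://localhost:7076/?word=" + HttpUtility.UrlEncode(word));

            context.Response.ContentType = "text/plain";
            context.Response.Write(body);
        }
    }

    public bool IsReusable { get { return true; } }
}
```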
Ajax and a restricted uri
I would like to make an ajax call to a different server (same domain and box, just a different port.) e.g. My page is http://localhost/index.html I would like to make an ajax GET request to: http://localhost:7076/?word=foo I am getting this error: Access to restricted URI denied (NS_ERROR_DOM_BAD_URI) I know that you cannot make an ajax request to a different domain, but it seems this also includes different ports? Are there any workarounds?
[ "Have a certain page on your port 80 server proxy requests to the other port. For example:\nhttp://localhost/proxy?port=7076&url=%2f%3fword%3dfoo\n\nNote the url encoding on the last query string argument value.\n", "You could use JSONP. This is where you specify a callback with the request, the response from your ajax request gets wrapped with the callback function name. Rather than using XmlHttpRequest you insert a tag into the HTML document with the URL. Then when the response is retrieved the callback function is called, passing the data as a parameter.\nCheck this blog post out for an example\n", "This is a browser restriction. All javascript calls must be to the same server and port of the home of the script. This will require something server-side to get around. I.E. have the process at localhost forward the request to localhost:7076.\n", "It sucks, but it's necessary... Basically what you're going to need to do is proxy your AJAX request through a local proxy - some server side script / page / whatever on the same domain you're on - receive the call and forward it on to the other resource server-side. There might be some IFRAME tricks you could do but I don't think they work very well...could be wrong though, been awhile.\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "ajax", "xmlhttprequest" ]
stackoverflow_0000057421_ajax_xmlhttprequest.txt
Q: What are some gotchas when retargeting .net 2.0 to 3.5? I am currently working on a project that is moving from .NET 2.0 to 3.5 across the board. I am well aware that 3.5 is basically a set of added functionality (libraries, if you will) on top of what 2.0 offers. Are there any gotchas that I might hit by simply re-targeting the compiler to 3.5? A: This isn't a gotcha, it's more of a heads up. .NET v3.0 and v3.5 are not new CLRs but simply an added set of assemblies, compilers, resources, etc. Both .NET v3.0 AND v3.5 use the v2.0 CLR. Because of this you won't be able to, say, set an IIS App Pool to use a v3.5 CLR... because it doesn't exist. Discussed in a little more detail here: http://www.hanselman.com/blog/HowToSetAnIISApplicationOrAppPoolToUseASPNET35RatherThan20.aspx A: The only issue I've seen is with name conflicts. You'll need to disambiguate any class or method names in your code that share names with ones added to the .net framework between .net 2.0 and 3.5 A: Nope. 3.5 is completely compatible with 2.0, not the other way around of course. A: I recently migrated a small project from 2.0 to 3.5 and didn't encounter any specific problems, as the framework versions are backwards compatible. That said, there are a good number of optimisations and improvements that can be made by taking advantage of available features in the later framework versions. You may get some deprecated feature warnings, but nothing that will stop your project compiling. A: Other than that some users of the application will have to download the new framework run-time, none that I know of.
What are some gotchas when retargeting .net 2.0 to 3.5?
I am currently working on a project that is moving from .NET 2.0 to 3.5 across the board. I am well aware that 3.5 is basically a set of added functionality (libraries, if you will) on top of what 2.0 offers. Are there any gotchas that I might hit by simply re-targeting the compiler to 3.5?
[ "This isn't a gotcha, it's more of a heads up. .NET v3.0 and v3.5 are not new CLRs but simply an added set up assemblies, compilers, resources etc...\nBoth .NET v3.0 AND v3.5 use the v2.0 CLR. Because of this you won't be able to say set an IIS App Pool to use a v3.5 CLR...cause it doesn't exist.\nDiscussed in a little more detail here:\nhttp://www.hanselman.com/blog/HowToSetAnIISApplicationOrAppPoolToUseASPNET35RatherThan20.aspx\n", "The only issue I've seen is with name conflicts. You'll need to dis-ambiguate any class or method names in your code that share names with ones added to the .net framework between .net 2.0 and 3.5\n", "Nope\n3.5 is completely compatible with 2.0, not the other way around of course\n", "I recently migrated a small project from 2.0 to 3.5 and didn't encounter any specific problems, as the framework versions are backwards compatible. That said, there are a good number of optimisations and improvements that can be made by taking advantage of available features in the later framework versions. You may get some deprecated feature warnings, but nothing that will stop your project compiling.\n", "Other than that some users of the application will have to download the new framework run-time, none that I know of.\n" ]
[ 6, 2, 1, 1, 1 ]
[]
[]
[ ".net" ]
stackoverflow_0000057458_.net.txt
Q: Is it possible for SelectNodes on an XmlDocument to return null? Is it possible for SelectNodes() called on an XmlDocument to return null? My predicament is that I am trying to reach 100% unit test code coverage; ReSharper tells me that I need to guard against a null return from the SelectNodes() method, but I can see no way that an XmlDocument can return null (and therefore, no way to test my guard clause and reach 100% unit test coverage!) A: Looking at Reflector, the SelectNodes() method on XmlDocument's base class, XmlNode, can return null if its attempt to create a navigator returns null. CreateNavigator() is pretty complex and will indeed return null under a few circumstances. Those circumstances appear to be around a malformed XML document - so there's your test case for failure of SelectNodes(). A: If you are calling SelectNodes on the XmlDocument itself and it really is an XmlDocument and not a derived class then SelectNodes won't return null. If you create a descendant class and override the CreateNavigator(XmlNode) method then SelectNodes could return null. Similarly, if you call SelectNodes on an EntityReference, DocumentType or XmlDeclaration node, you'll get null as well. In short, for 100% coverage on an XmlDocument or XmlNode you didn't just create, you have to test for null. A: Is it necessary to reach 100% code coverage? Indeed, is it even possible under normal (i.e. controllable, testable) circumstances? We often find that using "syntactic sugar" constructions like the using {} block, there are "hidden" code paths created (most likely finally {} or catch {} blocks) that can't be exercised unless some environmental condition (like a broken socket or broken disk) gets in the way.
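A small C# illustration of the guard ReSharper asks for; with a plain, well-formed XmlDocument the null branch is effectively unreachable, but it covers the derived-class and odd-node cases from the second answer.

```csharp
using System;
using System.Xml;

// The guard ReSharper wants; the null branch is hard to reach on a plain
// XmlDocument but covers derived classes and odd node types.
class SelectNodesGuard
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<root><item/></root>");

        XmlNodeList nodes = doc.SelectNodes("/root/item");
        if (nodes == null)
        {
            Console.WriteLine("no navigator available"); // defensive branch
            return;
        }
        Console.WriteLine(nodes.Count); // 1
    }
}
```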
Is it possible for SelectNodes on an XmlDocument to return null?
Is it possible for SelectNodes() called on an XmlDocument to return null? My predicament is that I am trying to reach 100% unit test code coverage; ReSharper tells me that I need to guard against a null return from the SelectNodes() method, but I can see no way that an XmlDocument can return null (and therefore, no way to test my guard clause and reach 100% unit test coverage!)
[ "Looking at Reflector, the SelectNodes() method on XmlDocument's base class, XmlNode, can return null if its attempt to create a navigator returns null. CreateNavigator() is pretty complex and will indeed return null under a few circumstances. Those circumstances appear to be around a malformed XML document - so there's your test case for failure of SelectNodes().\n", "If you are calling SelectNodes on the XmlDocument itself and it really is an XmlDocument and not a derived class than SelectNodes won't return null.\nIf you create a descendant class and override the CreateNavigator(XmlNode) method then SelectNodes could return null.\nSimilarly, if you call SelectNodes on an EntityReference, DocumentType or XmlDeclaration node, you'll get null as well\nIn short, for 100% coverage on an XmlDocument or XmlNode you didn't just create, you have to test for null.\n", "Is it necessary to reach 100% code coverage? Indeed, is it even possible under normal (i.e. controllable, testable) circumstances? \nWe often find that using \"syntactic sugar\" constructions like the using {} block, there are \"hidden\" code paths created (most likely finally {} or catch {} blocks) that can't be exercised unless some environmental condition (like a broken socket or broken disk) gets in the way.\n" ]
[ 10, 4, 2 ]
[]
[]
[ ".net", "resharper", "unit_testing", "xml" ]
stackoverflow_0000057518_.net_resharper_unit_testing_xml.txt
Q: Is it possible to programmatically push files to a wireless SD card? Is it possible to programmatically push files to a wireless SD card - like a www.eye.fi card? I use a Mac and thought I could do this using some AppleScript - but have not found a way... Derek A: The eye-fi card relies on image files being written to a specific directory on the card before they'll transfer them. Beyond that it works exactly like a memory card. Write a file to it as if you're writing a regular memory card; as long as it's a jpg image file of reasonable size, in an appropriate directory (something under \DCIM\, probably), it should transfer. If you're having trouble, double check that it works with your camera, and find out where your camera puts the images on the card, and duplicate that. You might even try giving them names similar to the ones your camera produces. -Adam A: It looks like you can treat it just like an external hard drive (plug the memory card in and figure out where the mount point is). A: I think he wants to send files to it while it's in another device, not plug it in and use it to transmit files like an antenna directly connected to the machine.
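To make the "just write a jpg under DCIM" advice concrete, here is a generic sketch (in C#, for consistency with the rest of this collection; the same two steps translate directly to AppleScript or shell). Both paths are assumptions for illustration; on a Mac the card would mount under /Volumes:

using System;
using System.IO;

static class EyeFiCopyExample
{
    static void Main()
    {
        string source = "/Users/derek/Pictures/photo1.jpg";       // assumed source file
        string dest = "/Volumes/EYE-FI/DCIM/100MEDIA/photo1.jpg"; // assumed mount point

        Directory.CreateDirectory(Path.GetDirectoryName(dest));
        File.Copy(source, dest, true); // once written, the card should upload the jpg itself
        Console.WriteLine("Copied to {0}", dest);
    }
}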
Is it possible to programmatically push files to a wireless SD card?
Is it possible to programmatically push files to a wireless SD card - like a www.eye.fi card? I use a Mac and thought I could do this using some AppleScript - but have not found a way... Derek
[ "The eye-fi card relies on image files being written to a specific directory in the card before they'll transfer them. Beyond that it works exactly like a memory card.\nWrite a file to it as if you're writing a regular memory card, and as long as it's a jpg image file of reasonable size, and in an appropriate directory (something under \\DCIM\\ probably) and they should transfer.\nIf you're having trouble, double check that it works with your camera, and find out where your camera puts the images on the card, and duplicate that. You might even try naming them similar names to the types of images your camera produces.\n-Adam\n", "It looks like you can treat it just like an external hard drive (plug the memory card in and figure out where the mount point is).\n", "I think he wants to send files to it while its in another device, not plug it in and use it to transmit files like an antena directly connected to the machine.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "applescript", "wifi", "wireless" ]
stackoverflow_0000056951_applescript_wifi_wireless.txt
Q: Secure-Wave and click once applications I have users who are using "secure-wave" security. Evidently it is some sort of white-listing application monitor. With my click-once application, the names of the folders that are used are runtime generated, so the administrators are not able to properly whitelist the application and its files. Any suggestions? A: There's no way to override the ClickOnce installation location. As you said, it's runtime generated, and resides within the user ClickOnce App Cache within the individual user's directory. Have you considered having the admins whitelist this specific folder? I guess the only other way to handle it would be to switch to Windows Installer and implement your update code yourself, which is obviously less than ideal. Whitelisting the ClickOnce cache would be the easiest way, but obviously bear in mind the security considerations of doing this.
Secure-Wave and click once applications
I have users who are using "secure-wave" security. Evidently it is some sort of white-listing application monitor. With my click-once application, the names of the folders that are used are runtime generated, so the administrators are not able to properly whitelist the application and its files. Any suggestions?
[ "There's no way to override the ClickOnce installation location. As you said, it's runtime generated, and resides within the user ClickOnce App Cache within the individual users directory. Have you considered having having the admins whitelisting this specific folder?\nI guess the only other way to handle it would be to switch to Windows Installer and implement your update code yourself, which is obviously less than idea. Whitelisting the Click Once cache would be the easiest way, but obviously bare in mind the security considerations of doing this.\n" ]
[ 1 ]
[]
[]
[ "clickonce" ]
stackoverflow_0000057576_clickonce.txt
Q: How do I merge XML from distinct DomDocuments What is the easiest way to merge XML from two distinct DOM Documents? Is there a way other than using the Canonical DataReader approach and then messing with the outputted DOM? What I basically want is to AppendChild to XmlElements without getting: The node to be inserted is from a different document context. Here is C# code that I want to work, that obviously won't (what I am doing is merging two documents which have a bunch of nodes that I am interested in parts of): XmlDocument doc1 = new XmlDocument(); doc1.LoadXml("<a><items><item1/><item2/><item3/></items></a>"); XmlDocument doc2 = new XmlDocument(); doc2.LoadXml("<b><items><item4/><item5/><item6/></items></b>"); XmlNode doc2Node = doc2.SelectSingleNode("/b/items"); XmlNodeList doc1Nodes = doc1.SelectNodes("/a/items/*"); foreach (XmlNode doc1Node in doc1Nodes) { doc2Node.AppendChild(doc1Node); } A: You can use the XmlDocument.ImportNode method to copy a node from one XmlDocument to another. A: You might be interested in http://msdn.microsoft.com/en-us/library/system.xml.xmldocument.importnode.aspx. But take a close look at the "The following table describes the specific behavior for each XmlNodeType."-part of that document.
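Here is a sketch of how the ImportNode suggestion from the first answer slots into the question's own loop (same doc1/doc2 setup as above; the boolean argument asks for a deep copy):

XmlNode doc2Node = doc2.SelectSingleNode("/b/items");
XmlNodeList doc1Nodes = doc1.SelectNodes("/a/items/*");
foreach (XmlNode doc1Node in doc1Nodes)
{
    // ImportNode makes a copy that belongs to doc2's document context,
    // which is exactly what AppendChild requires; 'true' = deep copy
    // (children and attributes come along).
    XmlNode imported = doc2.ImportNode(doc1Node, true);
    doc2Node.AppendChild(imported);
}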
How do I merge XML from distinct DomDocuments
What is the easiest way to merge XML from two distinct DOM Documents? Is there a way other than using the Canonical DataReader approach and then messing with the outputted DOM? What I basically want is to AppendChild to XmlElements without getting: The node to be inserted is from a different document context. Here is C# code that I want to work, that obviously won't (what I am doing is merging two documents which have a bunch of nodes that I am interested in parts of): XmlDocument doc1 = new XmlDocument(); doc1.LoadXml("<a><items><item1/><item2/><item3/></items></a>"); XmlDocument doc2 = new XmlDocument(); doc2.LoadXml("<b><items><item4/><item5/><item6/></items></b>"); XmlNode doc2Node = doc2.SelectSingleNode("/b/items"); XmlNodeList doc1Nodes = doc1.SelectNodes("/a/items/*"); foreach (XmlNode doc1Node in doc1Nodes) { doc2Node.AppendChild(doc1Node); }
[ "You can use the XmlDocument.ImportNode method to copy a node from a XmlDocument to another.\n", "You might be interested in http://msdn.microsoft.com/en-us/library/system.xml.xmldocument.importnode.aspx. But take a close look at the \"The following table describes the specific behavior for each XmlNodeType.\"-part of that document.\n" ]
[ 5, 1 ]
[]
[]
[ ".net", "xml" ]
stackoverflow_0000057577_.net_xml.txt
Q: IMAP forwarder I'm wondering what is the quickest and most reliable way to forward mail from an IMAP account. My university does not allow our student-mailbox to forward to a private e-mail account (everybody uses either Gmail or Hotmail here). It's a political thing, not technical. We do have IMAP access to the mailbox. I would like to have a service which downloads the mail through IMAP, and forwards. And it would be nice to scale it, so thousands of students can use it. Eventually, I want to build a public signup page, and have it processed automatically from there. So far, I've made a decent PHP script which connects, downloads headers and body parts, and ties it all together. I have two problems with that. 1) I'm downloading all kinds of parts, and sticking them back together. I hope that every exotic attached file, weird encoded piece of text and every type of header survives this. I'm not even sure I have the complete header. 2) The to: e-mail address becomes the private e-mail address, not the original student e-mail address. I think this is lame, and inconvenient in searching and archiving. Is the PHP script the way to go? Is there a trick using a particular Linux mail service/daemon? Does IMAP have a 'forward' command I'm missing? A: You might want to look at Fetchmail, as this sounds like the problem it was designed to solve. Fetchmail retrieves mail from POP/IMAP/etc servers and forwards it to SMTP/LMTP/etc servers. Fetchmail has the advantage of a few years and lots of users ironing out problems with various IMAP servers. A: Fetchmail seems like the way to go. I can use PHP to generate/edit a fetchmail command file, so that will cover the public sign-up. I'm looking for a package/script that already does this. The Gmail pull only works with POP3, not with IMAP.
IMAP forwarder
I'm wondering what is the quickest and most reliable way to forward mail from an IMAP account. My university does not allow our student-mailbox to forward to a private e-mail account (everybody uses either Gmail or Hotmail here). It's a political thing, not technical. We do have IMAP access to the mailbox. I would like to have a service which downloads the mail through IMAP, and forwards. And it would be nice to scale it, so thousands of students can use it. Eventually, I want to build a public signup page, and have it processed automatically from there. So far, I've made a decent PHP script which connects, downloads headers and body parts, and ties it all together. I have two problems with that. 1) I'm downloading all kinds of parts, and sticking them back together. I hope that every exotic attached file, weird encoded piece of text and every type of header survives this. I'm not even sure I have the complete header. 2) The to: e-mail address becomes the private e-mail address, not the original student e-mail address. I think this is lame, and inconvenient in searching and archiving. Is the PHP script the way to go? Is there a trick using a particular Linux mail service/daemon? Does IMAP have a 'forward' command I'm missing?
[ "You might want to look at Fetchmail, as this sounds like the problem it was designed to solve. Fetchmail retrieves mail from POP/IMAP/etc servers and forwards it to SMTP/LMTP/etc servers. Fetchmail has the advantage of a few years and lots of users ironing out problems with various IMAP servers.\n", "Fetchmail seems like the way to go. I can use PHP to generate/edit a fetchmail command file, so that will cover the public sign-up. I'm looking for a package/script who allready does this.\nThe Gmail pull only works with POP3, not with IMAP.\n" ]
[ 4, 0 ]
[ "If using Gmail you can configure GMAIL to pick up mail from other accounts.\n" ]
[ -1 ]
[ "email", "forwarding", "imap", "web_applications", "web_services" ]
stackoverflow_0000057547_email_forwarding_imap_web_applications_web_services.txt
Q: How do you create a process-wide singleton object? I read that the unit of granularity for static fields in .Net is per AppDomain, not per process. Is it possible to create a process-wide singleton object? A: You must use marshalled calls to communicate information across AppDomains. So you need to create the state object in your parent AppDomain and then pass it to any children that want to use it. If you didn't have to do this, you'd be sharing memory across AppDomains, which defeats the purpose. Within each AppDomain you could have a singleton that holds a reference to the (marshalled) reference to the actual singleton in the primary domain. So your code would still look "singleton-y", but there would be some hidden wiring behind it.
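A minimal sketch of the wiring described above, assuming the classic MarshalByRefObject approach (the class and key names are illustrative):

using System;

public class SharedState : MarshalByRefObject
{
    public int Counter { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // The one real instance lives in the default (parent) AppDomain.
        SharedState state = new SharedState();

        AppDomain child = AppDomain.CreateDomain("child");
        child.SetData("shared", state); // the child receives a transparent proxy

        // Code running inside the child would then do:
        // var state = (SharedState)AppDomain.CurrentDomain.GetData("shared");
        // Every call on it marshals back to the single instance in the parent,
        // so all domains in the process observe the same state.
    }
}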
How do you create a process-wide singleton object?
I read that the unit of granularity for static fields in .Net is per AppDomain, not per process. Is it possible to create a process-wide singleton object?
[ "You must use marshalled calls to communicate information across AppDomains. So you need to create the state object in your parent AppDomain and then pass it to any children that want to use it. If you didn't have to do this, you'd be sharing memory across AppDomains, which defeats the purpose.\nWithin each AppDomain you could have a singleton that holds a reference to the (marshalled) reference to the actual singleton in the primary domain. So your code would still look \"singleton-y\", but there would be some hidden wiring behind it.\n" ]
[ 2 ]
[]
[]
[ ".net", "singleton" ]
stackoverflow_0000057677_.net_singleton.txt
Q: parametrization in VBScript/ASP Classic and ADO I'm a bit confused here. Microsoft as far as I can tell claims that parametrization is the best way to protect your database from SQL injection attacks. But I find two conflicting sources of information here: This page says to use the ADO command object. But this page says that the command object isn't safe for scripting. I seem to recall reading somewhere that the command object shouldn't be used in VBScript or JScript because of security vulnerabilities, but I can't seem to find that article. Am I missing something here, or do those two articles seem to contradict each other? A: I could be wrong here, but I think this just means that someone could use the Command object to do bad things. I.e. it's not to be trusted if someone else is scripting it. See safe for scripting in this article. Every instance that talks about this phrase online, references it as if you are marking an ActiveX control saying "This control does no I/O or only talks back to the server that it came from" but the Command object doesn't do that. It can be used to do a lot of things which could be unsafe. The "safe" they are talking about and the "safe" to prevent from SQL injection are two different things. The article about using the ADO Command object to parametrize your data is spot on. You should do that. And, Microsoft further confirms this here: http://msdn.microsoft.com/en-us/library/ms676585(v=VS.85).aspx A: I think "safe for scripting" means "safe to be run from a webpage we just retrieved from some Nigerian prince". The command object should be safe to run on the server. At work though, back in the day my colleagues didn't trust it so we had an in-house framework that basically did the same thing.
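For comparison, here is the same parametrization idea sketched in ADO.NET/C# (classic ADO's Command object follows the equivalent CreateParameter/Parameters pattern); the connection string, table, and variable names are placeholders:

using System.Data.SqlClient;

static class ParameterizedQueryExample
{
    static void Run(string connectionString, string emailFromUser)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT * FROM Users WHERE Email = @email", conn))
        {
            // The user-supplied value is bound as data, never concatenated
            // into the SQL text, which is what defeats injection.
            cmd.Parameters.AddWithValue("@email", emailFromUser);

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* consume rows */ }
            }
        }
    }
}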
parametrization in VBScript/ASP Classic and ADO
I'm a bit confused here. Microsoft as far as I can tell claims that parametrization is the best way to protect your database from SQL injection attacks. But I find two conflicting sources of information here: This page says to use the ADO command object. But this page says that the command object isn't safe for scripting. I seem to recall reading somewhere that the command object shouldn't be used in VBScript or JScript because of security vulnerabilities, but I can't seem to find that article. Am I missing something here, or do those two articles seem to contradict each other?
[ "I could be wrong here, but I think this just means that someone could use the Command object to do bad things. I.e. it's not to be trusted if someone else is scripting it.\nSee safe for scripting in this article. Every instance that talks about this phrase online, references it as if you are marking an ActiveX control saying \"This control does no I/O or only talks back to the server that it came from\" but the Command object doesn't do that. It can be used to do a lot of things which could be unsafe.\nThe \"safe\" they are talking about and the \"safe\" to prevent from SQL injection are two different things. The article about using the ADO Command object to parametrize your data is spot on. You should do that.\nAnd, Microsoft further confirms this here:\nhttp://msdn.microsoft.com/en-us/library/ms676585(v=VS.85).aspx\n", "I think \"safe for scripting\" means \"safe to be run from a webpage we just retrieved from some Nigerian prince\". The command object should be safe to run on the server.\nAt work though, back in the day my colleagues didn't trust it so we had an in-house framework that basically did the same thing.\n" ]
[ 4, 1 ]
[]
[]
[ "ado", "asp_classic", "sql_server", "vbscript" ]
stackoverflow_0000057528_ado_asp_classic_sql_server_vbscript.txt
Q: web page cache setexpires Will the code below work if the clock on the server is ahead of the clock on the client? Response.Cache.SetExpires(DateTime.Now.AddSeconds(-1)) EDIT: the reason I ask is on one of our web apps some users are claiming they are seeing the pages ( account numbers, etc ) from a user that previously used that machine. Yet we use the line above and others to 'prevent' this from happening. A: This question covers making sure a webpage is not cached. It seems you have to set several properties to ensure a web page is not cached across all browsers. A: Your problem could be caused by the browser remembering data entered into form fields. You can turn this off like this: <input autocomplete="off"> A: As far as I can tell, the browser will check the expiry date against the local clock (although it will account for the time zone), so the code in your question may not work as you expect if the client's clock is inaccurate. Most commonly, this happens when their time looks right but is set to the wrong timezone, meaning the UTC timestamps are actually out by several hours. You could try setting a much older timestamp, say: 0000 1st Jan 1970 GMT (epoch) I think the code you have should work with the server side caching, but you can more explicitly disable it with: Response.Cache.SetNoServerCaching();
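As the answers above note, one header alone is rarely enough. A sketch of the usual belt-and-braces combination on the ASP.NET side (which headers actually matter varies by browser, so treat this as a starting point rather than a guarantee):

Response.Cache.SetCacheability(HttpCacheability.NoCache); // Cache-Control: no-cache
Response.Cache.SetExpires(DateTime.UtcNow.AddYears(-1));  // an unambiguously past date
Response.Cache.SetNoStore();                              // adds no-store
Response.Cache.SetNoServerCaching();                      // disables server-side output caching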
web page cache setexpires
Will the code below work if the clock on the server is ahead of the clock on the client? Response.Cache.SetExpires(DateTime.Now.AddSeconds(-1)) EDIT: the reason I ask is on one of our web apps some users are claiming they are seeing the pages ( account numbers, etc ) from a user that previously used that machine. Yet we use the line above and others to 'prevent' this from happening.
[ "This question covers making sure a webpage is not cached. It seems you have to set several properties to ensure a web page is not cached across all browsers.\n", "Your problem could be caused by the browser remembering data entered into form fields. You can turn this off like this:\n<input autocomplete=\"off\">\n\n", "As far as I can tell, the browser will check the expiry date against the local clock (although it will account for the time zone), so the code in your question may not work as you expect if the client's clock is inaccurate. Most commonly, this happens when their time looks right but is set to the wrong timezone, meaning the UTC timestamps are actually out by several hours.\nYou could try setting a much older timestamp, say: 0000 1st Jan 1970 GMT (epoch)\nI think the code you have should work with the server side caching, but you can more explicitly disable it with:\nResponse.Cache.SetNoServerCaching();\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "caching" ]
stackoverflow_0000057380_caching.txt
Q: Options for distribution of an offline Ruby on Rails application I am developing an application using Ruby on Rails, mostly as an excuse to learn the language. This is not intended to be a web-based application - and perhaps I have chosen the wrong language, but... My understanding is that, in order to run an instance of this application on somebody else's computer, they would need to install ruby on rails, and a webserver (or webrick, perhaps), as well as my application code. I am just curious if there are any other options for distributing my application as a standalone app, or perhaps just a simple way to package up a web browser and ROR together with my app for a simple, one-step install? A: I have personally never needed to do this. But, I have run across this tutorial http://www.erikveen.dds.nl/distributingrubyapplications/rails.html that I think will be helpful. The tutorial covers how to actually convert a rails app into a standalone exe file. A: Note, Slingshot appears to be a dead project (see comments). I'll leave this answer here for historical purposes and the off-chance that it comes back. Joyent's Slingshot might be a good bet. Joyent Slingshot allows developers to deploy Rails applications like a standard desktop application, which work online and offline (with synchronization), have drag and drop, and interact with all the other desktop applications. With Joyent Slingshot you can: create a hybrid Web/desktop application; synchronize online and offline data; use the same code for online and offline application(s); deploy and update your application easily; drag into and out of the application. Here are some further links to help with your evaluation and/or to help you get started: Introducing Joyent Slingshot; Basic application walkthrough; Slingshot wiki. A: The way most people ship ruby programs, including Rails webapps, as a standalone exe is via rubyscript2exe. They describe how to package a Rails application at http://www.erikveen.dds.nl/distributingrubyapplications/rails.html. Ruby, Rails, and all the associated libraries will be included in the EXE file. As others mentioned, Ruby is not necessarily Rails and if you really want an easy way to write a distributable GUI application in Ruby, Shoes is an excellent place to start looking. A: Gears on Rails maybe? A: You can include Ruby on Rails by freezing it to the version of Rails you want to use in your project. They call this Freezing. The user will not have to install Rails to use your application. You can do this with any library you use in your project. If the project uses a library, just place it under the Vendor folder in your project. Then use a tool similar to what @Josh answered with to package it. You will need a web server to run the project though. There is no way around this. Ruby on Rails is just like ASP.NET in this regard, in that it is a server side framework. The server runs the code and outputs the HTML to the browser by using the Rails framework. Unfortunately, you may have picked the wrong framework to do what you want. Instead of Ruby on Rails, you may want to check out Shoes, which is a framework for developing GUI applications using Ruby. A: You could always consider compiling your Ruby to JVM byte-code (via JRuby) or .NET byte-code (via IronRuby) to distribute to people who have those virtual machines and don't want to install a Ruby runtime. You might want to check out Shoes for building desktop applications in Ruby. Rails really is tuned for building websites. 
A: You do not specifically say whether it is supposed to be a GUI application or not. From the other answers, I would guess so. Therefore, you need to clarify what your goals are. RoR is a specialized framework for web applications. If your goal is to learn RoR, I'd say to get yourself some inexpensive web hosting and make yourself an app. If your goal is to learn Ruby, not necessarily Rails, then Shoes, IronRuby, JRuby, MacRuby and others may be good options to look at.
Options for distribution of an offline Ruby on Rails application
I am developing an application using Ruby on Rails, mostly as an excuse to learn the language. This is not intended to be a web-based application - and perhaps I have chosen the wrong language, but... My understanding is that, in order to run an instance of this application on somebody else's computer, they would need to install ruby on rails, and a webserver (or webrick, perhaps), as well as my application code. I am just curious if there are any other options for distributing my application as a standalone app, or perhaps just a simple way to package up a web browser and ROR together with my app for a simple, one-step install?
[ "I have personally never needed to do this. But, I have ran across this tutorial http://www.erikveen.dds.nl/distributingrubyapplications/rails.html that I think will be helpful. The tutorial covers how to actually convert a rails app into a standalone exe file.\n", "Note, Slingshot appears to be a dead project (see comments). I'll leave this answer here for historical purposes and the off-chance that it comes back\nJoyent's Slingshot might be a good bet.\n\nJoyent Slingshot allows developers to deploy Rails applications like a standard desktop application, which work online and offline (with synchronization), have drag and drop, and interact with all the other desktop applications.\nWith Joyent Slingshot:\n\nCreate a hybrid Web/desktop application\nSynchronize online and offline data\nUse the same code for online and offline application(s)\nDeploy and update your application easily\nDrag into and out of application\n\n\nHere are some further links to help with your evaluation and/or to help you get started:\n\nIntroducing Joyent Slingshot\nBasic application walkthrough\nSlingshot wiki\n\n", "The way most people ship ruby programs, including Rails webapps, as a standalone exe is via rubyscript2exe. They describe how to package a Rails application at http://www.erikveen.dds.nl/distributingrubyapplications/rails.html. Ruby, Rails, and all the associated libraries will be included in the EXE file.\nAs others mentioned, Ruby is not necessarily Rails and if you really want an easy way to write a distributable GUI application in Ruby, Shoes is an excellent place to start looking.\n", "Gears on Rails maybe?\n", "You can include Ruby on Rails by freezing it to the version of Rails you want to use in your project. They call this Freezing. The user will not have to install Rails to use your application. You can do this with any library you use in your project. If the project uses a library, just place it under the Vendor folder in your project. Then use a tool similar to what @Josh answered with to package it.\nYou will need a web server to run the project though. There is no way around this. Ruby on Rails is just like ASP.NET in this regard, in that it is a server side framework. The server runs the code and outputs the HTML to the browser by using the Rails framework.\nUnfortunately, you may have picked the wrong framework to do what you want. Instead of Ruby on Rails, you may want to check out Shoes, which is a framework for developing GUI applications using Ruby.\n", "You could always consider compiling your Ruby to JVM byte-code (via JRuby) or .NET byte-code (via IronRuby) to distribute to people who have those virtual machines and don't want to install a Ruby runtime.\nYou might want to check out Shoes for building desktop applications in Ruby. Rails really is tuned for building websites.\n", "You do not specifically say whether it is supposed to be a GUI application or not. From the other answers, I would guess so. \nTherefore, you need to clarify what your goals are. RoR is a specialized framework for web applications. If your goal is to learn RoR, I'd say to get yourself some inexpensive web hosting and make yourself an app. If your goal is to learn Ruby, not necessarily Rails, then Shoes, IronRuby, JRuby, MacRuby and others may be good options to look at. \n" ]
[ 7, 3, 2, 1, 1, 1, 0 ]
[]
[]
[ "desktop_application", "offline", "ruby", "ruby_on_rails", "software_distribution" ]
stackoverflow_0000055711_desktop_application_offline_ruby_ruby_on_rails_software_distribution.txt
Q: Is there any disadvantage to returning this instead of void? Say that instead of returning void from a method, you returned a reference to the class, even if it didn't make any particular semantic sense. It seems to me like it would give you more options on how the methods are called, allowing you to use it in a fluent-interface-like style, and I can't really think of any disadvantages since you don't have to do anything with the return value (even store it). So suppose you're in a situation where you want to update an object and then return its current value. Instead of saying myObj.Update(); var val = myObj.GetCurrentValue(); you will be able to combine the two lines to say var val = myObj.Update().GetCurrentValue(); EDIT: I asked the below on a whim, in retrospect, I agree that it's likely to be unnecessary and complicating, however my question regarding returning this rather than void stands. On a related note, what do you guys think of having the language include a new bit of syntactic sugar: var val = myObj.Update()<.GetCurrentValue(); This operator would have a low order of precedence so myObj.Update() would execute first and then call GetCurrentValue() on myObj instead of the void return of Update. Essentially I'm imagining an operator that will say "call the method on the right-hand side of the operator on the first valid object on the left". Any thoughts? A: I think as a general policy, it simply doesn't make sense. Method chaining in this manner works with a properly defined interface but it's only appropriate if it makes semantic sense. Your example is a prime one where it's not appropriate, because it makes no semantic sense. Similarly, your syntactic sugar is unnecessary with a properly designed fluent interface. Fluent interfaces or method chaining can work very well, but need to be designed carefully. A: I know in Java they're actually thinking about making this standard behaviour for void methods. If you do that you don't need the extra syntactic sugar. The only downside I can think of is performance. But that's easily measured. I'll get back to you with the results in a few minutes :-) Edit: Returning a reference is a bit slower than returning void .. what a surprise. So that's the only downside. A few more ticks when calling your function. A: The only disadvantage I can see is that it makes the API slightly more confusing. Let's say you have some collection object with a remove() method that would normally return void. Now you want to return a reference to the collection itself. The new signature would look like: public MyCollection remove(Object someElement) Just looking at the signature, it's not clear that you're returning a reference to the same instance. Maybe MyCollection is immutable and you're returning a new instance. In some cases, like here, you would need some external documentation to clarify this. I actually like this idea, and I believe that there was some talk of retrofitting all void methods in Java7 to return a reference to 'this', but it ultimately fell through. A: Isn't this how "fluent interfaces" - like those that JQuery utilizes - are built? One benefit is supposed to be code readability (though the wikipedia entry at http://en.wikipedia.org/wiki/Fluent_interface mentions that some find it NOT readable). Another benefit is in code terseness, you lose the need to set properties in 7 lines of code and then call a method on that object in the 8th line. 
Martin Fowler (who coined the term here - http://martinfowler.com/bliki/FluentInterface.html) says that there is more to fluent interfaces than method chaining, however method chaining is a common technique to use with fluent interfaces. EDIT: I was actually coming back here to edit my answer and add that there is no disadvantage to returning this instead of void in any measurable way, when I saw George's comment pointing out that I did forget to discuss the point of the question. Sorry for the initial "pointless" rambling. A: Returning "self" or "this" is a common pattern, sometimes referred to as "method chaining". As for your proposed syntax sugar, I'm not so sure. I'm not a .NET guy, but it doesn't seem terribly useful to me. A: The NeXTSTEP Objective-C framework used to use this pattern. It was discontinued in that framework once distributed objects (remote procedure calls, basically) were added to the language—a function that returned self had to be a synchronous invocation, since the distributed object system saw the return type and assumed that the caller would need to know the result of the function. A: At first sight it may look good, but for a consistent interface you will need all methods to return a reference to this (which has its own problems). Let's say you have a class with two methods, GetA, which returns this, and GetB, which returns another object: Then you can call obj.GetA().GetB(), but not obj.GetB().GetA(), which at least doesn't seem consistent. With Pascal (and Visual Basic) you can call several methods of the same object. with obj .GetA(); .GetB(); end with; The problem with this feature is that you can easily write code that is harder to understand than it should be. Also, adding a new operator probably makes it even harder.
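For reference, a small sketch of the chaining pattern being debated (a hypothetical class whose mutator returns this instead of void, which is what makes the one-liner in the question possible):

public class Counter
{
    private int value;

    public Counter Update()   // returns this rather than void
    {
        value++;
        return this;
    }

    public int GetCurrentValue()
    {
        return value;
    }
}

// Usage: var val = new Counter().Update().GetCurrentValue();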
Is there any disadvantage to returning this instead of void?
Say that instead of returning void from a method, you returned a reference to the class, even if it didn't make any particular semantic sense. It seems to me like it would give you more options on how the methods are called, allowing you to use it in a fluent-interface-like style, and I can't really think of any disadvantages since you don't have to do anything with the return value (even store it). So suppose you're in a situation where you want to update an object and then return its current value. Instead of saying myObj.Update(); var val = myObj.GetCurrentValue(); you will be able to combine the two lines to say var val = myObj.Update().GetCurrentValue(); EDIT: I asked the below on a whim, in retrospect, I agree that it's likely to be unnecessary and complicating, however my question regarding returning this rather than void stands. On a related note, what do you guys think of having the language include a new bit of syntactic sugar: var val = myObj.Update()<.GetCurrentValue(); This operator would have a low order of precedence so myObj.Update() would execute first and then call GetCurrentValue() on myObj instead of the void return of Update. Essentially I'm imagining an operator that will say "call the method on the right-hand side of the operator on the first valid object on the left". Any thoughts?
[ "I think as a general policy, it simply doesn't make sense. Method chaining in this manner works with a properly defined interface but it's only appropriate if it makes semantic sense. \nYour example is a prime one where it's not appropriate, because it makes no semantic sense.\nSimilarly, your syntactic sugar is unnecessary with a properly designed fluent interface.\nFluent interfaces or method chaining can work very well, but need to be designed carefully.\n", "I know in Java they're actually thinking about making this standard behaviour for void methods. If you do that you don't need the extra syntactic sugar.\nThe only downside I can think of is performance. But that's easilly measured. I'll get back to you with the results in a few minutes :-)\nEdit:\nReturning a reference is a bit slower than returning void .. what a surprise. So that's the only downside. A few more ticks when calling your function.\n", "The only disadvantage I can see is that it makes the API slightly more confusing. Let's say you have some collection object with a remove() method that would normally return void. Now you want to return a reference to the collection itself. The new signature would look like:\npublic MyCollection remove(Object someElement)\n\nJust looking at the signature, it's not clear that you're returning a reference to the same instance. Maybe MyCollection is immutable and you're returning a new instance. In some cases, like here, you would need some external documentation to clarify this.\nI actually like this idea, and I believe that there was some talk in retrofitting all void methods in Java7 to return a reference to 'this', but it ultimately fell through. \n", "Isn't this how \"fluent interfaces\" - like those that JQuery utilizes - are built? One benefit is supposed to be code readability (though the wikipedia entry at http://en.wikipedia.org/wiki/Fluent_interface mentions that some find it NOT readable). Another benefit is in code terseness, you lose the need to set properties in 7 lines of code and then call a method on that object in the 8th line.\nMartin Fowler (who coined the term here - http://martinfowler.com/bliki/FluentInterface.html) says that there is more to fluent interfaces than method chaining, however method chaining is a common technique to use with fluent interfaces.\nEDIT:\nI was actually coming back here to edit my answer and add that there is no disadvantage to returning this instead of void in any measurable way, when I saw George's comment pointing out that I did forget to discuss the point of the question. Sorry for the initial \"pointless\" rambling.\n", "Returning \"self\" or \"this\" is a common pattern, sometimes referred to as \"method chaining\". As for your proposed syntax sugar, I'm not so sure. I'm not a .NET guy, but it doesn't seem terribly useful to me. \n", "The NeXTSTEP Objective-C framework used to use this pattern. 
It was discontinued in that framework once distributed objects (remote procedure calls, basically) were added to the language—a function that returned self had to be a synchronous invocation, since the distributed object system saw the return type and assumed that the caller would need to know the result of the function.\n", "At first sight it may look good, but for a consistent interface you will need that all methods return a reference to this (which has it own problems).\nLet say you have a class with two methods GetA which return this and GetB which return another object:\nThen you can call obj.GetA().GetB(), but not obj.GetB().GetA(), which at least doesn't seems consistent.\nWith Pascal (and Visual Basic) you can call several methods of the same object.\n\nwith obj\n .GetA();\n .GetB();\nend with;\n\n\nThe problem with this feature is that you easily can write code that is harder to understand than it should be. Also adding a new operator probably make it ever harder.\n" ]
[ 12, 10, 3, 3, 2, 2, 0 ]
[]
[]
[ ".net", "fluent_interface" ]
stackoverflow_0000057140_.net_fluent_interface.txt
Q: Visual Studio 2008 / Web site problem I am using VS 2008 with SP1 and the IE 8 beta 2. Whenever I start a new Web site or when I double-click an ASPX in the solution explorer, VS insists on attempting to display the ASPX page in a free-standing IE browser instance. The address is the local file path to the ASPX it's trying to load and an error that says, "The XML page cannot be displayed" is shown. Otherwise, things work correctly (I just close the offending browser window). ASP.NET is registered with IIS and I have no other problems. I have tested my same configuration on other PCs and it works fine. Has anyone had this problem? Thanks rp A: Right click on the file, select 'Open With' and choose "Web Form Editor" and click "Set as Default".
Visual Studio 2008 / Web site problem
I am using VS 2008 with SP1 and the IE 8 beta 2. Whenever I start a new Web site or when I double-click an ASPX in the solution explorer, VS insists on attempting to display the ASPX page in a free-standing IE browser instance. The address is the local file path to the ASPX it's trying to load and an error that says, "The XML page cannot be displayed" is shown. Otherwise, things work correctly (I just close the offending browser window). ASP.NET is registered with IIS and I have no other problems. I have tested my same configuration on other PCs and it works fine. Has anyone had this problem? Thanks rp
[ "Right click on the file, select 'Open With' and choose \"Web Form Editor\" and click \"Set as Default\".\n" ]
[ 3 ]
[]
[]
[ "internet_explorer_8", "visual_studio_2008", "visual_studio_2008_sp1" ]
stackoverflow_0000057790_internet_explorer_8_visual_studio_2008_visual_studio_2008_sp1.txt
Q: Finding missing emails in SQL Server I am trying to do something I've done a million times and it's not working, can anyone tell me why? I have a table for people who sent in resumes, and it has their email address in it... I want to find out if any of these people have NOT signed up on the web site. The aspnet_Membership table has all the people who ARE signed up on the web site. There are 9472 job seekers, with unique email addresses. This query produces 1793 results: select j.email from jobseeker j join aspnet_Membership m on j.email = m.email This suggests that there should be 7679 (9472-1793) emails of people who are not signed up on the web site. Since 1793 of them DID match, I would expect the rest of them DON'T match... but when I do the query for that, I get nothing! Why is this query giving me nothing??? select j.email from jobseeker j where j.email not in (select email from aspnet_Membership) I don't know how that could be not working - it basically says "show me all the emails which are IN the jobseeker table, but NOT IN the aspnet_Membership table... A: We had a very similar problem recently where the subquery was returning null values sometimes. Then, the in statement treats null in a weird way: a comparison against NULL evaluates to UNKNOWN, so a NOT IN list that contains even one NULL matches no rows at all. If you change your query to: select j.email from jobseeker j where j.email not in (select email from aspnet_Membership where email is not null) it may work.... A: You could have a lot of duplicates out there. I'm not seeing the query error off the top of my head, but you might try writing it this way: SELECT j.email FROM jobseeker j LEFT JOIN aspnet_Membership m ON m.email = j.email WHERE m.email IS NULL You might also throw a GROUP BY or DISTINCT in there to get rid of duplicates. A: Also see Five ways to return all rows from one table which are not in another table A: You could use exists instead of in like this: Select J.Email From Jobseeker j Where not exists (Select * From aspnetMembership a where j.email = a.email) You should get better performance and avoid the 'weird' behaviour (which I suspect is to do with null values/results) when using in.
Finding missing emails in SQL Server
I am trying to do something I've done a million times and it's not working, can anyone tell me why? I have a table for people who sent in resumes, and it has their email address in it... I want to find out if any of these people have NOT signed up on the web site. The aspnet_Membership table has all the people who ARE signed up on the web site. There are 9472 job seekers, with unique email addresses. This query produces 1793 results: select j.email from jobseeker j join aspnet_Membership m on j.email = m.email This suggests that there should be 7679 (9472-1793) emails of people who are not signed up on the web site. Since 1793 of them DID match, I would expect the rest of them DON'T match... but when I do the query for that, I get nothing! Why is this query giving me nothing??? select j.email from jobseeker j where j.email not in (select email from aspnet_Membership) I don't know how that could be not working - it basically says "show me all the emails which are IN the jobseeker table, but NOT IN the aspnet_Membership table...
[ "We had a very similar problem recently where the subquery was returning null values sometimes. Then, the in statement treats null in a weird way, I think always matching the value, so if you change your query to:\nselect j.email \nfrom jobseeker j\nwhere j.email not in (select email from aspnet_Membership\n where email is not null)\n\nit may work....\n", "You could have a lot of duplicates out there. I'm not seeing the query error off the top of my head, but you might try writing it this way:\nSELECT j.email\nFROM jobseeker j\nLEFT JOIN aspnet_Membership m ON m.email = j.email\nWHERE m.email IS NULL\n\nYou might also throw a GROUP BY or DISTINCT in there to get rid of duplicates.\n", "Also see Five ways to return all rows from one table which are not in another table \n", "You could use exists instead of in like this:\nSelect J.Email\nFrom Jobseeker j\nWhere not exists (Select * From aspnetMembership a where j.email = a.email)\n\nYou should get better performance and avoid the 'weird' behaviour (which I suspect is to do with null values/results) when using in. \n" ]
[ 3, 2, 1, 0 ]
[]
[]
[ "anti_join", "join", "sql" ]
stackoverflow_0000057243_anti_join_join_sql.txt
Q: What are the performance characteristics of 'is' reflection in C#? It's shown that 'as' casting is much faster than prefix casting, but what about 'is' reflection? How bad is it? As you can imagine, searching for 'is' on Google isn't terribly effective. A: There are a few options: The classic cast: Foo foo = (Foo)bar The as cast operator: Foo foo = bar as Foo The is test: bool isFoo = bar is Foo The classic cast needs to check if bar can be safely cast to Foo (quick), and then actually do it (slower), or throw an exception (really slow). The as operator needs to check if bar can be cast, then do the cast, or if it cannot be safely cast, then it just returns null. The is operator just checks if bar can be cast to Foo, and returns a boolean. The is test is quick, because it only does the first part of a full casting operation. The as operator is quicker than a classic cast because it doesn't throw an exception if the cast fails (which makes it good for situations where you legitimately expect that the cast might fail). If you just need to know if the variable bar is a Foo then use the is operator, BUT, if you're going to test if bar is a Foo, and if so, then cast it, then you should use the as operator. Essentially every cast needs to do the equivalent of an is check internally to begin with, in order to ensure that the cast is valid. So if you do an is check followed by a full cast (either an as cast, or with the classic cast operator) you are effectively doing the is check twice, which is a slight extra overhead. A: The way I learned it is that this: if (obj is Foo) { Foo f = (Foo)obj; f.doSomething(); } is slower than this: Foo f = obj as Foo; if (f != null) { f.doSomething(); } Is it slow enough to matter? Probably not, but it's such a simple thing to pay attention to, that you might as well do it. A: "is" is basically equivalent to the "isinst" IL operator -- which that article describes as fast. A: It should be quick enough to not matter. If you are checking the type of an object enough for it to make a noticeable impact on performance you need to rethink your design
What are the performance characteristics of 'is' reflection in C#?
It's shown that 'as' casting is much faster than prefix casting, but what about 'is' reflection? How bad is it? As you can imagine, searching for 'is' on Google isn't terribly effective.
[ "There are a few options:\n\nThe classic cast: Foo foo = (Foo)bar\nThe as cast operator: Foo foo = bar as Foo\nThe is test: bool is = bar is Foo\n\n\n\nThe classic cast needs to check if bar can be safely cast to Foo (quick), and then actually do it (slower), or throw an exception (really slow).\nThe as operator needs to check if bar can be cast, then do the cast, or if it cannot be safely cast, then it just returns null.\nThe is operator just checks if bar can be cast to Foo, and return a boolean.\n\nThe is test is quick, because it only does the first part of a full casting operation. The as operator is quicker than a classic cast because doesn't throw an exception if the cast fails (which makes it good for situations where you legitimately expect that the cast might fail).\nIf you just need to know if the variable baris a Foo then use the is operator, BUT, if you're going to test if bar is a Foo, and if so, then cast it, then you should use the as operator.\nEssentially every cast needs to do the equivalent of an is check internally to begin with, in order to ensure that the cast is valid. So if you do an is check followed by a full cast (either an as cast, or with the classic cast operator) you are effectively doing the is check twice, which is a slight extra overhead.\n", "The way I learned it is that this:\nif (obj is Foo) {\n Foo f = (Foo)obj;\n f.doSomething();\n}\n\nis slower than this:\nFoo f = obj as Foo;\nif (f != null) {\n f.doSomething();\n}\n\nIs it slow enough to matter? Probably not, but it's such a simple thing to pay attention for, that you might as well do it.\n", "\"is\" is basically equivalent to the \"isinst\" IL operator -- which that article describes as fast.\n", "It should be quick enough to not matter. If you are checking the type of an object enough for it to make a noticeable impact on performance you need to rethink your design\n" ]
[ 20, 8, 6, 2 ]
[]
[]
[ "c#", "reflection" ]
stackoverflow_0000057701_c#_reflection.txt
Q: Best way to get a list of differences between 2 of the same objects I would like to generate a list of differences between 2 instances of the same object. Object in question: public class Step { [DataMember] public StepInstanceInfo InstanceInfo { get; set; } [DataMember] public Collection<string> AdHocRules { get; set; } [DataMember] public Collection<StepDoc> StepDocs {...} [DataMember] public Collection<StepUsers> StepUsers {...} } What I would like to do is find an intelligent way to return an object that lists the differences between the two instances (for example, let me know that 2 specific StepDocs were added, 1 specific StepUser was removed, and one rule was changed from "Go" to "Stop"). I have been looking into using an MD5 hash, but I can't find any good examples of traversing an object like this and returning a manifest of the specific differences (not just indicating that they are different). Additional Background: the reason that I need to do this is the API that I am supporting allows clients to SaveStep(Step step)...this works great for persisting the Step object to the db using entities and repositories. I need to raise specific events (like this user was added, etc) from this SaveStep method, though, in order to alert another system (workflow engine) that a specific element in the step has changed. Thank you. A: You'll need a separate object, like StepDiff with collections for removed and added items. The easiest way to do something like this is to copy the collections from each of the old and new objects, so that StepDiff has collectionOldStepDocs and collectionNewStepDocs. Grab the shorter collection and iterate through it and see if each StepDoc exists in the other collection. If so, delete the StepDoc reference from both collections. Then when you're finished iterating, collectionOldStepDocs contains stepDocs that were deleted and collectionNewStepDocs contains the stepDocs that were added. From there you should be able to build your manifest in whatever way necessary. A: Implementing the IComparable interface in your object may provide you with the functionality you need. This will provide you a custom way to determine differences between objects without resorting to checksums which really won't help you track what the differences are in usable terms. Otherwise, there's no way to determine equality between two user objects in .NET that I know of. There are some decent examples of the usage of this interface in the help file for Visual Studio, or here. You might be able to glean some directives from the examples on clean ways to compare the properties and store the values in some usable manner for tracking purposes (perhaps a collection, or dictionary object?). Hope this helps, Greg
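A rough sketch of the two-collection diff described in the first answer (it assumes StepDoc implements a meaningful Equals; the helper name is illustrative):

using System.Collections.Generic;
using System.Linq;

static class StepDiffExample
{
    static void Diff(Step oldStep, Step newStep)
    {
        // Items present before but missing now were removed...
        List<StepDoc> removedDocs =
            oldStep.StepDocs.Where(d => !newStep.StepDocs.Contains(d)).ToList();

        // ...and items present now but missing before were added.
        List<StepDoc> addedDocs =
            newStep.StepDocs.Where(d => !oldStep.StepDocs.Contains(d)).ToList();

        // removedDocs/addedDocs can then drive the specific workflow events
        // that SaveStep needs to raise.
    }
}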
Best way to get a list of differences between 2 of the same objects
I would like to generate a list of differences between 2 instances of the same object. Object in question: public class Step { [DataMember] public StepInstanceInfo InstanceInfo { get; set; } [DataMember] public Collection<string> AdHocRules { get; set; } [DataMember] public Collection<StepDoc> StepDocs {...} [DataMember] public Collection<StepUsers> StepUsers {...} } What I would like to do is find an intelligent way to return an object that lists the differences between the two instances (for example, let me know that 2 specific StepDocs were added, 1 specific StepUser was removed, and one rule was changed from "Go" to "Stop"). I have been looking into using an MD5 hash, but I can't find any good examples of traversing an object like this and returning a manifest of the specific differences (not just indicating that they are different). Additional Background: the reason that I need to do this is the API that I am supporting allows clients to SaveStep(Step step)...this works great for persisting the Step object to the db using entities and repositories. I need to raise specific events (like this user was added, etc) from this SaveStep method, though, in order to alert another system (workflow engine) that a specific element in the step has changed. Thank you.
[ "You'll need a separate object, like StepDiff with collections for removed and added items. The easiest way to do something like this is to copy the collections from each of the old and new objects, so that StepDiff has collectionOldStepDocs and collectionNewStepDocs. \nGrab the shorter collection and iterate through it and see if each StepDoc exists in the other collection. If so, delete the StepDoc reference from both collections. Then when you're finished iterating, collectionOldStepDocs contains stepDocs that were deleted and collectionNewStepDocs contains the stepDocs that were added. \nFrom there you should be able to build your manifest in whatever way necessary. \n", "Implementing the IComparable interface in your object may provide you with the functionality you need. This will provide you a custom way to determine differences between objects without resorting to checksums which really won't help you track what the differences are in usable terms. Otherwise, there's no way to determine equality between two user objects in .NET that I know of. There are some decent examples of the usage of this interface in the help file for Visual Studio, or here. You might be able to glean some directives from the examples on clean ways to compare the properties and store the values in some usable manner for tracking purposes (perhaps a collection, or dictionary object?). \nHope this helps,\nGreg\n" ]
[ 2, 1 ]
[]
[]
[ "c#", "oop" ]
stackoverflow_0000056698_c#_oop.txt
Q: Crash Instantiating System.Xml.Serialization.XmlSerializer in C# We're seeing a crash when instantiating an instance of the System.Xml.Serialization.XmlSerializer class in a C# library. The crash occurs in the constructor, when it tries to add a duplicate key to a dictionary. I've included a stack trace below. This crash is only occurring on one machine, and repairing our installation of .NET 3.5 didn't help. Has anyone else seen any similar issues? System.ArgumentException was unhandled Message="Item has already been added. Key in dictionary: 'mainbuild' Key being added: 'mainbuild'" Source="mscorlib" StackTrace: at System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add) at System.Collections.Hashtable.Add(Object key, Object value) at System.Collections.Specialized.StringDictionary.Add(String key, String value) at System.CodeDom.Compiler.Executor.ExecWaitWithCaptureUnimpersonated(SafeUserTokenHandle userToken, String cmd, String currentDir, TempFileCollection tempFiles, String& outputName, String& errorName, String trueCmdLine) at System.CodeDom.Compiler.Executor.ExecWaitWithCapture(SafeUserTokenHandle userToken, String cmd, String currentDir, TempFileCollection tempFiles, String& outputName, String& errorName, String trueCmdLine) at Microsoft.CSharp.CSharpCodeGenerator.Compile(CompilerParameters options, String compilerDirectory, String compilerExe, String arguments, String& outputFile, Int32& nativeReturnValue, String trueArgs) at Microsoft.CSharp.CSharpCodeGenerator.FromFileBatch(CompilerParameters options, String[] fileNames) at Microsoft.CSharp.CSharpCodeGenerator.FromSourceBatch(CompilerParameters options, String[] sources) at Microsoft.CSharp.CSharpCodeGenerator.System.CodeDom.Compiler.ICodeCompiler.CompileAssemblyFromSourceBatch(CompilerParameters options, String[] sources) at System.CodeDom.Compiler.CodeDomProvider.CompileAssemblyFromSource(CompilerParameters options, String[] sources) at System.Xml.Serialization.Compiler.Compile(Assembly parent, String ns, XmlSerializerCompilerParameters xmlParameters, Evidence evidence) at System.Xml.Serialization.TempAssembly.GenerateAssembly(XmlMapping[] xmlMappings, Type[] types, String defaultNamespace, Evidence evidence, XmlSerializerCompilerParameters parameters, Assembly assembly, Hashtable assemblies) at System.Xml.Serialization.TempAssembly..ctor(XmlMapping[] xmlMappings, Type[] types, String defaultNamespace, String location, Evidence evidence) at System.Xml.Serialization.XmlSerializer.GenerateTempAssembly(XmlMapping xmlMapping, Type type, String defaultNamespace) at System.Xml.Serialization.XmlSerializer..ctor(Type type, String defaultNamespace) at System.Xml.Serialization.XmlSerializer..ctor(Type type) at OurTools.Tools.Common.XML.DataAccess`1.DeserializeFromXml(String strFilePath) in c:\AutomatedBuild\projects\1.0\OurTools.Tools.Common\OurTools.Tools.Common\XML\DataAcess.cs:line 100 at OurTools.Tools.Common.ProjectFileManager.GetProjectInfoModel() in c:\AutomatedBuild\projects\1.0\OurTools.Tools.Common\OurTools.Tools.Common\ProjectFileManager.cs:line 252 at OurTools.Tools.Common.ProjectFileManager.GetAvailableCultures() in c:\AutomatedBuild\projects\1.0\OurTools.Tools.Common\OurTools.Tools.Common\ProjectFileManager.cs:line 299 at OurAppLib.GeneratorOptions.DefaultCultures() in c:\AutomatedBuild\projects\1.0\OurApp\OurAppLib\GeneratorOptions.cs:line 192 at OurAppLib.GeneratorOptions.ReadCulturesFromArgs(List`1 arglist, String& errormsg) in 
c:\AutomatedBuild\projects\1.0\OurApp\OurAppLib\GeneratorOptions.cs:line 358 at OurAppLib.GeneratorOptions.ReadFromArgs(String[] args, String& errormsg) in c:\AutomatedBuild\projects\1.0\OurApp\OurAppLib\GeneratorOptions.cs:line 261 at OurApp.Program.Main(String[] args) in c:\AutomatedBuild\projects\1.0\OurApp\OurApp\Program.cs:line 76 A: Found this link, which explains the issue: http://social.msdn.microsoft.com/forums/en-US/asmxandxml/thread/4476f044-bab9-492d-bb94-4e0960bd2d26 A quick summary: When serializing, the object makes a dictionary out of all environment variables, but appears to run a ToLower() on all entries. So, if you have two environment variables that are the same except for casing, you'll get a crash. This is only going to be a problem when running from inside a system like cygwin which enforces case sensitivity for variables. In our case, we're using make. There are a couple solutions, but they all revolve around making sure that your environment doesn't have any duplicated variables when your C# app runs.
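A quick way to check a machine for this condition is to scan the process environment for names that collide case-insensitively. This is a small diagnostic sketch of my own, not part of the original answer; it only assumes .NET 3.5 for LINQ:

using System;
using System.Collections;
using System.Linq;

class EnvironmentDuplicateCheck
{
    static void Main()
    {
        // Group variable names case-insensitively; any group with more than
        // one spelling is the kind of pair that trips up XmlSerializer.
        var collisions = Environment.GetEnvironmentVariables()
            .Cast<DictionaryEntry>()
            .GroupBy(e => ((string)e.Key).ToLowerInvariant())
            .Where(g => g.Count() > 1);

        foreach (var g in collisions)
            Console.WriteLine("Colliding names: " +
                string.Join(", ", g.Select(e => (string)e.Key).ToArray()));
    }
}

Running this inside the same shell that launches the failing app (for example a cygwin/make session) should print the offending pair, e.g. mainbuild vs MAINBUILD.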
Crash Instantiating System.Xml.Serialization.XmlSerializer in C#
We're seeing a crash when instantiating an instance of the System.Xml.Serialization.XmlSerializer class in a C# library. The crash occurs in the constructor, when it tries to add a duplicate key to a dictionary. I've included a stack trace below. This crash is only occurring on one machine, and repairing our installation of .NET 3.5 didn't help. Has anyone else seen any similar issues? System.ArgumentException was unhandled Message="Item has already been added. Key in dictionary: 'mainbuild' Key being added: 'mainbuild'" Source="mscorlib" StackTrace: at System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add) at System.Collections.Hashtable.Add(Object key, Object value) at System.Collections.Specialized.StringDictionary.Add(String key, String value) at System.CodeDom.Compiler.Executor.ExecWaitWithCaptureUnimpersonated(SafeUserTokenHandle userToken, String cmd, String currentDir, TempFileCollection tempFiles, String& outputName, String& errorName, String trueCmdLine) at System.CodeDom.Compiler.Executor.ExecWaitWithCapture(SafeUserTokenHandle userToken, String cmd, String currentDir, TempFileCollection tempFiles, String& outputName, String& errorName, String trueCmdLine) at Microsoft.CSharp.CSharpCodeGenerator.Compile(CompilerParameters options, String compilerDirectory, String compilerExe, String arguments, String& outputFile, Int32& nativeReturnValue, String trueArgs) at Microsoft.CSharp.CSharpCodeGenerator.FromFileBatch(CompilerParameters options, String[] fileNames) at Microsoft.CSharp.CSharpCodeGenerator.FromSourceBatch(CompilerParameters options, String[] sources) at Microsoft.CSharp.CSharpCodeGenerator.System.CodeDom.Compiler.ICodeCompiler.CompileAssemblyFromSourceBatch(CompilerParameters options, String[] sources) at System.CodeDom.Compiler.CodeDomProvider.CompileAssemblyFromSource(CompilerParameters options, String[] sources) at System.Xml.Serialization.Compiler.Compile(Assembly parent, String ns, XmlSerializerCompilerParameters xmlParameters, Evidence evidence) at System.Xml.Serialization.TempAssembly.GenerateAssembly(XmlMapping[] xmlMappings, Type[] types, String defaultNamespace, Evidence evidence, XmlSerializerCompilerParameters parameters, Assembly assembly, Hashtable assemblies) at System.Xml.Serialization.TempAssembly..ctor(XmlMapping[] xmlMappings, Type[] types, String defaultNamespace, String location, Evidence evidence) at System.Xml.Serialization.XmlSerializer.GenerateTempAssembly(XmlMapping xmlMapping, Type type, String defaultNamespace) at System.Xml.Serialization.XmlSerializer..ctor(Type type, String defaultNamespace) at System.Xml.Serialization.XmlSerializer..ctor(Type type) at OurTools.Tools.Common.XML.DataAccess`1.DeserializeFromXml(String strFilePath) in c:\AutomatedBuild\projects\1.0\OurTools.Tools.Common\OurTools.Tools.Common\XML\DataAcess.cs:line 100 at OurTools.Tools.Common.ProjectFileManager.GetProjectInfoModel() in c:\AutomatedBuild\projects\1.0\OurTools.Tools.Common\OurTools.Tools.Common\ProjectFileManager.cs:line 252 at OurTools.Tools.Common.ProjectFileManager.GetAvailableCultures() in c:\AutomatedBuild\projects\1.0\OurTools.Tools.Common\OurTools.Tools.Common\ProjectFileManager.cs:line 299 at OurAppLib.GeneratorOptions.DefaultCultures() in c:\AutomatedBuild\projects\1.0\OurApp\OurAppLib\GeneratorOptions.cs:line 192 at OurAppLib.GeneratorOptions.ReadCulturesFromArgs(List`1 arglist, String& errormsg) in c:\AutomatedBuild\projects\1.0\OurApp\OurAppLib\GeneratorOptions.cs:line 358 at OurAppLib.GeneratorOptions.ReadFromArgs(String[] args, 
String& errormsg) in c:\AutomatedBuild\projects\1.0\OurApp\OurAppLib\GeneratorOptions.cs:line 261 at OurApp.Program.Main(String[] args) in c:\AutomatedBuild\projects\1.0\OurApp\OurApp\Program.cs:line 76
[ "Found this link, which explains the issue:\nhttp://social.msdn.microsoft.com/forums/en-US/asmxandxml/thread/4476f044-bab9-492d-bb94-4e0960bd2d26\nA quick summary: When serializing, the object makes a dictionary out of all environment variables, but appears to run a ToLower() on all entries. So, if you have two environment variables that are the same except for casing, you'll get a crash.\nThis is only going to be a problem when running from inside a system like cygwin which enforces case sensitivity for variables. In our case, we're using make.\nThere are a couple solutions, but they all revolve around making sure that your environment doesn't have any duplicated variables when your c# app runs.\n" ]
[ 5 ]
[]
[]
[ ".net", "c#", "serialization" ]
stackoverflow_0000057839_.net_c#_serialization.txt
Q: Casting between multi- and single-dimensional arrays This came up from this answer to a previous question of mine. Is it guaranteed for the compiler to treat array[4][4] the same as array[16]? For instance, would either of the below calls to api_func() be safe?
void api_func(const double matrix[4][4]);

// ...
{
    typedef double Matrix[4][4];

    double* array1 = new double[16];
    double array2[16];

    // ...
    api_func(reinterpret_cast<Matrix&>(array1));
    api_func(reinterpret_cast<Matrix&>(array2));
}

A: From the C++ standard, referring to the sizeof operator: When applied to an array, the result is the total number of bytes in the array. This implies that the size of an array of n elements is n times the size of an element. From this, I'd say that double[4][4] and double[16] would have to have the same underlying representation. I.e., given
sizeof(double[4]) = 4*sizeof(double)
and
sizeof(double[4][4]) = 4*sizeof(double[4])
then we have
sizeof(double[4][4]) = 4*4*sizeof(double) = 16*sizeof(double) = sizeof(double[16])
I think a standards-compliant compiler would have to implement these the same, and I think that this isn't something that a compiler would accidentally break. The standard way of implementing multi-dimensional arrays works as expected. Breaking the standard would require extra work, for likely no benefit. The C++ standard also states that an array consists of contiguously-allocated elements, which eliminates the possibility of doing anything strange using pointers and padding. A: I don't think there is a problem with padding introduced by having a multi-dimensional array. Each element in an array must satisfy the padding requirements imposed by the architecture. An array [N][M] is always going to have the same in-memory representation as one of [M*N]. A: Each array element should be laid out sequentially in memory by the compiler. The two declarations, whilst different types, are the same underlying memory structure. A: @Konrad Rudolph: I get those two (row major/column major) mixed up myself, but I do know this: It's well-defined. int x[3][5], for example, is an array of size 3, whose elements are int arrays of size 5. (§6.5.2.1) Adding all the rules from the standard about arrays, addressing, etc. you get that the second subscript references consecutive integers, whereas the first subscript will reference consecutive 5-int objects. (So 3 is the bigger number; you have 5 ints between x[1][0] and x[2][0].) A: I would be worried about padding being added for things like Matrix[5][5] to make each row word aligned, but that could be simply my own superstition. A: A bigger question is: do you really need to perform such a cast? Although you might be able to get away with it, it would still be more readable and maintainable to avoid altogether. For example, you could consistently use double[m*n] as the actual type, and then work with a class that wraps this type, and perhaps overloads the [] operator for ease of use. In that case, you might also need an intermediate class to encapsulate a single row -- so that code like my_matrix[3][5] still works as expected.
Casting between multi- and single-dimensional arrays
This came up from this answer to a previous question of mine. Is it guaranteed for the compiler to treat array[4][4] the same as array[16]? For instance, would either of the below calls to api_func() be safe? void api_func(const double matrix[4][4]); // ... { typedef double Matrix[4][4]; double* array1 = new double[16]; double array2[16]; // ... api_func(reinterpret_cast<Matrix&>(array1)); api_func(reinterpret_cast<Matrix&>(array2)); }
[ "From the C++ standard, referring to the sizeof operator:\n\nWhen applied to an array, the result is the total number of bytes in the array. This implies that the size of an array of n elements is n times the size of an element.\n\nFrom this, I'd say that double[4][4] and double[16] would have to have the same underlying representation. \nI.e., given \nsizeof(double[4]) = 4*sizeof(double)\n\nand\nsizeof(double[4][4]) = 4*sizeof(double[4])\n\nthen we have\nsizeof(double[4][4]) = 4*4*sizeof(double) = 16*sizeof(double) = sizeof(double[16])\n\nI think a standards-compliant compiler would have to implement these the same, and I think that this isn't something that a compiler would accidentally break. The standard way of implementing multi-dimensional arrays works as expected. Breaking the standard would require extra work, for likely no benefit.\nThe C++ standard also states that an array consists of contiguously-allocated elements, which eliminates the possibility of doing anything strange using pointers and padding.\n", "I don't think there is a problem with padding introduced by having a multi-dimensional array.\nEach element in an array must satisfy the padding requirements imposed by the architecture. An array [N][M] is always going to have the same in memory representation as one of [M*N].\n", "Each array element should be laid out sequentially in memory by the compiler. The two declarations whilst different types are the same underlying memory structure.\n", "@Konrad Rudolph:\nI get those two (row major/column major) mixed up myself, but I do know this: It's well-defined.\nint x[3][5], for example, is an array of size 3, whose elements are int arrays of size 5. (§6.5.2.1) Adding all the rules from the standard about arrays, addressing, etc. you get that the second subscript references consecutive integers, wheras the first subscript will reference consecutive 5-int objects. (So 3 is the bigger number; you have 5 ints between x[1][0] and x[2][0].)\n", "I would be worried about padding being added for things like Matrix[5][5] to make each row word aligned, but that could be simply my own superstition.\n", "A bigger question is: do you really need to perform such a cast?\nAlthough you might be able to get away with it, it would still be more readable and maintainable to avoid altogether. For example, you could consistently use double[m*n] as the actual type, and then work with a class that wraps this type, and perhaps overloads the [] operator for ease of use. In that case, you might also need an intermediate class to encapsulate a single row -- so that code like my_matrix[3][5] still works as expected.\n" ]
[ 3, 2, 1, 1, 0, 0 ]
[]
[]
[ "arrays", "c++" ]
stackoverflow_0000055532_arrays_c++.txt
Q: How can I determine why a jQuery ajax $.post request wasn't successful? I'm trying to determine, based on the result of this call, if it was successful. The successFunction doesn't get called, so I'm assuming it was not. How do I know what went wrong? xmlRequest = $.post("/url/file/", { 'id' : object.id }, successFunction, 'json'); Do I use the xmlRequest object? A: You can use:
$.ajax({
    url:"/url/file/",
    dataType:"json",
    data:{ 'id' : object.id },
    error:function(request){alert(request.statusText)},
    success:successFunction
})
A: You could use the $.ajaxComplete() and/or $.ajaxError() methods to attach functions to those events. I would also recommend using the Firefox browser with the Firebug plugin, you can get a lot of information about the requests and responses.
How can I determine why a jQuery ajax $.post request wasn't successful?
I'm trying to determine, based on the result of this call, if it was successful. The successFunction doesn't get called, so I'm assuming it was not. How do I know what went wrong? xmlRequest = $.post("/url/file/", { 'id' : object.id }, successFunction, 'json'); Do I use the xmlRequest object?
[ "You can use:\n$.ajax({\n url:\"/url/file/\",\n dataType:\"json\"\n data:{ 'id' : object.id }\n error:function(request){alert(request.statusText)}\n success:successFunction\n})\n\n", "You could use the $.ajaxComplete() and/or $.ajaxError() methods to attach function to those events. I would also recommend using the Firefox browser with the Firebug pluging, you can get a lot of information about the requests and responses.\n" ]
[ 9, 5 ]
[]
[]
[ "ajax", "jquery", "post" ]
stackoverflow_0000057679_ajax_jquery_post.txt
Q: What is the difference between dllexport and dllimport? I'm just looking for a simple, concise explanation of the difference between these two. A: __declspec( dllexport ) - The class or function so tagged will be exported from the DLL it is built in. If you're building a DLL and you want an API, you'll need to use this or a separate .DEF file that defines the exports (MSDN). This is handy because it keeps the definition in one place, but the .DEF file provides more options. __declspec( dllimport ) - The class or function so tagged will be imported from a DLL. This is not actually required - you need an import library anyway to make the linker happy. But when properly marked with dllimport, the compiler and linker have enough information to optimize the call; without it, you get normal static linking to a stub function in the import library, which adds unnecessary indirection. A: __declspec(dllexport) tells the linker that you want this object to be made available for other DLL's to import. It is used when creating a DLL that others can link to. __declspec(dllimport) imports the implementation from a DLL so your application can use it. I'm only a novice C/C++ developer, so perhaps someone's got a better explanation than I. A: Two different use cases: 1) You are defining a class implementation within a dll. You want another program to use the class. Here you use dllexport as you are creating a class that you wish the dll to expose. 2) You are using a function provided by a dll. You include a header supplied with the dll. Here the header uses dllimport to bring in the implementation to be used by the current program. Often the same header file is used in both cases and a macro defined. The build configuration defines the macro to be import or export depending which it needs. A: Dllexport is used to mark a function as exported. You implement the function in your DLL and export it so it becomes available to anyone using your DLL. Dllimport is the opposite: it marks a function as being imported from a DLL. In this case you only declare the function's signature and link your code with the library.
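As a side note, an exported function is consumable from more than just C++: once it is in the DLL's export table (and given C linkage so the name isn't mangled), managed code can bind to it as well. A rough sketch — the DLL name mylib.dll and the function add_numbers are made up for illustration:

using System;
using System.Runtime.InteropServices;

class Demo
{
    // Assumes the native side declared:
    //   extern "C" __declspec(dllexport) int add_numbers(int a, int b);
    [DllImport("mylib.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern int add_numbers(int a, int b);

    static void Main()
    {
        Console.WriteLine(add_numbers(2, 3)); // resolved from the export table at runtime
    }
}

This is one practical reason to get the dllexport side right: it defines the public surface of the DLL for every kind of caller, not just C++ ones.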
What is the difference between dllexport and dllimport?
I'm just looking for a simple, concise explanation of the difference between these two. MSDN doesn't go into a hell of a lot of detail here.
[ "__declspec( dllexport ) - The class or function so tagged will be exported from the DLL it is built in. If you're building a DLL and you want an API, you'll need to use this or a separate .DEF file that defines the exports (MSDN). This is handy because it keeps the definition in one place, but the .DEF file provides more options.\n__declspec( dllimport ) - The class or function so tagged will be imported from a DLL. This is not actually required - you need an import library anyway to make the linker happy. But when properly marked with dllimport, the compiler and linker have enough information to optimize the call; without it, you get normal static linking to a stub function in the import library, which adds unnecessary indirection. ONT1 ONT2\n", "\n__declspec(dllexport) tells the linker that you want this object to be made available for other DLL's to import. It is used when creating a DLL that others can link to.\n__declspec(dllimport) imports the implementation from a DLL so your application can use it.\n\nI'm only a novice C/C++ developer, so perhaps someone's got a better explanation than I.\n", "Two different use cases:\n1) You are defining a class implementation within a dll. You want another program to use the class. Here you use dllexport as you are creating a class that you wish the dll to expose.\n2) You are using a function provided by a dll. You include a header supplied with the dll. Here the header uses dllimport to bring in the implementation to be used by the current program.\nOften the same header file is used in both cases and a macro defined. The build configuration defines the macro to be import or export depending which it needs.\n", "Dllexport is used to mark a function as exported. You implement the function in your DLL and export it so it becomes available to anyone using your DLL.\nDllimport is the opposite: it marks a function as being imported from a DLL. In this case you only declare the function's signature and link your code with the library.\n" ]
[ 93, 56, 13, 10 ]
[]
[]
[ "dll", "export", "import", "visual_c++" ]
stackoverflow_0000057999_dll_export_import_visual_c++.txt
Q: Crash reporting in C for Linux Following this question: Good crash reporting library in c# Is there any library like CrashRpt.dll that does the same on Linux? That is, generate a failure report including a core dump and any necessary environment and notify the developer about it? Edit: This seems to be a duplicate of this question A: See Getting stack traces on Unix systems, automatically on Stack Overflow. A: Compile your code with debug symbols, enter unlimit coredumpsize in your shell and you'll get a coredump in the same folder as the binary. Use gdb/ddd - open the program first and then open the core dump. You can check this out for additional info. A: @Ionut This handles generating the core dump, but it doesn't handle notifying the developer when other users had crashes. A: Note: there are two interesting registers in an x86 seg-fault crash. The first, EIP, specifies the code address at which the exception occurred. In RichQ's answer, he uses addr2line to show the source line that corresponds to the crash address. But EIP can be invalid; if you call a function pointer that is null, it can be 0x00000000, and if you corrupt your call stack, the return can pop any random value into EIP. The second, CR2, specifies the data address which caused the segmentation fault. In RichQ's example, he is setting up i as a null pointer, then accessing it. In this case, CR2 would be 0x00000000. But if you change:
int j = *i
to:
int j = i[2];
Then you are trying to access address 0x00000008, and that's what would be found in CR2. A: Nathan, under what circumstances is a segment base non-zero? I've never seen that occur in my 5 years of Linux application development. Thanks. A: @Martin I do architectural validation for x86, so I'm very familiar with the architecture the processor provides, but very unfamiliar with how it's used. That's what I based my comment on. If CR2 can be counted on to give the correct answer, then I stand corrected. A: Nathan, I wasn't insisting that you were incorrect; I was just saying that in my (limited) experience with Linux, the segment base is always zero. Maybe that's a good question for me to ask...
Crash reporting in C for Linux
Following this question: Good crash reporting library in c# Is there any library like CrashRpt.dll that does the same on Linux? That is, generate a failure report including a core dump and any necessary environment and notify the developer about it? Edit: This seems to be a duplicate of this question
[ "See Getting stack traces on Unix systems, automatically on Stack Overflow.\n", "Compile your code with debug symbols, enter unlimit coredumpsize in your shell and you'll get a coredump in the same folder as the binary. Use gdb/ddd - open the program first and then open the core dump. You can check this out for additional info.\n", "@Ionut\nThis handles generating the core dump, but it doesn't handle notifying the developer when other users had crashes.\n", "Note: there are two interesting registers in an x86 seg-fault crash.\nThe first, EIP, specifies the code address at which the exception occurred. In RichQ's answer, he uses addr2line to show the source line that corresponds to the crash address. But EIP can be invalid; if you call a function pointer that is null, it can be 0x00000000, and if you corrupt your call stack, the return can pop any random value into EIP.\nThe second, CR2, specifies the data address which caused the segmentation fault. In RichQ's example, he is setting up i as a null pointer, then accessing it. In this case, CR2 would be 0x00000000. But if you change:\nint j = *i\n\nto:\nint j = i[2];\n\nThen you are trying to access address 0x00000008, and that's what would be found in CR2. \n", "Nathan, under what circumstances in a segment base non-zero? I've never seen that occur in my 5 years of Linux application development.\nThanks.\n", "@Martin\nI do architectural validation for x86, so I'm very familiar with the architecture the processor provides, but very unfamiliar with how it's used. That's what I based my comment on. If CR2 can be counted on to give the correct answer, then I stand corrected.\n", "Nathan, I wasn't insisting that you were incorrect; I was just saying that in my (limited) experience with Linux, the segment base is always zero. Maybe that's a good question for me to ask...\n" ]
[ 3, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "c", "crashrpt", "linux" ]
stackoverflow_0000049251_c_crashrpt_linux.txt
Q: Javascript array with a mix of literals and arrays I can create the following and reference it using area[0].states[0] area[0].cities[0] var area = [ { "State" : "Texas", "Cities" : ['Austin','Dallas','San Antonio'] }, { "State" :"Arkansas", "Cities" : ['Little Rock','Texarkana','Hot Springs'] } ] ; How could I restructure "area" so that if I know the name of the state, I can use it in a reference to get the array of cities? Thanks EDIT Attempting to implement with the answers I received (thanks @Eli Courtwright, @17 of 26, and @JasonBunting) I realize my question was incomplete. I need to loop through "area" the first time referencing "state" by index, then when I have the selection of the "state", I need to loop back through a structure using the value of "state" to get the associated "cities". I do want to start with the above structure (although I am free to build it how I want) and I don't mind a conversion similar to @eli's answer (although I was not able to get that conversion to work). Should have been more complete in first question. Trying to implement 2 select boxes where the selection from the first populates the second...I will load this array structure in a js file when the page loads. A: var area = { "Texas" : { "Cities" : ['Austin','Dallas','San Antonio'] }, "Arkansas" : { "Cities" : ['Little Rock','Texarkana','Hot Springs'] } }; Then you can do: area["Texas"].Cities[0]; A: (With help from the answers, I got this to work like I wanted. I fixed the syntax in selected answer, in the below code) With the following select boxes <select id="states" size="2"></select> <select id="cities" size="3"></select> and data in this format (either in .js file or received as JSON) var area = [ { "states" : "Texas", "cities" : ['Austin','Dallas','San Antonio'] }, { "states" :"Arkansas", "cities" : ['Little Rock','Texarkana','Hot Springs'] } ] ; These JQuery functions will populate the city select box based on the state select box selection $(function() { // create an array to be referenced by state name state = [] ; for(var i=0; i<area.length; i++) { state[area[i].states] = area[i].cities ; } }); $(function() { // populate states select box var options = '' ; for (var i = 0; i < area.length; i++) { options += '<option value="' + area[i].states + '">' + area[i].states + '</option>'; } $("#states").html(options); // populate select box with array // selecting state (change) will populate cities select box $("#states").bind("change", function() { $("#cities").children().remove() ; // clear select box var options = '' ; for (var i = 0; i < state[this.value].length; i++) { options += '<option value="' + state[this.value][i] + '">' + state[this.value][i] + '</option>'; } $("#cities").html(options); // populate select box with array } // bind function end ); // bind end }); A: If you want to just create it that way to begin with, just say area = { "Texas": ['Austin','Dallas','San Antonio'] } and so on. If you're asking how to take an existing object and convert it into this, just say states = {} for(var j=0; j<area.length; j++) states[ area[0].State ] = area[0].Cities After running the above code, you could say states["Texas"] which would return ['Austin','Dallas','San Antonio'] A: This would give you the array of cities based on knowing the state's name: var area = { "Texas" : ["Austin","Dallas","San Antonio"], "Arkansas" : ["Little Rock","Texarkana","Hot Springs"] }; // area["Texas"] would return ["Austin","Dallas","San Antonio"]
Javascript array with a mix of literals and arrays
I can create the following and reference it using area[0].states[0] area[0].cities[0] var area = [ { "State" : "Texas", "Cities" : ['Austin','Dallas','San Antonio'] }, { "State" :"Arkansas", "Cities" : ['Little Rock','Texarkana','Hot Springs'] } ] ; How could I restructure "area" so that if I know the name of the state, I can use it in a reference to get the array of cities? Thanks EDIT Attempting to implement with the answers I received (thanks @Eli Courtwright, @17 of 26, and @JasonBunting) I realize my question was incomplete. I need to loop through "area" the first time referencing "state" by index, then when I have the selection of the "state", I need to loop back through a structure using the value of "state" to get the associated "cities". I do want to start with the above structure (although I am free to build it how I want) and I don't mind a conversion similar to @eli's answer (although I was not able to get that conversion to work). Should have been more complete in first question. Trying to implement 2 select boxes where the selection from the first populates the second...I will load this array structure in a js file when the page loads.
[ "var area = \n{\n \"Texas\" : { \"Cities\" : ['Austin','Dallas','San Antonio'] },\n \"Arkansas\" : { \"Cities\" : ['Little Rock','Texarkana','Hot Springs'] }\n};\n\nThen you can do:\narea[\"Texas\"].Cities[0];\n\n", "(With help from the answers, I got this to work like I wanted. I fixed the syntax in selected answer, in the below code)\nWith the following select boxes\n<select id=\"states\" size=\"2\"></select>\n<select id=\"cities\" size=\"3\"></select>\n\nand data in this format (either in .js file or received as JSON)\nvar area = [\n {\n \"states\" : \"Texas\",\n \"cities\" : ['Austin','Dallas','San Antonio']\n },\n {\n \"states\" :\"Arkansas\",\n \"cities\" : ['Little Rock','Texarkana','Hot Springs']\n }\n ] ;\n\nThese JQuery functions will populate the city select box based on the state select box selection\n$(function() { // create an array to be referenced by state name\n state = [] ;\n for(var i=0; i<area.length; i++) {\n state[area[i].states] = area[i].cities ;\n }\n});\n\n$(function() {\n // populate states select box\n var options = '' ;\n for (var i = 0; i < area.length; i++) {\n options += '<option value=\"' + area[i].states + '\">' + area[i].states + '</option>'; \n }\n $(\"#states\").html(options); // populate select box with array\n\n // selecting state (change) will populate cities select box\n $(\"#states\").bind(\"change\",\n function() {\n $(\"#cities\").children().remove() ; // clear select box\n var options = '' ;\n for (var i = 0; i < state[this.value].length; i++) { \n options += '<option value=\"' + state[this.value][i] + '\">' + state[this.value][i] + '</option>'; \n }\n $(\"#cities\").html(options); // populate select box with array\n } // bind function end\n ); // bind end \n});\n\n", "If you want to just create it that way to begin with, just say\narea = {\n \"Texas\": ['Austin','Dallas','San Antonio']\n}\n\nand so on. If you're asking how to take an existing object and convert it into this, just say\nstates = {}\nfor(var j=0; j<area.length; j++)\n states[ area[0].State ] = area[0].Cities\n\nAfter running the above code, you could say\nstates[\"Texas\"]\n\nwhich would return\n['Austin','Dallas','San Antonio']\n\n", "This would give you the array of cities based on knowing the state's name:\nvar area = {\n \"Texas\" : [\"Austin\",\"Dallas\",\"San Antonio\"], \n \"Arkansas\" : [\"Little Rock\",\"Texarkana\",\"Hot Springs\"]\n};\n\n// area[\"Texas\"] would return [\"Austin\",\"Dallas\",\"San Antonio\"]\n\n" ]
[ 2, 2, 1, 1 ]
[]
[]
[ "javascript", "jquery" ]
stackoverflow_0000057522_javascript_jquery.txt
Q: How to use getaddrinfo_a to do async resolve with glibc An often overlooked function that requires no external library, but basically has no documentation whatsoever. A: UPDATE (2010-10-11): The linux man-pages now have documentation of getaddrinfo_a, you can find it here: http://www.kernel.org/doc/man-pages/online/pages/man3/getaddrinfo_a.3.html As a disclaimer I should add that I'm quite new to C but not exactly a newbie, so there might be bugs, or bad coding practices, please do correct me (and my grammar sucks too). I personally didn't know about it until I came upon this post by Adam Langley, I shall give a few code snippets to illustrate the usage of it and clarify some things that might not be that clear on first use. The benefit of using this is that you get back data readily usable in socket(), listen() and other functions, and if done right you won't have to worry about ipv4/v6 either. So to start off with the basics, as taken from the link above (you will need to link against libanl (-lanl)):
Here is the function prototype:
int getaddrinfo_a(int mode, struct gaicb *list[], int ent, 
    struct sigevent *);

The mode is either GAI_WAIT (which is probably not what you want) or GAI_NOWAIT for async lookups 
The gaicb argument accepts an array of hosts to lookup with the ent argument specifying how many elements the array has
The sigevent will be responsible for telling the function how we are to be notified, more on this in a moment

A gaicb struct looks like this:
struct gaicb {
    const char *ar_name;
    const char *ar_service;
    const struct addrinfo *ar_request;
    struct addrinfo *ar_result;
};

If you're familiar with getaddrinfo, then these fields correspond to them like so:
int getaddrinfo(const char *node, const char *service,
    const struct addrinfo *hints,
    struct addrinfo **res);

The node is the ar_name field, service is the port, the hints argument corresponds to the ar_request member and the result is stored in the rest.
Now you specify how you want to be notified through the sigevent structure:
struct sigevent {
    sigval_t sigev_value;
    int sigev_signo;
    int sigev_notify;
    void (*sigev_notify_function) (sigval_t);
    pthread_attr_t *sigev_notify_attributes;
};

You can ignore the notification via setting sigev_notify to SIGEV_NONE
You can trigger a signal via setting sigev_notify to SIGEV_SIGNAL and sigev_signo to the desired signal. Note that when using a real-time signal (SIGRTMIN-SIGRTMAX, always use it via the macros and addition SIGRTMIN+2 etc.) you can pass along a pointer or value in the sigev_value.sival_ptr or sigev_value.sival_int member respectively
You can request a callback in a new thread via setting sigev_notify to SIGEV_THREAD

So basically if you want to look up a hostname you set ar_name to the host and set everything else to NULL, if you want to connect to a host you set ar_name and ar_service, and if you want to create a server you specify ar_service and the ar_result field. You can of course customize the ar_request member to your heart's content, look at man getaddrinfo for more info. 
If you have an event loop with select/poll/epoll/kqueue you might want to use signalfd for convenience. 
Signalfd creates a file descriptor on which you can use the usual event polling mechanisms like so:
#define _GNU_SOURCE //yes this will not be so standardish
#include <netdb.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/signalfd.h>

void create_server(struct gaicb *host); //forward declaration, used by signalfd_read below

void signalfd_setup(void) {
    int sfd;
    sigset_t mask;

    sigemptyset(&mask);
    sigaddset(&mask, SIGRTMIN);
    sigprocmask(SIG_BLOCK, &mask, NULL); //we block the signal
    sfd = signalfd(-1, &mask, 0);
    //add it to the event queue
}
void signalfd_read(int fd) {
    ssize_t s;
    struct signalfd_siginfo fdsi;
    struct gaicb *host;

    while((s = read(fd, &fdsi, sizeof(struct signalfd_siginfo))) > 0){
        if (s != sizeof(struct signalfd_siginfo)) return; //that's bad
        host = (struct gaicb *)fdsi.ssi_ptr; //the pointer passed to the sigevent structure
        //the result is in the host->ar_result member
        create_server(host);
    }
}
void create_server(struct gaicb *host) {
    struct addrinfo *rp, *result;
    int fd;

    result = host->ar_result;
    for(rp = result; rp != NULL; rp = rp->ai_next) {
        fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
        bind(fd, rp->ai_addr, rp->ai_addrlen);
        listen(fd, SOMAXCONN);
        //error checks are missing!
    }
    //free these once, outside the loop - freeing inside it would free the list
    //you are still iterating over
    freeaddrinfo((struct addrinfo *)host->ar_request);
    freeaddrinfo(result);
    //you should free everything you put into the gaicb
}
int main(int argc, char *argv[]) {
    struct gaicb *host;
    struct addrinfo *hints;
    struct sigevent sig;

    host = calloc(1, sizeof(struct gaicb));
    hints = calloc(1, sizeof(struct addrinfo));

    hints->ai_family = AF_UNSPEC; //we don't care if it's v4 or v6
    hints->ai_socktype = SOCK_STREAM;
    hints->ai_flags = AI_PASSIVE;
    //every other field is NULL-d by calloc

    host->ar_service = "8888"; //the port we will listen on
    host->ar_request = hints;

    sig.sigev_notify = SIGEV_SIGNAL;
    sig.sigev_value.sival_ptr = host;
    sig.sigev_signo = SIGRTMIN;

    getaddrinfo_a(GAI_NOWAIT, &host, 1, &sig);

    signalfd_setup();

    //start your event loop
    return 0;
}

You can of course use a simple signal handler for this job too, look at man sigaction for more info.
How to use getaddrinfo_a to do async resolve with glibc
An often overlooked function that requires no external library, but basically has no documentation whatsoever.
[ "UPDATE (2010-10-11): The linux man-pages now have documentation of the getaddrinfo_a, you can find it here: http://www.kernel.org/doc/man-pages/online/pages/man3/getaddrinfo_a.3.html\nAs a disclaimer I should add that I'm quite new to C but not exactly a newbie, so there might be bugs, or bad coding practices, please do correct me (and my grammar sucks too). \nI personally didn't know about it until I came upon this post by Adam Langley, I shall give a few code snippets to illustrate the usage of it and clarify some things that might not be that clear on first use. The benefits of using this is that you get back data readily usable in socket(), listen() and other functions, and if done right you won't have to worry about ipv4/v6 either.\nSo to start off with the basics, as taken from the link above (you will need to link against libanl (-lanl)) :\nHere is the function prototype:\nint getaddrinfo_a(int mode, struct gaicb *list[], int ent, \n struct sigevent *);\n\n\nThe mode is either GAI_WAIT (which is probably not what you want) and GAI_NOWAIT for async lookups \nThe gaicb argument accepts an array of hosts to lookup with the ent argument specifying how many elements the array has\nThe sigevent will be responsible for telling the function how we are to be notified, more on this in a moment\n\nA gaicb struct looks like this:\nstruct gaicb {\n const char *ar_name;\n const char *ar_service;\n const struct addrinfo *ar_request;\n struct addrinfo *ar_result;\n};\n\nIf you're familiar with getaddrinfo, then these fields correspond to them like so:\nint getaddrinfo(const char *node, const char *service,\n const struct addrinfo *hints,\n struct addrinfo **res);\n\nThe node is the ar_name field, service is the port, the hints argument corresponds to the ar_request member and the result is stored in the rest.\nNow you specify how you want to be notified through the sigevent structure:\nstruct sigevent {\n sigval_t sigev_value;\n int sigev_signo;\n int sigev_notify;\n void (*sigev_notify_function) (sigval_t);\n pthread_addr_t *sigev_notify_attributes;\n};\n\n\nYou can ignore the notification via setting _sigev_notify_ to SIGEV_NONE\nYou can trigger a signal via setting sigev_notify to SIGEV_SIGNAL and sigev_signo to the desired signal. Note that when using a real-time signal (SIGRTMIN-SIGRTMAX, always use it via the macros and addition SIGRTMIN+2 etc.) you can pass along a pointer or value in the sigev_value.sival_ptr or sigev_value.sival_int member respectivley\nYou can request a callback in a new thread via setting sigev_notify to SIGEV_NONE\n\nSo basically if you want to look up a hostname you set ar_name to the host and set everything else to NULL, if you want to connect to a host you set ar_name and ar_service , and if you want to create a server you specify ar_service and the ar_result field. You can of course customize the ar_request member to your hearts content, look at man getaddrinfo for more info. \nIf you have an event loop with select/poll/epoll/kqueue you might want to use signalfd for convenience. 
Signalfd creates a file descriptor on which you can use the usuall event polling mechanisms like so:\n#define _GNU_SOURCE //yes this will not be so standardish\n#include <netdb.h>\n#include <signal.h>\n#include <sys/signalfd.h>\n\nvoid signalfd_setup(void) {\n int sfd;\n sigset_t mask;\n\n sigemptyset(&mask);\n sigaddset(&mask, SIGRTMIN);\n sigprocmask(SIG_BLOCK, &mask, NULL); //we block the signal\n sfd = signalfd(-1, &mask, 0);\n //add it to the event queue\n}\nvoid signalfd_read(int fd) {\n ssize_t s;\n struct signalfd_siginfo fdsi;\n struct gaicb *host;\n\n while((s = read(fd, &fdsi, sizeof(struct signalfd_siginfo))) > 0){\n if (s != sizeof(struct signalfd_siginfo)) return; //thats bad\n host = fdsi.ssi_ptr; //the pointer passed to the sigevent structure\n //the result is in the host->ar_result member\n create_server(host);\n }\n}\nvoid create_server(struct gaicb *host) {\n struct addrinfo *rp, *result;\n int fd;\n\n result = host->ar_result;\n for(rp = result; rp != NULL; rp = rp->ai_next) {\n fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);\n bind(fd, rp->ai_addr, rp->ai_addrlen);\n listen(fd, SOMAXCONN);\n //error checks are missing!\n\n freeaddrinfo(host->ar_request);\n freeaddrinfo(result);\n //you should free everything you put into the gaicb\n }\n}\nint main(int argc, char *argv[]) {\n struct gaicb *host;\n struct addrinfo *hints;\n struct sigevent sig;\n\n host = calloc(1, sizeof(struct gaicb));\n hints = calloc(1, sizeof(struct addrinfo));\n\n hints->ai_family = AF_UNSPEC; //we dont care if its v4 or v6\n hints->ai_socktype = SOCK_STREAM;\n hints->ai_flags = AI_PASSIVE;\n //every other field is NULL-d by calloc\n\n host->ar_service = \"8888\"; //the port we will listen on\n host->ar_request = hints;\n\n sig.sigev_notify = SIGEV_SIGNAL;\n sig.sigev_value.sival_ptr = host;\n sig.sigev_signo = SIGRTMIN;\n\n getaddrinfo_a(GAI_NOWAIT, &host, 1, &sig);\n\n signalfd_setup();\n\n //start your event loop\n return 0;\n}\n\nYou can of course use a simple signal handler for this job too, look at man sigaction for more info.\n" ]
[ 22 ]
[]
[]
[ "c", "dns", "getaddrinfo_a", "glibc" ]
stackoverflow_0000058069_c_dns_getaddrinfo_a_glibc.txt
Q: OOP class design, Is this design inherently 'anti' OOP? I remember back when MS released a forum sample application, the design of the application was like this:
/Classes/User.cs
/Classes/Post.cs
...
/Users.cs
/Posts.cs
So the classes folder had just the class i.e. properties and getters/setters. The Users.cs, Post.cs, etc. have the actual methods that access the Data Access Layer, so Posts.cs might look like:
public class Posts
{
    public static Post GetPostByID(int postID)
    {
        SqlDataProvider dp = new SqlDataProvider();
        return dp.GetPostByID(postID);
    }
}
Another more traditional route would be to put all of the methods in Posts.cs into the class definition also (Post.cs). Splitting things into 2 files makes it much more procedural, doesn't it? Isn't this breaking OOP rules since it is taking the behavior out of the class and putting it into another class definition? A: If every method is just a static call straight to the data source, then the "Posts" class is really a Factory. You could certainly put the static methods in "Posts" into the "Post" class (this is how CSLA works), but they are still factory methods. I would say that a more modern and accurate name for the "Posts" class would be "PostFactory" (assuming that all it has is static methods). I guess I wouldn't say this is a "procedural" approach necessarily -- it's just a misleading name, you would assume in the modern OO world that a "Posts" object would be stateful and provide methods to manipulate and manage a set of "Post" objects. A: Well it depends where and how you define your separation of concerns. If you put the code to populate the Post in the Post class, then your Business Layer is intermixed with Data Access Code, and vice versa. To me it makes sense to do the data fetching and populating outside the actual domain object, and let the domain object be responsible for using the data. A: Are you sure the classes aren't partial classes? In which case they really aren't two classes, just a single class spread across multiple files for better readability. A: Based on your code snippet, Posts is primarily a class of static helper methods. Posts is not the same object as Post. Instead of Posts, a better name might be PostManager or PostHelper. If you think of it that way, it may help you understand why they broke it out that way. A: This is also an important step for decoupling (or loosely coupling) your applications. A: What's anti-OOP or pro-OOP depends entirely on the functionality of the software and what's needed to make it work.
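To make the alternative concrete, here is roughly what the "more traditional route" from the question would look like — folding the static accessor into the domain class itself. The property names are invented, and SqlDataProvider is assumed to exist exactly as in the sample:

public class Post
{
    public int ID { get; set; }
    public string Title { get; set; }

    // The factory method now lives on the domain class instead of a separate Posts class.
    public static Post GetPostByID(int postID)
    {
        SqlDataProvider dp = new SqlDataProvider();
        return dp.GetPostByID(postID);
    }
}

Whether this is an improvement is exactly the separation-of-concerns trade-off the answers above debate: the type is self-contained, but the domain class now knows about the data access layer.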
OOP class design, Is this design inherently 'anti' OOP?
I remember back when MS released a forum sample application, the design of the application was like this: /Classes/User.cs /Classes/Post.cs ... /Users.cs /Posts.cs So the classes folder had just the class i.e. properties and getters/setters. The Users.cs, Post.cs, etc. have the actual methods that access the Data Access Layer, so Posts.cs might look like: public class Posts { public static Post GetPostByID(int postID) { SqlDataProvider dp = new SqlDataProvider(); return dp.GetPostByID(postID); } } Another more traditional route would be to put all of the methods in Posts.cs into the class definition also (Post.cs). Splitting things into 2 files makes it much more procedural, doesn't it? Isn't this breaking OOP rules since it is taking the behavior out of the class and putting it into another class definition?
[ "If every method is just a static call straight to the data source, then the \"Posts\" class is really a Factory. You could certainly put the static methods in \"Posts\" into the \"Post\" class (this is how CSLA works), but they are still factory methods.\nI would say that a more modern and accurate name for the \"Posts\" class would be \"PostFactory\" (assuming that all it has is static methods).\nI guess I wouldn't say this is a \"procedural\" approach necessarily -- it's just a misleading name, you would assume in the modern OO world that a \"Posts\" object would be stateful and provide methods to manipulate and manage a set of \"Post\" objects.\n", "Well it depends where and how you define your separation of concerns. If you put the code to populate the Post in the Post class, then your Business Layer is interceded with Data Access Code, and vice versa. \nTo me it makes sense to do the data fetching and populating outside the actual domain object, and let the domain object be responsible for using the data.\n", "Are you sure the classes aren't partial classes. In which case they really aren't two classes, just a single class spread across multiple files for better readability.\n", "Based on your code snippet, Posts is primarily a class of static helper methods. Posts is not the same object as Post. Instead of Posts, a better name might be PostManager or PostHelper. If you think of it that way, it may help you understand why they broke it out that way.\n", "This is also an important step for a decoupling (or loosely coupling) you applications.\n", "What's anti-OOP or pro-OOP depends entirely on the functionality of the software and what's needed to make it work.\n" ]
[ 3, 1, 0, 0, 0, 0 ]
[]
[]
[ "oop" ]
stackoverflow_0000058070_oop.txt
Q: How did my process exit? From C# on a Windows box, is there a way to find out how a process was stopped? I've had a look at the Process class, managed to get a nice friendly callback from the Exited event once I set EnableRaisingEvents = true; but I have not managed to find out whether the process was killed or whether it exited naturally? A: Fire up Process Monitor (from Sysinternals, part of Microsoft), run your process and let it die, then filter the Process Monitor results by your process name -- you will be able to see everything that it did, including exit codes. A: You can use the return code of the process for that. If your process returns a non-zero value from its Main method, you can then check whether or not the process exited by itself (the return value matches).
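One way to act on the return-code suggestion above: have the watched program return a known sentinel from Main, then compare Process.ExitCode inside the Exited callback. This is a sketch of my own; watched.exe and the sentinel value are placeholders:

using System;
using System.Diagnostics;

class Watcher
{
    const int NormalExit = 0; // whatever the watched app returns from Main on a clean exit

    static void Main()
    {
        Process p = new Process();
        p.StartInfo.FileName = "watched.exe"; // hypothetical target
        p.EnableRaisingEvents = true;
        p.Exited += delegate
        {
            // A process killed from outside never reaches the end of Main,
            // so its exit code won't match the sentinel.
            Console.WriteLine(p.ExitCode == NormalExit
                ? "exited naturally"
                : "terminated or failed, code " + p.ExitCode);
        };
        p.Start();
        p.WaitForExit();
    }
}

This can't tell you who killed the process, only that it didn't finish the way it normally reports — which matches the limits described in the answers above.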
How did my process exit?
From C# on a Windows box, is there a way to find out how a process was stopped? I've had a look at the Process class, managed to get a nice friendly callback from the Exited event once I set EnableRaisingEvents = true; but I have not managed to find out whether the process was killed or whether it exited naturally?
[ "Fire up Process Monitor (from Sysinternals, part of Microsoft), run your process and let it die, then filter the Process Monitor results by your process name -- you will be able to see everything that it did, including exit codes.\n", "You can use the return code of the process for that. If your process returns a non-zero value from its Main method, you can then check whether or not the process exited by itself (the return value matches).\n" ]
[ 3, 0 ]
[]
[]
[ ".net", "c#", "system.diagnostics" ]
stackoverflow_0000058036_.net_c#_system.diagnostics.txt
Q: Java Right Click does not make a selection. What is the easiest way to solve this globally? Is there a way to globally make right click also select the element that you right click on? From what I understand this has been a bug in Swing for a long time likely to never be fixed because at this point applications depend on it. Any advice on doing this on a global scale? Perhaps on the L&F? A: Using the Glass Pane will do the trick. Here's a tutorial on how to use the glass pane to get the right click button and redispatch it to the right component. As the glass pane is not a solution in this case, I suggest you take a look at the Toolkit class. Specifically the addAWTEventListener method. You can add a global event listener with it. To add a mouse event listener:
Toolkit.getDefaultToolkit().
    addAWTEventListener(listener, AWTEvent.MOUSE_EVENT_MASK);
Cheers
Java Right Click does not make a selection. What is the easiest way to solve this globally?
Is there a way to globally make right click also select the element that you right click on? From what I understand this has been a bug in Swing for a long time likely to never be fixed because at this point applications depend on it. Any advice on doing this on a global scale? Perhaps on the L&F?
[ "Using the Glass Pane will do the trick.\nHere's a tutorial on how to use the glass pane to get the right click button and redispatch it to the right component.\n\nAs the glass pane is not a solution in this case, I suggest you take a look at the Toolkit class. Specificaly the addAWTEventListener method. You can add a global event listener with it. To add a mouse event listener:\nToolkit.getDefaultToolkit().\n addAWTEventListener(listener, AWTEvent.MOUSE_EVENT_MASK);\n\nCheers\n" ]
[ 3 ]
[]
[]
[ "java", "swing" ]
stackoverflow_0000056707_java_swing.txt
Q: Does re.compile() or any given Python library call throw an exception? I can't tell from the Python documentation whether the re.compile(x) function may throw an exception (assuming you pass in a string). I imagine there is something that could be considered an invalid regular expression. The larger question is, where do I go to find if a given Python library call may throw exception(s) and what those are? A: Well, re.compile certainly may:
>>> import re
>>> re.compile('he(lo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python25\lib\re.py", line 180, in compile
    return _compile(pattern, flags)
  File "C:\Python25\lib\re.py", line 233, in _compile
    raise error, v # invalid expression
sre_constants.error: unbalanced parenthesis
The documentation does support this, in a roundabout way - check the bottom of the "Module Contents" page for (brief) description of the error exception. Unfortunately, I don't have any answer to the general question. I suppose the documentation for the various modules varies in quality and thoroughness. If there were particular modules you were interested in, you might be able to decompile them (if written in Python) or even look at the source, if they're in the standard library. A: Unlike Java, where there are exceptions that must be declared to be raised (and some that don't have to be, but that's another story), any Python code may raise any exception at any time. There is a list of built-in exceptions, which generally has some description of when these exceptions might be raised. It's up to you as to how much exception handling you will do, and if you will handle stuff gracefully, or just fail with a traceback.
Does re.compile() or any given Python library call throw an exception?
I can't tell from the Python documentation whether the re.compile(x) function may throw an exception (assuming you pass in a string). I imagine there is something that could be considered an invalid regular expression. The larger question is, where do I go to find if a given Python library call may throw exception(s) and what those are?
[ "Well, re.compile certainly may:\n>>> import re\n>>> re.compile('he(lo')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"C:\\Python25\\lib\\re.py\", line 180, in compile\n return _compile(pattern, flags)\n File \"C:\\Python25\\lib\\re.py\", line 233, in _compile\n raise error, v # invalid expression\nsre_constants.error: unbalanced parenthesis\n\nThe documentation does support this, in a roundabout way - check the bottom of the \"Module Contents\" page for (brief) description of the error exception.\nUnfortunately, I don't have any answer to the general question. I suppose the documentation for the various modules varies in quality and thoroughness. If there were particular modules you were interested in, you might be able to decompile them (if written in Python) or even look at the source, if they're in the standard library.\n", "Unlike Java, where there are exceptions that must be declared to be raised (and some that don't have to be, but that's another story), any Python code may raise any exception at any time.\nThere are a list of built-in exceptions, which generally has some description of when these exceptions might be raised. Its up to you as to how much exception handling you will do, and if you will handle stuff gracefully, or just fail with a traceback.\n" ]
[ 13, 7 ]
[]
[]
[ "exception", "python", "regex" ]
stackoverflow_0000058119_exception_python_regex.txt
Q: Using C++ classes in .so libraries I'm trying to write a small class library for a C++ course. I was wondering if it was possible to define a set of classes in my shared object and then using them directly in my main program that demos the library. Are there any tricks involved? I remember reading this long ago (before I started really programming) that C++ classes only worked with MFC .dlls and not plain ones, but that's just the windows side. A: C++ classes work fine in .so shared libraries (they also work in non-MFC DLLs on Windows, but that's not really your question). It's actually easier than Windows, because you don't have to explicitly export any symbols from the libraries. This document will answer most of your questions: http://people.redhat.com/drepper/dsohowto.pdf The main things to remember are to use the -fPIC option when compiling, and the -shared option when linking. You can find plenty of examples on the net. A: My solution/testing Here's my solution and it does what i expected. Code cat.hh : #include <string> class Cat { std::string _name; public: Cat(const std::string & name); void speak(); }; cat.cpp : #include <iostream> #include <string> #include "cat.hh" using namespace std; Cat::Cat(const string & name):_name(name){} void Cat::speak() { cout << "Meow! I'm " << _name << endl; } main.cpp : #include <iostream> #include <string> #include "cat.hh" using std::cout;using std::endl;using std::string; int main() { string name = "Felix"; cout<< "Meet my cat, " << name << "!" <<endl; Cat kitty(name); kitty.speak(); return 0; } Compilation You compile the shared lib first: $ g++ -Wall -g -fPIC -c cat.cpp $ g++ -shared -Wl,-soname,libcat.so.1 -o libcat.so.1 cat.o Then compile the main executable or C++ program using the classes in the libraries: $ g++ -Wall -g -c main.cpp $ g++ -Wall -Wl,-rpath,. -o main main.o libcat.so.1 # -rpath linker option prevents the need to use LD_LIBRARY_PATH when testing $ ./main Meet my cat, Felix! Meow! I'm Felix $ A: As I understand it, this is fine so long as you are linking .so files which were all compiled using the same compiler. Different compilers mangle the symbols in different ways and will fail to link. That is one of the advantages in using COM on Windows, it defines a standard for putting OOP objects in DLLs. I can compile a DLL using GNU g++ and link it to an EXE compiled with MSVC - or even VB!
Using C++ classes in .so libraries
I'm trying to write a small class library for a C++ course. I was wondering if it was possible to define a set of classes in my shared object and then use them directly in my main program that demos the library. Are there any tricks involved? I remember reading this long ago (before I started really programming) that C++ classes only worked with MFC .dlls and not plain ones, but that's just the windows side.
[ "C++ classes work fine in .so shared libraries (they also work in non-MFC DLLs on Windows, but that's not really your question). It's actually easier than Windows, because you don't have to explicitly export any symbols from the libraries.\nThis document will answer most of your questions: http://people.redhat.com/drepper/dsohowto.pdf\nThe main things to remember are to use the -fPIC option when compiling, and the -shared option when linking. You can find plenty of examples on the net.\n", "My solution/testing\nHere's my solution and it does what i expected.\nCode\ncat.hh : \n#include <string>\n\nclass Cat\n{\n std::string _name;\npublic:\n Cat(const std::string & name);\n void speak();\n};\n\ncat.cpp :\n#include <iostream>\n#include <string>\n\n#include \"cat.hh\"\n\nusing namespace std;\n\nCat::Cat(const string & name):_name(name){}\nvoid Cat::speak()\n{\n cout << \"Meow! I'm \" << _name << endl;\n}\n\nmain.cpp :\n#include <iostream>\n#include <string>\n#include \"cat.hh\"\n\nusing std::cout;using std::endl;using std::string;\nint main()\n{\n string name = \"Felix\";\n cout<< \"Meet my cat, \" << name << \"!\" <<endl;\n Cat kitty(name);\n kitty.speak();\n return 0;\n}\n\nCompilation\nYou compile the shared lib first:\n$ g++ -Wall -g -fPIC -c cat.cpp\n$ g++ -shared -Wl,-soname,libcat.so.1 -o libcat.so.1 cat.o\n\nThen compile the main executable or C++ program using the classes in the libraries:\n$ g++ -Wall -g -c main.cpp\n$ g++ -Wall -Wl,-rpath,. -o main main.o libcat.so.1 # -rpath linker option prevents the need to use LD_LIBRARY_PATH when testing\n$ ./main\nMeet my cat, Felix!\nMeow! I'm Felix\n$\n\n", "As I understand it, this is fine so long as you are linking .so files which were all compiled using the same compiler. Different compilers mangle the symbols in different ways and will fail to link.\nThat is one of the advantages in using COM on Windows, it defines a standard for putting OOP objects in DLLs. I can compile a DLL using GNU g++ and link it to an EXE compiled with MSVC - or even VB!\n" ]
[ 12, 10, 3 ]
[]
[]
[ "c++", "class_library", "linux" ]
stackoverflow_0000058058_c++_class_library_linux.txt
Q: SQL Server Freetext match - how do I sort by relevance Is it possible to order results in SQL Server 2005 by the relevance of a freetext match? In MySQL you can use the (roughly equivalent) MATCH function in the ORDER BY section, but I haven't found an equivalent in SQL Server. From the MySQL docs: For each row in the table, MATCH() returns a relevance value; that is, a similarity measure between the search string and the text in that row in the columns named in the MATCH() list. So for example you could order by the number of votes, then this relevance, and finally by a creation date. Is this something that can be done, or am I stuck with just returning the matching values and not having this ordering ability? A: If you are using FREETEXTTABLE then it returns a column named Rank, so order by Rank should work. I don't know if other freetext search methods are also returning this value or not. You can give it a try. A: Both FREETEXTTABLE and CONTAINSTABLE will return the [RANK] column, but make sure you are using either the correct variation or union both of them to get all appropriate results.
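To make the ordering concrete, here is a hedged sketch of the join the answers describe; the table name, its full-text key column ID, the secondary sort column, and the search string are all placeholders:

SELECT t.[TextField], ft.[RANK]
FROM FREETEXTTABLE([MyTable], [TextField], 'search words') AS ft
INNER JOIN [MyTable] AS t ON t.[ID] = ft.[KEY]
ORDER BY ft.[RANK] DESC, t.[CreatedDate] DESC;

FREETEXTTABLE returns one row per matching row, with [KEY] holding the value of the table's full-text key column and [RANK] the relevance, so further sort keys such as votes or a creation date can simply follow [RANK] in the ORDER BY.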
SQL Server Freetext match - how do I sort by relevance
Is it possible to order results in SQL Server 2005 by the relevance of a freetext match? In MySQL you can use the (roughly equivalent) MATCH function in the ORDER BY section, but I haven't found an equivalent in SQL Server. From the MySQL docs: For each row in the table, MATCH() returns a relevance value; that is, a similarity measure between the search string and the text in that row in the columns named in the MATCH() list. So for example you could order by the number of votes, then this relevance, and finally by a creation date. Is this something that can be done, or am I stuck with just returning the matching values and not having this ordering ability?
[ "If you are using FREETEXTTABLE then it returns a column name Rank, so order by Rank should work. I don't know if other freetext search methods are also returning this value or not. You can have a try.\n", "Both FREETEXTTABLE and CONTAINSTABLE will return the [RANK] column, but make sure you are using either the correct variation or union both of them to get all appropriate results.\n" ]
[ 4, 2 ]
[]
[]
[ "freetext", "full_text_search", "sql", "sql_server" ]
stackoverflow_0000053538_freetext_full_text_search_sql_sql_server.txt
Q: How to call the AllocateAndInitializeSid function from C#? Can somebody give me a complete and working example of calling the AllocateAndInitializeSid function from C# code? I found this: BOOL WINAPI AllocateAndInitializeSid( __in PSID_IDENTIFIER_AUTHORITY pIdentifierAuthority, __in BYTE nSubAuthorityCount, __in DWORD dwSubAuthority0, __in DWORD dwSubAuthority1, __in DWORD dwSubAuthority2, __in DWORD dwSubAuthority3, __in DWORD dwSubAuthority4, __in DWORD dwSubAuthority5, __in DWORD dwSubAuthority6, __in DWORD dwSubAuthority7, __out PSID *pSid ); and I don't know how to construct the signature of this method - what should I do with PSID_IDENTIFIER_AUTHORITY and PSID types? How should I pass them - using ref or out? A: Using P/Invoke Interop Assistant: [System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential)] public struct SidIdentifierAuthority { /// BYTE[6] [System.Runtime.InteropServices.MarshalAsAttribute( System.Runtime.InteropServices.UnmanagedType.ByValArray, SizeConst = 6, ArraySubType = System.Runtime.InteropServices.UnmanagedType.I1)] public byte[] Value; } public partial class NativeMethods { /// Return Type: BOOL->int ///pIdentifierAuthority: PSID_IDENTIFIER_AUTHORITY->_SID_IDENTIFIER_AUTHORITY* ///nSubAuthorityCount: BYTE->unsigned char ///nSubAuthority0: DWORD->unsigned int ///nSubAuthority1: DWORD->unsigned int ///nSubAuthority2: DWORD->unsigned int ///nSubAuthority3: DWORD->unsigned int ///nSubAuthority4: DWORD->unsigned int ///nSubAuthority5: DWORD->unsigned int ///nSubAuthority6: DWORD->unsigned int ///nSubAuthority7: DWORD->unsigned int ///pSid: PSID* [System.Runtime.InteropServices.DllImportAttribute("advapi32.dll", EntryPoint = "AllocateAndInitializeSid")] [return: System.Runtime.InteropServices.MarshalAsAttribute(System.Runtime.InteropServices.UnmanagedType.Bool)] public static extern bool AllocateAndInitializeSid( [System.Runtime.InteropServices.InAttribute()] ref SidIdentifierAuthority pIdentifierAuthority, byte nSubAuthorityCount, uint nSubAuthority0, uint nSubAuthority1, uint nSubAuthority2, uint nSubAuthority3, uint nSubAuthority4, uint nSubAuthority5, uint nSubAuthority6, uint nSubAuthority7, out System.IntPtr pSid); } A: If you are targeting .NET 2.0 or later, the class System.Security.Principal.SecurityIdentifier wraps a SID and allows you to avoid the error-prone Win32 APIs. Not exactly an answer to your question, but who knows it may be useful. A: For Platform Invoke www.pinvoke.net is your new best friend! http://www.pinvoke.net/default.aspx/advapi32/AllocateAndInitializeSid.html
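As a hedged usage sketch (not from the answers above): building the well-known BUILTIN\Administrators SID with the generated signature, then releasing it. The FreeSid declaration and the numeric constants are assumptions taken from the Win32 headers (SECURITY_NT_AUTHORITY = {0,0,0,0,0,5}, SECURITY_BUILTIN_DOMAIN_RID = 0x20, DOMAIN_ALIAS_RID_ADMINS = 0x220):

// Assumed companion import: SIDs returned by AllocateAndInitializeSid
// must be released with FreeSid, which returns IntPtr.Zero on success.
[System.Runtime.InteropServices.DllImportAttribute("advapi32.dll")]
public static extern System.IntPtr FreeSid(System.IntPtr pSid);

SidIdentifierAuthority ntAuthority = new SidIdentifierAuthority();
ntAuthority.Value = new byte[] { 0, 0, 0, 0, 0, 5 }; // SECURITY_NT_AUTHORITY

System.IntPtr pSid;
bool ok = NativeMethods.AllocateAndInitializeSid(
    ref ntAuthority,
    2,      // two sub-authorities follow
    0x20,   // SECURITY_BUILTIN_DOMAIN_RID
    0x220,  // DOMAIN_ALIAS_RID_ADMINS
    0, 0, 0, 0, 0, 0,
    out pSid);
if (ok)
{
    try
    {
        // use pSid here, e.g. pass it to EqualSid or CheckTokenMembership
    }
    finally
    {
        FreeSid(pSid);
    }
}

So, to the original question: pIdentifierAuthority is passed by ref and pSid as out, matching the generated signature.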
How to call the AllocateAndInitializeSid function from C#?
Can somebody give me a complete and working example of calling the AllocateAndInitializeSid function from C# code? I found this: BOOL WINAPI AllocateAndInitializeSid( __in PSID_IDENTIFIER_AUTHORITY pIdentifierAuthority, __in BYTE nSubAuthorityCount, __in DWORD dwSubAuthority0, __in DWORD dwSubAuthority1, __in DWORD dwSubAuthority2, __in DWORD dwSubAuthority3, __in DWORD dwSubAuthority4, __in DWORD dwSubAuthority5, __in DWORD dwSubAuthority6, __in DWORD dwSubAuthority7, __out PSID *pSid ); and I don't know how to construct the signature of this method - what should I do with PSID_IDENTIFIER_AUTHORITY and PSID types? How should I pass them - using ref or out?
[ "Using P/Invoke Interop Assistant:\n [System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential)]\n public struct SidIdentifierAuthority {\n\n /// BYTE[6]\n [System.Runtime.InteropServices.MarshalAsAttribute(\n System.Runtime.InteropServices.UnmanagedType.ByValArray, \n SizeConst = 6, \n ArraySubType = \n System.Runtime.InteropServices.UnmanagedType.I1)]\n public byte[] Value;\n }\n\n public partial class NativeMethods {\n\n /// Return Type: BOOL->int\n ///pIdentifierAuthority: PSID_IDENTIFIER_AUTHORITY->_SID_IDENTIFIER_AUTHORITY*\n ///nSubAuthorityCount: BYTE->unsigned char\n ///nSubAuthority0: DWORD->unsigned int\n ///nSubAuthority1: DWORD->unsigned int\n ///nSubAuthority2: DWORD->unsigned int\n ///nSubAuthority3: DWORD->unsigned int\n ///nSubAuthority4: DWORD->unsigned int\n ///nSubAuthority5: DWORD->unsigned int\n ///nSubAuthority6: DWORD->unsigned int\n ///nSubAuthority7: DWORD->unsigned int\n ///pSid: PSID*\n [System.Runtime.InteropServices.DllImportAttribute(\"advapi32.dll\", EntryPoint = \"AllocateAndInitializeSid\")]\n [return: System.Runtime.InteropServices.MarshalAsAttribute(System.Runtime.InteropServices.UnmanagedType.Bool)]\n public static extern bool AllocateAndInitializeSid(\n [System.Runtime.InteropServices.InAttribute()] \n ref SidIdentifierAuthority pIdentifierAuthority, \n byte nSubAuthorityCount, \n uint nSubAuthority0, \n uint nSubAuthority1, \n uint nSubAuthority2, \n uint nSubAuthority3, \n uint nSubAuthority4, \n uint nSubAuthority5, \n uint nSubAuthority6, \n uint nSubAuthority7, \n out System.IntPtr pSid);\n\n }\n\n", "If you are targeting .NET 2.0 or later, the class System.Security.Principal.SecurityIdentifier wraps a SID and allows you to avoid the error-prone Win32 APIs.\nNot exactly an answer to your question, but who knows it may be useful.\n", "For Platform Invoke www.pinvoke.net is your new best friend!\nhttp://www.pinvoke.net/default.aspx/advapi32/AllocateAndInitializeSid.html\n" ]
[ 3, 2, 1 ]
[]
[]
[ ".net", "c#", "winapi" ]
stackoverflow_0000056729_.net_c#_winapi.txt
Q: Update schema and rows in one transaction, SQL Server 2005 I'm currently updating a legacy system which allows users to dictate part of the schema of one of its tables. Users can create and remove columns from the table through this interface. This legacy system is using ADO 2.8, and is using SQL Server 2005 as its database (you don't even WANT to know what database it was using before the attempt to modernize this beast began... but I digress. =) ) In this same editing process, users can define (and change) a list of valid values that can be stored in these user created fields (if the user wants to limit what can be in the field). When the user changes the list of valid entries for a field, if they remove one of the valid values, they are allowed to choose a new "valid value" to map any rows that have this (now invalid) value in it, so that they now have a valid value again. In looking through the old code, I noticed that it is extremely vulnerable to putting the system into an invalid state, because the changes mentioned above are not done within a transaction (so if someone else came along halfway through the process mentioned above and made their own changes... well, you can imagine the problems that might cause). The problem is, I've been trying to get them to update under a single transaction, but whenever the code gets to the part where it changes the schema of that table, all of the other changes (updating values in rows, be it in the table where the schema changed or not... they can be completely unrelated tables even) made up to that point in the transaction appear to be silently dropped. I receive no error message indicating that they were dropped, and when I commit the transaction at the end no error is raised... but when I go to look in the tables that were supposed to be updated in the transaction, only the new columns are there. None of the non-schema changes made are saved. Looking on the net for answers has, thus far, proved to be a waste of a couple of hours... so I turn here for help. Has anyone ever tried to perform a transaction through ADO that both updates the schema of a table and updates rows in tables (be it that same table, or others)? Is it not allowed? Is there any documentation out there that could be helpful in this situation? EDIT: Okay, I did a trace, and these commands were sent to the database (explanations in parentheses) (I don't know what's happening here, looks like it's creating a temporary stored procedure...?)
declare @p1 int set @p1=180150003 declare @p3 int set @p3=2 declare @p4 int set @p4=4 declare @p5 int set @p5=-1 (Retrieving the table that holds definition information for the user-generated fields) exec sp_cursoropen @p1 output,N'SELECT * FROM CustomFieldDefs ORDER BY Sequence',@p3 output,@p4 output,@p5 output select @p1, @p3, @p4, @p5 go (I think my code was iterating through the list of them here, grabbing the current information) exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,1025,1,1 go exec sp_cursorfetch 180150003,1028,1,1 go exec sp_cursorfetch 180150003,32,1,1 go (This appears to be where I'm entering the modified data for the definitions, I go through each and update any changes that occurred in the definitions for the custom fields themselves) exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=1,@Description='asdf',@Format='U|',@IsLookUp=1,@Length=50,@Properties='U|',@Required=1,@Title='__asdf',@Type='',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=2,@Description='give',@Format='Y',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_give',@Type='B',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=3,@Description='up',@Format='###-##-####',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_up',@Type='N',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=4,@Description='Testy',@Format='',@IsLookUp=0,@Length=50,@Properties='',@Required=0,@Title='_Testy',@Type='',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=5,@Description='you',@Format='U|',@IsLookUp=0,@Length=250,@Properties='U|',@Required=0,@Title='_you',@Type='',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=6,@Description='never',@Format='mm/dd/yyyy',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_never',@Type='D',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=7,@Description='gonna',@Format='###-###-####',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_gonna',@Type='C',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go (This is where my code removes the columns deleted through the interface before this saving began... it is also the ONLY thing as far as I can tell that actually happens during this transaction) ALTER TABLE CustomizableTable DROP COLUMN _weveknown; (Now if any of the definitions were altered in such a way that the user-created column's properties need to be changed or indexes on the columns need to be added/removed, it is done here, along with giving a default value to any rows that didn't have a value yet for the given column... note that, as far as I can tell, NONE of this actually happens when the stored procedure finishes.)
go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '__asdf' go ALTER TABLE CustomizableTable ALTER COLUMN __asdf VarChar(50) NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx___asdf') CREATE NONCLUSTERED INDEX idx___asdf ON CustomizableTable ( __asdf ASC) WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF); go select * from IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx___asdf') CREATE NONCLUSTERED INDEX idx___asdf ON CustomizableTable ( __asdf ASC) WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF); go UPDATE CustomizableTable SET [__asdf] = '' WHERE [__asdf] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_give' go ALTER TABLE CustomizableTable ALTER COLUMN _give Bit NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__give') DROP INDEX idx__give ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_give] = 0 WHERE [_give] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_up' go ALTER TABLE CustomizableTable ALTER COLUMN _up Int NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__up') DROP INDEX idx__up ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_up] = 0 WHERE [_up] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_Testy' go ALTER TABLE CustomizableTable ADD _Testy VarChar(50) NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__Testy') DROP INDEX idx__Testy ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_Testy] = '' WHERE [_Testy] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_you' go ALTER TABLE CustomizableTable ALTER COLUMN _you VarChar(250) NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__you') DROP INDEX idx__you ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_you] = '' WHERE [_you] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_never' go ALTER TABLE CustomizableTable ALTER COLUMN _never DateTime NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__never') DROP INDEX idx__never ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_never] = '1/1/1900' WHERE [_never] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_gonna' go ALTER TABLE CustomizableTable ALTER COLUMN _gonna Money NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__gonna') DROP INDEX idx__gonna ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_gonna] = 0 WHERE [_gonna] IS NULL go (Closing the Transaction...?) exec sp_cursorclose 180150003 go After all that ado above, only the deletion of the column occurs. 
Everything before and after it in the transaction appears to be ignored, and there were no messages in the SQL Trace to indicate that something went wrong during the transaction. A: The code is using a server-side cursor, that's what those calls are for. The first set of calls is preparing/opening the cursor. Then fetching rows from the cursor. Finally closing the cursor. Those sprocs are analogous to the OPEN CURSOR, FETCH NEXT, CLOSE CURSOR T-SQL statements. I'd have to take a closer look (which I will), but my guess is there is something going on with the server-side cursor, the encapsulating transaction, and the DDL. Some more questions: Are you meaning to use server-side cursors in this case? Are the ADO Commands all using the same active connection? Update: I'm not exactly sure what's going on. It looks like you're using server-side cursors so you can use Recordset.Update() to push changes back to the server, in addition to executing generated SQL statements to alter schema and update data in the dynamic table(s). Using the same connection, inside an explicit transaction. I'm not sure what effect the cursor operations will have on the rest of the transaction, or vice-versa, and to be honest I'm surprised this isn't working. I don't know how large of a change it would be, but I would recommend moving away from the server-side cursors and building the UPDATE statements for your table updates. Sorry I couldn't be of more help. BTW- I found the following information on the sp_cursor calls: http://jtds.sourceforge.net/apiCursors.html A: The behavior you describe is allowed. How is the code making the schema changes? Building SQL on the fly and executing through an ADO Command? Or using ADOX? If you have access to the database server, try running a SQL Profiler trace while testing the scenario you outlined. See if the trace logs any errors/rollbacks.
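For what it's worth, most DDL is transactional in SQL Server 2005, so the intended behaviour is achievable with plain batches on a single connection; a minimal sketch using the table names from the trace (the specific column and values are illustrative):

BEGIN TRANSACTION;

-- schema change and an unrelated row change in one unit of work
ALTER TABLE CustomizableTable ADD _Example VARCHAR(50) NULL;

UPDATE CustomFieldDefs
SET [Description] = 'example'
WHERE [Sequence] = 4;

COMMIT TRANSACTION;
-- issuing ROLLBACK TRANSACTION instead would undo both the new column and the UPDATE

Note the UPDATE deliberately touches a different table: referencing a column added earlier in the same batch can fail at parse time, so updates that fill in a brand-new column belong in a separate batch inside the same transaction.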
Update schema and rows in one transaction, SQL Server 2005
I'm currently updating a legacy system which allows users to dictate part of the schema of one of its tables. Users can create and remove columns from the table through this interface. This legacy system is using ADO 2.8, and is using SQL Server 2005 as its database (you don't even WANT to know what database it was using before the attempt to modernize this beast began... but I digress. =) ) In this same editing process, users can define (and change) a list of valid values that can be stored in these user created fields (if the user wants to limit what can be in the field). When the user changes the list of valid entries for a field, if they remove one of the valid values, they are allowed to choose a new "valid value" to map any rows that have this (now invalid) value in it, so that they now have a valid value again. In looking through the old code, I noticed that it is extremely vulnerable to putting the system into an invalid state, because the changes mentioned above are not done within a transaction (so if someone else came along halfway through the process mentioned above and made their own changes... well, you can imagine the problems that might cause). The problem is, I've been trying to get them to update under a single transaction, but whenever the code gets to the part where it changes the schema of that table, all of the other changes (updating values in rows, be it in the table where the schema changed or not... they can be completely unrelated tables even) made up to that point in the transaction appear to be silently dropped. I receive no error message indicating that they were dropped, and when I commit the transaction at the end no error is raised... but when I go to look in the tables that were supposed to be updated in the transaction, only the new columns are there. None of the non-schema changes made are saved. Looking on the net for answers has, thus far, proved to be a waste of a couple of hours... so I turn here for help. Has anyone ever tried to perform a transaction through ADO that both updates the schema of a table and updates rows in tables (be it that same table, or others)? Is it not allowed? Is there any documentation out there that could be helpful in this situation? EDIT: Okay, I did a trace, and these commands were sent to the database (explanations in parentheses) (I don't know what's happening here, looks like it's creating a temporary stored procedure...?)
declare @p1 int set @p1=180150003 declare @p3 int set @p3=2 declare @p4 int set @p4=4 declare @p5 int set @p5=-1 (Retrieving the table that holds definition information for the user-generated fields) exec sp_cursoropen @p1 output,N'SELECT * FROM CustomFieldDefs ORDER BY Sequence',@p3 output,@p4 output,@p5 output select @p1, @p3, @p4, @p5 go (I think my code was iterating through the list of them here, grabbing the current information) exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursorfetch 180150003,1025,1,1 go exec sp_cursorfetch 180150003,1028,1,1 go exec sp_cursorfetch 180150003,32,1,1 go (This appears to be where I'm entering the modified data for the definitions, I go through each and update any changes that occurred in the definitions for the custom fields themselves) exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=1,@Description='asdf',@Format='U|',@IsLookUp=1,@Length=50,@Properties='U|',@Required=1,@Title='__asdf',@Type='',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=2,@Description='give',@Format='Y',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_give',@Type='B',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=3,@Description='up',@Format='###-##-####',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_up',@Type='N',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=4,@Description='Testy',@Format='',@IsLookUp=0,@Length=50,@Properties='',@Required=0,@Title='_Testy',@Type='',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=5,@Description='you',@Format='U|',@IsLookUp=0,@Length=250,@Properties='U|',@Required=0,@Title='_you',@Type='',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=6,@Description='never',@Format='mm/dd/yyyy',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_never',@Type='D',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go exec sp_cursor 180150003,33,1,N'[CustomFieldDefs]',@Sequence=7,@Description='gonna',@Format='###-###-####',@IsLookUp=0,@Length=0,@Properties='',@Required=0,@Title='_gonna',@Type='C',@_Version=1 go exec sp_cursorfetch 180150003,32,1,1 go (This is where my code removes the columns deleted through the interface before this saving began... it is also the ONLY thing as far as I can tell that actually happens during this transaction) ALTER TABLE CustomizableTable DROP COLUMN _weveknown; (Now if any of the definitions were altered in such a way that the user-created column's properties need to be changed or indexes on the columns need to be added/removed, it is done here, along with giving a default value to any rows that didn't have a value yet for the given column... note that, as far as I can tell, NONE of this actually happens when the stored procedure finishes.)
go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '__asdf' go ALTER TABLE CustomizableTable ALTER COLUMN __asdf VarChar(50) NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx___asdf') CREATE NONCLUSTERED INDEX idx___asdf ON CustomizableTable ( __asdf ASC) WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF); go select * from IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx___asdf') CREATE NONCLUSTERED INDEX idx___asdf ON CustomizableTable ( __asdf ASC) WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF); go UPDATE CustomizableTable SET [__asdf] = '' WHERE [__asdf] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_give' go ALTER TABLE CustomizableTable ALTER COLUMN _give Bit NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__give') DROP INDEX idx__give ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_give] = 0 WHERE [_give] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_up' go ALTER TABLE CustomizableTable ALTER COLUMN _up Int NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__up') DROP INDEX idx__up ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_up] = 0 WHERE [_up] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_Testy' go ALTER TABLE CustomizableTable ADD _Testy VarChar(50) NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__Testy') DROP INDEX idx__Testy ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_Testy] = '' WHERE [_Testy] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_you' go ALTER TABLE CustomizableTable ALTER COLUMN _you VarChar(250) NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__you') DROP INDEX idx__you ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_you] = '' WHERE [_you] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_never' go ALTER TABLE CustomizableTable ALTER COLUMN _never DateTime NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__never') DROP INDEX idx__never ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_never] = '1/1/1900' WHERE [_never] IS NULL go SELECT * FROM sys.columns WHERE object_id = OBJECT_ID(N'CustomizableTable') AND name = '_gonna' go ALTER TABLE CustomizableTable ALTER COLUMN _gonna Money NULL go IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[CustomizableTable]') AND name = N'idx__gonna') DROP INDEX idx__gonna ON CustomizableTable WITH ( ONLINE = OFF ); go UPDATE CustomizableTable SET [_gonna] = 0 WHERE [_gonna] IS NULL go (Closing the Transaction...?) exec sp_cursorclose 180150003 go After all that ado above, only the deletion of the column occurs. 
Everything before and after it in the transaction appears to be ignored, and there were no messages in the SQL Trace to indicate that something went wrong during the transaction.
[ "The code is using a server-side cursor, that's what those calls are for. The first set of calls is preparing/opening the cursor. Then fetching rows from the cursor. Finally closing the cursor. Those sprocs are analogous to the OPEN CURSOR, FETCH NEXT, CLOSE CURSOR T-SQL statements.\nI'd have to take a closer look (which I will), but my guess is there is something going on with the server-side cursor, the encapsulating transaction, and the DDL.\nSome more questions:\n\nAre you meaning to use server-side cursors in this case?\nAre the ADO Commands all using the same active connection?\n\nUpdate:\nI'm not exactly sure what's going on.\nIt looks like you're using server-side cursors so you can use Recordset.Update() to push changes back to the server, in addition to executing generated SQL statements to alter schema and update data in the dynamic table(s). Using the same connection, inside an explicit transaction.\nI'm not sure what effect the cursor operations will have on the rest of the transaction, or vice-versa, and to be honest I'm surprised this isn't working.\nI don't know how large of a change it would be, but I would recommend moving away from the server-side cursors and building the UPDATE statements for your table updates.\nSorry I couldn't be of more help.\nBTW- I found the following information on the sp_cursor calls:\nhttp://jtds.sourceforge.net/apiCursors.html\n", "The behavior you describe is allowed. How is the code making the schema changes? Building SQL on the fly and executing through an ADO Command? Or using ADOX?\nIf you have access to the database server, try running a SQL Profiler trace while testing the scenario you outlined. See if the trace logs any errors/rollbacks.\n" ]
[ 1, 0 ]
[]
[]
[ "ado", "sql", "sql_server", "sql_server_2005", "transactions" ]
stackoverflow_0000057912_ado_sql_sql_server_sql_server_2005_transactions.txt
Q: MS SQL FTI - searching on "n*" returns numbers This seems like odd behaviour from SQL's full-text-index. FTI stores number in its index with an "NN" prefix, so "123" is saved as "NN123". Now when a user searches for words beginning with N (i.e. contains "n*" ) they also get all numbers. So: select [TextField] from [MyTable] where contains([TextField], '"n*"') Returns: MyTable.TextField -------------------------------------------------- This text contains the word navigator This text is nice This text only has 123, and shouldn't be returned Is there a good way to exclude that last row? Is there a consistent workaround for this? Those extra "" are needed to make the wildcard token work: select [TextField] from [MyTable] where contains([TextField], 'n*') Would search for literal n* - and there aren't any. --return rows with the word text select [TextField] from [MyTable] where contains([TextField], 'text') --return rows with the word tex* select [TextField] from [MyTable] where contains([TextField], 'tex*') --return rows with words that begin tex... select [TextField] from [MyTable] where contains([TextField], '"tex*"') A: There are a couple of ways to handle this, though neither is really all that great. First, add a column to your table that says that TextField is really a number. If you could do that and filter, you would have the most performant version. If that's not an option, then you will need to add a further filter. While I haven't extensively tested it, you could add the filter AND TextField NOT LIKE 'NN%[0-9]%' The downside is that this would filter out 'NN12NOO' but that may be an edge case not represented by your data.
MS SQL FTI - searching on "n*" returns numbers
This seems like odd behaviour from SQL's full-text-index. FTI stores numbers in its index with an "NN" prefix, so "123" is saved as "NN123". Now when a user searches for words beginning with N (i.e. contains "n*") they also get all numbers. So: select [TextField] from [MyTable] where contains([TextField], '"n*"') Returns: MyTable.TextField -------------------------------------------------- This text contains the word navigator This text is nice This text only has 123, and shouldn't be returned Is there a good way to exclude that last row? Is there a consistent workaround for this? Those extra "" are needed to make the wildcard token work: select [TextField] from [MyTable] where contains([TextField], 'n*') Would search for literal n* - and there aren't any. --return rows with the word text select [TextField] from [MyTable] where contains([TextField], 'text') --return rows with the word tex* select [TextField] from [MyTable] where contains([TextField], 'tex*') --return rows with words that begin tex... select [TextField] from [MyTable] where contains([TextField], '"tex*"')
[ "There are a couple of ways to handle this, though neither is really all that great.\nFirst, add a column to your table that says that TextField is really a number. If you could do that and filter, you would have the most performant version.\nIf that's not an option, then you will need to add a further filter. While I haven't extensively tested it, you could add the filter AND TextField NOT LIKE 'NN%[0-9]%'\nThe downside is that this would filter out 'NN12NOO' but that may be an edge case not represented by your data.\n" ]
[ 1 ]
[]
[]
[ "full_text_search", "sql_server" ]
stackoverflow_0000045438_full_text_search_sql_server.txt
Q: Understanding .Net Configuration Options I'm really confused by the various configuration options for .Net configuration of dll's, ASP.net websites etc in .Net v2 - especially when considering the impact of a config file at the UI / end-user end of the chain. So, for example, some of the applications I work with use settings which we access with: string blah = AppLib.Properties.Settings.Default.TemplatePath; Now, this option seems cool because the members are strongly typed, and I won't be able to type in a property name that doesn't exist in the Visual Studio 2005 IDE. We end up with lines like this in the App.Config of a command-line executable project: <connectionStrings> <add name="AppConnectionString" connectionString="XXXX" /> <add name="AppLib.Properties.Settings.AppConnectionString" connectionString="XXXX" /> </connectionStrings> (If we don't have the second setting, someone releasing a debug dll to the live box could have built with the debug connection string embedded in it - eek) We also have settings accessed like this: string blah = System.Configuration.ConfigurationManager.AppSettings["TemplatePath_PDF"]; Now, these seem cool because we can access the setting from the dll code, or the exe / aspx code, and all we need in the Web or App.config is: <appSettings> <add key="TemplatePath_PDF" value="xxx"/> </appSettings> However, the value of course may not be set in the config files, or the string name may be mistyped, and so we have a different set of problems. So... if my understanding is correct, the former methods give strong typing but bad sharing of values between the dll and other projects. The latter provides better sharing, but weaker typing. I feel like I must be missing something. For the moment, I'm not even concerned with the application being able to write-back values to the configuration files, encryption or anything like that. Also, I had decided that the best way to store any non-connection strings was in the DB... and then the very next thing that I have to do is store phone numbers to text people in case of DB connection issues, so they must be stored outside the DB! A: If you use the settings tab in VS 2005+, you can add strongly typed settings and get intellisense, such as in your first example. string phoneNum = Properties.Settings.Default.EmergencyPhoneNumber; This is physically stored in App.Config. You could still use the config file's appSettings element, or even roll your own ConfigurationElementCollection, ConfigurationElement, and ConfigurationSection subclasses. As to where to store your settings, database or config file, in the case of non-connection strings: It depends on your application architecture. If you've got an application server that is shared by all the clients, use the aforementioned method, in App.Config on the app server. Otherwise, you may have to use a database. Placing it in the App.Config on each client will cause versioning/deployment headaches. A: Nij, our difference in thinking comes from our different perspectives. I'm thinking about developing enterprise apps that predominantly use WinForms clients. In this instance the business logic is contained on an application server. Each client would need to know the phone number to dial, but placing it in the App.config of each client poses a problem if that phone number changes. In that case it seems obvious to store application configuration information (or application wide settings) in a database and have each client read the settings from there.
The other, .NET way, (I make the distinction because we have, in the pre .NET days, stored application settings in DB tables) is to store application settings in the app.config file and access them by way of the generated Settings class. I digress. Your situation sounds different. If all different apps are on the same server, you could place the settings in a web.config at a higher level. However if they are not, you could also have a separate "configuration service" that all three applications talk to for their shared settings. At least in this solution you're not replicating the code in three places, raising the potential of maintenance problems when adding settings. Sounds a bit over engineered though. My personal preference is to use strongly typed settings. I actually generate my own strongly typed settings class based on what's in my settings table in the database. That way I can have the best of both worlds. Intellisense to my settings and settings stored in the db (note: that's in the case where there's no app server). I'm interested in learning other people's strategies for this too :) A: I think your confusion comes from the fact that it looks like your first example is a home-brewed library, not part of .NET. The ConfigurationManager example is an example of built-in functionality. A: I support Rob Gray's answer, but wanted to add to it slightly. This may be overly obvious, but if you are using multiple clients, the app.config should store all settings that are installation specific and the database should store pretty much everything else. Single client (or server) apps are somewhat different. Here it is more personal choice really. A noticeable exception would be if the setting is the ID of a record in the database, in which case I would always store the setting in the database with a foreign key to ensure the reference doesn't get deleted. A: Yes - I think I / we are in the headache situation Rob describes - we have something like 5 or 6 different web-sites and applications across three independent servers that need to access the same DB. As things stand, each one has its own Web or App.config with the settings described, setting and / or overriding settings in our main DB-access dll library. Rob - when you say application server, I'm not sure what you mean? The nearest thing I can think is that we could at least share some settings between sites on the same machine by putting them in a web.config higher in the directory hierarchy... but this too is not something I've been able to investigate... having thought it more important to understand which of the strong or weak-typed routes is 'better'.
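As a sketch of the "roll your own ConfigurationSection" option mentioned in the first answer (all names here are placeholders, not from the original project): define the section once in a shared dll,

using System.Configuration;

// Strongly typed access to a custom config section; the section class
// can live in a common dll so every project shares one definition.
public class TemplateSettings : ConfigurationSection
{
    [ConfigurationProperty("pdfTemplatePath", IsRequired = true)]
    public string PdfTemplatePath
    {
        get { return (string)this["pdfTemplatePath"]; }
        set { this["pdfTemplatePath"] = value; }
    }
}

wire it up in each Web or App.config,

<configSections>
  <section name="templateSettings" type="MyLib.TemplateSettings, MyLib" />
</configSections>
<templateSettings pdfTemplatePath="xxx" />

and read it back strongly typed:

TemplateSettings ts = (TemplateSettings)System.Configuration.ConfigurationManager.GetSection("templateSettings");
string path = ts.PdfTemplatePath;

This keeps the compile-time members of the first approach while letting several projects share one section definition from a common dll.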
Understanding .Net Configuration Options
I'm really confused by the various configuration options for .Net configuration of dll's, ASP.net websites etc in .Net v2 - especially when considering the impact of a config file at the UI / end-user end of the chain. So, for example, some of the applications I work with use settings which we access with: string blah = AppLib.Properties.Settings.Default.TemplatePath; Now, this option seems cool because the members are strongly typed, and I won't be able to type in a property name that doesn't exist in the Visual Studio 2005 IDE. We end up with lines like this in the App.Config of a command-line executable project: <connectionStrings> <add name="AppConnectionString" connectionString="XXXX" /> <add name="AppLib.Properties.Settings.AppConnectionString" connectionString="XXXX" /> </connectionStrings> (If we don't have the second setting, someone releasing a debug dll to the live box could have built with the debug connection string embedded in it - eek) We also have settings accessed like this: string blah = System.Configuration.ConfigurationManager.AppSettings["TemplatePath_PDF"]; Now, these seem cool because we can access the setting from the dll code, or the exe / aspx code, and all we need in the Web or App.config is: <appSettings> <add key="TemplatePath_PDF" value="xxx"/> </appSettings> However, the value of course may not be set in the config files, or the string name may be mistyped, and so we have a different set of problems. So... if my understanding is correct, the former methods give strong typing but bad sharing of values between the dll and other projects. The latter provides better sharing, but weaker typing. I feel like I must be missing something. For the moment, I'm not even concerned with the application being able to write-back values to the configuration files, encryption or anything like that. Also, I had decided that the best way to store any non-connection strings was in the DB... and then the very next thing that I have to do is store phone numbers to text people in case of DB connection issues, so they must be stored outside the DB!
[ "If you use the settings tab in VS 2005+, you can add strongly typed settings and get intellisense, such as in your first example.\nstring phoneNum = Properties.Settings.Default.EmergencyPhoneNumber;\n\nThis is physically stored in App.Config.\nYou could still use the config file's appSettings element, or even roll your own ConfigurationElementCollection, ConfigurationElement, and ConfigurationSection subclasses.\nAs to where to store your settings, database or config file, in the case of non-connection strings: It depends on your application architecture. If you've got an application server that is shared by all the clients, use the aforementioned method, in App.Config on the app server. Otherwise, you may have to use a database. Placing it in the App.Config on each client will cause versioning/deployment headaches.\n", "Nij, our difference in thinking comes from our different perspectives. I'm thinking about developing enterprise apps that predominantly use WinForms clients. In this instance the business logic is contained on an application server. Each client would need to know the phone number to dial, but placing it in the App.config of each client poses a problem if that phone number changes. In that case it seems obvious to store application configuration information (or application wide settings) in a database and have each client read the settings from there. \nThe other, .NET way, (I make the distinction because we have, in the pre .NET days, stored application settings in DB tables) is to store application settings in the app.config file and access via way of the generated Settings class.\nI digress. Your situation sounds different. If all different apps are on the same server, you could place the settings in a web.config at a higher level. However if they are not, you could also have a seperate \"configuration service\" that all three applications talk to get their shared settings. At least in this solution you're not replicating the code in three places, raising the potential of maintenance problems when adding settings. Sounds a bit over engineered though.\nMy personal preference is to use strong typed settings. I actually generate my own strongly typed settings class based on what it's my settings table in the database. That way I can have the best of both worlds. Intellisense to my settings and settings stored in the db (note: that's in the case where there's no app server).\nI'm interested in learning other peoples strategies for this too :)\n", "I think your confusion comes from the fact that it looks like your first example is a home-brewed library, not part of .NET.\nThe configurationmanager example is an example of built-in functionality.\n", "I support Rob Grays answer, but wanted to add to it slightly. This may be overly obvious, but if you are using multiple clients, the app.config should store all settings that are installation specific and the database should store pretty much everything else.\nSingle client (or server) apps are somewhat different. Here it is more personal choice really. A noticable exception would be if the setting is the ID of a record in the database, in which case I would always store the setting in the database with a foreign key to ensure the reference doesn't get deleted.\n", "Yes - I think I / we are in the headache situation Rob descibes - we have something like 5 or 6 different web-sites and applications across three independent servers that need to access the same DB. 
As things stand, each one has its own Web or App.config with the settings described setting and / or overriding settings in our main DB-access dll library.\nRob - when you say application server, I'm not sure what you mean? The nearest thing I can think is that we could at least share some settings between sites on the same machine by putting them in a web.config higher in the directory hierarchy... but this too is not something I've been able to investigate... having thought it more important to understand which of the strong or weak-typed routes is 'better'.\n" ]
[ 3, 2, 0, 0, 0 ]
[]
[]
[ ".net", "c#", "configuration" ]
stackoverflow_0000057947_.net_c#_configuration.txt
Q: Create background process in Windows without visible console window How do I create a background process with Haskell on Windows without a visible command window being created? I wrote a Haskell program that runs backup processes periodically but every time I run it, a command window opens up to the top of all the windows. I would like to get rid of this window. What is the simplest way to do this? A: You should really tell us how you are trying to do this currently, but on my system (using Linux) the following snippet will run a command without opening a new terminal window. It should work the same way on Windows. module Main where import System import System.Process import Control.Monad main :: IO () main = do putStrLn "Running command..." pid <- runCommand "mplayer song.mp3" -- or whatever you want replicateM_ 10 $ putStrLn "Doing other stuff" waitForProcess pid >>= exitWith A: Thanks for the responses so far, but I've found my own solution. I did try a lot of different things, from writing a vbs script as suggested to a standalone program called hstart. hstart worked...but it creates a separate process which I didn't like very much because then I can't kill it in the normal way. But I found a simpler solution that required simply Haskell code. My code from before was a simple call to runCommand, which did pop up the window. An alternative function you can use is runProcess which has more options. From peeking at the ghc source code file runProcess.c, I found that the CREATE_NO_WINDOW flag is set when you supply redirects for all of STDIN, STDOUT, and STDERR. So that's what you need to do, supply redirects for those. My test program looks like: import System.Process import System.IO main = do inH <- openFile "in" ReadMode outH <- openFile "out" WriteMode runProcess "rsync.bat" [] Nothing Nothing (Just inH) (Just outH) (Just outH) This worked! No command window again! A caveat is that you need an empty file for inH to read in as the STDIN even though in my situation it was not needed. A: The simplest way I can think of is to run the rsync command from within a Windows Shell script (vbs or cmd). A: I don't know anything about Haskell, but I had this problem in a C project a few months ago. The best way to execute an external program without any windows popping up is to use the ShellExecuteEx() API function with the "open" verb. If ShellExecuteEx() is available to you in Haskell, then you should be able to achieve what you want. The C code looks something like this: SHELLEXECUTEINFO Info; BOOL b; // Execute it memset (&Info, 0, sizeof (Info)); Info.cbSize = sizeof (Info); Info.fMask = SEE_MASK_NOCLOSEPROCESS | SEE_MASK_FLAG_NO_UI; Info.hwnd = NULL; Info.lpVerb = "open"; Info.lpFile = "rsync.exe"; Info.lpParameters = "whatever parameters you like"; Info.lpDirectory = NULL; Info.nShow = SW_HIDE; b = ShellExecuteEx (&Info); if (b) { // Looks good; if there is an instance, wait for it if (Info.hProcess) { // Wait WaitForSingleObject (Info.hProcess, INFINITE); } }
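For the shell-script suggestion, a minimal .vbs sketch (the batch file name is a placeholder); WScript.Shell's Run method takes a window-style argument, and 0 means no visible window:

' run-hidden.vbs: launch a command with no console window
Set shell = CreateObject("WScript.Shell")
shell.Run "rsync.bat", 0, True ' 0 = hidden window, True = wait for exit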
Create background process in Windows without visible console window
How do I create a background process with Haskell on Windows without a visible command window being created? I wrote a Haskell program that runs backup processes periodically but every time I run it, a command window opens up to the top of all the windows. I would like to get rid of this window. What is the simplest way to do this?
[ "You should really tell us how you are trying to do this currently, but on my system (using linux) the following snippet will run a command without opening a new terminal window. It should work the same way on windows.\nmodule Main where\nimport System\nimport System.Process\nimport Control.Monad\n\nmain :: IO ()\nmain = do\n putStrLn \"Running command...\"\n pid <- runCommand \"mplayer song.mp3\" -- or whatever you want\n replicateM_ 10 $ putStrLn \"Doing other stuff\"\n waitForProcess pid >>= exitWith\n\n", "Thanks for the responses so far, but I've found my own solution. I did try a lot of different things, from writing a vbs script as suggested to a standalone program called hstart. hstart worked...but it creates a separate process which I didn't like very much because then I can't kill it in the normal way. But I found a simpler solution that required simply Haskell code.\nMy code from before was a simple call to runCommand, which did popup the window. An alternative function you can use is runProcess which has more options. From peeking at the ghc source code file runProcess.c, I found that the CREATE_NO_WINDOW flag is set when you supply redirects for all of STDIN, STOUT, and STDERR. So that's what you need to do, supply redirects for those. My test program looks like:\nimport System.Process\nimport System.IO\nmain = do\n inH <- openFile \"in\" ReadMode\n outH <- openFile \"out\" WriteMode\n runProcess \"rsync.bat\" [] Nothing Nothing (Just inH) (Just outH) (Just outH)\n\nThis worked! No command window again! A caveat is that you need an empty file for inH to read in as the STDIN eventhough in my situation it was not needed.\n", "The simplest way I can think of is to run the rsync command from within a Windows Shell script (vbs or cmd).\n", "I don't know anything about Haskell, but I had this problem in a C project a few months ago.\nThe best way to execute an external program without any windows popping up is to use the ShellExecuteEx() API function with the \"open\" verb. If ShellExecuteEx() is available to you in Haskell, then you should be able to achieve what you want.\nThe C code looks something like this:\nSHELLEXECUTEINFO Info;\nBOOL b;\n\n// Execute it\nmemset (&Info, 0, sizeof (Info));\nInfo.cbSize = sizeof (Info);\nInfo.fMask = SEE_MASK_NOCLOSEPROCESS | SEE_MASK_FLAG_NO_UI;\nInfo.hwnd = NULL;\nInfo.lpVerb = \"open\";\nInfo.lpFile = \"rsync.exe\";\nInfo.lpParameters = \"whatever parameters you like\";\nInfo.lpDirectory = NULL;\nInfo.nShow = SW_HIDE;\nb = ShellExecuteEx (&Info);\nif (b)\n {\n // Looks good; if there is an instance, wait for it\n if (Info.hProcess)\n {\n // Wait\n WaitForSingleObject (Info.hProcess, INFINITE);\n }\n }\n\n" ]
[ 5, 5, 0, 0 ]
[]
[]
[ "haskell", "windows" ]
stackoverflow_0000051028_haskell_windows.txt
Q: What's in your .procmailrc Are there any handy general items you put in your .procmailrc file? A: Just simple things - move messages to appropriate folders, forward some stuff to an email2sms address, move spam to spam folder. One thing I'm kind of proud of is how to mark your spam as "read" (this is for Courier IMAP and Maildir, where "read" means "move to different folder and change the filename"): :0 * ^X-Spam # the header our filter inserts for spam { :0 .Junk\ E-mail/ # stores in .Junk E-mail/new/ :0 * LASTFOLDER ?? /\/[^/]+$ # get the stored message's filename { tail=$MATCH } # and put it into $tail # now move the message TRAP="mv .Junk\ E-mail/new/$tail .Junk\ E-mail/cur/$tail:2,S" } A: Many mailers prefix a mail's subject with "Re: " when replying, if that prefix isn't already there. German Outlook instead prefixes with "AW: " (for "AntWort") if that prefix isn't already there. Unfortunately, these two behaviours clash, resulting in mail subjects like "Re: AW: Re: AW: Re: AW: Re: AW: Lunch". So I now have: :0f * ^Subject: (Antwort|AW): |sed -r -e '1,/^$/s/^(Subject: )(((Antwort: )|(Re: )|(AW: ))+)(.*)/\1Re: \7\nX-Orig-Subject: \2\7/' Which curtails these (and an "Antwort: " prefix that I've evidently also been bothered by at some point) down to a single "Re: ". A: I have various filters in my .procmailrc file, but the most useful is this one, which I add to the very top of the file before I make any other changes. :0 c: mail.save This saves a copy of everything and then continues with the rest of the recipes. If I've done something wrong, my e-mail is saved in the file "mail.save". When I'm sure my changes are working, I comment these lines out, until the next time. A: To stop weird russian and chinese spams, I use this procmail configuration. UNREADABLE='[^?"]*big5|iso-2022-jp|ISO-2022-KR|euc-kr|gb2312|ks_c_5601-1987' :0: * ^Content-Type:.*multipart * B ?? $ ^Content-Type:.*^?.*charset="?($UNREADABLE) spam-unreadable
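For reference, the "move messages to appropriate folders" item from the first answer looks like this in its simplest form (the address and folder name are placeholders; the second colon on :0: requests a local lockfile):

:0:
* ^From:.*someone@example\.com
from-someone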
What's in your .procmailrc
Are there any handy general items you put in your .procmailrc file?
[ "Just simple things - move messages to appropriate folders, forward some stuff to an email2sms address, move spam to spam folder. One thing I'm kind of proud of is how to mark your spam as \"read\" (this is for Courier IMAP and Maildir, where \"read\" means \"move to different folder and change the filename\"):\n:0 \n* ^X-Spam # the header our filter inserts for spam \n{ \n :0 \n .Junk\\ E-mail/ # stores in .Junk E-mail/new/ \n\n :0 \n * LASTFOLDER ?? /\\/[^/]+$ # get the stored message's filename \n { tail=$MATCH } # and put it into $tail\n # now move the message \n TRAP=\"mv .Junk\\ E-mail/new/$tail .Junk\\ E-mail/cur/$tail:2,S\" \n}\n\n", "Many mailers prefix a mail's subject with \"Re: \" when replying, if that prefix isn't already there. German Outlook instead prefixes with \"AW: \" (for \"AntWort\") if that prefix isn't already there. Unfortunately, these two behaviours clash, resulting in mail subjects like \"Re: AW: Re: AW: Re: AW: Re: AW: Lunch\". So I now have:\n:0f\n* ^Subject: (Antwort|AW):\n|sed -r -e '1,/^$/s/^(Subject: )(((Antwort: )|(Re: )|(AW: ))+)(.*)/\\1Re: \\7\\nX-Orig-Subject: \\2\\7/'\n\nWhich curtails these (and an \"Antwort: \" prefix that I've evidently also been bothered by at some point) down to a single \"Re: \".\n", "I have various filters in my .procmailrc file, but the most useful is this one, which I add to the very top of the file before I make any other changes.\n:0 c:\nmail.save\n\nThis saves a copy of everything and then continues with the rest of the recipes. If I've done something wrong, my e-mail is saved in the file \"mail.save\". When I'm sure my changes are working, I comment these lines out, until the next time.\n", "To stop weird russian and chinese spams, I use this procmail configuration. \nUNREADABLE='[^?\"]*big5|iso-2022-jp|ISO-2022-KR|euc-kr|gb2312|ks_c_5601-1987'\n:0:\n* ^Content-Type:.*multipart\n* B ?? $ ^Content-Type:.*^?.*charset=\"?($UNREADABLE)\nspam-unreadable\n\n" ]
[ 5, 5, 3, 2 ]
[]
[]
[ "email", "procmail", "unix" ]
stackoverflow_0000008493_email_procmail_unix.txt
Q: is it possible to detect if a flash movie also contains (plays) sound? Is there a way to detect if a flash movie contains any sound or is playing any music? It would be nice if this could be done inside a web browser (ActionScript from another flash object, JavaScript, ...) and could be done before the flash movie starts playing. However, I have my doubts this will be possible altogether, so any other (programmable) solution is also appreciated A: Yes, on the server side for sure. Client side? I don't know. (I'm a serverside kind of guy.) On the server side, one would have to parse the file, read the header and/or look for audio frames. (I've ported a Haskell FLV parser to Java for indexing purposes myself, and there are other parsing utilities out there. It is possible.) osflash.org's FLV page has the gory details. Check out the FLV Format section's FLV Header table. FIELD DATA TYPE EXAMPLE DESCRIPTION Signature byte[3] “FLV” Always “FLV” Version uint8 “\x01” (1) Currently 1 for known FLV files Flags uint8 bitmask “\x05” (5, audio+video) Bitmask: 4 is audio, 1 is video Offset uint32-be “\x00\x00\x00\x09” (9) Total size of header (always 9 for known FLV files) EDIT: My client side coding with Flash is non-existent, but I believe there is an onMetaDataLoad event that your code could catch. That might be happening a bit late for you, but maybe it is good enough? A: Are you asking about FLV video files or Flash "movies" as in SWF? Just to clarify, an FLV is the Flash Video Format (or whatever the acronym is), a regular Flash movie/application/banner would be an SWF. These are very different file formats. A: With the ByteArray you can do pretty much what you want. Before starting playback you can analyze the bytes of the FLV header (use byteArray.readByte() and refer to the specs) to check if the audio flag is on. Since the FLV header is loaded almost instantly this shouldn't cause any inconvenient delay for the user. With SWFs it's a lot trickier -- I'm pretty sure there's no easy way to determine in advance if a swf plays audio somewhere. A way to do it could be to look at what assets the SWF has defined in the library but also then the swf could just load an external audio file (or even generate it with some hacks or the new apis in Flash player 10). If the SWFs are user submitted (or something similar that's out of your immediate control) I think this is a risky road...
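A hedged ActionScript 3 sketch of the header check described in the last answer: read the first five bytes of the FLV and test the audio bit of the flags byte (the URL and event wiring are illustrative):

var stream:URLStream = new URLStream();
stream.addEventListener(ProgressEvent.PROGRESS, onHeaderBytes);
stream.load(new URLRequest("video.flv"));

function onHeaderBytes(e:ProgressEvent):void {
    if (stream.bytesAvailable < 5) return;         // wait for the header start
    var signature:String = stream.readUTFBytes(3); // should be "FLV"
    stream.readUnsignedByte();                     // version byte
    var flags:uint = stream.readUnsignedByte();
    var hasAudio:Boolean = (flags & 4) != 0;       // bitmask: 4 = audio, 1 = video
    stream.close();                                // header is enough; stop loading
    trace(signature, "hasAudio:", hasAudio);
}

This only works for FLV files; as the answers note, SWFs give no comparably cheap signal.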
is it possible to detect if a flash movie also contains (plays) sound?
Is there a way to detect if a flash movie contains any sound or is playing any music? It would be nice if this could be done inside a web browser (actionscript from another flash object, javascript,..) and could be done before the flash movie starts playing. However, I have my doubts this will be possible altogether, so any other (programmable) solution is also appreciated.
[ "Yes, on the server side for sure. Client side? I don't know. (I'm a serverside kind of guy.) \nOn the server side, one would have to parse the file, read the header and/or look for audio frames. (I've ported a haskel FLV parser to Java for indexing purposes myself, and there are other parsing utilities out there. It is possible.)\nosflash.org's FLV page has the gory details. Check out the FLV Format sections's FLV Header table. \nFIELD DATA TYPE EXAMPLE DESCRIPTION\n Signature byte[3] “FLV” Always “FLV”\n Version uint8 “\\x01” (1) Currently 1 for known FLV files\n Flags uint8 bitmask “\\x05” (5, audio+video) Bitmask: 4 is audio, 1 is video\n Offset uint32-be “\\x00\\x00\\x00\\x09” (9) Total size of header (always 9 for known FLV files) \n\n\nEDIT: My client side coding with Flash is non-existent, but I believe there is an onMetaDataLoad event that your code could catch. That might be happening a bit late for you, but maybe it is good enough?\n", "Are you asking about FLV video files or Flash \"movies\" as in SWF?\nJust to clarify, an FLV is the Flash Video Format (or whatever the acronym is), a regular Flash movie/application/banner would be an SWF. These are very different file formats.\n", "With the ByteArray you can do pretty much what you want. Before starting playback you can analyze the bytes of the FLV header (use byteArray.readByte() and refer to the specs) to determine to check if the audio flag is on. Since the FLV header is loaded almost instantly this shouldn't cause any inconvenient delay for the user.\nWith SWF's it's a lot tricker -- i'm pretty sure there's no easy way to determine in advance if a swf plays audio somewhere. A way to do it could be to look at what assets the SWF has defined in the library but also then the swf could just load an external audio file (or even generate it with some hacks or the new apis in Flash player 10). If the swf's are user submitted (or something similar that's out of your immediate control) I think this is a risky road.. \n" ]
[ 3, 1, 0 ]
[]
[]
[ "audio", "flash" ]
stackoverflow_0000043503_audio_flash.txt
Q: Compiling code on an external drive To make things easier when switching between machines (my workstation at the office and my personal laptop) I have thought about trying an external hard drive to store my working directory on. Specifically I am looking at Firewire 800 drives (most are 5400 rpm 8mb cache). What I am wondering is if anyone has experience with doing this with Visual Studio projects and what sort of performance hit they see.
A: It depends on the size of the project. The throughput is low and the latency is high, so you're going to get hit every which way, but due to the latency you'll be hit harder if you have a lot of little files rather than a few large ones.
Have you considered simply carrying around a GIT or other distributed repository and updating the machine repositories as you move around? Then you can compile locally and treat the drive as a roving server. Since only changes will be moved across, it should be faster, and your code will be 'backed up' in more places.
If you forget the drive, it breaks, or is lost/stolen, then you can still sit down at a PC and program with no code missing if you're at the last PC you used, or very little code missing (which will be updated later with a resync anyway).
And it's just a hop skip and a jump away from simply using the network to move the changes between the systems if you don't want to carry the drive around later.
A: I use vmware and the virtual machines are on an external usb drive. Performance is fine. You might have some issues with the drive name changing - not an issue if you use virtual machines.
A: Granted I work in an industry where Personal Information and Intellectual Property are king, but I don't like that idea at all. That hard drive disappears and you have a big problem.
Why not Remote Desktop into the work machine?
EDIT Stipud Spelingg
Compiling code on an external drive
To make things easier when switching between machines (my workstation at the office and my personal laptop) I have thought about trying an external hard drive to store my working directory on. Specifically I am looking at Firewire 800 drives (most are 5400 rpm 8mb cache). What I am wondering is if anyone has experience with doing this with Visual Studio projects and what sort of performance hit they see.
[ "It depends on the size of the project. The throughput is low and the latency is high, so you're going to get hit every which way, but due to the latency you'll be hit harder if you have a lot of little files rather than a few large ones.\nHave you considered simply carrying around a GIT or other distributed repository and updating the machine repositories as you move around? Then you can compile locally and treat the drive and a roving server. Since only changes will be moved across, it should be faster, and your code will be 'backed up' in more places.\nIf you forget the drive, it breaks, or is lost/stolen, then you can still sit down at a PC and program with no code missing if you're at the last PC you used, or very little code missing (which will be updated later with a resync anyway).\nAnd it's just a hop skip and a jump away from simply using the network to move the changes between the systems if you don't want to carry the drive around later.\n", "I use vmware and the virtual machines are on an external usb drive. Performance is fine. You might have some issues with the drive name changing - not an issue if you use virtual machines.\n", "Granted I work in an industry were Personal Information and Intellectual Property are king, but I don't like that idea at all. That hard drive disappears and you have a big problem. \nWhy not Remote Desktop into the work machine?\nEDIT Stipud Spelingg\n" ]
[ 5, 0, 0 ]
[]
[]
[ "hardware", "visual_studio" ]
stackoverflow_0000058247_hardware_visual_studio.txt
Q: Registry key that contains the folder for the local user's Programs folder on Vista I'm troubleshooting a problem with creating Vista shortcuts. I want to make sure that our Installer is reading the Programs folder from the right registry key. It's reading it from:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders\Programs
And it's showing this directory for Programs:
C:\Users\NonAdmin2 UAC OFF\AppData\Roaming\Microsoft\Windows\Start Menu\Programs
From what I've read, this seems correct, but I wanted to double check.
A: Don't use the registry to read this. Use SHGetFolderPath with CSIDL_PROGRAMS.
For a reason why, see Raymond Chen's comments on the "Shell Folders" key:
http://blogs.msdn.com/oldnewthing/archive/2003/11/03/55532.aspx
A: use windows installer properties. will probably be easier.
http://msdn.microsoft.com/en-us/library/aa370905(VS.85).aspx#system_folder_properties
A: You should probably use API for this, such as SHGetFolderPath
A: Sounds correct to me.
A: Example of the SHGetFolderPath in VB
http://support.microsoft.com/kb/252652
A: Helpful code snippet:
public class Utilities
{
    public enum FolderPaths
    {
        CSIDL_DESKTOP = 0x0000, // <desktop>
        CSIDL_INTERNET = 0x0001, // Internet Explorer (icon on desktop)
        CSIDL_PROGRAMS = 0x0002, // Start Menu\Programs
        CSIDL_CONTROLS = 0x0003, // My Computer\Control Panel
        CSIDL_PRINTERS = 0x0004, // My Computer\Printers
        CSIDL_PERSONAL = 0x0005, // My Documents
        CSIDL_FAVORITES = 0x0006, // <user name>\Favorites
        CSIDL_STARTUP = 0x0007, // Start Menu\Programs\Startup
        CSIDL_RECENT = 0x0008, // <user name>\Recent
        CSIDL_SENDTO = 0x0009, // <user name>\SendTo
        CSIDL_BITBUCKET = 0x000a, // <desktop>\Recycle Bin
        CSIDL_STARTMENU = 0x000b, // <user name>\Start Menu
        CSIDL_MYDOCUMENTS = CSIDL_PERSONAL, // Personal was just a silly name for My Documents
        CSIDL_MYMUSIC = 0x000d, // "My Music" folder
        CSIDL_MYVIDEO = 0x000e, // "My Videos" folder
        CSIDL_DESKTOPDIRECTORY = 0x0010, // <user name>\Desktop
        CSIDL_DRIVES = 0x0011, // My Computer
        CSIDL_NETWORK = 0x0012, // Network Neighborhood (My Network Places)
        CSIDL_NETHOOD = 0x0013, // <user name>\nethood
        CSIDL_FONTS = 0x0014, // windows\fonts
        CSIDL_TEMPLATES = 0x0015,
        CSIDL_COMMON_STARTMENU = 0x0016, // All Users\Start Menu
        CSIDL_COMMON_PROGRAMS = 0X0017, // All Users\Start Menu\Programs
        CSIDL_COMMON_STARTUP = 0x0018, // All Users\Startup
        CSIDL_COMMON_DESKTOPDIRECTORY = 0x0019, // All Users\Desktop
        CSIDL_APPDATA = 0x001a, // <user name>\Application Data
        CSIDL_PRINTHOOD = 0x001b, // <user name>\PrintHood
        CSIDL_LOCAL_APPDATA = 0x001c // <user name>\Local Settings\Applicaiton Data (non roaming)
    }

    [DllImport("shfolder.dll", CharSet = CharSet.Unicode)]
    public static extern int SHGetFolderPath(IntPtr owner, int folder, IntPtr token, int flags, StringBuilder path);
}

void MyFunction()
{
    StringBuilder path = new StringBuilder(260);

    String folderPath = "";
    if (0 == Utilities.SHGetFolderPath(IntPtr.Zero, (int) Utilities.FolderPaths.CSIDL_MYVIDEO, IntPtr.Zero, 0, path))
    {
        folderPath = path.ToString();
    }
}
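If the installer code is managed, note that the framework already wraps the same shell folder lookup, which avoids the P/Invoke above. A minimal sketch:
using System;

class ProgramsFolder
{
    static void Main()
    {
        // Wraps the same shell lookup as SHGetFolderPath(CSIDL_PROGRAMS)
        string programs = Environment.GetFolderPath(Environment.SpecialFolder.Programs);
        Console.WriteLine(programs); // ...\Start Menu\Programs for the current user
    }
}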
Registry key that contains the folder for the local user's Programs folder on Vista
I'm troubleshooting a problem with creating Vista shortcuts. I want to make sure that our Installer is reading the Programs folder from the right registry key. It's reading it from: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders\Programs And it's showing this directory for Programs: C:\Users\NonAdmin2 UAC OFF\AppData\Roaming\Microsoft\Windows\Start Menu\Programs From what I've read, this seems correct, but I wanted to double check.
[ "Don't use the registry to read this. Use SHGetFolderPath with CSIDL_PROGRAMS.\nFor a reason why, see Raymond Chen's comments on the \"Shell Folders\" key:\nhttp://blogs.msdn.com/oldnewthing/archive/2003/11/03/55532.aspx\n", "use windows installer properties. will probably be easier.\nhttp://msdn.microsoft.com/en-us/library/aa370905(VS.85).aspx#system_folder_properties\n", "You should probably use API for this, such as SHGetFolderPath\n", "Sounds correct to me.\n", "Example of the SHGetFolderPath in VB\nhttp://support.microsoft.com/kb/252652\n", "Helpful code snippet:\npublic class Utilities\n{\n\n public enum FolderPaths\n {\n CSIDL_DESKTOP = 0x0000, // <desktop>\n CSIDL_INTERNET = 0x0001, // Internet Explorer (icon on desktop)\n CSIDL_PROGRAMS = 0x0002, // Start Menu\\Programs\n CSIDL_CONTROLS = 0x0003, // My Computer\\Control Panel\n CSIDL_PRINTERS = 0x0004, // My Computer\\Printers\n CSIDL_PERSONAL = 0x0005, // My Documents\n CSIDL_FAVORITES = 0x0006, // <user name>\\Favorites\n CSIDL_STARTUP = 0x0007, // Start Menu\\Programs\\Startup\n CSIDL_RECENT = 0x0008, // <user name>\\Recent\n CSIDL_SENDTO = 0x0009, // <user name>\\SendTo\n CSIDL_BITBUCKET = 0x000a, // <desktop>\\Recycle Bin\n CSIDL_STARTMENU = 0x000b, // <user name>\\Start Menu\n CSIDL_MYDOCUMENTS = CSIDL_PERSONAL, // Personal was just a silly name for My Documents\n CSIDL_MYMUSIC = 0x000d, // \"My Music\" folder\n CSIDL_MYVIDEO = 0x000e, // \"My Videos\" folder\n CSIDL_DESKTOPDIRECTORY = 0x0010, // <user name>\\Desktop\n CSIDL_DRIVES = 0x0011, // My Computer\n CSIDL_NETWORK = 0x0012, // Network Neighborhood (My Network Places)\n CSIDL_NETHOOD = 0x0013, // <user name>\\nethood\n CSIDL_FONTS = 0x0014, // windows\\fonts\n CSIDL_TEMPLATES = 0x0015,\n CSIDL_COMMON_STARTMENU = 0x0016, // All Users\\Start Menu\n CSIDL_COMMON_PROGRAMS = 0X0017, // All Users\\Start Menu\\Programs\n CSIDL_COMMON_STARTUP = 0x0018, // All Users\\Startup\n CSIDL_COMMON_DESKTOPDIRECTORY = 0x0019, // All Users\\Desktop\n CSIDL_APPDATA = 0x001a, // <user name>\\Application Data\n CSIDL_PRINTHOOD = 0x001b, // <user name>\\PrintHood\n CSIDL_LOCAL_APPDATA = 0x001c // <user name>\\Local Settings\\Applicaiton Data (non roaming)\n }\n\n\n [DllImport(\"shfolder.dll\", CharSet = CharSet.Unicode)]\n public static extern int SHGetFolderPath(IntPtr owner, int folder, IntPtr token, int flags, StringBuilder path);\n}\n\nvoid MyFunction()\n{\n StringBuilder path = new StringBuilder(260);\n\n String folderPath = \"\";\n if (0 == Utilities.SHGetFolderPath(IntPtr.Zero, (int) Utilities.FolderPaths.CSIDL_MYVIDEO, IntPtr.Zero, 0, path))\n {\n folderPath = path.ToString();\n }\n\n}\n\n" ]
[ 4, 1, 1, 0, 0, 0 ]
[]
[]
[ "registry", "windows_vista" ]
stackoverflow_0000057855_registry_windows_vista.txt
Q: Reading from a http-get presenting in Firefox bookmarks I'm trying to get a Firefox plugin to read data from a HTTP get, parse the results and present them as links in a bookmark-like drop-down menu. My question then is: Does anyone have any sample code that will do this?
A: Having never developed one myself, I'm not certain how this is typically done in Firefox plugins, but since plugin scripting is JavaScript, I can probably help out with the loading part. Assuming a variable named url containing the URL you want to request:
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("GET", url, true);

xmlhttp.onreadystatechange = function() {
    if(this.readyState == 4) { // Done loading?
        if(this.status == 200) { // Everything okay?
            // read content from this.responseXML or this.responseText
        } else { // Error occurred; handle it
            alert("Error " + this.status + ":\n" + this.statusText);
        }
    }
};

xmlhttp.send(null);

A couple of notes on this code:

You may want more sophisticated status code handling. For example, 200 is not the only non-error status code. Details on status codes can be found here.
You probably want to have a timeout to handle the case where, for some reason, you don't get to readyState 4 in a reasonable amount of time.
You may want to do things when earlier readyStates are received. This page documents the readyState codes, along with other properties and methods on the XMLHttpRequest object which you may find useful.

A: Robert Walker did a great job of describing how to send the request. You can read more about Mozilla's xmlhttprequest here.
I would just add that the response would be found (using Robert's code) using
 xmlhttp.responseText

(Edit - I didn't read closely enough, thanks Robert)
You didn't indicate exactly what the data was, although you mentioned wanting to parse links from the data. You could treat the xmlhttp.responseText as an xml document, parse out the links, and place it into a menulist or whatever you like.
Reading from a http-get presenting in Firefox bookmarks
I'm trying to get a Firefox plugin to read data from a HTTP get, parse the results and present them as links in a bookmark-like drop-down menu. My question then is: Does anyone have any sample code that will do this?
[ "Having never developed one myself, I'm not certain how this is typically done in Firefox plugins, but since plugin scripting is JavaScript, I can probably help out with the loading part. Assuming a variable named url containing the URL you want to request:\nvar xmlhttp = new XMLHttpRequest();\nxmlhttp.open(\"GET\", url, true);\n\nxmlhttp.onreadystatechange = function() {\n if(this.readyState == 4) { // Done loading?\n if(this.status == 200) { // Everything okay?\n // read content from this.responseXML or this.responseText\n } else { // Error occurred; handle it\n alert(\"Error \" + this.status + \":\\n\" + this.statusText);\n }\n }\n};\n\nxmlhttp.send(null);\n\nA couple of notes on this code:\n\nYou may want more sophisticated status code handling. For example, 200 is not the only non-error status code. Details on status codes can be found here.\nYou probably want to have a timeout to handle the case where, for some reason, you don't get to readyState 4 in a reasonable amount of time.\nYou may want to do things when earlier readyStates are received. This page documents the readyState codes, along with other properties and methods on the XMLHttpRequest object which you may find useful.\n\n", "Robert Walker did a great job of describing how to send the request. You can read more about Mozilla's xmlhttprequest here.\nI would just add that the response would be found (using Robert's code) using\n xmlhttp.responseText\n\n\n(Edit - i didn't read closely enough, thanks Robert)\nYou didn't indicate exactly what the data was, although you mentioned wanting to parse links from the data. You could the xmlhttp.responseText as an xml document, parse out the links, and place it into a menulist or whatever you like.\n" ]
[ 2, 0 ]
[]
[]
[ "firefox", "javascript", "plugins", "xul" ]
stackoverflow_0000040125_firefox_javascript_plugins_xul.txt
Q: When can/should you go whole hog with the ORM approach? It seems to me that introducing an ORM tool is supposed to make your architecture cleaner, but for efficiency I've found myself bypassing it and iterating over a JDBC Result Set on occasion. This leads to an uncoordinated tangle of artifacts instead of a cleaner architecture.
Is this because I'm applying the tool in an invalid Context, or is it deeper than that?
When can/should you go whole hog with the ORM approach?
Any insight would be greatly appreciated.
A little background: In my environment I have about 50 client computers and 1 reasonably powerful SQL Server. I have a desktop application in which all 50 clients are accessing the data at all times. The project's Data Model has gone through a number of reorganizations for various reasons including clarity, efficiency, etc.
My Data Model's history
JDBC calls directly
DAO + POJO without relations between Pojos (basically wrapping the JDBC).
Added Relations between POJOs implementing Lazy Loading, but just hiding the inter-DAO calls
Jumped onto the Hibernate bandwagon after seeing how "simple" it made data access (it made inter POJO relations trivial) and because it could decrease the number of round trips to the database when working with many related entities. Since it was a desktop application keeping Sessions open long term was a nightmare so it ended up causing a whole lot of issues
Stepped back to a partial DAO/Hibernate approach that allows me to make direct JDBC calls behind the DAO curtain while at the same time using Hibernate.
A: Hibernate makes more sense when your application works on object graphs, which are persisted in the RDBMS. Instead, if your application logic works on a 2-D matrix of data, fetching those via direct JDBC works better. Although Hibernate is written on top of JDBC, it has capabilities which might be non-trivial to implement in JDBC. For eg:

Say, the user views a row in the UI and changes some of the values and you want to fire an update query for only those columns that did indeed change.
To avoid getting into deadlocks you need to maintain a global order for SQLs in a transaction. Getting this right in JDBC might not be easy
Easily setting up optimistic locking. When you use JDBC, you need to remember to have this in every update query.
Batch updates, lazy materialization of collections etc might also be non-trivial to implement in JDBC.

(I say "might be non-trivial", because it of course can be done - and you might be a super hacker:)
Hibernate lets you fire your own SQL queries also, in case you need to.
Hope this helps you to decide.
PS: Keeping the Session open on a remote desktop client and running into trouble is really not Hibernate's problem - you would run into the same issue if you keep the Connection to the DB open for long.
When can/should you go whole hog with the ORM approach?
It seems to me that introducing an ORM tool is supposed to make your architecture cleaner, but for efficiency I've found myself bypassing it and iterating over a JDBC Result Set on occasion. This leads to an uncoordinated tangle of artifacts instead of a cleaner architecture. Is this because I'm applying the tool in an invalid Context, or is it deeper than that? When can/should you go whole hog with the ORM approach? Any insight would be greatly appreciated. A little background: In my environment I have about 50 client computers and 1 reasonably powerful SQL Server. I have a desktop application in which all 50 clients are accessing the data at all times. The project's Data Model has gone through a number of reorganizations for various reasons including clarity, efficiency, etc. My Data Model's history JDBC calls directly DAO + POJO without relations between Pojos (basically wrapping the JDBC). Added Relations between POJOs implementing Lazy Loading, but just hiding the inter-DAO calls Jumped onto the Hibernate bandwagon after seeing how "simple" it made data access (it made inter POJO relations trivial) and because it could decrease the number of round trips to the database when working with many related entities. Since it was a desktop application keeping Sessions open long term was a nightmare so it ended up causing a whole lot of issues Stepped back to a partial DAO/Hibernate approach that allows me to make direct JDBC calls behind the DAO curtain while at the same time using Hibernate.
[ "Hibernate makes more sense when your application works on object graphs, which are persisted in the RDBMS. Instead, if your application logic works on a 2-D matrix of data, fetching those via direct JDBC works better. Although Hibernate is written on top of JDBC, it has capabilities which might be non-trivial to implement in JDBC. For eg:\n\nSay, the user views a row in the UI and changes some of the values and you want to fire an update query for only those columns that did indeed change.\nTo avoid getting into deadlocks you need to maintain a global order for SQLs in a transaction. Getting this right JDBC might not be easy\nEasily setting up optimistic locking. When you use JDBC, you need to remember to have this in every update query.\nBatch updates, lazy materialization of collections etc might also be non-trivial to implement in JDBC.\n\n(I say \"might be non-trivial\", because it of course can be done - and you might be a super hacker:)\nHibernate lets you fire your own SQL queries also, in case you need to. \nHope this helps you to decide.\nPS: Keeping the Session open on a remote desktop client and running into trouble is really not Hibernate's problem - you would run into the same issue if you keep the Connection to the DB open for long.\n" ]
[ 4 ]
[]
[]
[ "architecture", "hibernate", "java", "orm" ]
stackoverflow_0000058163_architecture_hibernate_java_orm.txt
Q: Something special about Safari for Windows and AJAX? Is there something special about Safari for Windows and AJAX? In other words: Are there some common pitfalls I should keep in mind?
A: Safari is really standards compliant. Unless you're using some really esoteric browser features, in general if something works in Firefox, I've found it works without modification in Windows Safari.
Apple has a developer center for web developers, but I didn't find anything too useful there.
A: In your event handlers, instead of return false, use event.preventDefault() or event.stopPropagation(). The event methods are the standard/compatible way, but lots of old tutorials still recommend return.
A: One word of warning: Safari on Windows does not support XSLT.
Something special about Safari for Windows and AJAX?
Is there something special about Safari for Windows and AJAX? In other words: Are there some common pitfalls I should keep in mind?
[ "Safari is really standards compliant. Unless you're using some really esoteric browser features, in general if something works in Firefox, I've found it works without modification in Windows Safari.\nApple has a developer center for web developers, but I didn't find anything too useful there.\n", "In your event handlers, instead of return false, use event.preventDefault() or event.stopPropagation(). The event methods are the standard/compatible way, but lots of old tutorials still recommend return.\n", "One word of warning: Safari on Windows does not support XSLT.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "ajax", "browser", "javascript", "safari", "windows" ]
stackoverflow_0000058184_ajax_browser_javascript_safari_windows.txt
Q: Cannot access a webservice from mobile device I developed a program in a mobile device (Pocket PC 2003) to access a web service, the web service is installed on a Windows XP SP2 PC with IIS, the PC has the IP 192.168.5.2.
The device obtains from the wireless network the IP 192.168.5.118 and the program works OK, it calls the method from the web service and executes the action that is needed. This program is going to be used in various buildings.
Now I have this problem, it turns out that when I try to test it in another building (distances nearly about 100 mts. or 200 mts.) connected with the network, the program cannot connect to the webservice, at this moment the device gets from an Access Point the IP 192.168.10.25, and it accesses the same XP machine I stated before (192.168.5.2).
I made a mobile aspx page to verify that I can reach the web server over the network and it loads it in the device, I even made a winform that access the same webservice in a PC from that building and also works there so I don't understand what is going on. I also tried to ping that 192.168.5.2 PC and it responds alive.
After that fail I returned to the original place where I tested the program before and it happens that it works normally.
The only thing that I look different here is that the third number in the IP is 10 instead of 5, another observation is that I can't ping to the mobile device.
I feel confused, I don't know what happens here. What could be the problem?
This is how I call the web service;
//Connect to webservice
svc = new TheWebService();
svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password);
svc.AllowAutoRedirect = false;
svc.UserAgent = Settings.UserAgent;
svc.PreAuthenticate = true;
svc.Url = Settings.Url;
svc.Timeout = System.Threading.Timeout.Infinite;

//Send information to webservice
svc.ExecuteMethod(info);
the content of the app.config in the mobile device is;
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="UserName" value="administrator" />
    <add key="Password" value="************" />
    <add key="UserAgent" value="My User Agent" />
    <add key="Url" value="http://192.168.5.2/WebServices/TWUD.asmx" />
  </appSettings>
</configuration>
Does anyone have an idea what is going on?
A: It was a network issue, we configured a proxy server and that was the problem, I need to learn more about networking.
A: Not an expert with this stuff but it looks like the first 3 parts of the address are being masked out. Is it possible that the mobile device is being given a network mask of:
255.255.255.0
As to reach beyond the range of the first 3 parts you need the mask to be:
255.255.0.0
This may be an oversimplification or completely wrong but that was my gut response to the question.
A: This looks like a network issue, unless there's an odd bug in .Net CF that doesn't allow you to traverse subnets in certain situations (I can find no evidence of such a thing from googling).
Can you get any support from the network/IT team? Also, have you tried it from a different subnet? I.e. not the same as the XP machine (192.168.5.x) and not the same as the one that's not worked so far (192.168.10.).
@Shaun Austin - that wouldn't explain why they can get at a regular web page on the XP machine from the different subnet.
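The accepted answer traced this to a proxy between the buildings. For reference, pointing the generated proxy class at an explicit HTTP proxy from code would look roughly like this; the proxy address is hypothetical, and WebProxy support on the Compact Framework should be verified for the target version:
// Hypothetical proxy host for the cross-building route
svc = new TheWebService();
svc.Proxy = new System.Net.WebProxy("http://192.168.5.1:8080", true); // true = bypass the proxy for local addresses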
Cannot access a webservice from mobile device
I developed a program in a mobile device (Pocket PC 2003) to access a web service, the web service is installed on a Windows XP SP2 PC with IIS, the PC has the IP 192.168.5.2.
The device obtains from the wireless network the IP 192.168.5.118 and the program works OK, it calls the method from the web service and executes the action that is needed. This program is going to be used in various buildings.
Now I have this problem, it turns out that when I try to test it in another building (distances nearly about 100 mts. or 200 mts.) connected with the network, the program cannot connect to the webservice, at this moment the device gets from an Access Point the IP 192.168.10.25, and it accesses the same XP machine I stated before (192.168.5.2).
I made a mobile aspx page to verify that I can reach the web server over the network and it loads it in the device, I even made a winform that access the same webservice in a PC from that building and also works there so I don't understand what is going on. I also tried to ping that 192.168.5.2 PC and it responds alive.
After that fail I returned to the original place where I tested the program before and it happens that it works normally.
The only thing that I look different here is that the third number in the IP is 10 instead of 5, another observation is that I can't ping to the mobile device.
I feel confused, I don't know what happens here. What could be the problem?
This is how I call the web service;
//Connect to webservice
svc = new TheWebService();
svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password);
svc.AllowAutoRedirect = false;
svc.UserAgent = Settings.UserAgent;
svc.PreAuthenticate = true;
svc.Url = Settings.Url;
svc.Timeout = System.Threading.Timeout.Infinite;

//Send information to webservice
svc.ExecuteMethod(info);
the content of the app.config in the mobile device is;
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="UserName" value="administrator" />
    <add key="Password" value="************" />
    <add key="UserAgent" value="My User Agent" />
    <add key="Url" value="http://192.168.5.2/WebServices/TWUD.asmx" />
  </appSettings>
</configuration>
Does anyone have an idea what is going on?
[ "It was a network issue, we configurated a proxy server and that was the problem, I need to learn more about network.\n", "Not an expert with this stuff but it looks like the first 3 parts of the address are being masked out. Is it possible that the mobile device is being given a network mask of:\n255.255.255.0\n\nAs to reach beyond the range of the first 3 parts you need the mask to be:\n255.255.0.0\n\nThis may be an oversimplification or completely wrong but that's was my gut response to the question.\n", "This looks like a network issue, unless there's an odd bug in .Net CF that doesn't allow you to traverse subnets in certain situations (I can find no evidence of such a thing from googling).\nCan you get any support from the network/IT team? Also, have you tried it from a different subnet? I.e. not the same as the XP machine (192.168.5.x) and not the same as the one that's not worked so far (192.168.10.).\n@Shaun Austin - that wouldn't explain why they can get at a regular web page on the XP machine from the different subnet.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "mobile" ]
stackoverflow_0000031297_mobile.txt
Q: Tools for manipulating PowerPoint files Do you know managed tools for manipulating PowerPoint files? The tool should be 100% managed code and offer the option to handle .ppt and .pptx files.
A: Well, 100% managed could be going the hard route, however, you can use the Office PIAs from your .NET code. Joel Spolsky has an article discussing your various options.
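As a rough illustration of the PIA route the answer mentions (not 100% managed: it automates an installed PowerPoint over COM interop, and the file path is hypothetical):
using Microsoft.Office.Core;
using PowerPoint = Microsoft.Office.Interop.PowerPoint;

class PptProbe
{
    static void Main()
    {
        var app = new PowerPoint.Application();
        // Open read-only, not untitled, and without a visible window
        var pres = app.Presentations.Open(@"C:\decks\sample.ppt",
            MsoTriState.msoTrue, MsoTriState.msoFalse, MsoTriState.msoFalse);
        System.Console.WriteLine(pres.Slides.Count + " slide(s)");
        pres.Close();
        app.Quit();
    }
}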
Tools for manipulating PowerPoint files
Do you know managed tools for manipulating PowerPoint files? The tool should be 100% managed code and offer the option to handle .ppt and .pptx files.
[ "Well, 100% managed could be going the hard route, however, you can use the Office PIAs from your .NET code. Joel Spolsky has an article discussing your various options.\n" ]
[ 0 ]
[]
[]
[ ".net", "c#", "powerpoint" ]
stackoverflow_0000058300_.net_c#_powerpoint.txt
Q: What's the bare minimum permission set for Sql Server 2005 services? Best practices recommend not installing Sql Server to run as SYSTEM. What is the bare minimum you need to give the user account you create for it?
A: By default, SQL Server 2005 installation will create a security group called SQLServer2005MSSQLUser$ComputerName$MSSQLSERVER with the correct rights. You just need to create a domain user or local user and make it a member of that group.
More details are available in the SQL Server Books Online: Reviewing Windows NT Rights and Privileges Granted for SQL Server Service Accounts
A: Typically I create a Domain User with only the specific rights on the network which I will require the server to have (i.e. to write to the network backup drive), I then add the account to local power users or local administrators depending on what needs to be done on the machine, however this isn't required. I've installed SQL a number of times using a standard user as a Service Account but you need to ensure that the user has access to write to the resources as listed at https://web.archive.org/web/20081223155956/http://support.microsoft.com/kb/283811 . It's probably not as defined an answer as you wanted but I'm only a developer (not a professional DBA / System Engineer).
Mauro
PS don't downmark me for saying "only a developer" :P
What's the bare minimum permission set for Sql Server 2005 services?
Best practices recommend not installing Sql Server to run as SYSTEM. What is the bare minimum you need to give the user account you create for it?
[ "By default, SQL Server 2005 installation will create a security group called SQLServer2005MSSQLUser$ComputerName$MSSQLSERVER with the correct rights. You just need to create a domain user or local user and make it a member of that group. \nMore details are available in the SQL Server Books Online: Reviewing Windows NT Rights and Privileges Granted for SQL Server Service Accounts\n", "Typically I create a Domain User with only the specific rights on the network which I will require the server to have (i.e. to write to the network backup drive), I then add the account to local power users or local administrators depending on what needs to be done on the machine, however this isn't required. I've installed SQL a number of times using a standard user as a Service Account but you need to ensure that the user has access to write to the resources as listed at https://web.archive.org/web/20081223155956/http://support.microsoft.com/kb/283811 . Its probably not as defined an answer as you wanted but I'm only a developer (not a professional DBA / System Engineer).\nMauro\nPS dont downmark me for saying \"only a developer\" :P\n" ]
[ 3, 1 ]
[]
[]
[ "security", "sql_server_2005", "system_administration" ]
stackoverflow_0000057800_security_sql_server_2005_system_administration.txt
Q: Rails Binary Stream support I'm going to be starting a project soon that requires support for large-ish binary files. I'd like to use Ruby on Rails for the webapp, but I'm concerned with the BLOB support. In my experience with other languages, frameworks, and databases, BLOBs are often overlooked and thus have poor, difficult, and/or buggy functionality.
Does RoR support BLOBs adequately? Are there any gotchas that creep up once you're already committed to Rails?
BTW: I want to be using PostgreSQL and/or MySQL as the backend database. Obviously, BLOB support in the underlying database is important. For the moment, I want to avoid focusing on the DB's BLOB capabilities; I'm more interested in how Rails itself reacts. Ideally, Rails should be hiding the details of the database from me, and so I should be able to switch from one to the other. If this is not the case (ie: there's some problem with using Rails with a particular DB) then please do mention it.
UPDATE: Also, I'm not just talking about ActiveRecord here. I'll need to handle binary files on the HTTP side (file upload effectively). That means getting access to the appropriate HTTP headers and streams via Rails. I've updated the question title and description to reflect this.
A: As for streaming, you can do it all in an (at least memory-) efficient way. On the upload side, file parameters in forms are abstracted as IO objects that you can read from; on the download side, look in to the form of render :text => that takes a Proc argument:
render :content_type => 'application/octet-stream', :text => Proc.new {
    |response, output|
    # do something that reads data and writes it to output
}
If your stuff is in files on disk, though, the aforementioned solutions will certainly work better.
A: +1 for attachment_fu
I use attachment_fu in one of my apps and MUST store files in the DB (for annoying reasons which are outside the scope of this convo).
The (one?) tricky thing dealing w/BLOB's I've found is that you need a separate code path to send the data to the user -- you can't simply in-line a path on the filesystem like you would if it was a plain-Jane file.
e.g. if you're storing avatar information, you can't simply do:
<%= image_tag @youruser.avatar.path %>
you have to write some wrapper logic and use send_data, e.g. (below is JUST an example w/attachment_fu, in practice you'd need to DRY this up)
send_data(@youruser.avatar.current_data, :type => @youruser.avatar.content_type, :filename => @youruser.avatar.filename, :disposition => 'inline' )
Unfortunately, as far as I know attachment_fu (I don't have the latest version) does not do clever wrapping for you -- you've gotta write it yourself.
P.S.
Seeing your question edit - Attachment_fu handles all that annoying stuff that you mention -- about needing to know file paths and all that crap -- EXCEPT the one little issue when storing in the DB. Give it a try; it's the standard for rails apps. IF you insist on re-inventing the wheel, the source code for attachment_fu should document most of the gotchas, too!
A: You can use the :binary type in your ActiveRecord migration and also constrain the maximum size:
class BlobTest < ActiveRecord::Migration
  def self.up
    create_table :files do |t|
      t.column :file_data, :binary, :limit => 1.megabyte
    end
  end
end
ActiveRecord exposes the BLOB (or CLOB) contents as a Ruby String.
A: I think your best bet is the attachment_fu plug-in:
http://github.com/technoweenie/attachment_fu/tree/master
UPDATE: Found some more info here http://groups.google.com/group/rubyonrails-talk/browse_thread/thread/a81beffb93708bb3
A: Look into the plugin, x_send_file too.
"The XSendFile plugin provides a simple interface for sending files via the X-Sendfile HTTP header. This enables your web server to serve the file directly from disk, instead of streaming it through your Rails process. This is faster and saves a lot of memory if you‘re using Mongrel. Not every web server supports this header. YMMV."
I'm not sure if it's usable with Blobs, it may just be for files on the file system. But you probably need something that doesn't tie up the web server streaming large chunks of data.
Rails Binary Stream support
I'm going to be starting a project soon that requires support for large-ish binary files. I'd like to use Ruby on Rails for the webapp, but I'm concerned with the BLOB support. In my experience with other languages, frameworks, and databases, BLOBs are often overlooked and thus have poor, difficult, and/or buggy functionality. Does RoR support BLOBs adequately? Are there any gotchas that creep up once you're already committed to Rails? BTW: I want to be using PostgreSQL and/or MySQL as the backend database. Obviously, BLOB support in the underlying database is important. For the moment, I want to avoid focusing on the DB's BLOB capabilities; I'm more interested in how Rails itself reacts. Ideally, Rails should be hiding the details of the database from me, and so I should be able to switch from one to the other. If this is not the case (ie: there's some problem with using Rails with a particular DB) then please do mention it. UPDATE: Also, I'm not just talking about ActiveRecord here. I'll need to handle binary files on the HTTP side (file upload effectively). That means getting access to the appropriate HTTP headers and streams via Rails. I've updated the question title and description to reflect this.
[ "As for streaming, you can do it all in an (at least memory-) efficient way. On the upload side, file parameters in forms are abstracted as IO objects that you can read from; on the download side, look in to the form of render :text => that takes a Proc argument:\nrender :content_type => 'application/octet-stream', :text => Proc.new {\n |response, output|\n # do something that reads data and writes it to output\n}\n\nIf your stuff is in files on disk, though, the aforementioned solutions will certainly work better.\n", "+1 for attachment_fu\nI use attachment_fu in one of my apps and MUST store files in the DB (for annoying reasons which are outside the scope of this convo).\nThe (one?) tricky thing dealing w/BLOB's I've found is that you need a separate code path to send the data to the user -- you can't simply in-line a path on the filesystem like you would if it was a plain-Jane file.\ne.g. if you're storing avatar information, you can't simply do:\n<%= image_tag @youruser.avatar.path %>\n\nyou have to write some wrapper logic and use send_data, e.g. (below is JUST an example w/attachment_fu, in practice you'd need to DRY this up)\nsend_data(@youruser.avatar.current_data, :type => @youruser.avatar.content_type, :filename => @youruser.avatar.filename, :disposition => 'inline' )\n\nUnfortunately, as far as I know attachment_fu (I don't have the latest version) does not do clever wrapping for you -- you've gotta write it yourself.\nP.S.\nSeeing your question edit - Attachment_fu handles all that annoying stuff that you mention -- about needing to know file paths and all that crap -- EXCEPT the one little issue when storing in the DB. Give it a try; it's the standard for rails apps. IF you insist on re-inventing the wheel, the source code for attachment_fu should document most of the gotchas, too!\n", "You can use the :binary type in your ActiveRecord migration and also constrain the maximum size:\nclass BlobTest < ActiveRecord::Migration\n def self.up\n create_table :files do |t|\n t.column :file_data, :binary, :limit => 1.megabyte\n end\n end\nend\n\nActiveRecord exposes the BLOB (or CLOB) contents as a Ruby String.\n", "I think your best bet is the attachment_fu plug-in:\nhttp://github.com/technoweenie/attachment_fu/tree/master\nUPDATE: Found some more info here http://groups.google.com/group/rubyonrails-talk/browse_thread/thread/a81beffb93708bb3\n", "Look into the plugin, x_send_file too. \n\"The XSendFile plugin provides a simple interface for sending files via the X-Sendfile HTTP header. This enables your web server to serve the file directly from disk, instead of streaming it through your Rails process. This is faster and saves a lot of memory if you‘re using Mongrel. Not every web server supports this header. YMMV.\"\nI'm not sure if it's usable with Blobs, it may just be for files on the file system. But you probably need something that doesn't tie up the web server streaming large chunks of data.\n" ]
[ 13, 8, 5, 3, 0 ]
[]
[]
[ "blob", "ruby_on_rails" ]
stackoverflow_0000057104_blob_ruby_on_rails.txt
Q: What are some good examples of a WS-Eventing client in Java? There are a few web service frameworks available for Java: Axis2, CXF, JBossWS, and Metro. Does anyone have some good examples of a WS-Eventing client with these frameworks?
A: Check out Apache Savan. It was a publisher/subscriber implementation for Axis2 that supported WS-Eventing (see sample.eventing.Client for an example client) but was retired in 2014.
JBossWS has some information about setting up a service here, but I didn't see any example for a client. Regarding CXF, it includes support for eventing although it uses native types, nothing from JAX-WS.
What are some good examples of a WS-Eventing client in Java?
There are a few web service frameworks available for Java: Axis2, CXF, JBossWS, and Metro. Does anyone have some good examples of a WS-Eventing client with these frameworks?
[ "Check out Apache Savan. It's was a publisher/subscriber implementation for Axis2 that supported WS-Eventing (see sample.eventing.Client for an example client) but was retired in 2014.\nJBossWS has some information about setting up a service here, but I didn't see any example for a client. Regarding CXF is includes support for eventing although it uses native types, nothing from JAX-WS.\n" ]
[ 2 ]
[]
[]
[ "java", "web_services", "ws_eventing" ]
stackoverflow_0000057915_java_web_services_ws_eventing.txt
Q: Will .NET MVC give me the HTML/CSS/JS separation I need? I'm working with my ASP.NET development team to try and create "better" (i.e. cleaner) HTML when rendering pages. At the moment, .NET has a nasty tendency to do things like dump JavaScript into the page, making it a mandatory requirement on form controls and not allowing forms to work when JS isn't available. In some instances, we're struggling to overcome this without having to add a notable chunk to development time estimates.
I've worked with MVC before - via Struts in Java - and found that in that instance, I was able to keep HTML pages exactly as I'd wanted them to be. (This viewpoint is based on the "best case" static HTML I typically developed beforehand, then handed over to the Java programmers to fill in the holes.) This worked out really well and we were able to produce some of the "best" web pages from a CMS that I've seen.
Could .NET MVC give me the separation I need, or would it be a false economy to put aside valuable development time to test this?
If .NET MVC isn't going to give me this fine-grained control over separation, any recommendations for .NET coding practices, libraries, etc. which might help would be really helpful.
A: The ASP.NET MVC Framework would give you a much more familiar separation. Because there is no viewstate, etc in the MVC Framework you won't have to worry about JavaScript being dumped into your pages. The only JavaScript calls you see will be ones that you manually add.
You can, for the most part, separate HTML / CSS / JS like you would in a different framework.
A: Depending on the view engine you're going to use, yes.
But you can easily check this by looking at the page-source for stack-overflow. It's not zen-garden but it's pretty clean.
Some more clarification:
The rendering of the pages is done by the view engine. You can use the standard view engine or existing ones like nVelocity or Brail, just like with monorail.
http://www.chadmyers.com/Blog/archive/2007/11/28/testing-scottgu-alternate-view-engines-with-asp.net-mvc-nvelocity.aspx
As the view engine is responsible for creating HTML what comes out depends on your choice. But most view engines are better in this respect than vanilla ASP.Net
A: @Wrestlevania said:

any recommendations for .NET coding practices, libraries, etc. which might help would be really helpful.

I try to maintain a high level of separation while coding in ASP.Net. I find that if I avoid the asp controls and stick as much as possible with basic html elements, I can avoid any situation where ASP.Net would be inclined to inject extra CSS or JS into my page. Example, use span in place of asp:literal, button in place of asp:button, etc.
The only ASP control I use is the repeater, which is used to create a table. Any functionality I need that would be similar to an asp control, I either implement myself in javascript, or use a framework like jquery.
A: Asp.Net MVC will help you keep html/css/js separate in that it will present fewer features that would prevent you from keeping them separate.
For example Html helpers typically return just that: Html. From that point you are free to choose to keep all style information associated only by class attributes.
Consider also looking into the practices you usually follow with a library like jQuery. It's an excellent example of how to keep the scripted functionality entirely in your js and out of your html by applying the event handling behaviors to the elements on page load based on things like element type, class and id.
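To make the "no viewstate, no injected script" point concrete, a minimal controller sketch against the early ASP.NET MVC API; the controller and view names here are invented for illustration, and the rendered page contains only the markup the view itself declares:
using System.Web.Mvc;

public class ProductsController : Controller
{
    // Nothing here emits hidden fields or script;
    // the view's output is exactly the HTML you wrote in it
    public ActionResult Index()
    {
        ViewData["Title"] = "Products";
        return View();
    }
}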
Will .NET MVC give me the HTML/CSS/JS separation I need?
I'm working with my ASP.NET development team to try and create "better" (i.e. cleaner) HTML when rendering pages. At the moment, .NET has a nasty tendency to do things like dump JavaScript into the page, making it a mandatory requirement on form controls and not allowing forms to work when JS isn't available. In some instances, we're struggling to overcome this without having to add a notable chunk to development time estimates. I've worked with MVC before - via Struts in Java - and found that in that instance, I was able to keep HTML pages exactly as I'd wanted them to be. (This viewpoint is based on the "best case" static HTML I typically developed beforehand, then handed over to the Java programmers to fill in the holes.) This worked out really well and we were able to produce some of the "best" web pages from a CMS that I've seen. Could .NET MVC give me the separation I need, or would it be a false economy to put aside valuable development time to test this? If .NET MVC isn't going to give me this fine-grained control over separation, any recommendations for .NET coding practices, libraries, etc. which might help would be really helpful.
[ "The ASP.NET MVC Framework would give you a much more familiar separation. Because there is no viewstate, etc in the MVC Framework you won't have to worry about JavaScript being dumped into your pages. The only JavaScript calls you see will be ones that you manually add.\nYou can, for the most part, separate HTML / CSS / JS like you would in a different framework.\n", "Depending on the view engine you're going to use. yes.\nBut you can easilly check this by looking at the page-source for stack-overflow. It's not zen-garden but it's pretty clean.\nSome more clarification:\nThe rendering of the pages is done by the view engine. You can use the standard view engine or existing ones like nVelocity or Brail, just like with monorail.\nhttp://www.chadmyers.com/Blog/archive/2007/11/28/testing-scottgu-alternate-view-engines-with-asp.net-mvc-nvelocity.aspx\nAs the view engine is responsible for creating HTML what comes out depends on your choice. But most view engines are better in this respect than vanilla ASP.Net\n", "@Wrestlevania said:\n\nany recommendations for .NET coding\n practices, libraries, etc. which might\n would be really helpful.\n\nI try to maintain a high level of separation while coding in ASP.Net. I find that if I avoid the asp controls and stick as much as possible with basic html elements, I can avoid any situation where ASP.Net would be inclined to inject extra CSS or JS into my page. Example, use span in place of asp:literal, button in place of asp:button, etc.\nThe only ASP control I use is the repeater, which is used to create a table. Any functionality I need that would be similar to an asp control, I either implement myself in javascript, or use a framework like jquery.\n", "Asp.Net MVC will help you keep html/css/js separate in that it will present fewer features that would prevent you from keeping them separate.\nFor example Html helpers typically return just that: Html. From that point you are free to choose to keep all style information associated only by class attributes. \nConsider also looking into the practices you usually follow with a library like jQuery. It's an excellent example of how to keep the scripted functionality entirely in your js and out of your html by applying the event handling behaviors to the elements on page load based on things like element type, class and id.\n" ]
[ 5, 3, 1, 0 ]
[]
[]
[ "asp.net_mvc", "css", "html", "javascript" ]
stackoverflow_0000056479_asp.net_mvc_css_html_javascript.txt
Q: UnhandledException handler in a .Net Windows Service Is it possible to use an UnhandledException Handler in a Windows Service?
Normally I would use a custom built Exception Handling Component that does logging, phone home, etc. This component adds a handler to System.AppDomain.CurrentDomain.UnhandledException but as far as I can tell this doesn’t achieve anything in a Windows Service so I end up with this pattern in my 2 (or 4) Service entry points:
Protected Overrides Sub OnStart(ByVal args() As String)
    ' Add code here to start your service. This method should set things
    ' in motion so your service can do its work.
    Try
        MyServiceComponent.Start()
    Catch ex As Exception
        'call into our exception handler
        MyExceptionHandlingComponent.ManuallyHandleException (ex)
        'zero is the default ExitCode for a successful exit, so if we set it to non-zero
        ExitCode = -1
        'So, we use Environment.Exit, it seems to be the most appropriate thing to use
        'we pass an exit code here as well, just in case.
        System.Environment.Exit(-1)
    End Try
End Sub
Is there a way my Custom Exception Handling component can deal with this better so I don't have to fill my OnStart with messy exception handling plumbing?
A: Ok, I’ve done a little more research into this now.
When you create a windows service in .Net, you create a class that inherits from System.ServiceProcess.ServiceBase (In VB this is hidden in the .Designer.vb file). You then override the OnStart and OnStop function, and OnPause and OnContinue if you choose to.
These methods are invoked from within the base class so I did a little poking around with reflector.
OnStart is invoked by a method in System.ServiceProcess.ServiceBase called ServiceQueuedMainCallback. The version on my machine "System.ServiceProcess, Version=2.0.0.0" decompiles like this:

Private Sub ServiceQueuedMainCallback(ByVal state As Object)
    Dim args As String() = DirectCast(state, String())
    Try
        Me.OnStart(args)
        Me.WriteEventLogEntry(Res.GetString("StartSuccessful"))
        Me.status.checkPoint = 0
        Me.status.waitHint = 0
        Me.status.currentState = 4
    Catch exception As Exception
        Me.WriteEventLogEntry(Res.GetString("StartFailed", New Object() { exception.ToString }), EventLogEntryType.Error)
        Me.status.currentState = 1
    Catch obj1 As Object
        Me.WriteEventLogEntry(Res.GetString("StartFailed", New Object() { String.Empty }), EventLogEntryType.Error)
        Me.status.currentState = 1
    End Try
    Me.startCompletedSignal.Set
End Sub

So because Me.OnStart(args) is called from within the Try portion of a Try Catch block I assume that anything that happens within the OnStart method is effectively wrapped by that Try Catch block and therefore any exceptions that occur aren't technically unhandled as they are actually handled in the ServiceQueuedMainCallback Try Catch. So CurrentDomain.UnhandledException never actually happens at least during the startup routine.
The other 3 entry points (OnStop, OnPause and OnContinue) are all called from the base class in a similar way.
So I ‘think’ that explains why my Exception Handling component can’t catch UnhandledException on Start and Stop, but I’m not sure if it explains why timers that are setup in OnStart can’t cause an UnhandledException when they fire.
A: You can subscribe to the AppDomain.UnhandledException event. If you have a message loop, you can tie to the Application.ThreadException event.
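For the second answer's suggestion, wiring the handler up looks like the sketch below (shown in C# for brevity; HandleException stands in for the question's MyExceptionHandlingComponent). The handler only observes: it cannot keep the process alive after an unhandled exception on a worker thread:
using System;

static class ServiceBootstrap
{
    // Stand-in for the question's exception handling component
    static void HandleException(Exception ex) { /* log, phone home, etc. */ }

    static void HookUnhandled()
    {
        AppDomain.CurrentDomain.UnhandledException +=
            delegate(object sender, UnhandledExceptionEventArgs e)
            {
                // Observes exceptions from worker threads and timer callbacks,
                // but not ones already swallowed by ServiceBase's own Try/Catch
                HandleException((Exception)e.ExceptionObject);
            };
    }
}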
UnhandledException handler in a .Net Windows Service
Is it possible to use an UnhandledException Handler in a Windows Service?
Normally I would use a custom built Exception Handling Component that does logging, phone home, etc. This component adds a handler to System.AppDomain.CurrentDomain.UnhandledException but as far as I can tell this doesn’t achieve anything in a Windows Service so I end up with this pattern in my 2 (or 4) Service entry points:
Protected Overrides Sub OnStart(ByVal args() As String)
    ' Add code here to start your service. This method should set things
    ' in motion so your service can do its work.
    Try
        MyServiceComponent.Start()
    Catch ex As Exception
        'call into our exception handler
        MyExceptionHandlingComponent.ManuallyHandleException (ex)
        'zero is the default ExitCode for a successful exit, so if we set it to non-zero
        ExitCode = -1
        'So, we use Environment.Exit, it seems to be the most appropriate thing to use
        'we pass an exit code here as well, just in case.
        System.Environment.Exit(-1)
    End Try
End Sub
Is there a way my Custom Exception Handling component can deal with this better so I don't have to fill my OnStart with messy exception handling plumbing?
[ "Ok, I’ve done a little more research into this now.\nWhen you create a windows service in .Net, you create a class that inherits from System.ServiceProcess.ServiceBase (In VB this is hidden in the .Designer.vb file). You then override the OnStart and OnStop function, and OnPause and OnContinue if you choose to. \nThese methods are invoked from within the base class so I did a little poking around with reflector.\nOnStart is invoked by a method in System.ServiceProcess.ServiceBase called ServiceQueuedMainCallback. The vesion on my machine \"System.ServiceProcess, Version=2.0.0.0\" decompiles like this:\n\n\nPrivate Sub ServiceQueuedMainCallback(ByVal state As Object)\n Dim args As String() = DirectCast(state, String())\n Try \n Me.OnStart(args)\n Me.WriteEventLogEntry(Res.GetString(\"StartSuccessful\"))\n Me.status.checkPoint = 0\n Me.status.waitHint = 0\n Me.status.currentState = 4\n Catch exception As Exception\n Me.WriteEventLogEntry(Res.GetString(\"StartFailed\", New Object() { exception.ToString }), EventLogEntryType.Error)\n Me.status.currentState = 1\n Catch obj1 As Object\n Me.WriteEventLogEntry(Res.GetString(\"StartFailed\", New Object() { String.Empty }), EventLogEntryType.Error)\n Me.status.currentState = 1\n End Try\n Me.startCompletedSignal.Set\nEnd Sub\n\n\nSo because Me.OnStart(args) is called from within the Try portion of a Try Catch block I assume that anything that happens within the OnStart method is effectively wrapped by that Try Catch block and therefore any exceptions that occur aren't technically unhandled as they are actually handled in the ServiceQueuedMainCallback Try Catch. So CurrentDomain.UnhandledException never actually happens at least during the startup routine. \nThe other 3 entry points (OnStop, OnPause and OnContinue) are all called from the base class in a similar way.\nSo I ‘think’ that explains why my Exception Handling component can’t catch UnhandledException on Start and Stop, but I’m not sure if it explains why timers that are setup in OnStart can’t cause an UnhandledException when they fire. \n", "You can subscribe to the AppDomain.UnhandledException event. If you have a message loop, you can tie to the Application.ThreadException event.\n" ]
[ 16, 2 ]
[]
[]
[ ".net", "exception_handling", "vb.net", "windows_services" ]
stackoverflow_0000058280_.net_exception_handling_vb.net_windows_services.txt
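For the second answer above, a minimal C# sketch of how that event wiring typically looks in a service's entry point; MyService and MyExceptionHandlingComponent are illustrative names carried over from the question, not a prescribed API:

    using System;
    using System.ServiceProcess;

    internal static class Program
    {
        private static void Main()
        {
            // Register before ServiceBase.Run so crashes on worker threads
            // started later (e.g. from OnStart) reach the handler.
            AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            {
                MyExceptionHandlingComponent.ManuallyHandleException(
                    e.ExceptionObject as Exception);
            };

            ServiceBase.Run(new MyService());
        }
    }

This still cannot observe exceptions thrown inside OnStart/OnStop themselves, since ServiceQueuedMainCallback catches those as shown in the decompilation. As for the timer puzzle: System.Timers.Timer silently swallows exceptions thrown from its Elapsed handlers, whereas a System.Threading.Timer callback will surface through AppDomain.UnhandledException.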
Q: Saving an open generic type in an array? I am facing a problem with .NET generics. The thing I want to do is save an array of generic types (GraphicsItem): public class GraphicsItem<T> { private T _item; public void Load(T item) { _item = item; } } How can I save such an open generic type in an array? A: Implement a non-generic interface and use that: public class GraphicsItem<T> : IGraphicsItem { private T _item; public void Load(T item) { _item = item; } public void SomethingWhichIsNotGeneric(int i) { // Code goes here... } } public interface IGraphicsItem { void SomethingWhichIsNotGeneric(int i); } Then use that interface as the item in the list: var values = new List<IGraphicsItem>(); A: If you want to store heterogeneous GraphicsItems, i.e. GraphicsItem<X> and GraphicsItem<Y>, you need to derive them from a common base class, or implement a common interface. Another option is to store them in List<object>. A: Are you trying to create an array of GraphicsItem in a non-generic method? You cannot do the following: static void foo() { var _bar = List<GraphicsItem<T>>(); } and then fill the list later. More probably you are trying to do something like this? static GraphicsItem<T>[] CreateArrays<T>() { GraphicsItem<T>[] _foo = new GraphicsItem<T>[1]; // This can't work, because you don't know if T == typeof(string) // _foo[0] = (GraphicsItem<T>)new GraphicsItem<string>(); // You can only create an array of the scoped type parameter T _foo[0] = new GraphicsItem<T>(); List<GraphicsItem<T>> _bar = new List<GraphicsItem<T>>(); // Again same reason as above // _bar.Add(new GraphicsItem<string>()); // This works _bar.Add(new GraphicsItem<T>()); return _bar.ToArray(); } Remember you are going to need a generic type reference to create an array of a generic type. This can be either at method-level (using the T after the method) or at class-level (using the T after the class). If you want the method to return an array mixing differently-closed GraphicsItem<T> types, then let GraphicsItem<T> inherit from a non-generic base class GraphicsItem and return an array of that. You will lose all type safety though. Hope that helps.
Saving an open generic type in an array?
I am facing a problem with .NET generics. The thing I want to do is save an array of generic types (GraphicsItem): public class GraphicsItem<T> { private T _item; public void Load(T item) { _item = item; } } How can I save such an open generic type in an array?
[ "Implement a non-generic interface and use that:\npublic class GraphicsItem<T> : IGraphicsItem\n{\n private T _item;\n\n public void Load(T item)\n {\n _item = item;\n }\n\n public void SomethingWhichIsNotGeneric(int i)\n {\n // Code goes here...\n }\n}\n\npublic interface IGraphicsItem\n{\n void SomethingWhichIsNotGeneric(int i);\n}\n\nThen use that interface as the item in the list:\nvar values = new List<IGraphicsItem>();\n\n", "If you want to store heterogeneous GrpahicsItem's i.e. GraphicsItem< X> and GrpahicsItem< Y> you need to derive them from common base class, or implement common interface. Another option is to store them in List< object>\n", "Are you trying to create an array of GraphicsItem in a non-generic method?\nYou cannot do the following:\nstatic void foo()\n{\n var _bar = List<GraphicsItem<T>>();\n}\n\nand then fill the list later. \nMore probably you are trying to do something like this?\nstatic GraphicsItem<T>[] CreateArrays<T>()\n{\n GraphicsItem<T>[] _foo = new GraphicsItem<T>[1];\n\n // This can't work, because you don't know if T == typeof(string)\n // _foo[0] = (GraphicsItem<T>)new GraphicsItem<string>();\n\n // You can only create an array of the scoped type parameter T\n _foo[0] = new GraphicsItem<T>();\n\n List<GraphicsItem<T>> _bar = new List<GraphicsItem<T>>();\n\n // Again same reason as above\n // _bar.Add(new GraphicsItem<string>());\n\n // This works\n _bar.Add(new GraphicsItem<T>());\n\n return _bar.ToArray();\n}\n\nRemember you are going to need a generic type reference to create an array of a generic type. This can be either at method-level (using the T after the method) or at class-level (using the T after the class).\nIf you want the method to return an array of GraphicsItem and GraphicsItem, then let GraphicsItem inherit from a non-generic base class GraphicsItem and return an array of that. You will lose all type safety though.\nHope that helps.\n" ]
[ 4, 0, 0 ]
[]
[]
[ ".net", "generics" ]
stackoverflow_0000058384_.net_generics.txt
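A self-contained C# sketch consolidating the accepted answer, showing heterogeneous closed generic types stored behind the non-generic interface (the method bodies are illustrative):

    using System.Collections.Generic;

    public interface IGraphicsItem
    {
        void SomethingWhichIsNotGeneric(int i);
    }

    public class GraphicsItem<T> : IGraphicsItem
    {
        private T _item;

        public void Load(T item) { _item = item; }

        public void SomethingWhichIsNotGeneric(int i) { /* non-generic work */ }
    }

    public static class Demo
    {
        public static void Run()
        {
            var values = new List<IGraphicsItem>();
            values.Add(new GraphicsItem<string>()); // different closed types...
            values.Add(new GraphicsItem<int>());    // ...behind one interface

            foreach (IGraphicsItem item in values)
                item.SomethingWhichIsNotGeneric(42);
        }
    }

The same list (or an IGraphicsItem[] array) accepts any GraphicsItem<T>, which is exactly what the raw open generic type cannot do.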
Q: Combining Enums Is there a way to combine Enums in VB.net? A: I believe what you want is a flag type enum. You need to add the Flags attribute to the top of the enum, and then you can combine enums with the 'Or' keyword. Like this: <Flags()> _ Enum CombinationEnums As Integer HasButton = 1 TitleBar = 2 [ReadOnly] = 4 ETC = 8 End Enum Note: The numbers to the right are always twice as big (powers of 2) - this is needed to be able to separate the individual flags that have been set. Combine the desired flags using the Or keyword: Dim settings As CombinationEnums settings = CombinationEnums.TitleBar Or CombinationEnums.Readonly This sets TitleBar and Readonly into the enum. To check what's been set: If (settings And CombinationEnums.TitleBar) = CombinationEnums.TitleBar Then Window.TitleBar = True End If A: You can use the FlagsAttribute to decorate an Enum like so, which will let you combine the Enum: <FlagsAttribute> _ Public Enum SecurityRights None = 0 Read = 1 Write = 2 Execute = 4 End Enum And then call them like so (class UserPrivileges): Public Sub New ( _ options As SecurityRights _ ) New UserPrivileges(SecurityRights.Read Or SecurityRights.Execute) They effectively get combined (bit math) so that the above user has both Read AND Execute all carried around in one fancy SecurityRights variable. To check to see if the user has a privilege you use AND (more bitwise math) to check the user's enum value against the Enum value you're checking for: 'Check to see if user has Write rights If (user.Privileges And SecurityRights.Write) = SecurityRights.Write Then 'Do something clever... Else 'Tell user he can't write. End If HTH, Tyler A: If I understand your question correctly you want to combine different enum types. So one variable can store a value from one of two different enums, right? If you're asking about combining and storing two different values of one enum type you can look at Dave Arkell's explanation. Enums are just integers with some syntactic sugar. So if you make sure there's no overlap you can combine them by casting to int. It won't make for pretty code though. I try to avoid using enums most of the time. Usually if you let enums breed in your code it's just a matter of time before they give birth to repeated case statements and other messy antipatterns. A: The key to combination Enums is to make sure that the value is a power of two (1, 2, 4, 8, etc.) so that you can perform bit operations on them (|= &=). Those Enums can be tagged with a Flags attribute. The Anchor property on Windows Forms controls is an example of such an enum. If it's marked as a flag, Visual Studio will let you check values instead of selecting a single one in a drop-down in the properties designer. A: If you're talking about using enum flags there is a good article here.
Combining Enums
Is there a way to combine Enums in VB.net?
[ "I believe what you want is a flag type enum.\nYou need to add the Flags attribute to the top of the enum, and then you can combine enums with the 'Or' keyword.\nLike this:\n<Flags()> _\nEnum CombinationEnums As Integer\n HasButton = 1\n TitleBar = 2\n [ReadOnly] = 4\n ETC = 8\nEnd Enum\n\nNote: The numbers to the right are always twice as big (powers of 2) - this is needed to be able to separate the individual flags that have been set.\nCombine the desired flags using the Or keyword:\nDim settings As CombinationEnums\nsettings = CombinationEnums.TitleBar Or CombinationEnums.Readonly\n\nThis sets TitleBar and Readonly into the enum\nTo check what's been set:\nIf (settings And CombinationEnums.TitleBar) = CombinationEnums.TitleBar Then\n Window.TitleBar = True\nEnd If\n\n", "You can use the FlagsAttribute to decorate an Enum like so which will let you combine the Enum:\n<FlagsAttribute> _\nPublic Enumeration SecurityRights\nNone = 0\nRead = 1\nWrite = 2\nExecute = 4\n\nAnd then call them like so (class UserPriviltes):\nPublic Sub New ( _\n options As SecurityRights _\n)\n\nNew UserPrivileges(SecurityRights.Read OR SecurityRights.Execute)\n\nThey effectively get combined (bit math) so that the above user has both Read AND Execute all carried around in one fancy SecurityRights variable.\nTo check to see if the user has a privilege you use AND (more bitwise math) to check the users enum value with the the Enum value you're checking for:\n//Check to see if user has Write rights\nIf (user.Privileges And SecurityRights.Write = SecurityRigths.Write) Then\n //Do something clever...\nElse\n //Tell user he can't write.\nEnd If\n\nHTH,\nTyler\n", "If I understand your question correctly you want to combine different enum types. So one variable can store a value from one of two different enum's right? If you're asking about storing combining two different values of one enum type you can look at Dave Arkell's explanation\nEnums are just integers with some syntactic sugar. So if you make sure there's no overlap you can combine them by casting to int.\nIt won't make for pretty code though. I try to avoid using enums most of the time. Usually if you let enums breed in your code it's just a matter of time before they give birth to repeated case statements and other messy antipatterns.\n", "The key to combination Enums is to make sure that the value is a power of two (1, 2, 4, 8, etc.) so that you can perform bit operations on them (|= &=). Those Enums can be be tagged with a Flags attribute. The Anchor property on Windows Forms controls is an example of such a control. If it's marked as a flag, Visual Studio will let you check values instead of selecting a single one in a drop-down in the properties designer.\n", "If you taking about using enum flags() there is a good article here.\n" ]
[ 49, 4, 1, 1, 0 ]
[]
[]
[ ".net", "enums", "vb.net" ]
stackoverflow_0000058517_.net_enums_vb.net.txt
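For comparison, the same flags pattern in C#; the combine and test steps map one-to-one onto the VB examples above:

    using System;

    [Flags]
    public enum SecurityRights
    {
        None = 0,
        Read = 1,
        Write = 2,
        Execute = 4
    }

    public static class FlagsDemo
    {
        public static void Run()
        {
            SecurityRights rights = SecurityRights.Read | SecurityRights.Execute;

            // Classic bitwise test: mask, then compare against the flag.
            bool canExecute =
                (rights & SecurityRights.Execute) == SecurityRights.Execute;

            Console.WriteLine(canExecute); // True
        }
    }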
Q: How to show the whole height of the referenced page? I have an application that I would like to embed inside our company's CMS. The only way to do that (I am told), is to load it in an <iframe>. Easy: just set height and width to 100%! Except, it doesn't work. I did find out about setting frameborder to 0, so it at least looks like part of the site, but I'd prefer not to have an ugly scrollbar inside a page that already has one. Do you know of any tricks to do this? EDIT: I think I need to clarify my question somewhat: the company CMS displays the fluff and stuff for our whole website most pages created through the CMS my application isn't, but they will let me embed it in an <iframe> I have no control over the iframe, so any solution must work from the referenced page (according to the src attribute of the iframe tag) the CMS displays a footer, so setting the height to 1 million pixels is not a good idea Can I access the parent page's DOM from the referenced page? This might help, but I can see some people might not want this to be possible... This technique seems to work (gleaned from several sources, but inspired by the link from the accepted answer): In parent document: <iframe id="MyIFRAME" name="MyIFRAME" src="http://localhost/child.html" scrolling="auto" width="100%" frameborder="0"> no iframes supported... </iframe> In child: <!-- ... --> <body> <script type="text/javascript"> function resizeIframe() { var docHeight; if (typeof document.height != 'undefined') { docHeight = document.height; } else if (document.compatMode && document.compatMode != 'BackCompat') { docHeight = document.documentElement.scrollHeight; } else if (document.body && typeof document.body.scrollHeight != 'undefined') { docHeight = document.body.scrollHeight; } // magic number: suppress generation of scrollbars... docHeight += 20; parent.document.getElementById('MyIFRAME').style.height = docHeight + "px"; } parent.document.getElementById('MyIFRAME').onload = resizeIframe; parent.window.onresize = resizeIframe; </script> </body> BTW: This will only work if parent and child are in the same domain due to a restriction in JavaScript for security reasons... A: You could either just use a scripting language to include the page into the parent page, otherwise, you might want to try one of these javascript methods: http://brondsema.net/blog/index.php/2007/06/06/100_height_iframe http://www.experts-exchange.com/Web_Development/Web_Languages-Standards/PHP/Q_22840093.html A: Provided that your iframe is hosted on the same server as the containing page, you can access it via javascript. There are a number of suggested methods for setting the iframe to the full height of the contents, each with varying degrees of success - a google for this problem shows that it's quite a common one, with no real, one-size-fits-all consensus solution I'm afraid! Several people have reported that this script does the trick, but may need some modification for your specific case (again, assuming your iframe and parent page are on the same domain). A: I might be missing something here, but adding scrolling=no as an attribute to the iframe tag normally gets rid of the scrollbars.
How to show the whole height of the referenced page?
I have an application that I would like to embed inside our company's CMS. The only way to do that (I am told), is to load it in an <iframe>. Easy: just set height and width to 100%! Except, it doesn't work. I did find out about setting frameborder to 0, so it at least looks like part of the site, but I'd prefer not to have an ugly scrollbar inside a page that already has one. Do you know of any tricks to do this? EDIT: I think I need to clarify my question somewhat: the company CMS displays the fluff and stuff for our whole website most pages created through the CMS my application isn't, but they will let me embed it in an <iframe> I have no control over the iframe, so any solution must work from the referenced page (according to the src attribute of the iframe tag) the CMS displays a footer, so setting the height to 1 million pixels is not a good idea Can I access the parent page's DOM from the referenced page? This might help, but I can see some people might not want this to be possible... This technique seems to work (gleaned from several sources, but inspired by the link from the accepted answer): In parent document: <iframe id="MyIFRAME" name="MyIFRAME" src="http://localhost/child.html" scrolling="auto" width="100%" frameborder="0"> no iframes supported... </iframe> In child: <!-- ... --> <body> <script type="text/javascript"> function resizeIframe() { var docHeight; if (typeof document.height != 'undefined') { docHeight = document.height; } else if (document.compatMode && document.compatMode != 'BackCompat') { docHeight = document.documentElement.scrollHeight; } else if (document.body && typeof document.body.scrollHeight != 'undefined') { docHeight = document.body.scrollHeight; } // magic number: suppress generation of scrollbars... docHeight += 20; parent.document.getElementById('MyIFRAME').style.height = docHeight + "px"; } parent.document.getElementById('MyIFRAME').onload = resizeIframe; parent.window.onresize = resizeIframe; </script> </body> BTW: This will only work if parent and child are in the same domain due to a restriction in JavaScript for security reasons...
[ "You could either just use a scripting language to include the page into the parent page, other wise, you might want to try one of these javascript methods:\nhttp://brondsema.net/blog/index.php/2007/06/06/100_height_iframe\nhttp://www.experts-exchange.com/Web_Development/Web_Languages-Standards/PHP/Q_22840093.html\n", "Provided that your iframe is hosted on the same server as the containing page, you can access it via javascript.\nThere are a number of suggested methods for setting the iframe to the full height of the contents, each with varying degrees of success - a google for this problem shows that it's quite a common one, with no real, one-size-fits-all consensus solution i'm afraid!\nSeveral people have reported that this script does the trick, but may need some modification for your specific case (again, assuming your iframe and parent page are on the same domain).\n", "I might be missing something here, but adding scrolling=no as an attribute to the iframe tag normally gets rid of the scrollbars.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "html", "iframe" ]
stackoverflow_0000058543_html_iframe.txt
Q: Calling a Web Service from Seam A simple question, but could someone provide sample code as to how someone would call a web service from within the JBoss Seam framework, and process the results? I need to be able to integrate with a search platform being provided by a private vendor who is exposing his functionality as a web service. So, I'm just looking for some guidance as to what the code for calling a given web service would look like. (Any sample web service can be chosen as an example.) A: There's roughly a gajillion HTTP client libraries (Restlet is quite a bit more than that, but I already had that code snippet for something else), but they should all provide support for sending GET requests. Here's a rather less featureful snippet that uses HttpClient from Apache Commons: HttpClient client = new HttpClient(); HttpMethod method = new GetMethod("http://api.search.yahoo.com/WebSearchService/V1/webSearch?appid=restbook&query=HttpClient"); client.executeMethod(method); A: import org.restlet.Client; import org.restlet.data.Protocol; import org.restlet.data.Reference; import org.restlet.data.Response; import org.restlet.resource.DomRepresentation; import org.w3c.dom.Node; /** * Uses YAHOO!'s RESTful web service with XML. */ public class YahooSearch { private static final String BASE_URI = "http://api.search.yahoo.com/WebSearchService/V1/webSearch"; public static void main(final String[] args) { if (1 != args.length) { System.err.println("You need to pass a search term!"); } else { final String term = Reference.encode(args[0]); final String uri = BASE_URI + "?appid=restbook&query=" + term; final Response response = new Client(Protocol.HTTP).get(uri); final DomRepresentation document = response.getEntityAsDom(); document.setNamespaceAware(true); document.putNamespace("y", "urn:yahoo:srch"); final String expr = "/y:ResultSet/y:Result/y:Title/text()"; for (final Node node : document.getNodes(expr)) { System.out.println(node.getTextContent()); } } } } This code uses Restlet to make a request to Yahoo's RESTful search service. Obviously, the details of the web service you are using will dictate what your client for it looks like. A: final Response response = new Client(Protocol.HTTP).get(uri); So, if I understand this correctly, the above line is where the actual call to the web service is being made, with the response being converted to an appropriate format and manipulated after this line. Assuming I were not using Restlet, how would this line differ? (Of course, the actual processing code would be significantly different as well, so that's a given.)
Calling a Web Service from Seam
A simple question, but could someone provide sample code as to how someone would call a web service from within the JBoss Seam framework, and process the results? I need to be able to integrate with a search platform being provided by a private vendor who is exposing his functionality as a web service. So, I'm just looking for some guidance as to what the code for calling a given web service would look like. (Any sample web service can be chosen as an example.)
[ "There's roughly a gajillion HTTP client libraries (Restlet is quite a bit more than that, but I already had that code snippet for something else), but they should all provide support for sending GET requests. Here's a rather less featureful snippet that uses HttpClient from Apache Commons:\nHttpClient client = new HttpClient();\nHttpMethod method = new GetMethod(\"http://api.search.yahoo.com/WebSearchService/V1/webSearch?appid=restbook&query=HttpClient\");\nclient.executeMethod(method);\n\n", "import org.restlet.Client;\nimport org.restlet.data.Protocol;\nimport org.restlet.data.Reference;\nimport org.restlet.data.Response;\nimport org.restlet.resource.DomRepresentation;\nimport org.w3c.dom.Node;\n\n/**\n * Uses YAHOO!'s RESTful web service with XML.\n */\npublic class YahooSearch {\n private static final String BASE_URI = \"http://api.search.yahoo.com/WebSearchService/V1/webSearch\";\n\n public static void main(final String[] args) {\n if (1 != args.length) {\n System.err.println(\"You need to pass a search term!\");\n } else {\n final String term = Reference.encode(args[0]);\n final String uri = BASE_URI + \"?appid=restbook&query=\" + term;\n final Response response = new Client(Protocol.HTTP).get(uri);\n final DomRepresentation document = response.getEntityAsDom();\n\n document.setNamespaceAware(true);\n document.putNamespace(\"y\", \"urn:yahoo:srch\");\n\n final String expr = \"/y:ResultSet/y:Result/y:Title/text()\";\n for (final Node node : document.getNodes(expr)) {\n System.out.println(node.getTextContent());\n }\n }\n }\n}\n\nThis code uses Restlet to make a request to Yahoo's RESTful search service. Obviously, the details of the web service you are using will dictate what your client for it looks like.\n", "final Response response = new Client(Protocol.HTTP).get(uri);\n\nSo, if I understand this correctly, the above line is where the actual call to the web service is being made, with the response being converted to an appropriate format and manipulated after this line.\nAssuming I were not using Restlet, how would this line differ?\n(Of course, the actual processing code would be significantly different as well, so that's a given.)\n" ]
[ 1, 0, 0 ]
[]
[]
[ "java", "seam" ]
stackoverflow_0000056865_java_seam.txt
Q: What is the preferred operating system for web programmers, client or server? Which OS do you prefer to program on? Client or Server? There is a school of thought that if you are doing (mostly) web programming (or other server based code), you should use a server OS for your dev machine, since that's closer to the environment where your app will be running. Update: I should add, this is really directed to the Windows crowd A: OK, I know you're mainly talking about windows but... I used to develop on windows for deployment on *nix servers. Sure there were lots of gotchas with this way of working, but you just kind of get used to it. In October 2005 I switched to Linux, initially as an experiment, but I never went back. There was a steep learning curve. I thought I knew *nix pretty well after 10 years of dealing with it, but I knew nothing compared with the amount I learned using it on my desktop machine. Workflow has been so much smoother developing and deploying to similar platforms. More recently, we have even started to pick servers running Ubuntu server, so that they most closely match our Ubuntu desktop development machines. If you are talking about the difference between a desktop and a server edition, I'd guess you needn't worry about it. If you're developing on one OS for deployment on another, I'd consider changing your desktop platform. A: There is a school of thought that if you are doing (mostly) web programming (or other server based code), you should use a server OS for your dev machine I think that applies more to 'system programmers' rather than web 'application programmers'. Why? There is definitely great value in knowing the platform intimately, like one would get in living with the OS, etc. day in and day out. But not everyone can or should need to go there. While my main production environment is RHEL4, Linux just does not work for me on the desktop--in fact, it drives me crazy. I find working on OSX close enough, though. And I just love working on my Mac rather than an XP box. I'm doing the Java thing, and the "write once, run everywhere" hype actually works for me. :) Update: I should add, this is really directed to the Windows crowd Minute late, bit short ;) Maybe you should edit the title too... A: It seems like the question is more about whether to use the server or client version of the same OS. So my answer is this: the client should be just fine. You can develop and test web applications of many flavors on client versions of Windows, OS X, and Linux. OS X and Linux obviously make Apache-based apps a little easier by coming with Apache pre-installed, but a download of XAMPP or WAMPP can quickly turn a Windows box into a solid development platform for LAMP applications, as well. And if you're doing ASP.NET, your development tools (if you're using something in the Visual Studio line) have test server mechanisms built in. So unless you have some other need for wanting the server version, I would stick with the client. It's less money, and you really don't need the server version. A: The client vs. server OS issue is only relevant on MS platforms. And even there it depends on what you're developing for. As far as I understand for Sharepoint development you need a server OS to run your code. If you're just doing vanilla ASP.Net stuff then it's mostly personal taste. Edit As Tyler commented, you can run MOSS/WSS on Vista but it's not supported. Or you could develop on a client OS and run sharepoint on a server OS in a VM. A: Regardless of the operating system you're actually talking about, it shouldn't matter. Most applications you might write won't need to worry about the differences (if there indeed are any). 
A: Regardless of the operating system you're actually talking about, it shouldn't matter. Most applications you might write won't need to worry about the differences (if there indeed are any). Only in rare cases might you use some specific functionality that might only be available on a "server" edition of your OS. There are other considerations, for example Windows server editions are tuned by default to give less priority and attention to desktop programs, and more attention to things like the file cache. Personally, I would always choose a "client" edition of my chosen OS. A: Personally I use Windows Vista but that's because it's what I like and I can use it well. But in all honesty it doesn't matter, your OS should be something you are comfortable in and has the tools you need to be productive. I would say your test environment is the one you need to have as close to your production environment as possible. I write in RoR on Vista but test it in a Linux VM setup the same as my web server and at work we have a Win2k3 server with IIS6 installed to test our .Net sites on but I develop on Vista using IIS7. A: I use Windows Server 2003 set up as a workstation.This is the guide i have used for several years. Really like it. A: This is going to be a bit of a weird answer but I'm a big fan of Windows 2008 and Hyper-V, as a workstation (I know). Essentially I'll only install Office like software on my workstation and all the development will be in Virtual Machines. Assuming there's no Win2k8/Hyper-V availiable I'd gladly settle for some old WinXP (but w/Virtual PC). Hyper-V allows you to get great performance out of any .VHD VM that you run. Both Virtual PC and Virtual Server are free (as in beer) and you can set up a ton of infrastructure that allows you to re-purpose virtual machines (ie. Base Machines, Differencing Disks, Undo Disks). The .VHDs are also interchangeable so you can re-host a previously enjoyed .VHD for other developers to enjoy on some virtual server, OR they can take a copy of it, rename the virtual machine and enjoy your ready-to-go environment with some Virtual PC! This is awesome for bringing team members up to speed (environment wise) in less than 10 min. YOu can also use it to VERY QUICKLY provision machines that would otherwise take days to setup/configure. Never mind the much better ability to test from different OS', or be able to roll back changes using Undo disks, VMs are a life saver! Start virtualizing people! For some of the great benefits of Virtual Machines/Differencing Disks consider this post by Andrew Connell.
What is the preferred operating system for web programmers, client or server?
Which OS do you prefer to program on? Client or Server? There is a school of thought that if you are doing (mostly) web programming (or other server based code), you should use a server OS for your dev machine, since that's closer to the environment where your app will be running. Update: I should add, this is really directed to the Windows crowd
[ "OK, I know you're mainly talking about windows but...\nI used to develop on windows for deployment on *nix servers. Sure there were lots of gotchas with this way of working, but you just kind of get used to it. \nIn October 2005 I switched to Linux, initially as an experiment, but I never went back. There was a steep learning curve. I thought I knew *nix pretty well after 10 years of dealing with it, but I knew nothing compared with the amount I learned using it on my desktop machine. \nWorkflow has been so much smoother developing and deploying to similar platforms. \nMore recently, we have even started to pick servers running Ubuntu server, so that they most closely match our Ubuntu desktop development machines. \nIf you are talking about the difference between a desktop and a server edition, I'd guess you needn't worry about it. If you're developing on one OS for deployment on another, I'd consider changing your desktop platform.\n", "\nThere is a school of though that if you are doing (mostly) web programming (or other server based code), you should use a server OS for your dev machine\n\nI think that applies more to 'system programmers' rather than web 'application programmers'. Why? There is definitely great value in knowing the platform intimately, like one would get in living with the OS, etc. day in and day out. But not everyone can or should need to go there.\nWhile my main production environment is RHEL4, Linux just does not work for me on the desktop--in fact, it drives me crazy. I find working on OSX close enough, though. And I just love working on my Mac rather than an XP box.\nI'm doing the Java thing, and the \"write once, run everywhere\" hype actually works for me. :)\n\nUpdate: I should add, this is really directed to the Windows crowd \n\nMinute late, bit short ;) Maybe you should edit the title too...\n", "It seems like the question is more about whether to use the server or client version of the same OS. So my answer is this: the client should be just fine. You can develop and test web applications of many flavors on client versions of Windows, OS X, and Linux. OS X and Linux obviously make Apache-based apps a little easier by coming with Apache pre-installed, but a download of XAMPP or WAMPP can quickly turn a Windows box into a solid development platform for LAMP applications, as well.\nAnd if you're doing ASP.NET, your development tools (if you're using something in the Visual Studio line) have test server mechanisms built in.\nSo unless you have some other need for wanting the server version, I would stick with the client. It's less money, and you really don't need the server version.\n", "The client vs. server OS issue is only relevant on MS platforms. And even there it depends on what you're developing for.\nAs far as I understand for Sharepoint development you need a server OS to run your code\nIf you're just doing vanilla ASP.Net stuff then it's mostly personal taste.\nEdit\nAs Tyler commented, you can run MOSS/WSS on Vista but it's not supported. Or you could develop on a client OS and run sharepoint on a server OS in a VM.\n", "Regardless of the operating system you're actually talking about, it shouldn't matter. Most applications you might write won't need to worry about the differences (if there indeed are any). 
Only in rare cases might you use some specific functionality that might only be available on a \"server\" edition of your OS.\nThere are other considerations, for example Windows server editions are tuned by default to give less priority and attention to desktop programs, and more attention to things like the file cache. Personally, I would always choose a \"client\" edition of my chosen OS.\n", "Personally I use Windows Vista but that's because it's what I like and I can use it well. But in all honesty it doesn't matter, your OS should be something you are comfortable in and has the tools you need to be productive. \nI would say your test environment is the one you need to have as close to your production environment as possible. I write in RoR on Vista but test it in a Linux VM set up the same as my web server, and at work we have a Win2k3 server with IIS6 installed to test our .Net sites on, but I develop on Vista using IIS7.\n", "I use Windows Server 2003 set up as a workstation. This is the guide I have used for several years. Really like it.\n", "This is going to be a bit of a weird answer but I'm a big fan of Windows 2008 and Hyper-V, as a workstation (I know). Essentially I'll only install Office-like software on my workstation and all the development will be in Virtual Machines.\nAssuming there's no Win2k8/Hyper-V available I'd gladly settle for some old WinXP (but w/Virtual PC).\nHyper-V allows you to get great performance out of any .VHD VM that you run. Both Virtual PC and Virtual Server are free (as in beer) and you can set up a ton of infrastructure that allows you to re-purpose virtual machines (i.e. Base Machines, Differencing Disks, Undo Disks). The .VHDs are also interchangeable so you can re-host a previously enjoyed .VHD for other developers to enjoy on some virtual server, OR they can take a copy of it, rename the virtual machine and enjoy your ready-to-go environment with some Virtual PC!\nThis is awesome for bringing team members up to speed (environment wise) in less than 10 min. You can also use it to VERY QUICKLY provision machines that would otherwise take days to setup/configure.\nNever mind the much better ability to test from different OS', or be able to roll back changes using Undo disks, VMs are a life saver! Start virtualizing people!\nFor some of the great benefits of Virtual Machines/Differencing Disks consider this post by Andrew Connell.\n" ]
[ 5, 3, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "operating_system" ]
stackoverflow_0000058463_operating_system.txt
Q: Rhino Mocks - How can I test that at least one of a group of methods is called? Say I have an interface IFoo which I am mocking. There are 3 methods on this interface. I need to test that the system under test calls at least one of the three methods. I don't care how many times, or with what arguments it does call, but the case where it ignores all the methods and does not touch the IFoo mock is the failure case. I've been looking through the Expect.Call documentation but can't see an easy way to do it. Any ideas? A: You can give rhino mocks a lambda to run when a function gets called. This lambda can then increment a counter. Assert the counter >= 1 and you're done. Commented by Don Kirkby: I believe Mendelt is referring to the Do method. A: Not sure this answers your question but I've found that if I need to do anything like that with Rhino (or any similar framework/library), anything that I didn't know how to do upfront, then I'm better just creating a manual mock. Creating a class that implements the interface and sets a public boolean field to true if any of the methods is called will be trivially easy, you can give the class a descriptive name which means that (most importantly) the next person viewing the code will immediately understand it. A: If I understood you correctly you want to check that the interface is called at least once on any of three specified methods. Looking through the quick reference I don't think you can do that in Rhino Mocks. Intuitively I think you're trying to write a test that is brittle, which is a bad thing. This implies incomplete specification of the class under test. I urge you to think the design through so that the class under test and the test can have a known behavior. However, to be useful with an example, you could always do it like this (but don't). [TestFixture] public class MyTest { // The mocked interface public class MockedInterface : MyInterface { public int counter = 0; public void method1() { counter++; } public void method2() { counter++; } public void method3() { counter++; } } // The actual test, I assume you have the ClassUnderTest // inject the interface through the constructor and // the methodToTest calls either of the three methods on // the interface. [TestMethod] public void testCallingAnyOfTheThreeMethods() { MockedInterface mockery = new MockedInterface(); ClassUnderTest classToTest = new ClassUnderTest(mockery); classToTest.methodToTest(); Assert.That(mockery.counter, Is.GreaterThanOrEqualTo(1)); } } (Somebody check my code, I've written this from my head now and haven't written C# stuff for about a year now) I'm interested to know why you're doing this though.
Rhino Mocks - How can I test that at least one of a group of methods is called?
Say I have an interface IFoo which I am mocking. There are 3 methods on this interface. I need to test that the system under test calls at least one of the three methods. I don't care how many times, or with what arguments it does call, but the case where it ignores all the methods and does not touch the IFoo mock is the failure case. I've been looking through the Expect.Call documentation but can't see an easy way to do it. Any ideas?
[ "You can give rhino mocks a lambda to run when a function get's called. This lambda can then increment a counter. Assert the counter > 1 and you're done.\nCommented by Don Kirkby:\nI believe Mendelt is referring to the Do method.\n", "Not sure this answers your question but I've found that if I need to do anything like that with Rhino (or any similiar framework/library), anything that I didn't know how to do upfront, then I'm better just creating a manual mock. \nCreating a class that implements the interface and sets a public boolean field to true if any of the methods is called will be trivially easy, you can give the class a descriptive name which means that (most importantly) the next person viewing the code will immediately understand it.\n", "If I understood you correctly you want to check that the interface is called at least once on any of three specified methods. Looking through the quick reference I don't think you can do that in Rhino Mocks. \nIntuitively I think you're trying to write a test that is brittle, which is a bad thing. This implies incomplete specification of the class under test. I urge you to think the design through so that the class under test and the test can have a known behavior.\nHowever, to be useful with an example, you could always do it like this (but don't).\n[TestFixture]\npublic class MyTest {\n\n // The mocked interface\n public class MockedInterface implements MyInterface {\n int counter = 0;\n public method1() { counter++; }\n public method2() { counter++; }\n public method3() { counter++; }\n }\n\n // The actual test, I assume you have the ClassUnderTest\n // inject the interface through the constructor and\n // the methodToTest calls either of the three methods on \n // the interface.\n [TestMethod]\n public void testCallingAnyOfTheThreeMethods() {\n MockedInterface mockery = new MockedInterface();\n ClassUnderTest classToTest = new ClassUnderTest(mockery);\n\n classToTest.methodToTest();\n\n Assert.That(mockery.counter, Is.GreaterThan(1));\n }\n}\n\n(Somebody check my code, I've written this from my head now and haven't written a C# stuff for about a year now)\nI'm interested to know why you're doing this though.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "bdd", "mocking", "rhino_mocks", "unit_testing" ]
stackoverflow_0000053666_bdd_mocking_rhino_mocks_unit_testing.txt
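A sketch of the counter idea from the first answer, written against the Rhino Mocks 3.5-era Do handler from memory, so treat the exact fluent method names as an assumption to verify against your version; IFoo's members and ClassUnderTest are invented for illustration:

    using System;
    using NUnit.Framework;
    using Rhino.Mocks;

    public interface IFoo
    {
        bool Method1(int x, string s);
        bool Method2(int x, string s);
        bool Method3(int x, string s);
    }

    [TestFixture]
    public class AtLeastOneCallTest
    {
        [Test]
        public void CallsAtLeastOneFooMethod()
        {
            int calls = 0;
            Func<int, string, bool> count = (x, s) => { calls++; return true; };

            IFoo foo = MockRepository.GenerateStub<IFoo>();
            foo.Stub(f => f.Method1(0, null)).IgnoreArguments().Do(count);
            foo.Stub(f => f.Method2(0, null)).IgnoreArguments().Do(count);
            foo.Stub(f => f.Method3(0, null)).IgnoreArguments().Do(count);

            new ClassUnderTest(foo).MethodToTest(); // hypothetical system under test

            Assert.That(calls, Is.GreaterThanOrEqualTo(1));
        }
    }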
Q: Can't make an SSL Connection I'm using a device that's got GPRS media to connect to a PC running stunnel. On TCPIP connections, the number of sessions is limitless. However, when it comes to SSL connections, it could only go as far as 1062 successful sessions. I've tried it like 3 times but it makes no difference. I've checked the OpenSSL codes and I couldn't seem to find any code block that limits SSL connections to 1062. From SSL's point of view, is there anything that limits the number of connections? Yes, I'm using a postpaid phone SIM, but there isn't any problem with TCPIP. It only happens with SSL connections. We've tried connecting to other PC's as well using the same OpenSSL stunnel, but it only ends up at 1062 connections. A: I guess I'm not the only one having this kind of problem. I found out that Sun Java System Directory Server had a limit on open SSL connections which only reached 1020 (FD_SETSIZE=1024). It was hardcoded though so you could obviously see the cause of the problem. In my case however, I couldn't seem to find the culprit... :( A: Are you connecting via a phone provider - could that be the issue?
Can't make an SSL Connection
I'm using a device that's got GPRS media to connect to a PC running stunnel. On TCPIP connections, the number of sessions is limitless. However, when it comes to SSL connections, it could only go as far as 1062 successful sessions. I've tried it like 3 times but it makes no difference. I've checked the OpenSSL codes and I couldn't seem to find any code block that limits SSL connections to 1062. From SSL's point of view, is there anything that limits the number of connections? Yes, I'm using a postpaid phone SIM, but there isn't any problem with TCPIP. It only happens with SSL connections. We've tried connecting to other PC's as well using the same OpenSSL stunnel, but it only ends up at 1062 connections.
[ "I guess I'm not the only one having this kind of problem. I found out that Sun Java System Directory Server had a limit of opened ssl connection which only reached 1020 (FD_SETSIZE=1024). It was hardcoded though so you could obviously see the cause of the problem. In my case however, I couldn't seem to find the culprit... :(\n", "Are you connecting via a phone provider - could that be the issue?\n" ]
[ 1, 0 ]
[]
[]
[ "connection", "limits", "ssl" ]
stackoverflow_0000055953_connection_limits_ssl.txt
Q: How do I remotely get a checksum for a file on a Windows machine? I'm trying to check, using an automated discovery tool, when JAR files in remote J2EE application servers have changed content. Currently, the system downloads the whole JAR using WMI to checksum it locally, which is slow for large JARs. For UNIXy servers (and Windows servers with Cygwin), I can just log in over SSH and run md5sum foo.jar. Ideally, I'd like to avoid installing extra software on the remote servers (there may be thousands), so is there a good way to do this on vanilla Windows servers? A: You could try the Sysinternals PSExec tool. You would need a checksum utility available on the remote machine. Unfortunately since they became part of Microsoft they don't make any source code available. Alternatively, you could install the Cygwin SSH daemon on the remote machines and use ssh but that's a bit more involved. A: Microsoft has a free checksum tool you could run with PSExec above.
How do I remotely get a checksum for a file on a Windows machine?
I'm trying to check, using an automated discovery tool, when JAR files in remote J2EE application servers have changed content. Currently, the system downloads the whole JAR using WMI to checksum it locally, which is slow for large JARs. For UNIXy servers (and Windows servers with Cygwin), I can just log in over SSH and run md5sum foo.jar. Ideally, I'd like to avoid installing extra software on the remote servers (there may be thousands), so is there a good way to do this on vanilla Windows servers?
[ "You could try the Sysinternals PSExec tool. You would need a checksum utility available on the remote machine. Unfortunately since they became part of Microsoft they don't make any source code available.\nAlternatively, you could install the Cygwin SSH daemon on the remote machines and use ssh but that's a bit more involved.\n", "Microsoft has a free checksum tool you could run with PSExec above.\n" ]
[ 1, 1 ]
[]
[]
[ "checksum", "jakarta_ee", "windows", "wmi" ]
stackoverflow_0000058558_checksum_jakarta_ee_windows_wmi.txt
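Tying the two answers together: PsExec can copy a checksum tool to the remote machine on the fly, so nothing stays installed there. A hedged example using Microsoft's free FCIV utility (server name and paths are illustrative; check the exact FCIV switches against its documentation):

    rem -c copies the named program to the remote system before running it
    psexec \\remoteserver -c fciv.exe C:\app\lib\foo.jar -md5

Only the small tool crosses the wire, and the MD5 comes back on your console instead of the whole JAR.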
Q: Is it possible to display a modal window in an SCSF application at the center of the screen In an SCSF application I would like to display a view as a modal window at the center of the screen. Is it possible to do that? WindowSmartPartInfo doesn't have any option for setting screen position. Thanks. A: Assuming you're talking about Winforms, not WPF since the WPF layer for CAB does expose this option. In winforms there is no option in the WindowSmartPartInfo to do this. However, you could extend it and extend WindowWorkspace to use your new SmartPartInfo (override the OnApplySmartPartInfo method). Before you do this, you might want to check the contrib and community sites to see if anyone has already done it. I think I've seen one somewhere.
Is it possible to display a modal window in an SCSF application at the center of the screen
In an SCSF application I would like to display a view as a modal window at the center of the screen. Is it possible to do that? WindowSmartPartInfo doesn't have any option for setting screen position. Thanks.
[ "Assuming you're talking about Winforms, not WPF since the WPF layer for CAB does expose this option. In winforms there is no option in the WindowSmartPartInfo to do this. However, you could extend it and extend WindowWorkspace to use your new SmartPartInfo (override the OnApplySmartPartInfo method).\nBefore you do this, you might want to check the contrib and community sites to see if anyone has already done it. I think I've seen one somewhere.\n" ]
[ 1 ]
[]
[]
[ "cab", "scsf" ]
stackoverflow_0000056859_cab_scsf.txt
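A sketch of the extension route the answer describes; the CAB member names here (WindowWorkspace, OnApplySmartPartInfo) are recalled from the WinForms CAB sources, so verify them against your SCSF/CAB version before relying on this:

    using System.Windows.Forms;
    using Microsoft.Practices.CompositeUI.WinForms;

    public class CenteredWindowSmartPartInfo : WindowSmartPartInfo
    {
        public bool CenterOnScreen { get; set; }
    }

    public class CenteredWindowWorkspace : WindowWorkspace
    {
        protected override void OnApplySmartPartInfo(
            Control smartPart, WindowSmartPartInfo smartPartInfo)
        {
            base.OnApplySmartPartInfo(smartPart, smartPartInfo);

            CenteredWindowSmartPartInfo info =
                smartPartInfo as CenteredWindowSmartPartInfo;
            if (info != null && info.CenterOnScreen)
            {
                // The workspace hosts each smart part in its own Form,
                // reachable from the hosted control.
                Form host = smartPart.FindForm();
                if (host != null)
                    host.StartPosition = FormStartPosition.CenterScreen;
            }
        }
    }

Note that StartPosition only takes effect before the form is first shown; if the workspace has already shown the window at this point, set host.Location manually instead.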
Q: What is a good dvd burning component for Windows or .Net? I'd like to add dvd burning functionality to my .Net app (running on Windows Server 2003), are there any good components available? I've used the NeroCOM sdk that used to come with Nero but they no longer support the sdk in the latest versions of Nero. I learned that Microsoft has created an IMAPI2 upgrade for Windows XP/2003 and there is an example project at CodeProject but not having used it myself I can't say how easy/reliable it is to use. I'm not really worried about burning audio/video to DVD as this is for file backup purposes only. A: I've used the code from the codeproject article and it works pretty well. It's a nice wrapper around the IMAPI2, so as long as IMAPI2 supports what you need to do, the .NET wrapper will do it. A: At my last job I was tasked with finding a cross platform and preferably free way to write our application specific files to cd/dvd. I quickly found that writing CD's wasn't hard on windows, but I couldn't write DVD's easily, and that only worked on windows. I ended up writing a wrapper around cdrecord. cdrecord is an open source project that builds easily with cygwin. I would create a staging directory where I added the files that needed to be written, called mkisofs on that directory to make a cd iso, and then called cdrecord to burn the image. This may not be the best solution if you have a strictly windows audience, but it was the only thing I could find that did Windows, Linux, and OS X. Another option worth checking out is the StarBurn SDK; I downloaded the trial and used it, it worked well, but in the end it wasn't free so it was too expensive for my purposes. A: My cdrecord method did support dvd burning, I just looked over the code, and boy did I forget how much time and effort I put into that class. cdrecord has no problem burning just about any type of media you throw at it, but since it is a stand alone application, I had to do a lot of parsing to get useful information. I can dig up the flags and different calls I used if you are interested, but unfortunately I cannot share the source as it was developed for a commercial project. While looking over the code I was also reminded that I switched from cdrecord (cdrtools) to wodim (cdrkit). wodim is a branch of cdrecord made a few years ago by the debian team because cdrecord dropped the GPL license. Like I said before this was released as part of a commercial application, our interpretation of the GPL was that you can call external binaries from your program without a problem as long as your program can run without the external binaries (if cdrecord wasn't found we popped up a dialog informing the user that burning capabilities were not available) and we also had to host the source for cdrkit and cygwin and include a copy of the GPL with our distributed program. So basically we would not make "derivative works", we would compile the cdrkit code exactly as it was, and then use the produced binaries. As far as the StarBurn SDK, I demoed it, but I didn't use it for a shipped product so I can't really give a recommendation or say much more than that it does work A: Did your cdrecord methodology support dvd burning? And is there an easy way to redistribute/install cygwin with an application? 
StarBurn looks pretty good at first glance, although I'm a little hesitant to go with unproven libraries that have to handle something this complicated (especially with the number of types of media out there now) and the StarBurn portfolio page is a bit on the fluffy side.
What is a good dvd burning component for Windows or .Net?
I'd like to add dvd burning functionality to my .Net app (running on Windows Server 2003), are there any good components available? I've used the NeroCOM sdk that used to come with Nero but they no longer support the sdk in the latest versions of Nero. I learned that Microsoft has created an IMAPI2 upgrade for Windows XP/2003 and there is an example project at CodeProject but not having used it myself I can't say how easy/reliable it is to use. I'm not really worried about burning audio/video to DVD as this is for file backup purposes only.
[ "I've used the code from the codeproject article and it works pretty well. It's a nice wrapper around the IMAPI2, so as longs as IMAPI2 supports what you need to do, the .NET wrapper will do it.\n", "At my last job I was tasked with finding a cross platform and preferably free way to write our application specific files to cd/dvd. I quickly found that writing CD's wasn't hard on windows, but I couldn't write DVD's easily, and that only worked on windows.\nI ended up writing a wrapper around cdrecord cdrecord is an open source project that builds easily with cygwin. I would create a staging directory where I added the files that needed to be written, called mkisofs on that directory to make a cd iso, and then called cdrecord to burn the image. This may not be the best solution if you have a strictly windows audience, but it was the only thing I could find that did window, Linux, and OS X.\nAnother option worht checking out is the StarBurn SDK, I download the trial and used it, it worked well, but in the end it wasn't free so it was too expensive for my purposes.\n", "My cdrecord method did support dvd burning, I just looked over the code, and boy did I forget how much time and effort I put into that class.\ncdrecord has no problem burning just about any type of media you throw at it, but since it is a stand alone application, I had to do a lot of parsing to get useful information. I can dig up the flags and different calls I used if you are interested, but unfortunately I cannot share the source as it was developed for a commercial project.\nWhile looking over the code I was also reminded that I switched form cdrecord (cdrtools) to wodim (cdrkit). wodim is a branch of cdrecord made a few years ago by the debian team because cdrecord dropped the GPL license.\nLike I said before this was released as part of a commercial application, our interpretation of the GPL was that you can call external binaries from your program without a problem as long as your program can run without the external binaries (if cdrecord wasn't found we popped up a dialog informing the user that burning capabilities were not available) and we also had to host the source for cdrkit and cygwin and include a copy of the GPL with our distributed program. So basically we would not make \"derivative works\", we would compile the cdrkit code exactly as it was, and then use the produced binaries.\nAs far as StarBurn SDK, I demoed it, but I didn't use it for a shipped product so I can't really give a recommendation or say much more than it does work\n", "Did your cdrecord methodology support dvd burning? And is there an easy way to redistribute/install cygwin with an application? StarBurn looks pretty good at first glance, although I'm a little hesitant to go with unproven libraries that have to handle something this complicated (especially with the number of types of media out there now) and the StarBurn portfolio page is a bit on the fluffy side.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ ".net", "components", "dvd", "windows" ]
stackoverflow_0000032930_.net_components_dvd_windows.txt
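Since the question is about file backup rather than audio/video, the IMAPI2 route is usually the path of least resistance on Windows Server 2003. A minimal C# probe of the recorder hardware through the IMAPI2 COM interop; the coclass and member names come from the IMAPI2 type library, but treat the snippet as an unverified sketch:

    using System;
    using IMAPI2;   // COM reference: Microsoft IMAPI2 Base Functionality

    internal static class BurnerProbe
    {
        private static void Main()
        {
            MsftDiscMaster2 discMaster = new MsftDiscMaster2();
            if (!discMaster.IsSupportedEnvironment)
            {
                Console.WriteLine("IMAPI2 is not available on this machine.");
                return;
            }

            // IDiscMaster2 enumerates the unique ID of each optical recorder.
            foreach (string recorderId in discMaster)
            {
                MsftDiscRecorder2 recorder = new MsftDiscRecorder2();
                recorder.InitializeDiscRecorder(recorderId);
                Console.WriteLine("{0} {1}", recorder.VendorId, recorder.ProductId);
            }
        }
    }

If this enumerates your burner, the CodeProject wrapper mentioned above (or raw MsftDiscFormat2Data) handles the actual data burn.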
Q: Push or Pull for a near real time automation server? We are currently developing a server whereby a client requests interest in changes to specific data elements and when that data changes the server pushes the data back to the client. There has been vigorous debate at work about whether or not it would be better for the client to poll for this data. What is considered to be the ideal method, in terms of performance, scalability and network load, of data transfer in a near real time environment? Update: Here's a Link that gives some food for thought with regards to UI updates. A: There's probably no ideal method for every situation, but push is usually better and used more often. It allows you to optimize server caching and data transfers, which helps performance and scalability, and cuts network traffic a bit by avoiding client requests and empty responses. It can be an important advantage for a server to operate at its own pace and supply clients with data when it is ready. Industry standards - such as OPC, GID - support both. The server pushes updates to subscribed clients, but a client can pull some rarely used data out without bothering with a subscription. A: As long as the client initiates the connection (to get past firewall and NAT problems) either way is fine. If there are several different types of data you need to send, you might want to have the client specify which type he wants, but this is only needed once per connection. Then you can have the server continue to send updates as it has them. It would be less network traffic to have the server send updates without the client continually asking for updates. A: What do you have on the client's side? Many firewalls allow outgoing requests but block incoming requests. In other words, pull may be your only option if you are crossing the Internet unless you are sending out e-mails.
Push or Pull for a near real time automation server?
We are currently developing a server whereby a client requests interest in changes to specific data elements and when that data changes the server pushes the data back to the client. There has been vigorous debate at work about whether or not it would be better for the client to poll for this data. What is considered to be the ideal method, in terms of performance, scalability and network load, of data transfer in a near real time environment? Update: Here's a Link that gives some food for thought with regards to UI updates.
[ "There's probably no ideal method for every situation, but push is usually better and used more often. It allows to optimize server caching and data transfers, which helps performance and scalability, and cuts network traffic a bit by avoiding client requests and empty responses. It can be important advantage for a server to operate in it's own pace and supply clients with data when it is ready.\nIndustry standarts - such as OPC, GID - support both. Server pushes updates to subscribed clients, but client can pull some rarely used data out without bothering with subscription.\n", "As long as the client initiates the connection (to get passed firewall and NAT problems) either way is fine.\nIf there are several different type of data you need to send, you might want to have the client specify which type he wants, but this is only needed once per connection. Then you can have the server continue to send updates as it has them. \nIt would be less network traffic to have the server send updates without the client continually asking for updates. \n", "What do you have on the client's side? Many firewalls allow outgoing requests but block incoming requests. In other words, pull may be your only option if you are crossing the Internet unless you are sending out e-mails.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "automation", "networking", "performance" ]
stackoverflow_0000058757_automation_networking_performance.txt
Q: Pointers to C++ class methods Whilst refactoring some legacy C++ code I found that I could potentially remove some code duplication by somehow defining a variable that could point to any class method that shared the same signature. After a little digging, I found that I could do something like the following: class MyClass { protected: bool CaseMethod1( int abc, const std::string& str ) { cout << "case 1:" << str; return true; } bool CaseMethod2( int abc, const std::string& str ) { cout << "case 2:" << str; return true; } bool CaseMethod3( int abc, const std::string& str ) { cout << "case 3:" << str; return true; } public: bool TestSwitch( int num ) { bool ( MyClass::*CaseMethod )( int, const std::string& ); switch ( num ) { case 1: CaseMethod = &MyClass::CaseMethod1; break; case 2: CaseMethod = &MyClass::CaseMethod2; break; case 3: CaseMethod = &MyClass::CaseMethod3; break; } ... bool res = CaseMethod( 999, "hello world" ); ... return res; } }; My question is - is this the correct way to go about this? Should I consider anything that Boost has to offer? Edit... Ok, my mistake - I should be calling the method like so: bool res = ( (*this).*CaseMethod )( 999, "Hello World" ); A: What you have there is a pointer-to-member-function. It will solve your problem. I am surprised that your "TestSwitch" function compiles, as the calling syntax is slightly different to what you might expect. It should be: bool res = (this->*CaseMethod)( 999, "hello world" ); However, you might find a combination of boost::function and boost::bind makes things a little easier, as you can avoid the bizarre calling syntax. boost::function<bool(int,std::string)> f= boost::bind(&MyClass::CaseMethod1,this,_1,_2); Of course, this will bind it to the current this pointer: you can make the this pointer of the member function an explicit third parameter if you like: boost::function<bool(MyClass*,int,std::string)> f= boost::bind(&MyClass::CaseMethod1,_1,_2,_3); Another alternative might be to use virtual functions and derived classes, but that might require major changes to your code. A: You could also build a lookup (if your key range is reasonable) so that you end up writing: (this->*Methods[num])( 999, "hello world" ); This removes the switch as well, and makes the cleanup a bit more worthwhile. A: You can certainly do it, although the CaseMethod call isn't correct (it's a pointer to member function, so you have to specify the object on which the method should be called). The correct call would look like this: bool res = (this->*CaseMethod)( 999, "hello world" ); On the other hand, I'd recommend boost::mem_fn - you'll have less chances to screw it up. ;) A: I don't see the difference between your call and simply calling the method within the switch statement. No, there is no semantic or readability difference. The only difference I see is that you are taking a pointer to a method, which prevents the compiler from inlining it or optimizing any call to that method. A: Without wider context, it's hard to figure out the right answer, but I see three possibilities here: stay with the normal switch statement, no need to do anything. This is the most likely solution. Use pointers to member functions in conjunction with an array, as @Simon says, or maybe with a map. For a case statement with a large number of cases, this may be faster. Split the class into a number of classes, each carrying one function to call, and use virtual functions. This is probably the best solution, but it will require some serious refactoring.
Consider GoF patterns such as State or Visitor or some such. A: There's nothing intrinsically wrong with the localised example you've given here, but class method pointers can often be tricky to keep 'safe' if you use them in a wider context, such as outside the class they're a pointer of, or in conjunction with a complex inheritance tree. The way compilers typically manage method pointers is different to 'normal' pointers (since there's extra information beyond just a code entry point), and consequently there are a lot of restrictions on what you can do with them. If you're just keeping simple pointers the way you describe then you'll be fine, but for more complex uses you may want to take a look at a more generalised functor system such as boost::bind. These can take pointers to just about any callable code pointer, and can also bind instanced function arguments if necessary. A: There are other approaches available, such as using an abstract base class, or specialized template functions. I'll describe the base class idea. You can define an abstract base class class Base { virtual bool Method(int i, const string& s) = 0; }; Then write each of your cases as a subclass, such as class Case1 : public Base { virtual bool Method(..) { /* implement */; } }; At some point, you will get your "num" variable that indicates which test to execute. You could write a factory function that takes this num (I'll call it which_case), and returns a pointer to Base, and then call Method from that pointer. Base* CreateBase(int which_num) { /* metacode: return new Case[which_num]; */ } // ... later, when you want to actually call your method ... Base* base = CreateBase(23); base->Method(999, "hello world!"); delete base; // Or use a scoped pointer. By the way, this application makes me wish C++ supported static virtual functions, or something like "type" as a builtin type - but it doesn't.
Pointers to C++ class methods
Whilst refactoring some legacy C++ code I found that I could potentially remove some code duplication by somehow defining a variable that could point to any class method that shared the same signature. After a little digging, I found that I could do something like the following: class MyClass { protected: bool CaseMethod1( int abc, const std::string& str ) { cout << "case 1:" << str; return true; } bool CaseMethod2( int abc, const std::string& str ) { cout << "case 2:" << str; return true; } bool CaseMethod3( int abc, const std::string& str ) { cout << "case 3:" << str; return true; } public: bool TestSwitch( int num ) { bool ( MyClass::*CaseMethod )( int, const std::string& ); switch ( num ) { case 1: CaseMethod = &MyClass::CaseMethod1; break; case 2: CaseMethod = &MyClass::CaseMethod2; break; case 3: CaseMethod = &MyClass::CaseMethod3; break; } ... bool res = CaseMethod( 999, "hello world" ); ... return res; } }; My question is - is this the correct way to go about this? Should I consider anything that Boost has to offer? Edit... Ok, my mistake - I should be calling the method like so: bool res = ( (*this).*CaseMethod )( 999, "Hello World" );
[ "What you have there is a pointer-to-member-function. It will solve your problem. I am surprised that your \"TestSwitch\" function compiles, as the calling syntax is slightly different to what you might expect. It should be:\nbool res = (this->*CaseMethod)( 999, \"hello world\" );\n\nHowever, you might find a combination of boost::function and boost::bind makes things a little easier, as you can avoid the bizarre calling syntax.\nboost::function<bool(int,std::string)> f=\n boost::bind(&MyClass::CaseMethod1,this,_1,_2);\n\nOf course, this will bind it to the current this pointer: you can make the this pointer of the member function an explicit third parameter if you like:\nboost::function<bool(MyClass*,int,std::string)> f=\n boost::bind(&MyClass::CaseMethod1,_1,_2,_3);\n\nAnother alternative might be to use virtual functions and derived classes, but that might require major changes to your code.\n", "You could also build a lookup (if your key range is reasonable) so that you end up writing:\n(this->*Methods[num])( 999, \"hello world\" );\n\nThis removes the switch as well, and makes the cleanup a bit more worthwhile.\n", "You can certainly do it, although the CaseMethod call isn't correct (it's a pointer to member function, so you have to specify the object on which the method should be called). The correct call would look like this:\nbool res = (this->*CaseMethod)( 999, \"hello world\" );\n\nOn the other hand, I'd recommend boost::mem_fn - you'll have less chances to screw it up. ;)\n", "I don't see the difference between your call and simply calling the method within the switch statement.\nNo, there is no semantic or readability difference.\nThe only difference I see is that you are taking a pointer to a method, which prevents the compiler from inlining it or optimizing any call to that method.\n", "Without wider context, it's hard to figure out the right answer, but I see three possibilities here:\n\nstay with the normal switch statement, no need to do anything. This is the most likely solution\nuse pointers to member functions in conjunction with an array, as @Simon says, or maybe with a map. For a case statement with a large number of cases, this may be faster.\nsplit the class into a number of classes, each carrying one function to call, and use virtual functions. This is probably the best solution, but it will require some serious refactoring. Consider GoF patterns such as State or Visitor or some such.\n\n", "There's nothing intrinsically wrong with the localised example you've given here, but class method pointers can often be tricky to keep 'safe' if you use them in a wider context, such as outside the class they're a pointer of, or in conjunction with a complex inheritance tree. The way compilers typically manage method pointers is different to 'normal' pointers (since there's extra information beyond just a code entry point), and consequently there are a lot of restrictions on what you can do with them.\nIf you're just keeping simple pointers the way you describe then you'll be fine, but for more complex uses you may want to take a look at a more generalised functor system such as boost::bind. 
These can take pointers to just about any callable code pointer, and can also bind instanced function arguments if necessary.\n", "There are other approaches available, such as using an abstract base class, or specialized template functions.\nI'll describe the base class idea.\nYou can define an abstract base class\nclass Base { virtual bool Method(int i, const string& s) = 0; };\n\nThen write each of your cases as a subclass, such as\nclass Case1 : public Base { virtual bool Method(..) { /* implement */; } };\n\nAt some point, you will get your \"num\" variable that indicates which test to execute. You could write a factory function that takes this num (I'll call it which_case), and returns a pointer to Base, and then call Method from that pointer.\nBase* CreateBase(int which_num) { /* metacode: return new Case[which_num]; */ }\n// ... later, when you want to actually call your method ...\nBase* base = CreateBase(23);\nbase->Method(999, \"hello world!\");\ndelete base; // Or use a scoped pointer.\n\nBy the way, this application makes me wish C++ supported static virtual functions, or something like \"type\" as a builtin type - but it doesn't.\n" ]
[ 8, 3, 1, 1, 1, 0, 0 ]
[]
[]
[ "c++" ]
stackoverflow_0000056091_c++.txt
Q: How do I access performance counters from C# in Windows XP Embedded? I have an application running under Windows XP, and I'm accessing the Processor and Memory performance counters. When I try to run the same code and access them on XP Embedded, the counters don't seem to be present. They are present in the image - I can see them all in perfmon. What's the missing piece here? A: Have you added all the WMI components? As far as I know, you need all the WMI components to access the counters! The Performance Counter Windows Management Instrumentation (WMI) Provider component provides a bridge between the performance registry interface and the WMI interface. This component allows WMI clients to access performance counters through WMI scripts, and allows management applications built using WMI to access performance counters. Without this component, applications must directly use the registry interface or the performance data helper interface to access performance counters. Thank you TimK for the link (http://msdn.microsoft.com/en-us/library/aa939695.aspx) A: It looks like this is what I was missing: http://msdn.microsoft.com/en-us/library/aa939695.aspx
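For reference, reading these counters from C# is plain System.Diagnostics code like the sketch below; the category and counter names are the standard English ones and may differ on a localized or stripped-down image, so treat them as assumptions. On XP Embedded, a missing WMI/performance component will typically surface as an exception from the PerformanceCounter constructor or from NextValue().

using System;
using System.Diagnostics;

class CounterCheck
{
    static void Main()
    {
        // "_Total" aggregates all processors. Rate counters return 0 on the
        // first NextValue() call, so sample twice with a short pause.
        using (PerformanceCounter cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
        using (PerformanceCounter mem = new PerformanceCounter("Memory", "Available MBytes"))
        {
            cpu.NextValue();
            System.Threading.Thread.Sleep(1000);
            Console.WriteLine("CPU: {0:F1} %", cpu.NextValue());
            Console.WriteLine("Available memory: {0} MB", mem.NextValue());
        }
    }
}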
How do I access performance counters from C# in Windows XP Embedded?
I have an application running under Windows XP, and I'm accessing the Processor and Memory performance counters. When I try to run the same code and access them on XP Embedded, the counters don't seem to be present. They are present in the image - I can see them all in perfmon. What's the missing piece here?
[ "Have you added all the WMI components? As far as I know, you need all the WMI components to access the counters!\n\nThe Performance Counter Windows Management Instrumentation (WMI) Provider component provides a bridge between the performance registry interface and the WMI interface. This component allows WMI clients to access performance counters through WMI scripts, and allows management applications built using WMI to access performance counters. Without this component, applications must directly use the registry interface or the performance data helper interface to access performance counters. \n\nThank you TimK for the link (http://msdn.microsoft.com/en-us/library/aa939695.aspx)\n", "It looks like this is what I was missing: http://msdn.microsoft.com/en-us/library/aa939695.aspx\n" ]
[ 1, 0 ]
[]
[]
[ ".net", "c#", "windows_xp_embedded" ]
stackoverflow_0000056654_.net_c#_windows_xp_embedded.txt
Q: Should I use multiple assemblies for an isolated ASP.NET web application? Coming from a corporate IT environment, the standard was always creating a class library project for each layer, Business Logic, Data Access, and sometimes greater isolation of specific types. Now that I am working on my own web application project, I don't see a real need to isolate my code in this fashion. I don't have multiple applications that need to share this logic or service enable it. I also don't see any advantage to deployment scenarios. I am leaning towards putting all artifacts in one web application, logically separated by project folders. I wanted to know what the thoughts are of the community. Let me add more information... I am writing this application using MVC preview 5, so the unit testing piece will be supported by the separation of concerns inherent in the framework. I do like to have tests for everything! A: Start with the simplest thing possible and add complexity if and when required. Sounds as though a single assembly would work just fine for your case. However, do take care not to violate the layers by having layer A access an internal member of layer B. That would make it harder to pull the layers into separate assemblies at a later date. A: I'd say it depends on how serious you are about testing and unit-testing. If you plan to only do user/manual tests, or basically only test from the UI downward, then it doesn't really make a difference. On the other hand, if you plan on doing any sort of unit-testing, or business rules validation, it definitely makes sense to split up your work into different assemblies. Even for smaller personal projects, I find this approach makes my life easier as the project goes on. I still run everything from the same solution, just with a web project for the UI, library for the business rules / application logic and another library for the DAL. A: You should still separate logical layers into their proper projects. That is a good engineering practice, whether you are just 1 developer or 100. The negative about the code all in one place is that it is going to make you refactor or duplicate code for expansion.
Should I use multiple assemblies for an isolated ASP.NET web application?
Coming from a corporate IT environment, the standard was always creating a class library project for each layer, Business Logic, Data Access, and sometimes greater isolation of specific types. Now that I am working on my own web application project, I don't see a real need to isolate my code in this fashion. I don't have multiple applications that need to share this logic or service enable it. I also don't see any advantage to deployment scenarios. I am leaning towards putting all artifacts in one web application, logically separated by project folders. I wanted to know what the thoughts are of the community. Let me add more information... I am writing this application using MVC preview 5, so the unit testing piece will be supported by the separation of concerns inherent in the framework. I do like to have tests for everything!
[ "Start with the simplest thing possible and add complexity if and when required. Sounds as though a single assembly would work just fine for your case. However, do take care not to violate the layers by having layer A access an internal member of layer B. That would make it harder to pull the layers into separate assemblies at a later date.\n", "I'd say it depends on how serious you are about testing and unit-testing.\nIf you plan to only do user/manual tests, or basically only test from the UI downward, then it doesn't really make a difference.\nOn the other hand, if you plan on doing any sort of unit-testing, or business rules validation, it definitely makes sense to split up your work into different assemblies.\nEven for smaller personal projects, I find this approach makes my life easier as the project goes on. I still run everything from the same solution, just with a web project for the UI, library for the business rules / application logic and another library for the DAL.\n", "You should still separate logical layers into their proper projects.\nThat is a good engineering practice, whether you are just 1 developer or 100. The negative about the code all in one place is that it is going to make you refactor or duplicate code for expansion. \n" ]
[ 1, 0, 0 ]
[]
[]
[ ".net", "asp.net", "coding_style", "web_applications" ]
stackoverflow_0000058878_.net_asp.net_coding_style_web_applications.txt
Q: Mail Storage Quota Checker in C# We have a requirement to build a tool for users in an Intranet scenario. The tool should check how much percentage of the Mailbox Quota (set in Active Directory) is being used. Currently, they can check their Folder size using Outlook 2003 but this does not show the Quota Limit set for them or the percentage being used. This blog has all the exact information I need including vbscript samples. If you have any similar C# code, please post it. That will give me a good lead on writing a small system tray application which will poll the Active Directory and show the percentage in real time. PS: I am not being lazy. Already started writing code for this. Just checking if any of you went through a similar exercise and have code to share. A: Querying ActiveDirectory is pretty simple. You can find some good examples I've used before here.
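As a starting point, a lookup along these lines usually works with System.DirectoryServices. Note that the LDAP path, the account name and the Exchange 2003-era quota attributes (mDBStorageQuota, mDBOverQuotaLimit) are assumptions to verify against your own directory schema - this is a sketch, not a drop-in implementation.

using System;
using System.DirectoryServices;

class QuotaLookup
{
    static void Main()
    {
        // Placeholder domain and user; adjust to your environment.
        using (DirectoryEntry root = new DirectoryEntry("LDAP://DC=example,DC=com"))
        using (DirectorySearcher searcher = new DirectorySearcher(root))
        {
            searcher.Filter = "(&(objectClass=user)(sAMAccountName=jsmith))";
            searcher.PropertiesToLoad.Add("mDBStorageQuota");   // warning quota, in KB (assumed attribute)
            searcher.PropertiesToLoad.Add("mDBOverQuotaLimit"); // prohibit-send quota, in KB (assumed attribute)

            SearchResult result = searcher.FindOne();
            if (result != null && result.Properties.Contains("mDBStorageQuota"))
            {
                int quotaKb = (int)result.Properties["mDBStorageQuota"][0];
                Console.WriteLine("Warning quota: {0} KB", quotaKb);
            }
        }
    }
}

Combine the quota with the mailbox's current size (which lives on the Exchange server rather than in plain AD) to get the percentage the tray application should display.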
Mail Storage Quota Checker in C#
We have a requirement to build a tool for users in an Intranet scenario. The tool should check how much percentage of the Mailbox Quota (set in Active Directory) is being used. Currently, they can check their Folder size using Outlook 2003 but this does not show the Quota Limit set for them or the percentage being used. This blog has all the exact information I need including vbscript samples. If you have any similar C# code, please post it. That will give me a good lead on writing a small system tray application which will poll the Active Directory and show the percentage in real time. PS: I am not being lazy. Already started writing code for this. Just checking if any of you went through a similar exercise and have code to share.
[ "Querying ActiveDirectory is pretty simple. You can find some good examples I've used before here.\n" ]
[ 2 ]
[]
[]
[ ".net", "c#", "exchange_server", "ldap", "outlook" ]
stackoverflow_0000057557_.net_c#_exchange_server_ldap_outlook.txt
Q: Is there a keyboard shortcut for "Build Page" in Visual Studio 2005? "Build Page" is one of the items you can add to your toolbar to compile just the ASPX page or ASCX control you are working on. Is there a keyboard shortcut for it? A: I always use Ctrl + Shift + B, which rebuilds the entire solution. You could also configure your own keyboard shortcut by clicking Tools / Options / Keyboard and scrolling down to the Build options. (There's ones for Build.BuildPage or Build.BuildSelection...)
Is there a keyboard shortcut for "Build Page" in Visual Studio 2005?
"Build Page" is one of the items you can add to your toolbar to compile just the ASPX page or ASCX control you are working on. Is there a keyboard shortcut for it?
[ "I always use Ctrl + Shift + B, which rebuilds the entire solution.\nYou could also configure your own keyboard shortcut by clicking Tools / Options / Keyboard and scrolling down to the Build options. (There's ones for Build.BuildPage or Build.BuildSelection...)\n" ]
[ 1 ]
[]
[]
[ "visual_studio" ]
stackoverflow_0000058933_visual_studio.txt
Q: How should you go about learning ASP.NET after life as a ColdFusion developer? As someone who has spent around 10 years programming web applications with Adobe's ColdFusion, I have decided to add ASP.NET as a string to my bow. For someone who has spent so long with CF and the underlying Java, ASP.NET seems a little alien to me. How should I go about getting up to speed with ASP.NET so that I can be truly dangerous with it? Do I need to know C# at any great amount of detail? I want to be in a position where I can build basic web apps fairly quickly so I can learn more doing the tricky stuff. A: I'm only maybe six months down the same path, but here are some thoughts from my experience so far: The C# language shouldn't give you much problem if you have very much experience with Java at all (or even CFScript). As a reference, though, when I was starting, I found csharp-station a good primer for language basics. It won't help you much as far as the ASP.NET side goes; but it is good for syntax. More you'll be familiarizing yourself with the .NET libraries. The IDE actually can be an enormous help here. Here are the three biggest differences I found making the transition: ASP.NET Server Controls - In ColdFusion, you really have pretty direct control over the HTML; you work very closely with the page. This isn't so much the case in ASP.NET. The server controls are meant to relieve you of a lot of the tedium, but at a cost of maybe some direct control. As a CF programmer, I'm very particular about what gets actually output to the browser; and at first ASP.NET frustrated me because it spits out a lot of extra code. Still, the controls are really powerful, and it pays to familiarize yourself with them. Form and validation controls, especially, save you from a lot of the tedium in CF of handling post back and validation. W3Schools actually has a decent list of web server controls. The page model - ColdFusion is pretty agnostic in terms of page flow. ASP.NET is very much geared towards using post backs, and is very event driven. If you're not using a framework with CF (e.g. Model Glue), this may be foreign to you. .NET takes care of handling a lot of the post back behavior for you. Also, not to say that ColdFusion can't be object and function driven by good use of CFC's, but ASP.NET really tries to push you down the OO path compared to CF in my experience. Database access - Using ASP.NET really made me appreciate how powerful cfquery really is. The csharp-station site also has a good tutorial on working with the native .NET db tools. I haven't worked on enough projects yet to start looking around for DB access extensions; I'm pretty sure Jeff recommended something that they used for building this site, so you might check that out. Otherwise, I really suggest you familiarize yourself with the DataSet object. It's somewhat similar to a query object in CF, and lets you run query of queries, etc... Looping over queries in CF is very common, but it doesn't happen nearly as much in ASP.NET because of data binding. A: Microsoft has a video called ASP.NET for ColdFusion developers you may be interested in. Edit, here's another A: ADO.NET is a core concept, and I would really recommend taking a course in it. Having a qualified instructor explain exactly what the differences are between a DataSet, DataReader (and so forth -- there are a lot of different data access object types) is invaluable. 
Not to mention you'll better understand the appropriate time and place to use each; and you can ask questions and get immediate answers in a classroom setting. I took an ADO.NET class (one night a week, about 8 weeks) at my local university for around $400. Even if my company hadn't paid for it, I would have been happy to, and I can highly recommend anyone trying to learn .NET do the same.
How should you go about learning ASP.NET after life as a ColdFusion developer?
As someone who has spent around 10 years programming web applications with Adobe's ColdFusion, I have decided to add ASP.NET as a string to my bow. For someone who has spent so long with CF and the underlying Java, ASP.NET seems a little alien to me. How should I go about getting up to speed with ASP.NET so that I can be truly dangerous with it? Do I need to know C# at any great amount of detail? I want to be in a position where I can build basic web apps fairly quickly so I can learn more doing the tricky stuff.
[ "I'm only maybe six months down the same path, but here are some thoughts from my experience so far:\nThe C# language shouldn't give you much problem if you have very much experience with Java at all (or even CFScript). As a reference, though, when I was starting, I found csharp-station a good primer for language basics. It won't help you much as far as the ASP.NET side goes; but it is good for syntax. More you'll be familiarizing yourself with the .NET libraries. The IDE actually can be an enormous help here.\nHere are the three biggest differences I found making the transition:\n\nASP.NET Server Controls - In ColdFusion, you really have pretty\ndirect control over the HTML; you\nwork very closely with the page. \nThis isn't so much the case in\nASP.NET. The server controls are\nmeant to relieve you of a lot of the\ntedium, but at a cost of maybe some\ndirect control. As a CF programmer,\nI'm very particular about what gets\nactually output to the browser; and\nat first ASP.NET frustrated me\nbecause it spits out a lot of extra\ncode. Still, the controls are\nreally powerful, and it pays to\nfamiliarize yourself with them. \nForm and validation controls,\nespecially, save you from a lot of\nthe tedium in CF of handling post\nback and validation. W3Schools\nactually has a decent list of web\nserver controls.\nThe page model - ColdFusion is pretty agnostic in terms of page\nflow. ASP.NET is very much geared\ntowards using post backs, and is\nvery event driven. If you're not\nusing a framework with CF (e.g.\nModel Glue), this may be foreign to\nyou. .NET takes care of handling a\nlot of the post back behavior for\nyou. Also, not to say that\nColdFusion can't be object and\nfunction driven by good use of\nCFC's, but ASP.NET really tries to\npush you down the OO path compared\nto CF in my experience.\nDatabase access - Using ASP.NET really made me appreciate how\npowerful cfquery really is. The\ncsharp-station site also has a good\ntutorial on working with the native\n.NET db tools. I haven't worked on\nenough projects yet to start looking\naround for DB access extensions; I'm\npretty sure Jeff recommended\nsomething that they used for\nbuilding this site, so you might\ncheck that out. Otherwise, I really\nsuggest you familiarize yourself\nwith the DataSet object. It's\nsomewhat similar to a query object\nin CF, and lets you run query of\nqueries, etc... Looping over\nqueries in CF is very common, but it\ndoesn't happen nearly as much in\nASP.NET because of data binding.\n\n", "Microsoft has a video called ASP.NET for ColdFusion developers you may be interested in.\nEdit, here's another \n", "ADO.NET is a core concept, and I would really recommend taking a course in it. Having a qualified instructor explain exactly what the differences are between a DataSet, DataReader (and so forth -- there are a lot of different data access object types) is invaluable. Not to mention you'll better understand the appropriate time and place to use each; and you can ask questions and get immediate answers in a classroom setting.\nI took an ADO.NET class (one night a week, about 8 weeks) at my local university for around $400. Even if my company hadn't paid for it, I would have been happy to, and I can highly recommend anyone trying to learn .NET do the same.\n" ]
[ 4, 0, 0 ]
[]
[]
[ "asp.net", "coldfusion" ]
stackoverflow_0000057768_asp.net_coldfusion.txt
Q: Loading different versions of the same assembly Using reflection, I need to load 2 different versions of the same assembly. Can I load the 2 versions in 2 different AppDomains in the same process? I need to do some data migration from the old version of the app to the new version. Please let me know if this is possible or should I use 2 separate processes. A: If you are doing it at design time (which you indicate you are not) this should help you: http://blogs.msdn.com/abhinaba/archive/2005/11/30/498278.aspx If you are doing it dynamically through reflection (looks like the case here) this might help you: https://www.infosysblogs.com/microsoft/2007/04/loading_multiple_versions_of_s.html A: UPDATE: I thought I would post my findings as an answer. Reflection proved too complex in terms of development effort, tracking run time errors etc. I remember doing a different approach using 2 different processes when faced with a similar situation a long time back (Thank you Brandon). This is the plan: Nothing elegant but easier in terms of development and troubleshooting. Since it is a one time job, we just have to make it work. Host a remoting process (which I call the server) having the new version of the application. A remoting client has references for the older version. Remoting client instantiates and loads the objects with data required for migration. Convert the old objects into common serializable objects and pass as parameters to the server. Remoting Server uses the common data to instantiate and load the new objects. Invokes the functions on the new types to persist their data.
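For anyone landing here: yes, two AppDomains in one process can each hold a different version of the assembly. A rough sketch of the AppDomain route that was eventually abandoned - every path and type name below is a placeholder, not from the question:

using System;

class MigrationHost
{
    static void Main()
    {
        AppDomain oldDomain = AppDomain.CreateDomain("OldVersion");
        AppDomain newDomain = AppDomain.CreateDomain("NewVersion");

        // Each domain loads its own copy of the assembly. To use the objects
        // across the boundary they must be serializable or derive from
        // MarshalByRefObject, ideally exposed through an interface defined
        // in a third, shared assembly.
        object oldMigrator = oldDomain.CreateInstanceFromAndUnwrap(
            @"C:\old\MyApp.dll", "MyApp.Migrator");
        object newMigrator = newDomain.CreateInstanceFromAndUnwrap(
            @"C:\new\MyApp.dll", "MyApp.Migrator");

        // ... copy data from the old object model to the new one here ...

        AppDomain.Unload(oldDomain);
        AppDomain.Unload(newDomain);
    }
}

The cross-domain marshalling is exactly where this gets painful, which is what pushed the asker towards the two-process remoting plan described in the update above.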
Loading different versions of the same assembly
Using reflection, I need to load 2 different versions of the same assembly. Can I load the 2 versions in 2 different AppDomains in the same process? I need to do some data migration from the old version of the app to the new version. Please let me know if this is possible or should I use 2 separate processes.
[ "If you are doing it at design time (which you indicate you are not) this should help you: \nhttp://blogs.msdn.com/abhinaba/archive/2005/11/30/498278.aspx\nIf you are doing it dynamically through reflection (looks like the case here) this might help you:\nhttps://www.infosysblogs.com/microsoft/2007/04/loading_multiple_versions_of_s.html\n", "UPDATE: I thought I would post my findings as an answer. Reflection proved too complex in terms of development effort, tracking run time errors etc. I remember doing a different approach using 2 different processes when faced with a similar situation a long time back (Thank you Brandon).\nThis is the plan: Nothing elegant but easier in terms of development and troubleshooting. Since it is a one time job, we just have to make it work.\nHost a remoting process (which I call the server) having the new version of the application. A remoting client has references for the older version. \nRemoting client instantiates and loads the objects with data required for migration. \nConvert the old objects into common serializable objects and pass as parameters to the server.\nRemoting Server uses the common data to instantiate and load the new objects. Invokes the functions on the new types to persist their data.\n" ]
[ 14, 5 ]
[]
[]
[ ".net", "assemblies", "c#", "dll", "reflection" ]
stackoverflow_0000058035_.net_assemblies_c#_dll_reflection.txt
Q: Nested SQL Server transaction performing cascade delete Suppose I have a table called Companies that has a DepartmentID column. There's also a Departments table that has an EmployeeID column. Of course I have an Employee table as well. The problem is that I want to delete a company, so first I have to delete all the employees for every department and then all the departments in the company. Cascade Delete is not an option, therefore I wish to use nested transactions. I'm new to SQL so I would appreciate your help. A: I'm not sure why you need nested transactions here. You only need one actual transaction: BEGIN TRAN DELETE FROM Employee FROM Employee INNER JOIN Department ON Employee.DepartmentID = Department.DepartmentID INNER JOIN Company ON Department.CompanyID = Company.CompanyID WHERE Company.CompanyID = @CompanyID DELETE FROM Department FROM Department INNER JOIN Company ON Department.CompanyID = Company.CompanyID WHERE Company.CompanyID = @CompanyID DELETE FROM Company WHERE Company.CompanyID = @CompanyID COMMIT TRAN Note the double FROM, that is not a typo, it's the correct SQL syntax for performing a JOIN in a DELETE. Each statement is atomic, either the entire DELETE will succeed or fail, which isn't that important in this case because the entire batch will either succeed or fail. BTW- I think you had your relationships backwards. The Department would not have an EmployeeID, the Employee would have a DepartmentID. A: I'm not answering your question, but foreign keys are the way to go, why is it not an option? As for nested transactions they are: BEGIN delete from Employee where departmentId = 1; BEGIN delete from Department where companyId = 2; BEGIN delete from Company where companyId = 2; END END END Programmatically it looks different of course, but that'd depend on the platform you are using
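If the batch is issued from application code rather than a stored procedure, the same single-transaction shape carries over directly. Here is a hedged ADO.NET sketch - the connection string, method name and exact schema are placeholders, and the SQL is the same three-DELETE batch from the first answer:

using System.Data.SqlClient;

class CompanyDeleter
{
    const string DeleteBatch = @"
DELETE Employee FROM Employee
 INNER JOIN Department ON Employee.DepartmentID = Department.DepartmentID
 INNER JOIN Company ON Department.CompanyID = Company.CompanyID
 WHERE Company.CompanyID = @CompanyID;
DELETE Department FROM Department
 INNER JOIN Company ON Department.CompanyID = Company.CompanyID
 WHERE Company.CompanyID = @CompanyID;
DELETE FROM Company WHERE CompanyID = @CompanyID;";

    public static void DeleteCompany(string connectionString, int companyId)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction())
            {
                try
                {
                    SqlCommand command = new SqlCommand(DeleteBatch, connection, transaction);
                    command.Parameters.AddWithValue("@CompanyID", companyId);
                    command.ExecuteNonQuery();
                    transaction.Commit(); // all three DELETEs succeed or none do
                }
                catch
                {
                    transaction.Rollback();
                    throw;
                }
            }
        }
    }
}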
Nested SQL Server transaction performing cascade delete
Suppose I have a table called Companies that has a DepartmentID column. There's also a Departments table that has an EmployeeID column. Of course I have an Employee table as well. The problem is that I want to delete a company, so first I have to delete all the employees for every department and then all the departments in the company. Cascade Delete is not an option, therefore I wish to use nested transactions. I'm new to SQL so I would appreciate your help.
[ "I'm not sure why you need nested transactions here. You only need one actual transaction:\nBEGIN TRAN\n\nDELETE FROM Employee\n FROM Employee\n INNER JOIN Department ON Employee.DepartmentID = Department.DepartmentID\n INNER JOIN Company ON Department.CompanyID = Company.CompanyID\n WHERE Company.CompanyID = @CompanyID\n\nDELETE FROM Department\n FROM Department\n INNER JOIN Company ON Department.CompanyID = Company.CompanyID\n WHERE Company.CompanyID = @CompanyID\n\nDELETE FROM Company\n WHERE Company.CompanyID = @CompanyID\n\nCOMMIT TRAN\n\nNote the double FROM, that is not a typo, it's the correct SQL syntax for performing a JOIN in a DELETE.\nEach statement is atomic, either the entire DELETE will succeed or fail, which isn't that important in this case because the entire batch will either succeed or fail.\nBTW- I think you had your relationships backwards. The Department would not have an EmployeeID, the Employee would have a DepartmentID.\n", "I'm not answering your question, but foreign keys are the way to go, why is it not an option?\nAs for nested transactions they are:\nBEGIN\n delete from Employee where departmentId = 1;\n BEGIN\n delete from Department where companyId = 2;\n BEGIN\n delete from Company where companyId = 2;\n END\n END\nEND\n\nProgrammatically it looks different of course, but that'd depend on the platform you are using\n" ]
[ 4, 0 ]
[]
[]
[ "cascade", "nested", "sql_server", "transactions" ]
stackoverflow_0000058916_cascade_nested_sql_server_transactions.txt
Q: Enforce SSL in code in an ashx handler I have a site which contains several ashx handlers; on a couple of the handlers I want to reject non-SSL requests. Is there a way that I can do this in code? A: If you must do it programmatically, a way I've done it in the past is to inspect the url and look for "https" in it. Redirect if you don't see that. Request.IsSecureConnection should be the preferred method, however. You may have to add additional logic to handle a loopback address. A: I think the proper way is to check the Request.IsSecureConnection property and redirect or throw if it's false A: Try using the System.Web.HttpContext.Current.Request.IsSecureConnection to validate whether they are connecting securely, and then perform whatever denial you would like after that (returning an error message, or whatever your business need is).
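Putting the IsSecureConnection suggestion into handler form, one plausible sketch looks like this (whether to return 403, as here, or redirect to the https URL is a policy choice; the class name is a placeholder):

using System.Web;

public class SecureHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Reject anything that did not arrive over SSL.
        if (!context.Request.IsSecureConnection)
        {
            context.Response.StatusCode = 403;
            context.Response.Write("SSL is required for this resource.");
            return;
        }

        // ... normal handler work goes here ...
    }

    public bool IsReusable
    {
        get { return true; }
    }
}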
Enforce SSL in code in an ashx handler
I have a site which contains several ashx handlers; on a couple of the handlers I want to reject non-SSL requests. Is there a way that I can do this in code?
[ "If you must do it programmatically, a way I've done it in the past is to inspect the url and look for \"https\" in it. Redirect if you don't see that. Request.IsSecureConnection should be the preferred method, however. You may have to add additional logic to handle a loopback address.\n", "I think the proper way is to check the Request.IsSecureConnection property and redirect or throw if it's false\n", "Try using the System.Web.HttpContext.Current.Request.IsSecureConnection to validate whether they are connecting securely, and then perform whatever denial you would like after that (returning an error message, or whatever your business need is).\n" ]
[ 5, 3, 2 ]
[]
[]
[ "asp.net", "ssl" ]
stackoverflow_0000059000_asp.net_ssl.txt
Q: What is the best way to do per-user database connections in Rails What is the best way to do per-user database connections in Rails? I realize this is a poor Rails design practice, but we're gradually replacing an existing web application that uses one database per user. A complete redesign/rewrite is not feasible. A: Put something like this in your application controller. I'm using the subdomain plus "_clientdb" to pick the name of the database. I have all the databases using the same username and password, so I can grab that from the db config file. Hope this helps! class ApplicationController < ActionController::Base before_filter :hijack_db def hijack_db db_name = request.subdomains.first + "_clientdb" # lets manually connect to the proper db ActiveRecord::Base.establish_connection( :adapter => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['adapter'], :host => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['host'], :username => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['username'], :password => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['password'], :database => db_name ) end end A: Take a look at ActiveRecord::Base.establish_connection. That's how you connect to a different database server. I can't be of much more help since I don't know how you recognize the user or map it to it's database, but I suppose a master database will have that info (and the connection info should be on the database.yml file). Best of luck.
What is the best way to do per-user database connections in Rails
What is the best way to do per-user database connections in Rails? I realize this is a poor Rails design practice, but we're gradually replacing an existing web application that uses one database per user. A complete redesign/rewrite is not feasible.
[ "Put something like this in your application controller. I'm using the subdomain plus \"_clientdb\" to pick the name of the database. I have all the databases using the same username and password, so I can grab that from the db config file.\nHope this helps!\nclass ApplicationController < ActionController::Base\n\n before_filter :hijack_db\n\n def hijack_db\n db_name = request.subdomains.first + \"_clientdb\"\n\n # lets manually connect to the proper db\n ActiveRecord::Base.establish_connection(\n :adapter => ActiveRecord::Base.configurations[ENV[\"RAILS_ENV\"]]['adapter'],\n :host => ActiveRecord::Base.configurations[ENV[\"RAILS_ENV\"]]['host'],\n :username => ActiveRecord::Base.configurations[ENV[\"RAILS_ENV\"]]['username'],\n :password => ActiveRecord::Base.configurations[ENV[\"RAILS_ENV\"]]['password'],\n :database => db_name\n )\n end\nend\n\n", "Take a look at ActiveRecord::Base.establish_connection. That's how you connect to a different database server. I can't be of much more help since I don't know how you recognize the user or map it to it's database, but I suppose a master database will have that info (and the connection info should be on the database.yml file).\nBest of luck.\n" ]
[ 10, 1 ]
[]
[]
[ "ruby_on_rails" ]
stackoverflow_0000058755_ruby_on_rails.txt
Q: How to check which locale is a .NET application running under, without having access to its sourcecode? Context: I'm in charge of running a service written in .NET. Proprietary application. It uses a SQL Server database. It ran as a user member of the Administrators group in the local machine. It worked alright before I added the machine to a domain. So, I added the machine to a domain (Win 2003) and changed the user to a member of the Power Users group and now, the Problem: Some of the SQL sentences it tries to execute are "magically" in spanish localization (where , separates floating point numbers instead of .), leading to errors. There are fewer columns in the INSERT statement than values specified in the VALUES clause. The number of values in the VALUES clause must match the number of columns specified in the INSERT statement. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) Operating System and Regional Settings in the machine are in English. I asked the provider of the application and he said: Looks like you have a combination of code running under Spanish locale, and SQL server under English locale. So the SQL expects '15.28' and not '15,28' Which looks wrong to me in various levels (how can SQL Server distinguish between commas to separate arguments and commas belonging to a floating point number?). So, the code seems to be grabbing the spanish locale from somewhere, I don't know if it's the user it runs as, or someplace else (global policy, maybe?). But the question is What are the places where localization is defined on a machine/user/domain basis? I don't know all the places I must search for the culprit, so please help me to find it! A: There are two types of localisation in .NET, both the settings for the cultures can be found in these variables (fire up a .NET command line app on the machine to see what it says): System.Threading.Thread.CurrentThread.CurrentCulture & System.Threading.Thread.CurrentThread.CurrentUICulture http://msdn.microsoft.com/en-us/library/system.threading.thread_members.aspx They relate to the settings in the control panel (in the regional settings part). Create a .NET command line app, then just call ToString() on the above properties, that should tell you which property to look at. Edit: It turns out the settings for the locales per user are held here: HKEY_CURRENT_USER\Control Panel\International It might be worth inspecting the registry of the user with the spanish locale, and comparing it to one who is set to US or whichever locale you require. A: You can set it in the thread context in which your code is executing. System.Threading.Thread.CurrentThread.CurrentCulture A: Great, I created the console app and indeed, the app is not crazy, CurrentCulture is in spanish, but for THAT User in THAT machine only. If I run the console app as another user it returns english for all cultures. Should I open a new question asking where are user-wise locale settings? A: Well if it's user specific, check out the Regional and Language Options control panel. <rant>On a side note, kick the developer for not being culture aware when using strings.</rant> A: Found out why it happened in that machine only. It was the only one where I actually logged into with that user, then the domain controller set the regional settings as spanish for it.
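The throwaway console app suggested in the first answer is only a few lines; note the correct namespace is System.Threading:

using System;
using System.Threading;

class ShowCulture
{
    static void Main()
    {
        // Run this on the affected machine as the service's user account.
        Console.WriteLine("CurrentCulture:   " + Thread.CurrentThread.CurrentCulture.Name);
        Console.WriteLine("CurrentUICulture: " + Thread.CurrentThread.CurrentUICulture.Name);
    }
}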
How to check which locale is a .NET application running under, without having access to its sourcecode?
Context: I'm in charge of running a service written in .NET. Proprietary application. It uses a SQL Server database. It ran as a user member of the Administrators group in the local machine. It worked alright before I added the machine to a domain. So, I added the machine to a domain (Win 2003) and changed the user to a member of the Power Users group and now, the Problem: Some of the SQL sentences it tries to execute are "magically" in spanish localization (where , separates floating point numbers instead of .), leading to errors. There are fewer columns in the INSERT statement than values specified in the VALUES clause. The number of values in the VALUES clause must match the number of columns specified in the INSERT statement. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) Operating System and Regional Settings in the machine are in English. I asked the provider of the application and he said: Looks like you have a combination of code running under Spanish locale, and SQL server under English locale. So the SQL expects '15.28' and not '15,28' Which looks wrong to me in various levels (how can SQL Server distinguish between commas to separate arguments and commas belonging to a floating point number?). So, the code seems to be grabbing the spanish locale from somewhere, I don't know if it's the user it runs as, or someplace else (global policy, maybe?). But the question is What are the places where localization is defined on a machine/user/domain basis? I don't know all the places I must search for the culprit, so please help me to find it!
[ "There are two types of localisation in .NET, both the settings for the cultures can be found in these variables (fire up a .NET command line app on the machine to see what it says):\nSystem.Threading.Thread.CurrentThread.CurrentCulture\n&\nSystem.Threading.Thread.CurrentThread.CurrentUICulture\nhttp://msdn.microsoft.com/en-us/library/system.threading.thread_members.aspx\nThey relate to the settings in the control panel (in the regional settings part).\nCreate a .NET command line app, then just call ToString() on the above properties, that should tell you which property to look at.\nEdit:\nIt turns out the settings for the locales per user are held here:\nHKEY_CURRENT_USER\\Control Panel\\International\n\nIt might be worth inspecting the registry of the user with the spanish locale, and comparing it to one who is set to US or whichever locale you require.\n", "You can set it in the thread context in which your code is executing.\nSystem.Threading.Thread.CurrentThread.CurrentCulture\n", "Great, I created the console app and indeed, the app is not crazy, CurrentCulture is in spanish, but for THAT User in THAT machine only. If I run the console app as another user it returns english for all cultures.\nShould I open a new question asking where are user-wise locale settings?\n", "Well if it's user specific, check out the Regional and Language Options control panel.\n<rant>On a side note, kick the developer for not being culture aware when using strings.</rant>\n", "Found out why it happened in that machine only. It was the only one where I actually logged into with that user, then the domain controller set the regional settings as spanish for it.\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ ".net", "locale", "sql_server", "windows" ]
stackoverflow_0000059013_.net_locale_sql_server_windows.txt
Q: How do I disable validation in Web Data Administrator? I'm trying to run some queries to get rid of XSS in our database using Web Data Administrator but I keep running into this Potentially Dangerous Request crap. How do I disable validation of the query in Web Data Administrator? A: Go into the install directory of web data admin, usually: C:\Program Files\Microsoft SQL Server Tools\Microsoft SQL Web Data Administrator Then in the "Web" folder open the file "QueryDatabase.aspx" and edit the following line: <%@ Page language="c#" Codebehind="QueryDatabase.aspx.cs" AutoEventWireup="false" Inherits="SqlWebAdmin.query" %> Add ValidateRequest="false" to the end of it like so: <%@ Page language="c#" Codebehind="QueryDatabase.aspx.cs" AutoEventWireup="false" Inherits="SqlWebAdmin.query" ValidateRequest="false" %> NOTE: THIS IS POTENTIALLY DANGEROUS!! Be Careful!
How do I disable validation in Web Data Administrator?
I'm trying to run some queries to get rid of XSS in our database using Web Data Administrator but I keep running into this Potentially Dangerous Request crap. How do I disable validation of the query in Web Data Administrator?
[ "Go into the install directory of web data admin, usually:\nC:\\Program Files\\Microsoft SQL Server Tools\\Microsoft SQL Web Data Administrator\nThen in the \"Web\" folder open the file \"QueryDatabase.aspx\" and edit the following line:\n<%@ Page language=\"c#\" Codebehind=\"QueryDatabase.aspx.cs\" AutoEventWireup=\"false\" Inherits=\"SqlWebAdmin.query\" %>\nAdd ValidateRequest=\"false\" to the end of it like so:\n<%@ Page language=\"c#\" Codebehind=\"QueryDatabase.aspx.cs\" AutoEventWireup=\"false\" Inherits=\"SqlWebAdmin.query\" ValidateRequest=\"false\" %>\nNOTE: THIS IS POTENTIALLY DANGEROUS!! Be Careful!\n" ]
[ 1 ]
[]
[]
[ "sql", "sql_server" ]
stackoverflow_0000059180_sql_sql_server.txt
Q: VS 2005 Installer Project Version Number I am getting this error now that I hit version number 1.256.0: Error 4 Invalid product version '1.256.0'. Must be of format '##.##.####' The installer was fine with 1.255.0 but there is something about 256 (2^8) that it doesn't like. I found this stated on msdn.com: The Version property must be formatted as N.N.N, where each N represents at least one and no more than four digits. (http://msdn.microsoft.com/en-us/library/d3ywkte8(VS.80).aspx) Which would make me believe there is nothing wrong with 1.256.0 because it meets the rules stated above. Does anyone have any ideas on why this would be failing now? A: The link you reference says " This page is specific to Microsoft Visual Studio 2008/.NET Framework 3.5", but you're talking about vs2005. My guess: a 0-based range of 256 numbers ends at 255, therefore trying to use 256 exceeds that and perhaps they changed it for VS2008 Edit: I looked again and see where that link can be switched to talk about VS2005, and gives the same answer. I'm still sticking to my 0-255 theory though. Wouldn't be the first time this week I came across something incorrect in MSDN docs. A: This article says there is a major and minor max of 255. http://msdn.microsoft.com/en-us/library/aa370859(VS.85).aspx
VS 2005 Installer Project Version Number
I am getting this error now that I hit version number 1.256.0: Error 4 Invalid product version '1.256.0'. Must be of format '##.##.####' The installer was fine with 1.255.0 but there is something about 256 (2^8) that it doesn't like. I found this stated on msdn.com: The Version property must be formatted as N.N.N, where each N represents at least one and no more than four digits. (http://msdn.microsoft.com/en-us/library/d3ywkte8(VS.80).aspx) Which would make me believe there is nothing wrong with 1.256.0 because it meets the rules stated above. Does anyone have any ideas on why this would be failing now?
[ "The link you reference says \" This page is specific to Microsoft Visual Studio 2008/.NET Framework 3.5\", but you're talking about vs2005.\nMy guess: a 0-based range of 256 numbers ends at 255, therefore trying to use 256 exceeds that and perhaps they changed it for VS2008\nEdit: I looked again and see where that link can be switched to talk about VS2005, and gives the same answer. I'm still sticking to my 0-255 theory though. Wouldn't be the first time this week I came across something incorrect in MSDN docs.\n", "This article says there is a major and minor max of 255.\nhttp://msdn.microsoft.com/en-us/library/aa370859(VS.85).aspx\n" ]
[ 0, 0 ]
[]
[]
[ "installation", "visual_studio_2005" ]
stackoverflow_0000059120_installation_visual_studio_2005.txt
Q: Are you fluent in Unicode yet? Almost 5 years ago Joel Spolsky wrote this article, "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)". Like many, I read it carefully, realizing it was high time I got to grips with this "replacement for ASCII". Unfortunately, 5 years later I feel I have slipped back into a few bad habits in this area. Have you? I don't write many specifically international applications, however I have helped build many ASP.NET internet facing websites, so I guess that's not an excuse. So for my benefit (and I believe many others) can I get some input from people on the following: How to "get over" ASCII once and for all Fundamental guidance when working with Unicode. Recommended (recent) books and websites on Unicode (for developers). Current state of Unicode (5 years after Joel's article) Future directions. I must admit I have a .NET background and so would also be happy for information on Unicode in the .NET framework. Of course this shouldn't stop anyone with a differing background from commenting though. Update: See this related question also asked on StackOverflow previously. A: Since I read the Joel article and some other I18n articles I always kept a close eye on my character encoding, and it actually works if you do it consistently. If you work in a company where it is standard to use UTF-8 and everybody knows this / does this it will work. Here are some interesting articles (besides Joel's article) on the subject: http://www.tbray.org/ongoing/When/200x/2003/04/06/Unicode http://www.tbray.org/ongoing/When/200x/2003/04/26/UTF A quote from the first article; Tips for using Unicode: Embrace Unicode, don't fight it; it's probably the right thing to do, and if it weren't you'd probably have to anyhow. Inside your software, store text as UTF-8 or UTF-16; that is to say, pick one of the two and stick with it. Interchange data with the outside world using XML whenever possible; this makes a whole bunch of potential problems go away. Try to make your application browser-based rather than write your own client; the browsers are getting really quite good at dealing with the texts of the world. If you're using someone else's library code (and of course you are), assume its Unicode handling is broken until proved to be correct. If you're doing search, try to hand the linguistic and character-handling problems off to someone who understands them. Go off to Amazon or somewhere and buy the latest revision of the printed Unicode standard; it contains pretty well everything you need to know. Spend some time poking around the Unicode web site and learning how the code charts work. If you're going to have to do any serious work with Asian languages, go buy the O'Reilly book on the subject by Ken Lunde. If you have a Macintosh, run out and grab Lord Pixel's Unicode Font Inspection tool. Totally cool. If you're really going to have to get down and dirty with the data, go attend one of the twice-a-year Unicode conferences. All the experts go and if you don't know what you need to know, you'll be able to find someone there who knows. A: I spent a while working with search engine software - You wouldn't believe how many web sites serve up content with HTTP headers or meta tags which lie about the encoding of the pages. Often, you'll even get a document which contains both ISO-8859 characters and UTF-8 characters. 
Once you've battled through a few of those sorts of issues, you start taking the proper character encoding of data you produce really seriously. A: The .NET Framework uses Windows default encoding for storing strings, which turns out to be UTF-16. If you don't specify an encoding when you use most text I/O classes, you will write UTF-8 with no BOM and read by first checking for a BOM then assuming UTF-8 (I know for sure StreamReader and StreamWriter behave this way.) This is pretty safe for "dumb" text editors that won't understand a BOM but kind of cruddy for smarter ones that could display UTF-8 or the situation where you're actually writing characters outside the standard ASCII range. Normally this is invisible, but it can rear its head in interesting ways. Yesterday I was working with someone who was using XML serialization to serialize an object to a string using a StringWriter, and he couldn't figure out why the encoding was always UTF-16. Since a string in memory is going to be UTF-16 and that is enforced by .NET, that's the only thing the XML serialization framework could do. So, when I'm writing something that isn't just a throwaway tool, I specify a UTF-8 encoding with a BOM. Technically in .NET you will always be accidentally Unicode aware, but only if your user knows to detect your encoding as UTF-8. It makes me cry a little every time I see someone ask, "How do I get the bytes of a string?" and the suggested solution uses Encoding.ASCII.GetBytes() :( A: Rule of thumb: if you never munge or look inside a string and instead treat it strictly as a blob of data, you'll be much better off. Even doing something as simple as splitting words or lowercasing strings becomes tough if you want to do it "the Unicode way". And if you want to do it "the Unicode way", you'll need an awfully good library. This stuff is incredibly complex.
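For the "specify a UTF-8 encoding with a BOM" point at the end, the usual pattern is below - StreamWriter's default encoding is UTF-8 without a BOM, while new UTF8Encoding(true) makes the BOM explicit. The file name and content are placeholders.

using System.IO;
using System.Text;

class Utf8BomWriter
{
    static void Main()
    {
        // UTF8Encoding(true) emits a byte order mark; StreamWriter's
        // default encoding is UTF-8 without one.
        using (StreamWriter writer = new StreamWriter("out.txt", false, new UTF8Encoding(true)))
        {
            writer.WriteLine("caf\u00e9 - text outside the plain ASCII range");
        }
    }
}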
Are you fluent in Unicode yet?
Almost 5 years ago Joel Spolsky wrote this article, "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)". Like many, I read it carefully, realizing it was high time I got to grips with this "replacement for ASCII". Unfortunately, 5 years later I feel I have slipped back into a few bad habits in this area. Have you? I don't write many specifically international applications, however I have helped build many ASP.NET internet facing websites, so I guess that's not an excuse. So for my benefit (and I believe many others) can I get some input from people on the following: How to "get over" ASCII once and for all Fundamental guidance when working with Unicode. Recommended (recent) books and websites on Unicode (for developers). Current state of Unicode (5 years after Joel's article) Future directions. I must admit I have a .NET background and so would also be happy for information on Unicode in the .NET framework. Of course this shouldn't stop anyone with a differing background from commenting though. Update: See this related question also asked on StackOverflow previously.
[ "Since I read the Joel article and some other I18n articles I always kept a close eye to my character encoding; And it actually works if you do it consistantly. If you work in a company where it is standard to use UTF-8 and everybody knows this / does this it will work.\nHere some interesting articles (besides Joel's article) on the subject:\n\nhttp://www.tbray.org/ongoing/When/200x/2003/04/06/Unicode\nhttp://www.tbray.org/ongoing/When/200x/2003/04/26/UTF\n\nA quote from the first article; Tips for using Unicode:\n\nEmbrace Unicode, don't fight it; it's probably the right thing to do, and if it weren't you'd probably have to anyhow.\nInside your software, store text as UTF-8 or UTF-16; that is to say, pick one of the two and stick with it.\nInterchange data with the outside world using XML whenever possible; this makes a whole bunch of potential problems go away.\nTry to make your application browser-based rather than write your own client; the browsers are getting really quite good at dealing with the texts of the world.\nIf you're using someone else's library code (and of course you are), assume its Unicode handling is broken until proved to be correct.\nIf you're doing search, try to hand the linguistic and character-handling problems off to someone who understands them.\nGo off to Amazon or somewhere and buy the latest revision of the printed Unicode standard; it contains pretty well everything you need to know.\nSpend some time poking around the Unicode web site and learning how the code charts work.\nIf you're going to have to do any serious work with Asian languages, go buy the O'Reilly book on the subject by Ken Lunde.\nIf you have a Macintosh, run out and grab Lord Pixel's Unicode Font Inspection tool. Totally cool.\nIf you're really going to have to get down and dirty with the data, go attend one of the twice-a-year Unicode conferences. All the experts go and if you don't know what you need to know, you'll be able to find someone there who knows.\n\n", "I spent a while working with search engine software - You wouldn't believe how many web sites serve up content with HTTP headers or meta tags which lie about the encoding of the pages. Often, you'll even get a document which contains both ISO-8859 characters and UTF-8 characters.\nOnce you've battled through a few of those sorts of issues, you start taking the proper character encoding of data you produce really seriously.\n", "The .NET Framework uses Windows default encoding for storing strings, which turns out to be UTF-16. If you don't specify an encoding when you use most text I/O classes, you will write UTF-8 with no BOM and read by first checking for a BOM then assuming UTF-8 (I know for sure StreamReader and StreamWriter behave this way.) This is pretty safe for \"dumb\" text editors that won't understand a BOM but kind of cruddy for smarter ones that could display UTF-8 or the situation where you're actually writing characters outside the standard ASCII range.\nNormally this is invisible, but it can rear its head in interesting ways. Yesterday I was working with someone who was using XML serialization to serialize an object to a string using a StringWriter, and he couldn't figure out why the encoding was always UTF-16. Since a string in memory is going to be UTF-16 and that is enforced by .NET, that's the only thing the XML serialization framework could do. \nSo, when I'm writing something that isn't just a throwaway tool, I specify a UTF-8 encoding with a BOM. 
Technically in .NET you will always be accidentally Unicode aware, but only if your user knows to detect your encoding as UTF-8.\nIt makes me cry a little every time I see someone ask, \"How do I get the bytes of a string?\" and the suggested solution uses Encoding.ASCII.GetBytes() :(\n", "Rule of thumb: if you never munge or look inside a string and instead treat it strictly as a blob of data, you'll be much better off.\nEven doing something as simple as splitting words or lowercasing strings becomes tough if you want to do it \"the Unicode way\".\nAnd if you want to do it \"the Unicode way\", you'll need an awfully good library. This stuff is incredibly complex.\n" ]
[ 9, 4, 3, 2 ]
[]
[]
[ "ascii", "internationalization", "language_agnostic", "unicode" ]
stackoverflow_0000059105_ascii_internationalization_language_agnostic_unicode.txt
Q: What is the best way to keep an asp:button from displaying its URL on the status bar? What is the best way to keep an asp:button from displaying its URL on the status bar of the browser? The button is currently defined like this: <asp:button id="btnFind" runat="server" Text="Find Info" onclick="btnFind_Click"> </asp:button> Update: This appears to be specific to IE7; IE6 and FF do not show the URL in the status bar. A: I use FF so never noticed this, but the link does in fact appear in the status bar in IE.. I don't think you can overwrite it :( I initially thought maybe setting the ToolTip (a la "title") property might do it.. Seems it does not.. Looking at the source, what appears is nowhere to be found, so I would say this is a browser issue, I don't think you can do anything in code.. :( Update Yeah, Looks like IE always posts whatever the form action is.. Can't see a way to override it, as yet.. Perhaps try explicitly setting it via JS? Update II Done some more Googling. Don't think there really is a "nice" way of doing it.. Unless you remove the form all together and post data some other way.. Is it really worth that much? Generally this just tends to be the page name?
What is the best way to keep an asp:button from displaying its URL on the status bar?
What is the best way to keep an asp:button from displaying its URL on the status bar of the browser? The button is currently defined like this: <asp:button id="btnFind" runat="server" Text="Find Info" onclick="btnFind_Click"> </asp:button> Update: This appears to be specific to IE7; IE6 and FF do not show the URL in the status bar.
[ "I use FF so never noticed this, but the link does in fact appear in the status bar in IE..\nI dont think you can overwrite it :( I initially thought maybe setting the ToolTip (al la \"title\") property might do it.. Seems it does not..\nLooking at the source, what appears is nowhere to be found, so I would say this is a browser issue, I don't think you can do anything in code.. :(\nUpdate\nYeah, Looks like IE always posts whatever the form action is.. Can't see a way to override it, as yet..\nPerhaps try explicitly setting it via JS?\nUpdate II\nDone some more Googleing. Don't think there really is a \"nice\" way of doing it.. Unless you remove the form all together and post data some other way..\nIs it really worth that much? Generally this just tends to be the page name?\n" ]
[ 0 ]
[ "I don't see a link, I see this:\njavascript:__doPostBack('btn','');\n\nEDIT: Sorry, was looking at a LinkButton, not an ASP:Button. The ASP:Button shows the forms ACTION element like stated. \nBut, if you are trying to hide the DoPostBackCall, the only way to do that is to directly manipulate window.status with javascript. The downside is most browsers don't allow this anymore.\nTo do that, in your page_load add:\nbtnFind.Attributes.Add(\"onmouseover\",\"window.status = '';\");\nbtnFind.Attributes.Add(\"onmouseout\",\"window.status = '';\");\n\n" ]
[ -1 ]
[ "asp.net" ]
stackoverflow_0000059182_asp.net.txt
Q: Developing drivers with no info How does the open-source/free software community develop drivers for products that offer no documentation? A: How do you reverse engineer something? You observe the input and output, and develop a set of rules or models that describe the operation of the object. Example: Let's say you want to develop a USB camera driver. The "black box" is the software driver. Develop hooks into the OS and/or driver so you can see the inputs and outputs of the driver Generate typical inputs, and record the outputs Analyze the outputs and synthesize a model that describes the relationship between the input and output Test the model - put it in place of the black box driver, and run your tests If it does everything you need, you're done, if not rinse and repeat Note that this is just a regular problem solving/scientific process. For instance, weather forecasters do the same thing - they observe the weather, test the current conditions against the model, which predicts what will happen over the next few days, and then compare the model's output to reality. When it doesn't match they go back and adjust the model. This method is slightly safer (legally) than clean room reverse engineering, where someone actually decompiles the code, or disassembles the product, analyzes it thoroughly, and makes a model based on what they saw. Then the model (AND NOTHING ELSE) is passed to the developers replicating the functionality of the product. The engineer who took the original apart, however, cannot participate because he might bring copyrighted portions of the code/design and inadvertently put them in the new code. If you never disassemble or decompile the product, though, you should be in legally safe waters - the only problem left is that of patents. -Adam A: Usually by reverse engineering the code. There might be legal issues in some countries, though. Reverse Engineering Reverse engineering Windows USB device drivers for the purpose of creating compatible device drivers for Linux Nvidia cracks down on third party driver development A: This is a pretty vague question, but I would say reverse engineering. How they go about that is dependent on what kind of device it is and what is available for it. In many cases the device may have a similar core chipset to another device that can be modified to work.
Developing drivers with no info
How does the open-source/free software community develop drivers for products that offer no documentation?
[ "How do you reverse engineer something?\n\nYou observe the input and output, and develop a set of rules or models that describe the operation of the object.\n\nExample:\nLet's say you want to develop a USB camera driver. The \"black box\" is the software driver.\n\nDevelop hooks into the OS and/or driver so you can see the inputs and outputs of the driver\nGenerate typical inputs, and record the outputs\nAnalyze the outputs and synthesize a model that describes the relationship between the input and output\nTest the model - put it in place of the black box driver, and run your tests\nIf it does everything you need, you're done, if not rinse and repeat\n\nNote that this is just a regular problem solving/scientific process. For instance, weather forecasters do the same thing - they observe the weather, test the current conditions against the model, which predicts what will happen over the next few days, and then compare the model's output to reality. When it doesn't match they go back and adjust the model.\nThis method is slightly safer (legally) than clean room reverse engineering, where someone actually decompiles the code, or disassembles the product, analyzes it thoroughly, and makes a model based on what they saw. Then the model (AND NOTHING ELSE) is passed to the developers replicating the functionality of the product. The engineer who took the original apart, however, cannot participate because he might bring copyrighted portions of the code/design and inadvertently put them in the new code.\nIf you never disassemble or decompile the product, though, you should be in legally safe waters - the only problem left is that of patents.\n-Adam\n", "Usually by reverse engineering the code. There might be legal issues in some countries, though.\n\nReverse Engineering\nReverse engineering Windows USB device drivers for the purpose of\ncreating compatible device drivers for Linux\nNvidia cracks down on third party driver development\n\n", "This is a pretty vague question, but I would say reverse engineering. How they go about that is dependent on what kind of device it is and what is available for it. In many cases the device may have a similar core chipset to another device that can be modified to work.\n" ]
[ 7, 2, 0 ]
[]
[]
[ "bsd", "drivers", "kernel", "linux" ]
stackoverflow_0000059194_bsd_drivers_kernel_linux.txt
Q: Application Testing Is the real benefit in TDD the actual testing of the application, or the benefits that writing a testable application brings to the table? I ask because I feel too often the conversation revolves so much around testing, and not the total benefits package. A: TDD helps you design your software. The tests become the design. By writing the test first you think about your code from a consumer perspective, making a more user friendly and more compact software design. Also, by applying TDD you typically end up writing your code in a way where you can supply test mocks and stubs. This leads to less coupled software, making it easier to change and maintain over time. So I guess a lot of the talk around TDD is about testing, but by doing that other big benefits follow, such as quality (coverage), flexibility (decoupling), better design (think as the consumer of the API). A: The real improvement is that it is a good way to force you to really think through the design and implementation. Then, once you've prepared the tests and written the code, solutions to unforeseen problems appear more easily. Something that usually happens to me that is a good analogy: When I'm going to post a question to a forum or IRC channel, I like to have the problems well written and fully described, many times the process of preparing a well written and complete description of the problem magically makes the solution appear. A: The real benefit of TDD is supposed to be that it allows you to modify/refactor/enhance your application without worrying about whether you've broken existing functionality. The fact that writing unit tests tends to result in loosely coupled code and better architecture isn't necessarily the point of TDD, but I think it's hard to have one without the other. You can't really experience the benefit of TDD unless you have unit tests with good coverage. In order to do that, you're going to have to write testable code. That's why the two are often used in conjunction or in place of one another. A: Automated testing is such a time saver and confidence booster when you are developing a product that you'll ship multiple versions of. With automated tests, you know that you haven't broken anything between versions. This is especially helpful when your product is something that people can write add-ons for - you don't want to break their add-ons between versions. With TDD, you get a good suite of tests as you develop. Without TDD writing those tests is much more difficult. A: Michael Feathers has an insightful blog post about this titled The Flawed Theory Behind Unit Testing. Seriously, go read it. The punch line is All of these techniques have been shown to increase quality. And, if we look closely we can see why: all of them force us to reflect on our code. but you should read the full post for the context.
Application Testing
Is the real benefit in TDD the actual testing of the application, or the benefits that writing a testable application brings to the table? I ask because I feel too often the conversation revolves so much around testing, and not the total benefits package.
[ "TDD helps you design your software. The tests becomes the design. By writing the test first you think about your code from a consumer perspective, making a more user friendly and more compact software design.\nAlso, by applying TDD you typically end up writing your code in a way where you can supply test mocks and stubs. This leads to less coupled software, making it easier to change and maintain over time.\nSo I guess allot of the talk around TDD is about testing, but by doing that other big benefits follow, such as quality (coverage), flexibility (decoupling), better design (think as the consumer of the API).\n", "The real improvement is that it is a good way to force you to really think through the design and implementation. Then, once you've prepared the tests and written the code, solutions to unforeseen problems appear more easily.\nSomething that usually happens to me that is a good analogy: When I'm going to post a question to a forum or IRC channel, I like to have the problems well written and fully described, many times the process of preparing a well written and complete description of the problem magically makes the solution appear.\n", "The real benefit of TDD is supposed to be that it allows you to modify/refactor/enhance your application without worrying about whether you've broken existing functionality. The fact that writing unit tests tends to result in loosely coupled code and better architecture isn't necessarily the point of TDD, but I think it's hard to have one without the other. \nYou can't really experience the benefit of TDD unless you have unit tests with good coverage. In order to do that, you're going to have to write testable code. That's why the two are often used in conjunction or in place of one another.\n", "Automated testing is such a time saver and confidence booster when you are developing a product that you'll ship multiple versions of. With automated tests, you know that you haven't broken anything between versions. This especially helpful when your product is something that people can write add-ons for - you don't want to break their add-ons between versions.\nWith TDD, you get a good suite of tests as you develop. Without TDD writing those tests is much more difficult.\n", "Michael Feathers has an insightful blog post about this titled The Flawed Theory Behind Unit Testing. Seriously, go read it. The punch line is \n\nAll of these techniques have been shown to increase quality. And, if we look closely we can see why: all of them force us to reflect on our code.\n\nbut you should read the full post for the context.\n" ]
[ 7, 4, 1, 0, 0 ]
[ "Automated testing keeps humans from doing a machine's job.\nTest-driven development maximizes the amount of automated testing.\nBeyond a certain point, of course, a human is still required. You reach diminishing returns when you try to apply TDD beyond that point.\n" ]
[ -1 ]
[ "tdd" ]
stackoverflow_0000050954_tdd.txt
Q: Wrap an Oracle schema update in a transaction I've got a program that periodically updates its database schema. Sometimes, one of the DDL statements might fail and if it does, I want to roll back all the changes. I wrap the update in a transaction like so: BEGIN TRAN; CREATE TABLE A (PKey int NOT NULL IDENTITY, NewFieldKey int NULL, CONSTRAINT PK_A PRIMARY KEY (PKey)); CREATE INDEX A_2 ON A (NewFieldKey); CREATE TABLE B (PKey int NOT NULL IDENTITY, CONSTRAINT PK_B PRIMARY KEY (PKey)); ALTER TABLE A ADD CONSTRAINT FK_B_A FOREIGN KEY (NewFieldKey) REFERENCES B (PKey); COMMIT TRAN; As we're executing, if one of the statements fail, I do a ROLLBACK instead of a COMMIT. This works great on SQL Server, but doesn't have the desired effect on Oracle. Oracle seems to do an implicit COMMIT after each DDL statement: http://www.orafaq.com/wiki/SQL_FAQ#What_are_the_difference_between_DDL.2C_DML_and_DCL_commands.3F http://infolab.stanford.edu/~ullman/fcdb/oracle/or-nonstandard.html#transactions Is there any way to turn off this implicit commit? A: You can not turn this off. Fairly easy to work around by designing your scripts to drop tables in the event they already exist etc... You can look at using FLASHBACK database, I believe you can do this at the schema/object level but check the docs to confirm that. You would need to be on 10G for that to work.
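Since the implicit commit can't be disabled, a common workaround - sketched here under the assumption that what you really need is a safely re-runnable script - is to guard each destructive DDL statement in a PL/SQL block that swallows only the "table does not exist" error. The table name is just the one from the question:

BEGIN
   EXECUTE IMMEDIATE 'DROP TABLE A';
EXCEPTION
   WHEN OTHERS THEN
      -- ORA-00942: table or view does not exist; re-raise anything else
      IF SQLCODE != -942 THEN
         RAISE;
      END IF;
END;
/

The same pattern works for other object types, with their corresponding error codes.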
Wrap an Oracle schema update in a transaction
I've got a program that periodically updates its database schema. Sometimes, one of the DDL statements might fail and if it does, I want to roll back all the changes. I wrap the update in a transaction like so: BEGIN TRAN; CREATE TABLE A (PKey int NOT NULL IDENTITY, NewFieldKey int NULL, CONSTRAINT PK_A PRIMARY KEY (PKey)); CREATE INDEX A_2 ON A (NewFieldKey); CREATE TABLE B (PKey int NOT NULL IDENTITY, CONSTRAINT PK_B PRIMARY KEY (PKey)); ALTER TABLE A ADD CONSTRAINT FK_B_A FOREIGN KEY (NewFieldKey) REFERENCES B (PKey); COMMIT TRAN; As we're executing, if one of the statements fail, I do a ROLLBACK instead of a COMMIT. This works great on SQL Server, but doesn't have the desired effect on Oracle. Oracle seems to do an implicit COMMIT after each DDL statement: http://www.orafaq.com/wiki/SQL_FAQ#What_are_the_difference_between_DDL.2C_DML_and_DCL_commands.3F http://infolab.stanford.edu/~ullman/fcdb/oracle/or-nonstandard.html#transactions Is there any way to turn off this implicit commit?
[ "You can not turn this off. Fairly easy to work around by designing your scripts to drop tables in the event they already exist etc...\nYou can look at using FLASHBACK database, I believe you can do this at the schema/object level but check the docs to confirm that. You would need to be on 10G for that to work. \n" ]
[ 6 ]
[]
[]
[ "ddl", "oracle", "transactions" ]
stackoverflow_0000059303_ddl_oracle_transactions.txt
Q: How do I use a start commit hook in TortoiseSVN to setup a custom log entry? I'd like to automate TortoiseSVN as part of a commit process. Specifically I'd like to dynamically create a log entry for the commit dialog. I know that I can launch the commit dialog either from the commandline or by right clicking on a folder and selecting svncommit. I'd like to use the start commit hook to setup a log entry. I thought this worked by passing an entry file name in the MESSAGEFILE variable but when I add a hook script it cannot see this variable (hook launched successfully after right clicking and choosing svncommit). When I try using the commandline I use the /logmsgfile parameter but it seems to have no effect. I'm using tortoisesvn 1.5.3. A: Looks like it was my own misunderstanding of the API that caused my problem. Solution: 1) I've added a start commit hook script to TortoiseSVN using the hooks gui in the settings area of the right click menu. 2) The script receives 3 pieces of information: PATH MESSAGEFILE CWD For details see: Manual These are passed as command line arguments to the script - for some reason I had thought they were set as temporary environment variables. My script then simply opens the file specified by the second argument and adds in the custom text. When the commit dialog comes up the custom text is there. 3) Best of all if tortoisesvn is launched from a script directly into the commit dialog: e.g. [ tortoiseproc /command:commit /path:. /closeonend:1 ] The hooks are still called. A: If you just need a static template, set the tsvn:logtemplate property. For dynamic generation, the /logmsgfile parameter does work, but it seems to need the full path. A batch file that looks like the following might work for you. GenerateLogMsg.exe > tmp.msg "C:\Program Files\TortoiseSVN\bin\TortoiseProc.exe" /command:commit /path:. /logmsgfile:"C:\Documents and Settings\User\My Documents\Visual Studio Projects\Project\tmp.msg"
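For illustration, a minimal start-commit hook along the lines of the first answer: TortoiseSVN passes PATH, MESSAGEFILE and CWD as command-line arguments, so a batch hook can seed the log message by appending to the file named in the second argument. The template text here is just a placeholder:

@echo off
rem %1 = PATH, %2 = MESSAGEFILE, %3 = CWD
echo Ticket: >> %2
echo Summary of change: >> %2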
How do I use a start commit hook in TortoiseSVN to setup a custom log entry?
I'd like to automate TortoiseSVN as part of a commit process. Specifically I'd like to dynamically create a log entry for the commit dialog. I know that I can launch the commit dialog either from the commandline or by right clicking on a folder and selecting svncommit. I'd like to use the start commit hook to setup a log entry. I thought this worked by passing an entry file name in the MESSAGEFILE variable but when I add a hook script it cannot see this variable (hook launched successfully after right clicking and choosing svncommit). When I try using the commandline I use the /logmsgfile parameter but it seems to have no effect. I'm using tortoisesvn 1.5.3.
[ "Looks like it was my own misunderstanding of the the API that caused by a problem.\nSolution:\n1) I've added a start commit hook script to TortoiseSVN using the hooks gui in the settings area of the right click menu.\n2) The script receive 3 pieces of information: PATH MESSAGEFILE CWD\n For details see: Manual\n These are passed as command line arguements to the script - for some reason I had thought they were set as temporary environmental variables.\nMy script then simply opens the file specified by the second arguement and adds in the custom text.\nWhen the commit dialog comes up the custom text is there.\n3) Best of all if tortoisesvn is launched from a script directly into the commit dialog:\ne.g. [ tortoiseproc /command:commit /path:. /closeonend:1 ]\nThe hooks are still called.\n", "If you just need a static template, set the tsvn:logtemplate property.\nFor dynamic generation, the /logmsgfile parameter does work, but it seems to need the full path. A batch file that looks like the following might work for you.\nGenerateLogMsg.exe > tmp.msg\n\"C:\\Program Files\\TortoiseSVN\\bin\\TortoiseProc.exe\" /command:commit /path:. /logmsgfile:\"C:\\Documents and Settings\\User\\My Documents\\Visual Studio Projects\\Project\\tmp.msg\"\n\n" ]
[ 2, 1 ]
[]
[]
[ "tortoisesvn" ]
stackoverflow_0000059007_tortoisesvn.txt
Q: Default button size? How do I create a button control (with CreateWindow of a BUTTON window class) that has a standard system-wide size (especially height) that's consistent with the rest of Windows applications? I should of course take DPI into account and probably other settings. Remark: Using USE_CW_DEFAULT for width and height results in a 0, 0 size button, so that's not a solution. A: This is what MSDN has to say: Design Specifications and Guidelines - Visual Design: Layout. The default size of a button is 50x14 DLUs, which can be calculated to pixels using the examples shown for GetDialogBaseUnits. The MapDialogRect function seems to do the calculation for you. A: In the perfect, hassle-free world... To create a standard size button we would have to do this: LONG units = GetDialogBaseUnits(); m_hButton = CreateWindow(TEXT("BUTTON"), TEXT("Close"), WS_VISIBLE | WS_CHILD | BS_DEFPUSHBUTTON, 0, 0, MulDiv(LOWORD(units), 50, 4), MulDiv(HIWORD(units), 14, 8), hwnd, NULL, hInst, NULL); where 50 and 14 are respective DLU dimensions, 4 and 8 are horizontal and vertical dialog template units respectively, based on GetDialogBaseUnits() function documentation remarks. Nothing's perfect BUT as Anders pointed out, those metrics are based on the system font. If your window uses a shell dialog font or simply anything not making your eyes bleed, you're pretty much on your own. To get your own "dialog" base units, you have to retrieve current text metrics with GetTextMetrics() and use character height and average width (tmHeight and tmAveCharWidth of the TEXTMETRIC struct respectively) and translate them with MulDiv by your own, unless you are in a dialog, then MapDialogRect() will do all the job for you. Note that tmAveCharWidth only approximates the actual average character width so it's recommended to use a GetTextExtentPoint32() function on an alphabetic character set instead. See: How to calculate dialog box units based on the current font in Visual C++ How To Calculate Dialog Base Units with Non-System-Based Font Simpler alternative If buttons are the only control you want to resize automatically, you can also use BCM_GETIDEALSIZE message Button_GetIdealSize() macro (Windows XP and up only) to retrieve optimal width and height that fits anything the button contains, though it looks pretty ugly without any margins applied around the button's text. A: @macbirdie: you should NOT use GetDialogBaseUnits(), it is based on the default system font (Ugly bitmap font). You should use MapDialogRect()
Default button size?
How do I create a button control (with CreateWindow of a BUTTON window class) that has a standard system-wide size (especially height) that's consistent with the rest of Windows applications? I should of course take DPI into account and probably other settings. Remark: Using USE_CW_DEFAULT for width and height results in a 0, 0 size button, so that's not a solution.
[ "This is what MSDN has to say: Design Specifications and Guidelines - Visual Design: Layout.\nThe default size of a button is 50x14 DLUs, which can be calculated to pixels using the examples shown for GetDialogBaseUnits.\nThe MapDialogRect function seems to do the calculation for you.\n", "In the perfect, hassle-free world...\nTo create a standard size button we would have to do this:\nLONG units = GetDialogBaseUnits();\nm_hButton = CreateWindow(TEXT(\"BUTTON\"), TEXT(\"Close\"), \n WS_VISIBLE | WS_CHILD | BS_DEFPUSHBUTTON, \n 0, 0, MulDiv(LOWORD(units), 50, 4), MulDiv(HIWORD(units), 14, 8),\n hwnd, NULL, hInst, NULL);\n\nwhere 50 and 14 are respective DLU dimensions, 4 and 8 are horizontal and vertical dialog template units respectively, based on GetDialogBaseUnits() function documentation remarks.\n\nNothing's perfect\nBUT as Anders pointed out, those metrics are based on the system font. If your window uses a shell dialog font or simply anything not making your eyes bleed, you're pretty much on your own.\nTo get your own \"dialog\" base units, you have to retrieve current text metrics with GetTextMetrics() and use character height and average width (tmHeight and tmAveCharWidth of the TEXTMETRIC struct respectively) and translate them with MulDiv by your own, unless you are in a dialog, then MapDialogRect() will do all the job for you.\nNote that tmAveCharWidth only approximates the actual average character width so it's recommended to use a GetTextExtentPoint32() function on an alphabetic character set instead.\nSee:\n\nHow to calculate dialog box units based on the current font in Visual C++\nHow To Calculate Dialog Base Units with Non-System-Based Font\n\n\nSimpler alternative\nIf buttons are the only control you want to resize automatically, you can also use BCM_GETIDEALSIZE message Button_GetIdealSize() macro (Windows XP and up only) to retrieve optimal width and height that fits anything the button contains, though it looks pretty ugly without any margins applied around the button's text.\n", "@macbirdie: you should NOT use GetDialogBaseUnits(), it is based on the default system font (Ugly bitmap font). You should use MapDialogRect()\n" ]
[ 9, 9, 1 ]
[]
[]
[ "winapi" ]
stackoverflow_0000058620_winapi.txt
Q: User Initiated Kernel dump in Windows XP I remember watching a webcast from Mark Russinovich showing the sequence of keyboard keys for a user initiated kernel dump. Can somebody refresh my memory on the exact order of the keys. Please note this is for XP. A: http://psacake.com/web/jr.asp contains full instructions, and here's an excerpt: While it may seem odd to think about purposefully causing a Blue Screen Of Death (BSOD), Microsoft includes such a provision in Windows XP. This might come in handy for testing and troubleshooting your Startup And Recovery settings, Event logging, and for demonstration purposes. Here's how to create a BSOD: Launch the Registry Editor (Regedit.exe). Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters. Go to Edit, select New | DWORD Value and name the new value CrashOnCtrlScroll. Double-click the CrashOnCtrlScroll DWORD Value, type 1 in the Value Data textbox, and click OK. Close the Registry Editor and restart Windows XP. When you want to cause a BSOD, press and hold down the [Ctrl] key on the right side of your keyboard, and then tap the [ScrollLock] key twice. Now you should see the BSOD. If your system reboots instead of displaying the BSOD, you'll have to disable the Automatically Restart setting in the System Properties dialog box. To do so, follow these steps: Press [Windows]-Break. Select the Advanced tab. Click the Settings button in the Startup And Recovery panel. Clear the Automatically Restart check box in the System Failure panel. Click OK twice. Here's how you remove the BSOD configuration: Launch the Registry Editor (Regedit.exe). Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters. Select the CrashOnCtrlScroll value, pull down the Edit menu, and select the Delete command. Close the Registry Editor and restart Windows XP. Note: Editing the registry is risky, so make sure you have a verified backup before making any changes. And I may be wrong in assuming you want BSOD, so this is a Microsoft Page showing how to capture kernel dumps: https://web.archive.org/web/20151014034039/https://support.microsoft.com/fr-ma/kb/316450 A: As far as I know, the "Create Dump" command was only added to Task Manager in Vista. The only process I know of to do this is using the adplus VBScript that comes with Debugging Tools. Short of hooking into dbghelp and programmatically doing it yourself. A: You can setup the user dump tool from Microsoft with hot keys to dump a process. However, this is a user process dump, not a kernel dump... A: I don't know of any keyboard short cuts, but are you looking for like in task manager, when you right click on a process and select "Create Dump"?
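If you prefer to script the registry change from the accepted answer rather than click through Regedit, something like this should work from a command prompt under an administrator account (XP with a PS/2 keyboard only, since the value lives under the i8042prt driver; a reboot is still required):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f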
User Initiated Kernel dump in Windows XP
I remember watching a webcast from Mark Russinovich showing the sequence of keyboard keys for a user initiated kernel dump. Can somebody refresh my memory on the exact order of the keys. Please note this is for XP.
[ "http://psacake.com/web/jr.asp contains full instructions, and here's an excerpt:\n\nWhile it may seem odd to think about purposefully causing a Blue Screen Of Death (BSOD), Microsoft includes such a provision in Windows XP. This might come in handy for testing and troubleshooting your Startup And Recovery settings, Event logging, and for demonstration purposes.\n\nHere's how to create a BSOD:\n\nLaunch the Registry Editor (Regedit.exe).\nGo to HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\i8042prt\\Parameters.\nGo to Edit, select New | DWORD Value and name the new value CrashOnCtrlScroll.\nDouble-click the CrashOnCtrlScroll DWORD Value, type 1 in the Value Data textbox, and click OK.\nClose the Registry Editor and restart Windows XP.\nWhen you want to cause a BSOD, press and hold down the [Ctrl] key on the right side of your keyboard, and then tap the [ScrollLock] key twice. Now you should see the BSOD.\n\nIf your system reboots instead of displaying the BSOD, you'll have to disable the Automatically\nRestart setting in the System Properties dialog box. To do so, follow these steps:\n\nPress [Windows]-Break.\nSelect the Advanced tab.\nClick the Settings button in the Startup And Recovery panel.\nClear the Automatically Restart check box in the System Failure panel.\nClick OK twice.\n\nHere's how you remove the BSOD configuration:\n\nLaunch the Registry Editor (Regedit.exe).\nGo to HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\i8042prt\\Parameters.\nSelect the CrashOnCtrlScroll value, pull down the Edit menu, and select the Delete command.\nClose the Registry Editor and restart Windows XP.\nNote: Editing the registry is risky, so make sure you have a verified backup before making any changes. \n\nAnd I may be wrong in assuming you want BSOD, so this is a Microsoft Page showing how to capture kernel dumps:\nhttps://web.archive.org/web/20151014034039/https://support.microsoft.com/fr-ma/kb/316450\n", "As far as I know, the \"Create Dump\" command was only added to Task Manager in Vista. The only process I know of to do this is using the adplus VBScript that comes with Debugging Tools. Short of hooking into dbghelp and programmatically doing it yourself.\n", "You can setup the user dump tool from Microsoft with hot keys to dump a process. However, this is a user process dump, not a kernel dump... \n", "I don't know of any keyboard short cuts, but are you looking for like in task manager, when you right click on a process and select \"Create Dump\"?\n" ]
[ 8, 1, 1, 0 ]
[]
[]
[ "kernel", "memory_dump", "windows_xp" ]
stackoverflow_0000059313_kernel_memory_dump_windows_xp.txt
Q: Clean up Designer.vb file in Visual Studio 2008 I noticed that my Designer.vb file of one of my forms has a lot of controls that aren't even used or visible on my form. This is probably from copying controls from my other forms. Is there a way to clean up the Designer.vb file and get rid of all the unused controls? **UPDATE: This is for a Windows Form project. A: The only real solution I see is to copy all the controls into a new form by selecting them in the designer. This way the unused controls should not follow you to the next form.
Clean up Designer.vb file in Visual Studio 2008
I noticed that my Designer.vb file of one of my forms has a lot of controls that aren't even used or visible on my form. This is probably from copying controls from my other forms. Is there a way to clean up the Designer.vb file and get rid of all the unused controls? **UPDATE: This is for a Windows Form project.
[ "The only real solution I see is to copy all the controls into a new form by selecting them in the designer. This way all the not created controls should not follow you to the next form.\n" ]
[ 3 ]
[]
[]
[ "vb.net", "visual_studio_2008" ]
stackoverflow_0000059418_vb.net_visual_studio_2008.txt
Q: String initialization with and without explicit trailing terminator What is the difference between char str1[32] = "\0"; and char str2[32] = ""; A: Since you already declared the sizes, the two declarations are exactly equal. However, if you do not specify the sizes, you can see that the first declaration makes a larger string: char a[] = "a\0"; char b[] = "a"; printf("%i %i\n", sizeof(a), sizeof(b)); prints 3 2 This is because a ends with two nulls (the explicit one and the implicit one) while b ends only with the implicit one. A: Well, assuming the two cases are as follows (to avoid compiler errors): char str1[32] = "\0"; char str2[32] = ""; As people have stated, str1 is initialized with two null characters: char str1[32] = {'\0','\0'}; char str2[32] = {'\0'}; However, according to both the C and C++ standard, if part of an array is initialized, then remaining elements of the array are default initialized. For a character array, the remaining characters are all zero initialized (i.e. null characters), so the arrays are really initialized as: char str1[32] = {'\0','\0','\0','\0','\0','\0','\0','\0', '\0','\0','\0','\0','\0','\0','\0','\0', '\0','\0','\0','\0','\0','\0','\0','\0', '\0','\0','\0','\0','\0','\0','\0','\0'}; char str2[32] = {'\0','\0','\0','\0','\0','\0','\0','\0', '\0','\0','\0','\0','\0','\0','\0','\0', '\0','\0','\0','\0','\0','\0','\0','\0', '\0','\0','\0','\0','\0','\0','\0','\0'}; So, in the end, there really is no difference between the two. A: As others have pointed out, "" implies one terminating '\0' character, so "\0" actually initializes the array with two null characters. Some other answerers have implied that this is "the same", but that isn't quite right. There may be no practical difference -- as long as the only way the array is used is to reference it as a C string beginning with the first character. But note that they do indeed result in two different memory initializations, in particular they differ in whether Str[1] is definitely zero, or is uninitialized (and could be anything, depending on compiler, OS, and other random factors). There are some uses of the array (perhaps not useful, but still) that would have different behaviors. A: Unless I'm mistaken, the first will initialize 2 chars to 0 (the '\0' and the terminator that's always there, and leave the rest untouched, and the last will initialize only 1 char (the terminator).
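A small compilable C sketch of the points made in the answers above: with an explicit size both initializers produce identical, fully zeroed arrays, while without a size the "\0" literal is one byte longer.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char str1[32] = "\0";
    char str2[32] = "";

    /* Remaining elements are zero-initialized, so the two arrays are identical. */
    printf("%d\n", memcmp(str1, str2, sizeof str1)); /* prints 0 */

    /* Without explicit sizes, "\0" stores two null bytes and "" stores one. */
    char a[] = "\0";
    char b[] = "";
    printf("%u %u\n", (unsigned)sizeof a, (unsigned)sizeof b); /* prints 2 1 */

    return 0;
}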
String initialization with and without explicit trailing terminator
What is the difference between char str1[32] = "\0"; and char str2[32] = "";
[ "Since you already declared the sizes, the two declarations are exactly equal. However, if you do not specify the sizes, you can see that the first declaration makes a larger string:\nchar a[] = \"a\\0\";\nchar b[] = \"a\";\n\nprintf(\"%i %i\\n\", sizeof(a), sizeof(b));\n\nprints\n3 2\n\nThis is because a ends with two nulls (the explicit one and the implicit one) while b ends only with the implicit one.\n", "Well, assuming the two cases are as follows (to avoid compiler errors):\nchar str1[32] = \"\\0\";\nchar str2[32] = \"\";\n\nAs people have stated, str1 is initialized with two null characters:\nchar str1[32] = {'\\0','\\0'};\nchar str2[32] = {'\\0'};\n\nHowever, according to both the C and C++ standard, if part of an array is initialized, then remaining elements of the array are default initialized. For a character array, the remaining characters are all zero initialized (i.e. null characters), so the arrays are really initialized as:\nchar str1[32] = {'\\0','\\0','\\0','\\0','\\0','\\0','\\0','\\0',\n '\\0','\\0','\\0','\\0','\\0','\\0','\\0','\\0',\n '\\0','\\0','\\0','\\0','\\0','\\0','\\0','\\0',\n '\\0','\\0','\\0','\\0','\\0','\\0','\\0','\\0'};\nchar str2[32] = {'\\0','\\0','\\0','\\0','\\0','\\0','\\0','\\0',\n '\\0','\\0','\\0','\\0','\\0','\\0','\\0','\\0',\n '\\0','\\0','\\0','\\0','\\0','\\0','\\0','\\0',\n '\\0','\\0','\\0','\\0','\\0','\\0','\\0','\\0'};\n\nSo, in the end, there really is no difference between the two.\n", "As others have pointed out, \"\" implies one terminating '\\0' character, so \"\\0\" actually initializes the array with two null characters.\nSome other answerers have implied that this is \"the same\", but that isn't quite right. There may be no practical difference -- as long the only way the array is used is to reference it as a C string beginning with the first character. But note that they do indeed result in two different memory initalizations, in particular they differ in whether Str[1] is definitely zero, or is uninitialized (and could be anything, depending on compiler, OS, and other random factors). There are some uses of the array (perhaps not useful, but still) that would have different behaviors.\n", "Unless I'm mistaken, the first will initialize 2 chars to 0 (the '\\0' and the terminator that's always there, and leave the rest untouched, and the last will initialize only 1 char (the terminator).\n" ]
[ 23, 17, 2, 0 ]
[]
[]
[ "c" ]
stackoverflow_0000049596_c.txt
Q: Is it feasible to support multiple applications of the same type that are all written in different languages? As much as we would all like to say it is a benefit to programmers to be language agnostic, is it really feasible to support multiple enterprise Web applications of the same type all written in different languages? Think about how complicated a CMS or e-commerce system can be -- now imagine supporting three different CMS platforms all written in different languages. I would hate to be known as a .NET or Java or PHP shop, but I also don't want to be the vendor who says they can support a solution they have never worked with, upsetting a client who wonders why we can't get something done right on time. Can anyone speak from experience on this? Does your company usually just suck it up and try to learn a new platform on the fly? Do you bill up-to-speed, or eat those costs? A: I think it all depends on who your clients are and what they expect. I think knowing about different technologies is good, but really when you're hired by someone, they expect you to know what you are doing. Personally, I would much rather be known for doing a really good job with a certain type of technology and, when hired, getting the job done well. If you try and go after every contract without regard to what your core competencies are, you aren't going to succeed. You'll anger the people who do hire you and make mistakes, and you'll potentially miss opportunities where you can really shine. Sometimes you have to make compromises to pay the bills, but if you aren't careful, it can bite you in the end. The large consulting firms I've worked with throw resources at it and hope they don't anger too many people. They mainly do this because they know that the people who work with the consultants and get angry when they don't get the job done aren't the ones making the decisions to keep them hired. Some of them (not all of them, I know, but definitely some) don't care if they screw up because they ultimately know they can convince the VPs and SVPs to keep them around. A: To be honest, I think you tend to see this kind of thing happen over time, no matter how disciplined the organization is. It's natural for new methodologies to come bundled in the form of new libraries, frameworks, or even languages. Keep in mind that a .NET shop may well have been an ASP/VB shop at one time. They'll probably still maintain older systems for clients, because there's little benefit to rewriting everything from scratch. I'm not sure anyone has the luxury to keep everything "the same," because language issues are minor compared to library or framework issues -- especially the ones you build yourself.
Is it feasible to support multiple applications of the same type that are all written in different languages?
As much as we would all like to say it is a benefit to programmers to be language agnostic, is it really feasible to support multiple enterprise Web applications of the same type all written in different languages? Think about how complicated a CMS or e-commerce system can be -- now imagine supporting three different CMS platforms all written in different languages. I would hate to be known as a .NET or Java or PHP shop, but I also don't want to be the vendor who says they can support a solution they have never worked with, upsetting a client who wonders why we can't get something done right on time. Can anyone speak from experience on this? Does your company usually just suck it up and try to learn a new platform on the fly? Do you bill up-to-speed, or eat those costs?
[ "I think it all depends on who your clients are and what they expect. I think knowing about different technologies is good, but really when you're hired by someone, they expect you to know what you are doing. Personally, I would much rather be known that I do a really good job with a certain type of technology and when hired, I get the job done well. \nIf you try and go after every contract without regard to what your core competencies are, you aren't going to succeed. You'll anger the people who do hire you and make mistakes, and you'll potentially miss opportunities where you can really shine. Sometimes you have to make compromises to pay the bills, but if you aren't careful, it can bite you in the end. \nThe large consulting firms I've worked with throw resources at it and hope they don't anger too many people. They mainly do this because they know that the people who work with the consultants and get angry when they don't get the job done aren't the ones making the decisions to keep them hired. To them (not all of them I know, but some definately), don't care if they screw up because they ultimately know they can convince the VPs and SVPs to keep them around. \n", "To be honest, I think you tend to see this kind of thing happen over time, no matter how disciplined the organization is. It's natural for new methodologies to come bundled in the form of new libraries, frameworks, or even languages. Keep in mind that a .NET shop may well have been a ASP/VB shop at one time. They'll probably still maintain older systems for clients, because there's little benefit to rewriting everything from scratch.\nI'm not sure anyone has the luxury to keep everything \"the same,\" because language issues are minor compared to library or framework issues -- especially the ones you build yourself.\n" ]
[ 1, 1 ]
[]
[]
[ "multilingual" ]
stackoverflow_0000059436_multilingual.txt
Q: How can I tab across a ButtonBar component in Flex? I have a button bar in Flex along with several other input controls, I have set the tabIndex property for each control and all goes well until I tab to the ButtonBar. The ButtonBar has 3 buttons but tabbing to it, only the first button gets focus, tab again and the focus goes back to the top control... How can I make tabbing go through ALL buttons in a Flex Button bar? Is there a way to do this or do I need to create individual buttons for this? This seems like a possible bug to me... A: The component is written so the user must press the left/right arrow keys when focus is within the bar to traverse the buttons--this is a fairly standard GUI behavior (you also see this in other places like radio button groups). If you look into the SDK source for ButtonBar, you can see where they've explicitly disabled tab focus for each child button as it's created: override protected function createNavItem( label:String, icon:Class = null):IFlexDisplayObject { var newButton:Button = Button(navItemFactory.newInstance()); // Set tabEnabled to false so individual buttons don't get focus. newButton.focusEnabled = false; ... If you really want to change this behavior, you can make a subclass to do it, something like this: package { import mx.controls.Button; import mx.controls.ButtonBar; import mx.core.IFlexDisplayObject; public class FocusableButtonBar extends ButtonBar { public function FocusableButtonBar() { super(); this.focusEnabled = false; } override protected function createNavItem( label:String, icon:Class=null):IFlexDisplayObject { var btn:Button = Button(super.createNavItem(label, icon)); btn.focusEnabled = true; return btn; } } }
How can I tab across a ButtonBar component in Flex?
I have a button bar in Flex along with several other input controls, I have set the tabIndex property for each control and all goes well until I tab to the ButtonBar. The ButtonBar has 3 buttons but tabbing to it, only the first button gets focus, tab again and the focus goes back to the top control... How can I make tabbing go through ALL buttons in a Flex Button bar? Is there a way to do this or do I need to create individual buttons for this? This seems like a possible bug to me...
[ "The component is written so the user must press the left/right arrow keys when focus is within the bar to traverse the buttons--this is a fairly standard GUI behavior (you also see this in other places like radio button groups). If you look into the SDK source for ButtonBar, you can see where they've explicitly disabled tab focus for each child button as it's created:\noverride protected function createNavItem(\n label:String,\n icon:Class = null):IFlexDisplayObject\n {\n var newButton:Button = Button(navItemFactory.newInstance());\n\n // Set tabEnabled to false so individual buttons don't get focus.\n newButton.focusEnabled = false;\n ...\n\nIf you really want to change this behavior, you can make a subclass to do it, something like this:\npackage {\n import mx.controls.Button;\n import mx.controls.ButtonBar;\n import mx.core.IFlexDisplayObject;\n\n public class FocusableButtonBar extends ButtonBar {\n public function FocusableButtonBar()\n {\n super();\n this.focusEnabled = false;\n }\n\n override protected function createNavItem(\n label:String, icon:Class=null):IFlexDisplayObject\n {\n var btn:Button = Button(super.createNavItem(label, icon));\n btn.focusEnabled = true;\n return btn;\n }\n }\n}\n\n" ]
[ 3 ]
[]
[]
[ "actionscript_3", "apache_flex" ]
stackoverflow_0000059196_actionscript_3_apache_flex.txt
Q: Highlight parents in xml string Say I have: <ul> <li id="x"> <a href="x">x</a> </li> <li id="y"> <a href="y">y</a> <ul> <li id="z"> <a href="z">z</a> </li> </ul> </li> </ul> I want to add a class value to all the list items that are the parents of z. So, I want to modify y but not x. Obviously, I can parse this into some kind of associative array and then recurse backwards. Any ideas how I can do it with just text processing (string replacing, regular expression, etc)? Thanks! A: I would use XSLT. You can specify to search for nodes that are ancestors of z. A: XPath has an ancestor-axis which includes all ancestors of the current context node. //*[@id="z"]/ancestor::li * any element [@id="z"] that has an attribute id with the value z (since the attribute is id there can(/should be) only be one such element) /ancestor::li all li elements that are ancestors of that see also: https://www.w3schools.com/xml/xpath_axes.asp A: Example of the whole XSLT: <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:variable name="ancestors" select="descendant::li[@id = 'z']/ancestor::li"/> <xsl:template match="li"> <xsl:copy> <!-- test if current li is in the $ancestors node-list --> <xsl:if test="count($ancestors | .) = count($ancestors)"> <xsl:attribute name="class">ancestor</xsl:attribute> </xsl:if> <xsl:apply-templates select="node() | @*"/> </xsl:copy> </xsl:template> <xsl:template match="node() | @*"> <xsl:copy> <xsl:apply-templates select="node() | @*"/> </xsl:copy> </xsl:template> </xsl:stylesheet> A: I suggest you parse it into a DOM and recurse backwards like you were thinking. Regular expressions don't work very well for nested structures with arbitrary nesting levels. A: Will add the attribute only to elements with a descendant element of 'z'. A: Thanks for the input. I figured it was impossible without parsing (or using xsl, etc)... Oh well. A: This is a good fit for jQuery if that's available to you. $("#z").parent().parent().addClass("foo"); A: In addition to John Sheehan's answer: With JQuery I'd rather use $('#z').parents('li').addClass('myClass'); than relying on the actual structure.
Highlight parents in xml string
Say I have: <ul> <li id="x"> <a href="x">x</a> </li> <li id="y"> <a href="y">y</a> <ul> <li id="z"> <a href="z">z</a> </li> </ul> </li> </ul> I want to add a class value to all the list items that are the parents of z. So, I want to modify y but not x. Obviously, I can parse this into some kind of associative array and then recurse backwards. Any ideas how I can do it with just text processing (string replacing, regular expression, etc)? Thanks!
[ "I would use XSLT. You can specify to search for nodes that are ancestors of z .\n", "xpath has an ancestor-axis which includes all ancestors of the current context node.\n//*[@id=\"z\"]/ancestor::li\n* any element\n[@id=\"z\"] that has an attribute id with the value z (since the attribute is id there can(/should be) only be one such element)\n/ancestor::li all li elements that are ancestors of that\nsee also: https://www.w3schools.com/xml/xpath_axes.asp\n", "Example of the whole XSLT:\n<xsl:stylesheet xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\" version=\"1.0\">\n\n <xsl:variable name=\"ancestors\" select=\"descendant::li[@id = 'z']/ancestor::li\"/>\n\n <xsl:template match=\"li\">\n <xsl:copy>\n <!-- test if current li is in the $ancestors node-list -->\n <xsl:if test=\"count($ancestors | .) = count($ancestors)\">\n <xsl:attribute name=\"class\">ancestor</xsl:attribute>\n </xsl:if>\n <xsl:apply-templates select=\"node() | @*\"/>\n </xsl:copy>\n </xsl:template>\n\n <xsl:template match=\"node() | @*\">\n <xsl:copy>\n <xsl:apply-templates select=\"node() | @*\"/>\n </xsl:copy>\n </xsl:template>\n\n</xsl:stylesheet>\n\n", "I suggest you parse it into a DOM and recurse backwards like you were thinking. Regular expressions don't work very well for nested structures with arbitrary nesting levels.\n", "\n\nWill add the attribute only to elements with a descendent element of 'z'. \n", "Thanks for the input. I figured it was impossible without parsing (or using xsl, etc)... Oh well.\n", "This is a good fit for jQuery if that's available to you.\n$(\"#z\").parent().parent().addClass(\"foo\");\n\n", "In addition to John Sheehan's anwser:\nWith JQuery I'd rather use\n$('#z').parents('li').addClass('myClass');\nthan relying on the actual structure.\n" ]
[ 2, 2, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "algorithm", "text", "xml" ]
stackoverflow_0000056946_algorithm_text_xml.txt
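A minimal sketch of the ancestor-axis approach from the Q&A above, written in Python with lxml (the library choice is an assumption — any XPath-capable DOM tool would do; the class name "ancestor" is illustrative, borrowed from the XSLT answer):

from lxml import etree

snippet = """<ul>
  <li id="x"><a href="x">x</a></li>
  <li id="y"><a href="y">y</a>
    <ul><li id="z"><a href="z">z</a></li></ul>
  </li>
</ul>"""

root = etree.fromstring(snippet)
# Mirror the XPath from the answer: //*[@id="z"]/ancestor::li
# selects every <li> ancestor of the element whose id is "z".
for li in root.xpath('//*[@id="z"]/ancestor::li'):
    li.set("class", "ancestor")

print(etree.tostring(root, pretty_print=True).decode())

Running this marks only li#y, confirming that the ancestor axis skips li#x.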
Q: I have a link icon next to each link. How do I exclude the link icon from images? I've got the following in my .css file creating a little image next to each link on my site: div.post .text a[href^="http:"] { background: url(../../pics/remote.gif) right top no-repeat; padding-right: 10px; white-space: nowrap; } How do I modify this snippet (or add something new) to exclude the link icon next to images that are links themselves? A: If you set the background color and have a negative right margin on the image, the image will cover the external link image. Example: a[href^="http:"] { background: url(http://en.wikipedia.org/skins-1.5/monobook/external.png) right center no-repeat; padding-right: 14px; white-space: nowrap; } a[href^="http:"] img { margin-right: -14px; border: medium none; background-color: red; } <a href="http://www.google.ca">Google</a> <br/> <a href="http://www.google.ca"> <img src="http://upload.wikimedia.org/wikipedia/en/thumb/4/4a/Commons-logo.svg/50px-Commons-logo.svg.png" /> </a> edit: If you've got a patterned background this isn't going to look great for images that have transparency. Also, your href^= selector won't work on IE7 but you probably knew that already A: It might be worth it to add a class to those <a> tags and then add another declaration to remove the background: div.post .text a.noimage{ background:none; } A: If you have the content of the links as a span, you could do this, otherwise I think you would need to give one scenario a class to differentiate it. a > span { background: url(../../pics/remote.gif) right top no-repeat; padding-right: 10px; white-space: nowrap; } a > img { /* any specific styling for images wrapped in a link (e.g. polaroid like) */ border: 1px solid #cccccc; padding: 4px 4px 25px 4px; } A: You need a class name on either the a elements you want to include or exclude. If you don't want to do this in your server side code or documents, you could add the classes with javascript as the page is loaded. With the selection logic wrapped up elsewhere, your rule could just be: a.external_link { background: url(../../pics/remote.gif) right top no-repeat; padding-right: 10px; white-space: nowrap; } It would be possible with XPath to create a pattern like yours that would also exclude a elements that had img children, however this facility has been repeatedly (2002, 2006, 2007) proposed and rejected for CSS, largely on the grounds it goes against the incremental layout principles. So, while it is possible to do neat conditional content additions as you have with a contextual selector and a prefix match on the href attribute, CSS is considerably weaker than a general purpose programming language. To do more complex things you need to move the logic up a level and write out simpler instructions for the style engine to handle.
I have a link icon next to each link. How do I exclude the link icon from images?
I've got the following in my .css file creating a little image next to each link on my site: div.post .text a[href^="http:"] { background: url(../../pics/remote.gif) right top no-repeat; padding-right: 10px; white-space: nowrap; } How do I modify this snippet (or add something new) to exclude the link icon next to images that are links themselves?
[ "If you set the background color and have a negative right margin on the image, the image will cover the external link image.\nExample:\n\n\na[href^=\"http:\"] {\r\n background: url(http://en.wikipedia.org/skins-1.5/monobook/external.png) right center no-repeat;\r\n padding-right: 14px;\r\n white-space: nowrap;\r\n}\r\na[href^=\"http:\"] img {\r\n margin-right: -14px;\r\n border: medium none;\r\n background-color: red;\r\n}\n<a href=\"http://www.google.ca\">Google</a>\r\n<br/>\r\n<a href=\"http://www.google.ca\">\r\n <img src=\"http://upload.wikimedia.org/wikipedia/en/thumb/4/4a/Commons-logo.svg/50px-Commons-logo.svg.png\" />\r\n</a>\n\n\n\nedit: If you've got a patterned background this isn't going to look great for images that have transparency. Also, your href^= selector won't work on IE7 but you probably knew that already\n", "It might be worth it to add a class to those <a> tags and then add another declaration to remove the background:\ndiv.post .text a.noimage{\n background:none;\n}\n\n", "If you have the content of the links as a span, you could do this, otherwise I think you would need to give one scenario a class to differentiate it.\na > span {\n background: url(../../pics/remote.gif) right top no-repeat;\n padding-right: 10px;\n white-space: nowrap;\n}\na > img {\n /* any specific styling for images wrapped in a link (e.g. polaroid like) */\n border: 1px solid #cccccc;\n padding: 4px 4px 25px 4px;\n}\n\n", "You need a class name on either the a elements you want to include or exclude. If you don't want to do this in your server side code or documents, you could add the classes with javascript as the page is loaded. With the selection logic wrapped up elsewhere, your rule could just be:\na.external_link\n{\n background: url(../../pics/remote.gif) right top no-repeat;\n padding-right: 10px;\n white-space: nowrap;\n}\n\nIt would be possible with XPath to create a pattern like yours that would also exclude a elements that had img children, however this facility has been repeatedly (2002, 2006, 2007) proposed and rejected for CSS, largely on the grounds it goes against the incremental layout principles.\nSo, while it is possible to do neat conditional content additions as you have with a contextual selector and a prefix match on the href attribute, CSS is considerably weaker than a general purpose programming language. To do more complex things you need to move the logic up a level and write out simpler instructions for the style engine to handle.\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "css" ]
stackoverflow_0000059423_css.txt
Q: How do I find records added to my database table in the past 24 hours? I'm using MySQL in particular, but I'm hoping for a cross-vendor solution. I'm using the NOW() function to add a timestamp as a column for each record. INSERT INTO messages (typeId, messageTime, stationId, message) VALUES (?, NOW(), ?, ?) A: SELECT * FROM messages WHERE DATE_SUB(CURDATE(),INTERVAL 1 DAY) <= messageTime A: The SQL Server query is: Select * From Messages Where MessageTime > DateAdd(dd, -1, GetDate()) As far as I can tell the (untested!) MySQL equivalent is Select * From Messages Where MessageTime > ADDDATE(NOW(), INTERVAL -1 DAY) A: For Sybase SQL Anywhere: Select * From Messages Where MessageTime > dateadd( day, -1, now() ) A: For Oracle SELECT * FROM messages WHERE messageTime > SYSDATE - 1 (The pseudo variable SYSDATE includes the time, so sysdate -1 will give you the last 24 hrs) A: There is no cross database solution, as most of them have their own date handling (and mainly interval representation) syntax and semantics. In PostgreSQL it would be SELECT * FROM messages WHERE messagetime >= now() - interval '1 day' A: If you are accessing this from an API based client (I'm guessing that is the case because of the '?'s in the query) you can do this from your program rather than through SQL. Note: The rest is for JDBC syntax, other APIs/languages will be different syntax, but should be conceptually the same. On the insert side do PreparedStatement stmt = connection.prepareStatement( "INSERT INTO messages " + "(typeId, messageTime, stationId, message) VALUES " + "(?, ?, ?, ?)" ); stmt.setInt(1, typeId); stmt.setDate(2, new java.sql.Date(System.currentTimeMillis())); stmt.setInt(3, stationId); stmt.setString(4, message); On the query side do: PreparedStatement stmt = connection.prepareStatement( "SELECT typeId, messageTime, stationId, message " + "from messages where messageTime > ?"); long yesterday = System.currentTimeMillis() - 86400000; // 86400 sec/day stmt.setDate(1,new java.sql.Date(yesterday)); That should work in a portable manner.
How do I find records added to my database table in the past 24 hours?
I'm using MySQL in particular, but I'm hoping for a cross-vendor solution. I'm using the NOW() function to add a timestamp as a column for each record. INSERT INTO messages (typeId, messageTime, stationId, message) VALUES (?, NOW(), ?, ?)
[ "SELECT * FROM messages WHERE DATE_SUB(CURDATE(),INTERVAL 1 DAY) <= messageTime\n\n", "The SQL Server query is:\nSelect *\nFrom Messages\nWhere MessageTime > DateAdd(dd, -1, GetDate())\n\nAs far as I can tell the (untested!) MySQL equivalent is\nSelect *\nFrom Messages\nWhere MessageTime > ADDDATE(NOW(), INTERVAL -1 DAY)\n\n", "For Sybase SQL Anywhere:\nSelect * From Messages Where MessageTime > dateadd( day, -1, now() )\n\n", "For Oracle \nSELECT * FROM messages WHERE messageTime > SYSDATE - 1\n\n(The psuedo variable SYSDATE includes the time, so sysdate -1 will give you the last 24 hrs)\n", "There is no cross database solution, as most of them have their own date handling (and mainly interval representation) syntax and semantics. \nIn PostgreSQL it would be\nSELECT * FROM messages WHERE messagetime >= messagetime - interval '1 day'\n\n", "If you are accessing this from an API based client (I'm guessing that is the case because of the '?'s in the query) you can do this from your program rather than through SQL.\nNote: The rest is for JDBC syntax, other APIs/languages will be different syntax, but should be conceptually the same. \nOn the insert side do\nPreparedStatement stmt = connection.prepareStatement( \n \"INSERT INTO messages \" +\n \"(typeId, messageTime, stationId, message) VALUES \" +\n \"(?, ?, ?, ?)\" );\nstmt.setInt(1, typeId);\nstmt.setDate(2, new java.sql.Date(System.currentTimeMillis()));\nstmt.setInt(3, stationId);\nstmt.setString(4, message);\n\nOn the query side do:\nPrepatedStatement stmt = connection.prepareStatement(\n \"SELECT typeId, messageTime, stationId, message \" +\n \"from messages where messageTime < ?\");\nlong yesterday = System.currentTimeMillis() - 86400000; // 86400 sec/day\nstmt.setDate(1,new java.sql.Date(yesterday));\n\nThat should work in a portable manner. \n" ]
[ 4, 2, 1, 1, 0, 0 ]
[]
[]
[ "sql" ]
stackoverflow_0000059425_sql.txt
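To make the last answer's client-side timestamp idea concrete, here is a hedged Python sketch using sqlite3 (the engine choice is illustrative — the point is that epoch arithmetic done in the client avoids vendor-specific date functions; table and column names follow the question):

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (typeId INTEGER, messageTime INTEGER, "
             "stationId INTEGER, message TEXT)")

# Store a client-side timestamp (seconds since the epoch) instead of NOW().
conn.execute("INSERT INTO messages VALUES (?, ?, ?, ?)",
             (1, int(time.time()), 7, "hello"))

# "Past 24 hours" is then plain arithmetic, portable across vendors.
yesterday = int(time.time()) - 86400  # 86400 seconds per day
rows = conn.execute("SELECT * FROM messages WHERE messageTime > ?",
                    (yesterday,)).fetchall()
print(rows)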
Q: Confused by gdb print ptr vs print "%s" 1167 ptr = (void*)getcwd(cwd, MAX_PATH_LENGTH-1); (gdb) n 1168 if (!ptr) { (gdb) print ptr $1 = 0xbff2d96c "/media/MMC-SD/partition1/aaaaaaaaaaa" (gdb) print &cwd $2 = (char (*)[3500]) 0xbff2d96c (gdb) print strlen(cwd) $3 = 36 (gdb) print "%s",cwd $4 = "/media/MMC-SD/partition1/aaaaaaaaaaa", '\0' <repeats 912 times>, "��O�001\000\000\000\000��027\000\000\000�3����EL鷠3�000��027\000\000\000\000\000\000\000\027\000\000\000\000��/�027\000\000\000�3����N����\230���鷠3�000��027\000\000\000\000\000\000\000��000\000\000\000\001\000\000\000��M鷠3����\000\000\000\000.\231�027��w\005\b\001\000"... (gdb) print "%s", ptr $5 = 0xbff2d96c "/media/MMC-SD/partition1/aaaaaaaaaaa" (gdb) Quit Why is ptr printing the string correctly but cwd not; this also affects the program and it crashes if I try to use the cwd... [edit: turns out that crash was caused by a stupid buffer overflow on this var... grr...not gdb, but the print question was still valid] A: I agree with mweerden. Trying something I believe is similar to your code, I get: (gdb) print cwd $1 = "/media", '\0' <repeats 782 times>, "\016���" ... (gdb) print (char*) cwd $2 = 0xbfc8eb84 "/media" from gdb, so it seems that since cwd was defined as char cwd[3500], gdb prints the entire array, while if you tell gdb to interpret it as a char*, it will work as you expect. If your application crashes, I would assume it is because of something else. A: The reason that cwd is printed differently in gdb is because gdb knows that ptr is a char * (I guess) and that cwd is an array of length 3500 (as shown in your output). So when printing ptr it prints the pointer value (and as a service also the string it points to) and when printing cwd it prints the whole array. I don't see any reason why using cwd instead of ptr would lead to problems, but I would need to see some code to be sure. A: That ptr is displayed as nicely-formatted string and cwd as "byte buffer" is probably specific to gdb. In any case it shouldn't affect your application; according to man 3 getcwd, ptr should point to cwd (or it should be NULL if an error occurred). Can you use ptr without crashing the program? A: What type is cwd? The above code snippet doesn't tell us that. It could be that ptr being a void* is treated differently by gdb.
Confused by gdb print ptr vs print "%s"
1167 ptr = (void*)getcwd(cwd, MAX_PATH_LENGTH-1); (gdb) n 1168 if (!ptr) { (gdb) print ptr $1 = 0xbff2d96c "/media/MMC-SD/partition1/aaaaaaaaaaa" (gdb) print &cwd $2 = (char (*)[3500]) 0xbff2d96c (gdb) print strlen(cwd) $3 = 36 (gdb) print "%s",cwd $4 = "/media/MMC-SD/partition1/aaaaaaaaaaa", '\0' <repeats 912 times>, "��O�001\000\000\000\000��027\000\000\000�3����EL鷠3�000��027\000\000\000\000\000\000\000\027\000\000\000\000��/�027\000\000\000�3����N����\230���鷠3�000��027\000\000\000\000\000\000\000��000\000\000\000\001\000\000\000��M鷠3����\000\000\000\000.\231�027��w\005\b\001\000"... (gdb) print "%s", ptr $5 = 0xbff2d96c "/media/MMC-SD/partition1/aaaaaaaaaaa" (gdb) Quit Why is ptr printing the string correctly but cwd not; this also affects the program and it crashes if I try to use the cwd... [edit: turns out that crash was caused by a stupid buffer overflow on this var... grr...not gdb, but the print question was still valid]
[ "I agree with mweerden. Trying something I believe is similar to your code, I get:\n(gdb) print cwd\n$1 = \"/media\", '\\0' <repeats 782 times>, \"\\016���\" ...\n(gdb) print (char*) cwd\n$2 = 0xbfc8eb84 \"/media\"\n\nfrom gdb, so it seems that since cwd was defined as char cwd[3500], gdb prints the entire array, while if you tell gdb to interpret it as a char*, it will work as you expect. If your application crashes, I would assume it is because of something else.\n", "The reason that cwd is printed differently in gdb is because gdb knows that ptr is a char * (I guess) and that cwd is an array of length 3500 (as shown in your output). So when printing ptr it prints the pointer value (and as a service also the string it points to) and when printing cwd it prints the whole array.\nI don't see any reason why using cwd instead of ptr would lead to problems, but I would need to see some code to be sure.\n", "That ptr is displayed as nicely-formatted string and cwd as \"byte buffer\" is probably specific to gdb. In any case it shouldn't affect your application; according to man 3 getcwd, ptr should point to cwd (or it should be NULL if an error occurred).\nCan you use ptr without crashing the program?\n", "What type is cwd? The above code snippet doesn't tell us that. It could be that ptr being a void* is treated differently by gdb.\n" ]
[ 5, 2, 1, 0 ]
[]
[]
[ "buffer_overflow", "buffer_overrun", "c", "gdb" ]
stackoverflow_0000059483_buffer_overflow_buffer_overrun_c_gdb.txt
Q: Optimize Windows Form Load Time I have a Windows Form that takes quite a bit of time to load initially. However, each subsequent request to load the Form doesn't take as long. Is there a way to optimize a Form's load time? A: You can use ngen. I also use this tip to reduce the Memory footprint on startup. The Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images, which are files containing compiled processor-specific machine code, and installs them into the native image cache on the local computer. The runtime can use native images from the cache instead of using the just-in-time (JIT) compiler to compile the original assembly. A: You need to find out where the time is going before you can optimise it. Don't just ngen it without finding that out first, as if the problem is loading a 150MB background bitmap resource then you won't have done anything useful at all with ngen. You should disregard all specific advice or hunches about optimisation which arise without any measurements being made.
Optimize Windows Form Load Time
I have a Windows Form that takes quite a bit of time to load initially. However, each subsequent request to load the Form doesn't take as long. Is there a way to optimize a Form's load time?
[ "You can use ngen.\nI also use this tip to reduce the Memory footprint on startup.\nThe Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images, which are files containing compiled processor-specific machine code, and installs them into the native image cache on the local computer. The runtime can use native images from the cache instead using the just-in-time (JIT) compiler to compile the original assembly. \n", "You need to find out where the time is going before you can optimise it. Don't just ngen it without finding that out first, as if the problem is loading a 150MB background bitmap resource then you won't have done anything useful at all with ngen. \nYou should disregard all specific advice or hunches about optimisation which arise without any measurements being made.\n" ]
[ 10, 5 ]
[]
[]
[ ".net", "c#", "optimization", "vb.net", "winforms" ]
stackoverflow_0000059479_.net_c#_optimization_vb.net_winforms.txt
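For reference, the ngen step the first answer recommends is a single command run from a Visual Studio command prompt — typically something like ngen install MyApp.exe (the assembly name here is illustrative), which compiles the assembly into the native image cache so later launches can skip JIT compilation.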
Q: Convert this delegate to an anonymous method or lambda I am new to all the anonymous features and need some help. I have gotten the following to work: public void FakeSaveWithMessage(Transaction t) { t.Message = "I drink goats blood"; } public delegate void FakeSave(Transaction t); public void SampleTestFunction() { Expect.Call(delegate { _dao.Save(t); }).Do(new FakeSave(FakeSaveWithMessage)); } But this is totally ugly and I would like to have the inside of the Do to be an anonymous method or even a lambda if it is possible. I tried: Expect.Call(delegate { _dao.Save(t); }).Do(delegate(Transaction t2) { t2.Message = "I drink goats blood"; }); and Expect.Call(delegate { _dao.Save(t); }).Do(delegate { t.Message = "I drink goats blood"; }); but these give me "Cannot convert anonymous method to type 'System.Delegate' because it is not a delegate type" compile errors. What am I doing wrong? Because of what Mark Ingram posted, seems like the best answer, though nobody's explicitly said it, is to do this: public delegate void FakeSave(Transaction t); Expect.Call(delegate { _dao.Save(t); }).Do( new FakeSave(delegate(Transaction t2) { t.Message = expected_msg; })); A: That's a well known error message. Check the link below for a more detailed discussion. http://staceyw1.wordpress.com/2007/12/22/they-are-anonymous-methods-not-anonymous-delegates/ Basically you just need to put a cast in front of your anonymous delegate (your lambda expression). In case the link ever goes down, here is a copy of the post: They are Anonymous Methods, not Anonymous Delegates. Posted on December 22, 2007 by staceyw1 It is not just a talking point because we want to be difficult. It helps us reason about what exactly is going on. To be clear, there is no such thing as an anonymous delegate. They don't exist (not yet). They are "Anonymous Methods" – period. It matters in how we think of them and how we talk about them. Let's take a look at the anonymous method statement "delegate() {…}". This is actually two different operations and when we think of it this way, we will never be confused again. The first thing the compiler does is create the anonymous method under the covers using the inferred delegate signature as the method signature. It is not correct to say the method is "unnamed" because it does have a name and the compiler assigns it. It is just hidden from normal view. The next thing it does is create a delegate object of the required type to wrap the method. This is called delegate inference and can be the source of this confusion. For this to work, the compiler must be able to figure out (i.e. infer) what delegate type it will create. It has to be a known concrete type. Let's write some code to see why. private void MyMethod() { } Does not compile: 1) Delegate d = delegate() { }; // Cannot convert anonymous method to type 'System.Delegate' because it is not a delegate type 2) Delegate d2 = MyMethod; // Cannot convert method group 'MyMethod' to non-delegate type 'System.Delegate' 3) Delegate d3 = (WaitCallback)MyMethod; // No overload for 'MyMethod' matches delegate 'System.Threading.WaitCallback' Line 1 does not compile because the compiler can not infer any delegate type. It can plainly see the signature we desire, but there is no concrete delegate type the compiler can see. It could create an anonymous type of type delegate for us, but it does not work like that. Line 2 does not compile for a similar reason.
Even though the compiler knows the method signature, we are not giving it a delegate type and it is not just going to pick one that would happen to work (who knows what side effects that could have). Line 3 does not work because we purposely mismatched the method signature with a delegate having a different signature (as WaitCallback takes an object). Compiles: 4) Delegate d4 = (MethodInvoker)MyMethod; // Works because we cast to a delegate type of the same signature. 5) Delegate d5 = (Action)delegate { }; // Works for same reason as d4. 6) Action d6 = MyMethod; // Delegate inference at work here. New Action delegate is created and assigned. In contrast, these work. Line 4 works because we tell the compiler what delegate type to use and they match, so it works. Line 5 works for the same reason. Note we used the special form of "delegate" without the parens. The compiler infers the method signature from the cast and creates the anonymous method with the same signature as the inferred delegate type. Line 6 works because MyMethod() and Action use the same signature. I hope this helps. Also see: http://msdn.microsoft.com/msdnmag/issues/04/05/C20/ A: What Mark said. The problem is that Do takes a Delegate parameter. The compiler can't convert the anonymous methods to Delegate, only a "delegate type" i.e. a concrete type derived from Delegate. If that Do function took Action<>, Action<,> ... etc. overloads, you wouldn't need the cast. A: The problem is not with your delegate definition, it's that the parameter of the Do() method is of type System.Delegate, and the compiler generated delegate type (FakeSave) does not implicitly convert to System.Delegate. Try adding a cast in front of your anonymous delegate: Expect.Call(delegate { _dao.Save(t); }).Do((Delegate)delegate { t.Message = "I drink goats blood"; }); A: Try something like: Expect.Call(delegate { _dao.Save(t); }).Do(new EventHandler(delegate(Transaction t2) { t2.CheckInInfo.CheckInMessage = "I drink goats blood"; })); Note the added EventHandler around the delegate. EDIT: might not work since the function signatures of EventHandler and the delegate are not the same... The solution you added to the bottom of your question may be the only way. Alternately, you could create a generic delegate type: public delegate void UnitTestingDelegate<T>(T thing); So that the delegate is not Transaction specific.
Convert this delegate to an anonymous method or lambda
I am new to all the anonymous features and need some help. I have gotten the following to work: public void FakeSaveWithMessage(Transaction t) { t.Message = "I drink goats blood"; } public delegate void FakeSave(Transaction t); public void SampleTestFunction() { Expect.Call(delegate { _dao.Save(t); }).Do(new FakeSave(FakeSaveWithMessage)); } But this is totally ugly and I would like to have the inside of the Do to be an anonymous method or even a lambda if it is possible. I tried: Expect.Call(delegate { _dao.Save(t); }).Do(delegate(Transaction t2) { t2.Message = "I drink goats blood"; }); and Expect.Call(delegate { _dao.Save(t); }).Do(delegate { t.Message = "I drink goats blood"; }); but these give me "Cannot convert anonymous method to type 'System.Delegate' because it is not a delegate type" compile errors. What am I doing wrong? Because of what Mark Ingram posted, seems like the best answer, though nobody's explicitly said it, is to do this: public delegate void FakeSave(Transaction t); Expect.Call(delegate { _dao.Save(t); }).Do( new FakeSave(delegate(Transaction t2) { t.Message = expected_msg; }));
[ "That's a well known error message. Check the link below for a more detailed discussion.\nhttp://staceyw1.wordpress.com/2007/12/22/they-are-anonymous-methods-not-anonymous-delegates/ \nBasically you just need to put a cast in front of your anonymous delegate (your lambda expression).\nIn case the link ever goes down, here is a copy of the post: \n\nThey are Anonymous Methods, not\n Anonymous Delegates.\n Posted on December 22, 2007 by staceyw1 \nIt is not just a talking point because\n we want to be difficult. It helps us\n reason about what exactly is going on.\n To be clear, there is *no such thing\n as an anonymous delegate. They don’t\n exist (not yet). They are \"Anonymous\n Methods\" – period. It matters in how\n we think of them and how we talk about\n them. Lets take a look at the\n anonymous method statement \"delegate()\n {…}\". This is actually two different\n operations and when we think of it\n this way, we will never be confused\n again. The first thing the compiler\n does is create the anonymous method\n under the covers using the inferred\n delegate signature as the method\n signature. It is not correct to say\n the method is \"unnamed\" because it\n does have a name and the compiler\n assigns it. It is just hidden from\n normal view. The next thing it does\n is create a delegate object of the\n required type to wrap the method. This\n is called delegate inference and can\n be the source of this confusion. For\n this to work, the compiler must be\n able to figure out (i.e. infer) what\n delegate type it will create. It has\n to be a known concrete type. Let\n write some code to see why.\n\nprivate void MyMethod()\n{\n}\n\n\nDoes not compile: \n\n1) Delegate d = delegate() { }; // Cannot convert anonymous method to type ‘System.Delegate’ because it is not a delegate type\n2) Delegate d2 = MyMethod; // Cannot convert method group ‘MyMethod’ to non-delegate type ‘System.Delegate’\n3) Delegate d3 = (WaitCallback)MyMethod; // No overload for ‘MyMethod’ matches delegate ‘System.Threading.WaitCallback’\n\n\nLine 1 does not compile because the\n compiler can not infer any delegate\n type. It can plainly see the signature\n we desire, but there is no concrete\n delegate type the compiler can see. \n It could create an anonymous type of\n type delegate for us, but it does not\n work like that. Line 2 does not\n compile for a similar reason. Even\n though the compiler knows the method\n signature, we are not giving it a\n delegate type and it is not just going\n to pick one that would happen to work\n (not what side effects that could\n have). Line 3 does not work because\n we purposely mismatched the method\n signature with a delegate having a\n different signature (as WaitCallback\n takes and object).\nCompiles: \n\n4) Delegate d4 = (MethodInvoker)MyMethod; // Works because we cast to a delegate type of the same signature.\n5) Delegate d5 = (Action)delegate { }; // Works for same reason as d4.\n6) Action d6 = MyMethod; // Delegate inference at work here. New Action delegate is created and assigned.\n\n\nIn contrast, these work. Line 1 works\n because we tell the compiler what\n delegate type to use and they match,\n so it works. Line 5 works for the\n same reason. Note we used the special\n form of \"delegate\" without the parens.\n The compiler infers the method\n signature from the cast and creates\n the anonymous method with the same\n signature as the inferred delegate\n type. 
Line 6 works because the\n MyMethod() and Action use same\n signature.\nI hope this helps.\nAlso see:\n http://msdn.microsoft.com/msdnmag/issues/04/05/C20/\n\n", "What Mark said.\nThe problem is that Do takes a Delegate parameter. The compiler can't convert the anonymous methods to Delegate, only a \"delegate type\" i.e. a concrete type derived from Delegate.\nIf that Do function had took Action<>, Action<,> ... etc. overloads, you wouldn't need the cast.\n", "The problem is not with your delegate definition, it's that the parameter of the Do() method is of type System.Delegate, and the compiler generated delegate type (FakeSave) does not implicitly convert to System.Delegate.\nTry adding a cast in front of your anonymous delegate:\nExpect.Call(delegate { _dao.Save(t); }).Do((Delegate)delegate { t.Message = \"I drink goats blood\"; });\n\n", "Try something like:\nExpect.Call(delegate { _dao.Save(t); }).Do(new EventHandler(delegate(Transaction t2) { t2.CheckInInfo.CheckInMessage = \"I drink goats blood\"; }));\n\nNote the added EventHandler around the delegate.\nEDIT: might not work since the function signatures of EventHandler and the delegate are not the same... The solution you added to the bottom of your question may be the only way.\nAlternately, you could create a generic delegate type:\npublic delegate void UnitTestingDelegate<T>(T thing);\n\nSo that the delegate is not Transaction specific.\n" ]
[ 27, 3, 1, 0 ]
[]
[]
[ ".net_3.5", "anonymous_methods", "c#", "delegates", "lambda" ]
stackoverflow_0000059515_.net_3.5_anonymous_methods_c#_delegates_lambda.txt
Q: .NET Date Const (with Globalization) Does anyone know of a way to declare a date constant that is compatible with international dates? I've tried: ' not international compatible public const ADate as Date = #12/31/04# ' breaking change if you have an optional parameter that defaults to this value ' because it isn't constant. public shared readonly ADate As New Date(12, 31, 04) A: If you look at the IL generated by the statement public const ADate as Date = #12/31/04# You'll see this: .field public static initonly valuetype [mscorlib]System.DateTime ADate .custom instance void [mscorlib]System.Runtime.CompilerServices.DateTimeConstantAttribute::.ctor(int64) = ( 01 00 00 C0 2F CE E2 BC C6 08 00 00 ) Notice that the DateTimeConstantAttribute is being initialized with a constructor that takes an int64 tick count. Since this tick count is being determined at compile time, it seems unlikely that any localization is coming into play when this value is initialized at runtime. My guess is that the error is with some other date handling in your code, not the const initialization. A: According to the Microsoft documentation, "You must enclose a Date literal within number signs (# #). You must specify the date value in the format M/d/yyyy, for example #5/31/1993#. This requirement is independent of your locale and your computer's date and time format settings." Are you saying that this is not correct and the parsing is affected by the current locale? Edit: Did you try with a 4-digit year? A: Once you have data in Date objects in VB, you don't have to worry about globalization until you compare something to it or try to export it. This is fine: Dim FirstDate as Date = Date.UtcNow() 'or this: = New Date(2008,09,10)' Dim SecondDate as Date SecondDate = FirstDate.AddDays(1) This pulls in the globalization rules and prints in the current thread's culture format: HeaderLabel.Text = SecondDate.ToString() This is bad: Dim BadDate as Date = CDate("2/20/2000") Actually--even that is OK if you force CDate in that case to use the right culture (InvariantCulture): Dim OkButBadPracticeDate as Date = CDate("2/20/2000", CultureInfo.InvariantCulture) If you want to force everything to a particular culture, you need to set the executing thread culture and UI culture to the desired culture (en-US, invariant, etc.). Make sure you aren't doing any work with dates as strings--make sure they are actual Date objects! A: OK, I am unsure what you are trying to do here: The code you are posting is NOT .NET, are you trying to port? DateTimes cannot be declared as constants. DateTimes are a data type, so once init'ed, the format that they were init'ed from is irrelevant. If you need a constant value, then just create a method to always return the same DateTime. For example: public static DateTime SadDayForAll() { return new DateTime(2001, 09, 11); } Update Where the hell are you getting all that from?! There are differences between C# and VB.NET, and this highlights one of them. Date is not a .NET data type - DateTime is. It looks like you can create DateTime constants in VB.NET but there are limitations. The method was there to try and help you, since you cannot create a const from a variable (i.e. optional param). That doesn't even make sense. A: Ok right, I understand more where you are coming from.. How about: Create a static method that returns the date constant. This overcomes the international issue since it is returned as the specific DateTime value.
Now I remember optional params from my VB6 days, but can you not just overload the method? If you are using the overloaded method without the date, just pull it from the static? EDIT: If you are unsure what I mean and would like a code sample, just comment this post and I will chuck one on.
.NET Date Const (with Globalization)
Does anyone know of a way to declare a date constant that is compatible with international dates? I've tried: ' not international compatible public const ADate as Date = #12/31/04# ' breaking change if you have an optional parameter that defaults to this value ' because it isnt constant. public shared readonly ADate As New Date(12, 31, 04)
[ "If you look at the IL generated by the statement\npublic const ADate as Date = #12/31/04#\n\nYou'll see this:\n.field public static initonly valuetype [mscorlib]System.DateTime ADate\n.custom instance void [mscorlib]System.Runtime.CompilerServices.DateTimeConstantAttribute::.ctor(int64) = ( 01 00 00 C0 2F CE E2 BC C6 08 00 00 )\n\nNotice that the DateTimeConstantAttribute is being initialized with a constructor that takes an int64 tick count. Since this tick count is being determined at complile time, it seems unlikely that any localization is coming into play when this value is initialized at runtime. My guess is that the error is with some other date handling in your code, not the const initialization.\n", "According to the Microsoft documentation,\n\"You must enclose a Date literal within number signs (# #). You must specify the date value in the format M/d/yyyy, for example #5/31/1993#. This requirement is independent of your locale and your computer's date and time format settings.\"\nAre you saying that this is not correct and the parsing is affected by the current locale?\nEdit: Did you try with a 4-digit year?\n", "Once you have data into Date objects in VB, you don't have to worry about globalization until you compare something to it or try to export it.\nThis is fine:\nDim FirstDate as Date = Date.UtcNow() 'or this: = NewDate (2008,09,10)'\nDim SecondDate as Date\n\nSecondDate = FirstDate.AddDays(1)\n\nThis pulls in the globalization rules and prints in the current thread's culture format:\nHeaderLabel.Text = SecondDate.ToString()\n\nThis is bad: \nDim BadDate as Date = CDate(\"2/20/2000\")\n\nActually--even that is OK if you force CDate in that case to use the right culture (InvariantCulture):\nDim OkButBadPracticeDate as Date = CDate(\"2/20/2000\", CultureInfo.InvariantCulture)\n\nIf you want to force everything to a particular culture, you need to set the executing thread culture and UI culture to the desired culture (en-US, invariant, etc.).\nMake sure you aren't doing any work with dates as strings--make sure they are actual Date objects!\n", "OK, I am unsure what you are trying to do here:\n\nThe code you are posting is NOT .NET, are you trying to port?\nDateTime's cannot be declared as constants.\nDateTime's are a data type, so once init'ed, the format that they were init'ed from is irrelevant.\nIf you need a constant value, then just create a method to always return the same DateTime.\n\nFor example:\npublic static DateTime SadDayForAll()\n{\n return new DateTime(2001, 09, 11);\n}\n\nUpdate\nWhere the hell are you getting all that from?!\n\nThere are differences between C# and VB.NET, and this highlights one of them.\nDate is not a .NET data type - DateTime is.\nIt looks like you can create DateTime constants in VB.NET but there are limitations\nThe method was there to try and help you, since you cannot create a const from a variable (i.e. optional param). That doesn't even make sense.\n\n", "Ok right, I understand more where you are coming from..\nHow about:\n\nCreate a static method that returns the date constant. This overcomes the international issue since it is returned as the specific DateTime value.\nNow I remember optional params from my VB6 days, but can you not just overload the method? If you are using the overloaded method without the date, just pull it from the static?\n\nEDIT: If you are unsure what I mean and would like a code sample, just comment this post and I will chuck one on.\n" ]
[ 6, 5, 1, 0, 0 ]
[]
[]
[ "datetime", "vb.net" ]
stackoverflow_0000057488_datetime_vb.net.txt
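The same store-typed-values lesson, sketched in Python for contrast (illustrative only — the question is VB.NET, but the principle of parsing with an invariant format and localizing only at display time is language-agnostic):

from datetime import datetime

# Parse once with an explicit, locale-independent format string...
a_date = datetime.strptime("2004-12-31", "%Y-%m-%d")

# ...the resulting object carries no locale; formatting is a separate,
# display-time decision, analogous to calling ToString() with a culture.
print(a_date.strftime("%d %B %Y"))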
Q: Detecting application hangs with ActiveX controls in .Net I am working on upgrades to a screen scraping application. We are using an ActiveX control to scrape screens out of an IBM mainframe. The mainframe program often hangs and crashes the ActiveX control causing our application to crash. We don't have access to the mainframe or the ActiveX source code. We are not going to write our own ActiveX control. What is the best way to encapsulate an ActiveX control to detect application hangs with the control so we can kill the process and restart with code? Should I create 2 separate applications? One as a controller that checks on the other and kills/restarts the process when it hangs? Would they have to be on separate app domains? Is it possible to have two programs communicate with each other even if they are on separate app domains? A: You can start an executable with System.Diagnostics.Process.Start(). This returns a Process object with a Responding property that you can use to check periodically if the process is still active. You'll need two separate applications to do this though. And the application you're monitoring needs to have a main window because the monitoring works by checking if the application still processes messages from the main-window message queue. This is the same way Windows knows to add "Not responding" to a window title
Detecting application hangs with ActiveX controls in .Net
I am working on upgrades to a screen scraping application. We are using an ActiveX control to scrape screens out of an IBM mainframe. The mainframe program often hangs and crashes the ActiveX control causing our application to crash. We don't have access to the mainframe or the ActiveX source code. We are not going to write our own ActiveX control. What is the best way to encapsulate an ActiveX control to detect application hangs with the control so we can kill the process and restart with code? Should I create 2 separate applications? One as a controller that checks on the other and kills/restarts the process when it hangs? Would they have to be on separate app domains? Is it possible to have two programs communicate with each other even if they are on separate app domains?
[ "You can start an executable with System.Diagnostics.Process.Start(). This returns a Process object with a Responding property that you can use to check periodically if the process is still active.\nYou'll need two separate applications to do this though. And the application you're monitoring needs to have a main window because the monitoring works by checking if the application still processes messages from the main-window messagequeue. This is the same way windows knows to add \"Not responding\" to a window-title \n" ]
[ 1 ]
[]
[]
[ ".net", "activex" ]
stackoverflow_0000059622_.net_activex.txt
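The controller/worker split suggested above can be sketched in any language; below is a hedged Python illustration of the watchdog pattern (in .NET you would poll Process.Responding as the answer describes — the worker command and heartbeat file here are hypothetical):

import os
import subprocess
import time

WORKER_CMD = ["python", "worker.py"]  # hypothetical worker executable
HEARTBEAT = "worker.heartbeat"        # file the worker is assumed to touch while healthy
TIMEOUT = 10                          # seconds of silence before we assume a hang

def start_worker():
    return subprocess.Popen(WORKER_CMD)

proc = start_worker()
while True:
    time.sleep(TIMEOUT)
    stale = (not os.path.exists(HEARTBEAT)
             or time.time() - os.path.getmtime(HEARTBEAT) > TIMEOUT)
    # Restart if the worker exited on its own or stopped heartbeating.
    if proc.poll() is not None or stale:
        if proc.poll() is None:
            proc.kill()
            proc.wait()
        proc = start_worker()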
Q: How do I find if my particular computer is going to have problems when I install linux? The IT lady just gave me a laptop to keep! I've always wanted to have a Linux install to play with so the first thing I did is search stackoverflow for Linux Distro suggestions and found it here. However they also mention that you should search around to see if anyone's had any problems with your drivers and that distro. Now all I know is that this is a Toshiba Tecra A5 - I haven't even booted it up yet but when I do how should I go about researching whether the drivers are compatible with Ubuntu or whatever I choose to use? Should I just be googling Ubuntu+DriverName or are there better resources? A: Many distros, Ubuntu included, have a "live" mode. You download the .iso image, burn the CD, and then boot from the CD. The OS will run directly off the CD without installing anything. It will run slowly, because it's reading from the CD, but it should give you the opportunity to test your hardware. A: You can try Linux-On-Laptops. A quick search shows this Tecra A5. You can also download a LiveCD version that will tell you if you can get most of your hardware working easily. If the LiveCD works, you're good. If it doesn't, you can just pop it out of the cd-rom drive. No harm done, and you can look at other options. A: Personally, I wouldn't worry about it... if you do dual boot (which I recommend), then you can easily fall back to Windows (or whatever is installed). I have set up 3 laptops with Linux, 2 Ubuntu and 1 Fedora 8. 2 of them had issues with the wireless network card, and 2 had issues with the video card (1 was an easier problem to fix). In both cases, I was able to solve the problem with enough googling... the Ubuntu forums are particularly good at having resolutions to problems you may face. So... it may be some work to resolve issues you have, but with enough effort, you should be able to overcome the problems (sometimes the effort may be high by the way... it took me about a week of idle attempts to solve the first wireless card issue). A: I like Linux on Laptops: you pick a notebook, and they recommend the best distro for it. The data is sometimes slightly dated, so I recommend getting the newest distro of the brand they recommend. For your notebook they recommend Ubuntu. The other option is just to try a live CD. Many distros have them, including Ubuntu. A: I would look in (at least) these two places: http://www.linuxcompatible.org/compatibility.html http://www.linux-drivers.org/ A: Most Linux distros have Live CDs that let you run the OS before actually installing it. That is how I found the Linux distribution that would run on my laptop (Ubuntu). If your laptop is older and you're afraid it won't be able to handle a modern Linux desktop, look to distros like Xubuntu, Slax, Damn Small Linux, Puppy Linux, etc. as they ship with desktops that aren't as resource intensive.
How do I find if my particular computer is going to have problems when I install linux?
The IT lady just gave me a laptop to keep! I've always wanted to have a Linux install to play with so the first thing I did is search stackoverflow for Linux Distro suggestions and found it here. However they also mention that you should search around to see if anyone's had any problems with your drivers and that distro. Now all I know is that this is a Toshiba Tecra A5 - I haven't even booted it up yet but when I do how should I go about researching whether the drivers are compatible with Ubuntu or whatever I choose to use? Should I just be googling Ubuntu+DriverName or are there better resources?
[ "Many distros, Ubuntu included, have a \"live\" mode. You download the .iso image, burn the CD, and then boot from the CD. The OS will run directly off the CD without installing anything. It will run slowly, because it's reading from the CD, but it should give you the opportunity to test your hardware.\n", "You can try Linux-On-Laptops. A quick search shows this Tecra A5.\nYou can also download a LiveCD version, that will tell you if you can get most of your hardware working easily. If the LiveCD works, you're good. If it doesn't, you can just pop it out of the cd-rom drive. No harm done, and you can look at other options.\n", "Personally, I wouldn't worry about it... if you do dual boot (which I recommend), then you can easily fall back to Windows (or whatever is installed).\nI have set up 3 laptops with linux, 2 Ubuntu and 1 Fedora 8. 2 of them had issues with the wireless network card, and 2 had issues with the video card (1 was easier problem to fix). In both cases, I was able to solve the problem with enough googling... the Ubuntu forums are particularly good at having resolutions to problems you may face.\nSo... it may be some work to resolve issues you have, but with enough effort, you should be able to overcome the problems (sometimes the effort may be high by the way... it took me about a week of idle attempts to solve the first wireless card issue).\n", "I like Linux on Laptops you pick a notebook, and they recommend the best distro for it. The data is sometimes slightly dated, so I recommend getting the newest distro of the brand they recommend. For your notebook they recommend Ubuntu. The other option is just to try a live CD. Many distros have them, including Ubuntu.\n", "I would look in (at least) these two places:\nhttp://www.linuxcompatible.org/compatibility.html\nhttp://www.linux-drivers.org/\n", "Most Linux distros have Live CDs that let you run the OS before actually installing it. That is how I found the Linux distribution that would run on my laptop (Ubuntu). If your laptop is older and your afraid it won't be able to handle a modern Linux desktop, look to distro's like Xubuntu, Slax, Damn Small Linux, Puppy Linux, etc. as they ship with desktops that aren't as resource intensive.\n" ]
[ 5, 4, 1, 1, 0, 0 ]
[]
[]
[ "drivers", "installation", "linux" ]
stackoverflow_0000059627_drivers_installation_linux.txt