Dataset columns:
content: string, 86 to 88.9k characters
title: string, 0 to 150 characters
question: string, 1 to 35.8k characters
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string, 30 to 130 characters
Q: Data Encryption A database that stores a lot of credit card information is an inevitable part of the system we have just completed. What I want though is ultimate security of the card numbers whereby we setup a mechanism to encrypt and decrypt but of ourselves cannot decrypt any given number. What I am after is a way to secure this information even down at the database level so no one can go in and produce a file of card numbers. How have others overcome this issue? What is the 'Standard' approach to this? As for usage of the data well the links are all private and secure and no transmission of the card number is performed except when a record is created and that is encrypted so I am not worried about the front end just the back end. Well the database is ORACLE so I have PL/SQL and Java to play with. A: There's no shortage of processors willing to store your CC info and exchange it for a token with which you can bill against the stored number. That gets you out of PCI compliance, but still allows on demand billing. Depending on why you need to store the CC, that may be a better alternative. Most companies refer to this as something like "Customer Profile Management", and are actually pretty reasonable on fees. A few providers I know of (in no particular order): Authorize.NET Customer Information Manager TrustCommerce Citadel BrainTree A: Unless you are a payment processor you don't really need to store any kind of CC information. Review your requirements, there really is not many cases where you need to store CC information A: Don't store the credit card numbers, store a hash instead. When you need to verify if a new number matches a stored number, take a hash of the new number and compare it to the stored hash. If they match, the number is (in theory) the same. Alternatively, you could encrypt the data by getting the user who enters the card number to enter a pass phrase; you'd use this as an encryption/decryption key. However, anyone with access to your database and sourcecode (ie. you and your team) will find it trivial to decrypt that data (ie. modify the live code so that it emails any decryption keys entered to a disposable Hotmail account, etc). A: If you are storing the credit card information because you don't want the user to have to re-enter it then hashing of any form isn't going to help. When do you need to act on the credit card number? You could store the credit card numbers in a more secure database, and in the main db just store enough information to show to the user, and a reference to the card. The backend system can be much more locked down and use the actual credit card info just for order processing. You could encrypt these numbers by some master password if you like, but the password would have to be known by the code that needs to get the numbers. Yes, you have only moved the problem around somewhat, but a lot of security is more about reducing the attack footprint rather than eliminating it. If you want to eliminate it then don't store the credit card number anywhere! A: If you're using Oracle you might be interested in Transparent Data Encryption. Only available with an Enterprise license though. Oracle also has utilities for encryption - decryption, for example the DBMS_OBFUSCATION_TOOLKIT. As for "Standards", the proper standard you are interested in is the PCI DSS standard which describes which measures need to be taken to protect sensitive credit card information. 
A: For an e-commerce type use case (think Amazon 1-Click), you could encrypt the CC (or key) with the user's existing strong password. Assuming you only store a hash of the password, only the user (or a rainbow table - but, it'd have to be run on each user, and would not work if it didn't come up with the same password - not just 1 that hashed the same) can decrypt it. You'd have to take some care to re-encrypt the data when a password changes, and the data would be worthless (and need to be reentered by the user) if they forgot their password - but, if the payments are user-initiated, then it'd work nicely. A: It would be helpful to know the DB server and language/platform types so we could get more specific, but I would be looking into SHA. A: I'd symmetrically encrypt (AES) a secure salted hash (SHA-256 + salt). The salted hash would be enough with a big salt, but the encryption adds a bit extra in case the database and not the code leaks and there are rainbow tables for salted hashes by then or some other means. Store the key in the code, not in the database, of course. It's worth noting that nothing protects you from crooked teammates, they can also store a copy of the date before hashing, for instance. You have to take good care of the code repository and do frequent code revisions for all code in the credit card handling path. Also try to minimize the time from receiving the data and having it crypted/hashed, manually ensuring the variable where it was stored is cleared from memory.
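To make the last answer's "encrypt a salted hash" suggestion concrete, here is a minimal Java sketch (Java being one of the tools the asker says is available). Everything about it — the class name, the way the key is passed in, the use of AES in ECB mode — is an illustrative assumption rather than a vetted design; a real system would keep the key in a proper key store, would only tolerate ECB because the plaintext here is a fixed-length, high-entropy digest, and would still need to satisfy PCI DSS.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch of the answer's scheme: salted SHA-256 of the card number,
// then AES encryption of the digest with a key held in application code.
public class CardDigestVault {

    private final SecretKeySpec aesKey; // key lives in code/config, not the database

    public CardDigestVault(byte[] sixteenByteKey) {
        this.aesKey = new SecretKeySpec(sixteenByteKey, "AES");
    }

    // Generate a per-record salt to store alongside the encrypted digest.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Salted SHA-256 of the card number; enough to verify a later match.
    public byte[] saltedHash(String cardNumber, byte[] salt) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(salt);
        sha.update(cardNumber.getBytes(StandardCharsets.UTF_8));
        return sha.digest();
    }

    // Encrypt the digest so a database-only leak exposes nothing directly usable.
    // ECB is only defensible here because the input is a fixed-size, random-looking digest.
    public byte[] encryptDigest(byte[] digest) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey);
        return cipher.doFinal(digest);
    }
}

Verifying a number later means recomputing saltedHash with the stored salt, encrypting it, and comparing ciphertexts; the original card number is never recoverable from what is stored.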
Data Encryption
A database that stores a lot of credit card information is an inevitable part of the system we have just completed. What I want though is ultimate security of the card numbers whereby we setup a mechanism to encrypt and decrypt but of ourselves cannot decrypt any given number. What I am after is a way to secure this information even down at the database level so no one can go in and produce a file of card numbers. How have others overcome this issue? What is the 'Standard' approach to this? As for usage of the data well the links are all private and secure and no transmission of the card number is performed except when a record is created and that is encrypted so I am not worried about the front end just the back end. Well the database is ORACLE so I have PL/SQL and Java to play with.
[ "There's no shortage of processors willing to store your CC info and exchange it for a token with which you can bill against the stored number. That gets you out of PCI compliance, but still allows on demand billing. Depending on why you need to store the CC, that may be a better alternative.\nMost companies refer to this as something like \"Customer Profile Management\", and are actually pretty reasonable on fees.\nA few providers I know of (in no particular order):\n\nAuthorize.NET Customer Information Manager\nTrustCommerce Citadel\nBrainTree\n\n", "Unless you are a payment processor you don't really need to store any kind of CC information.\nReview your requirements, there really is not many cases where you need to store CC information\n", "Don't store the credit card numbers, store a hash instead. When you need to verify if a new number matches a stored number, take a hash of the new number and compare it to the stored hash. If they match, the number is (in theory) the same.\nAlternatively, you could encrypt the data by getting the user who enters the card number to enter a pass phrase; you'd use this as an encryption/decryption key.\nHowever, anyone with access to your database and sourcecode (ie. you and your team) will find it trivial to decrypt that data (ie. modify the live code so that it emails any decryption keys entered to a disposable Hotmail account, etc).\n", "If you are storing the credit card information because you don't want the user to have to re-enter it then hashing of any form isn't going to help.\nWhen do you need to act on the credit card number?\nYou could store the credit card numbers in a more secure database, and in the main db just store enough information to show to the user, and a reference to the card. The backend system can be much more locked down and use the actual credit card info just for order processing. You could encrypt these numbers by some master password if you like, but the password would have to be known by the code that needs to get the numbers.\nYes, you have only moved the problem around somewhat, but a lot of security is more about reducing the attack footprint rather than eliminating it. If you want to eliminate it then don't store the credit card number anywhere!\n", "If you're using Oracle you might be interested in Transparent Data Encryption. Only available with an Enterprise license though.\nOracle also has utilities for encryption - decryption, for example the DBMS_OBFUSCATION_TOOLKIT.\nAs for \"Standards\", the proper standard you are interested in is the PCI DSS standard which describes which measures need to be taken to protect sensitive credit card information. \n", "For an e-commerce type use case (think Amazon 1-Click), you could encrypt the CC (or key) with the user's existing strong password. Assuming you only store a hash of the password, only the user (or a rainbow table - but, it'd have to be run on each user, and would not work if it didn't come up with the same password - not just 1 that hashed the same) can decrypt it.\nYou'd have to take some care to re-encrypt the data when a password changes, and the data would be worthless (and need to be reentered by the user) if they forgot their password - but, if the payments are user-initiated, then it'd work nicely.\n", "It would be helpful to know the DB server and language/platform types so we could get more specific, but I would be looking into SHA.\n", "I'd symmetrically encrypt (AES) a secure salted hash (SHA-256 + salt). 
The salted hash would be enough with a big salt, but the encryption adds a bit extra in case the database and not the code leaks and there are rainbow tables for salted hashes by then or some other means. Store the key in the code, not in the database, of course.\nIt's worth noting that nothing protects you from crooked teammates, they can also store a copy of the date before hashing, for instance. You have to take good care of the code repository and do frequent code revisions for all code in the credit card handling path. Also try to minimize the time from receiving the data and having it crypted/hashed, manually ensuring the variable where it was stored is cleared from memory.\n" ]
[ 8, 5, 4, 3, 1, 1, 0, 0 ]
[]
[]
[ "database", "encryption", "security" ]
stackoverflow_0000059204_database_encryption_security.txt
Q: Default integer type in ASP.NET from a stored procedure I have a web page that I have hooked up to a stored procedure. In this SQL data source, I have a parameter that I'm passing back to the stored procedure of type int. ASP.NET seems to want to default to int32, but the number won't get higher than 6. Is it ok to override the ASP.NET default and put in 16 or will there be a conflict somewhere down the road? specification: the database field has a length of 4 and precision of 10, if that makes a difference in the answer. A: If you force it to be for example a byte and the number is over 255 you run the risk of a casting error (and an exception will be thrown). However if you know it not going to be higher than 6 it should not be a problem. If it was me, I would just use it as a normal int, I am not sure you save much if anything other than a few bytes by making it a byte. The risk of the exception being thrown is too high and you would lose all benefits by making it smaller. A: Stick with int32. That's what vb's "Integer" and SQL's INT is, anyway. You will not gain any significant performance improvement by using a tinyint/byte or an short/int16 instead of int/int32. In fact, the headaches you might run into in the future caused by all the casting you might have to do for objects that expect int32s will drive you crazy. A: When you say the DB field has a length of 4, that means 4 bytes, which is equivalent to an Int32 (4 bytes = 32 bits). That's why your column is being returned as an int32. There are different integer datatypes in SQL Server -- if you are sure the number won't get higher than 6, you should declare the column in the database as a "tinyint", which uses a single byte and can hold values from 0 to 255. Then the SQL data source should convert it to a "byte" datatype, which will be fine for your purposes. CLR "byte" == SQL "tinyint" (1 byte) CLR "Short" (or int16) == SQL "smallint" (2 bytes) CLR "int32" == SQL "int" EDIT: just because you can do something, doesn't mean you should -- I agree with Michael Haren, the development headache of managing these less common datatypes outweighs the small performance gain you would get, unless you are dealing with very high-performance software (in which case, why would you be using ASP.NET?) A: You're not saving much if anything by using an Int16 on the ASP side. It still has to load it into a 32-bit register eventually. A: FYI, the CLR maps int to Int32 internally anyways. A: Use whatever your SQL Server stored procedure has defined. If it's an int in SQL Server, then use Int32 in .NET. smallint in SQL is int16. Otherwise, SQL Server will just upconvert it automatically, or throw an error if it needs to be downconverted.
Default integer type in ASP.NET from a stored procedure
I have a web page that I have hooked up to a stored procedure. In this SQL data source, I have a parameter that I'm passing back to the stored procedure of type int. ASP.NET seems to want to default to int32, but the number won't get higher than 6. Is it ok to override the ASP.NET default and put in 16 or will there be a conflict somewhere down the road? specification: the database field has a length of 4 and precision of 10, if that makes a difference in the answer.
[ "If you force it to be for example a byte and the number is over 255 you run the risk of a casting error (and an exception will be thrown). However if you know it not going to be higher than 6 it should not be a problem.\nIf it was me, I would just use it as a normal int, I am not sure you save much if anything other than a few bytes by making it a byte. The risk of the exception being thrown is too high and you would lose all benefits by making it smaller.\n", "Stick with int32. That's what vb's \"Integer\" and SQL's INT is, anyway.\nYou will not gain any significant performance improvement by using a tinyint/byte or an short/int16 instead of int/int32.\nIn fact, the headaches you might run into in the future caused by all the casting you might have to do for objects that expect int32s will drive you crazy.\n", "When you say the DB field has a length of 4, that means 4 bytes, which is equivalent to an Int32 (4 bytes = 32 bits). That's why your column is being returned as an int32.\nThere are different integer datatypes in SQL Server -- if you are sure the number won't get higher than 6, you should declare the column in the database as a \"tinyint\", which uses a single byte and can hold values from 0 to 255. Then the SQL data source should convert it to a \"byte\" datatype, which will be fine for your purposes.\nCLR \"byte\" == SQL \"tinyint\" (1 byte)\nCLR \"Short\" (or int16) == SQL \"smallint\" (2 bytes)\nCLR \"int32\" == SQL \"int\"\nEDIT: just because you can do something, doesn't mean you should -- I agree with Michael Haren, the development headache of managing these less common datatypes outweighs the small performance gain you would get, unless you are dealing with very high-performance software (in which case, why would you be using ASP.NET?)\n", "You're not saving much if anything by using an Int16 on the ASP side. It still has to load it into a 32-bit register eventually.\n", "FYI, the CLR maps int to Int32 internally anyways.\n", "Use whatever your SQL Server stored procedure has defined. If it's an int in SQL Server, then use Int32 in .NET. smallint in SQL is int16.\nOtherwise, SQL Server will just upconvert it automatically, or throw an error if it needs to be downconverted.\n" ]
[ 1, 1, 1, 1, 1, 1 ]
[]
[]
[ "asp.net", "parameters", "sql_server", "stored_procedures" ]
stackoverflow_0000059651_asp.net_parameters_sql_server_stored_procedures.txt
Q: Unfiltering NSPasteboard Is there a way to unfilter an NSPasteboard for what the source application specifically declared it would provide? I'm attempting to serialize pasteboard data in my application. When another application places an RTF file on a pasteboard and then I ask for the available types, I get eleven different flavors of said RTF, everything from the original RTF to plain strings to dyn.* values. Saving off all that data into a plist or raw data on disk isn't usually a problem as it's pretty small, but when an image of any considerable size is placed on the pasteboard, the resulting output can be tens of times larger than the source data (with multiple flavors of TIFF and PICT data being made available via filtering). I'd like to just be able to save off what the original app made available if possible. John, you are far more observant than myself or the gentleman I work with who's been doing Mac programming since dinosaurs roamed the earth. Neither of us ever noticed the text you highlighted... and I've not a clue why. Starting too long at the problem, apparently. And while I accepted your answer as the correct answer, it doesn't exactly answer my original question. What I was looking for was a way to identify flavors that can become other flavors simply by placing them on the pasteboard AND to know which of these types were originally offered by the provider. While walking the types list will get me the preferred order for the application that provided them, it won't tell me which ones I can safely ignore as they'll be recreated when I refill the pasteboard later. I've come to the conclusion that there isn't a "good" way to do this. [NSPasteboard declaredTypesFromOwner] would be fabulous, but it doesn't exist. A: -[NSPasteboard types] will return all the available types for the data on the clipboard, but it should return them "in the order they were declared." The documentation for -[NSPasteboard declareTypes:owner:] says that "the types should be ordered according to the preference of the source application." A properly implemented pasteboard owner should, therefore, declare the richest representation of the content (probably the original content) as the first type; so a reasonable single representation should be: [pb dataForType:[[pb types] objectAtIndex:0]] A: You may be able to get some use out of +[NSPasteboard typesFilterableTo:]. I'm picturing a snippet like this: NSArray *allTypes = [pb types]; NSAssert([allTypes count] > 0, @"expected at least one type"); // We always require the first declared type, as a starting point. NSMutableSet *requiredTypes = [NSMutableSet setWithObject:[allTypes objectAtIndex:0]]; for (NSUInteger index = 1; index < [allTypes count]; index++) { NSString *aType = [allTypes objectAtIndex:index]; NSSet *filtersFrom = [NSSet setWithArray:[NSPasteboard typesFilterableTo:aType]]; // If this type can't be re-created with a filter we already use, add it to the // set of required types. if (![requiredTypes intersectsSet:filtersFrom]) [requiredTypes addObject:aType]; } I'm not sure how effective this would be at picking good types, however.
Unfiltering NSPasteboard
Is there a way to unfilter an NSPasteboard for what the source application specifically declared it would provide? I'm attempting to serialize pasteboard data in my application. When another application places an RTF file on a pasteboard and then I ask for the available types, I get eleven different flavors of said RTF, everything from the original RTF to plain strings to dyn.* values. Saving off all that data into a plist or raw data on disk isn't usually a problem as it's pretty small, but when an image of any considerable size is placed on the pasteboard, the resulting output can be tens of times larger than the source data (with multiple flavors of TIFF and PICT data being made available via filtering). I'd like to just be able to save off what the original app made available if possible. John, you are far more observant than myself or the gentleman I work with who's been doing Mac programming since dinosaurs roamed the earth. Neither of us ever noticed the text you highlighted... and I've not a clue why. Starting too long at the problem, apparently. And while I accepted your answer as the correct answer, it doesn't exactly answer my original question. What I was looking for was a way to identify flavors that can become other flavors simply by placing them on the pasteboard AND to know which of these types were originally offered by the provider. While walking the types list will get me the preferred order for the application that provided them, it won't tell me which ones I can safely ignore as they'll be recreated when I refill the pasteboard later. I've come to the conclusion that there isn't a "good" way to do this. [NSPasteboard declaredTypesFromOwner] would be fabulous, but it doesn't exist.
[ "-[NSPasteboard types] will return all the available types for the data on the clipboard, but it should return them \"in the order they were declared.\"\nThe documentation for -[NSPasteboard declareTypes:owner:] says that \"the types should be ordered according to the preference of the source application.\"\nA properly implemented pasteboard owner should, therefore, declare the richest representation of the content (probably the original content) as the first type; so a reasonable single representation should be:\n[pb dataForType:[[pb types] objectAtIndex:0]]\n\n", "You may be able to get some use out of +[NSPasteboard typesFilterableTo:]. I'm picturing a snippet like this:\nNSArray *allTypes = [pb types];\nNSAssert([allTypes count] > 0, @\"expected at least one type\");\n\n// We always require the first declared type, as a starting point.\nNSMutableSet *requiredTypes = [NSMutableSet setWithObject:[allTypes objectAtIndex:0]];\n\nfor (NSUInteger index = 1; index < [allTypes count]; index++) {\n NSString *aType = [allTypes objectAtIndex:index];\n NSSet *filtersFrom = [NSSet setWithArray:[NSPasteboard typesFilterableTo:aType]];\n\n // If this type can't be re-created with a filter we already use, add it to the\n // set of required types.\n if (![requiredTypes intersectsSet:filtersFrom])\n [requiredTypes addObject:aType];\n}\n\nI'm not sure how effective this would be at picking good types, however.\n" ]
[ 4, 0 ]
[]
[]
[ "cocoa", "filtering", "pasteboard" ]
stackoverflow_0000054760_cocoa_filtering_pasteboard.txt
Q: Can VS be configured to automatically remove blank line(s) after text is cut? Is there a way (or shortcut) to tell VS 2008 that it cuts a line like this: Before: Some Text here This gets cut Some Code there After: Some Text here Some Code there What I want: Some Text here Some Code there PS: I don't want to select the whole line or something like this... only the text I want to cut. A: Unless I misunderstood you: Just place cursor on the line you want to cut (no selection) and press Ctrl + x. That cuts the line (leaving no blanks) and puts the text in the Clipboard. (tested in MS VC# 2008 Express with no additional settings I'm aware of) Is that what you want? A: Shift+Delete also works. Select a line and hit Shift-Delete it will remove the line and place that line in your clipboard. A: Don't select anything, just hit ctrl+x when the cursor is on the line.
Can VS be configured to automatically remove blank line(s) after text is cut?
Is there a way (or shortcut) to tell VS 2008 that it cuts a line like this: Before: Some Text here This gets cut Some Code there After: Some Text here Some Code there What I want: Some Text here Some Code there PS: I don't want to select the whole line or something like this... only the text I want to cut.
[ "Unless I misunderstood you:\nJust place cursor on the line you want to cut (no selection) and press Ctrl + x. That cuts the line (leaving no blanks) and puts the text in the Clipboard. (tested in MS VC# 2008 Express with no additional settings I'm aware of)\nIs that what you want?\n", "Shift+Delete also works.\nSelect a line and hit Shift-Delete it will remove the line and place that line in your clipboard.\n", "Don't select anything, just hit ctrl+x when the cursor is on the line.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "editor", "ide", "visual_studio" ]
stackoverflow_0000059472_editor_ide_visual_studio.txt
Q: AJAX Partial Page Load? I have a page results page (you get there after submitting your search query elsewhere) whit a whole bunch of gridviews for different type of data objects. Obviously, some of the queries take longer than the others. How can I make each gridview render as soon as it has the data it needs? This has been tricky for me because it must work on a postback as well as a pageload. Also, the object data sources just fire automatically on page load/postback; I'm not calling any methods programatically to get the data. Will I have to change this? A: @Gareth Jenkins The page will execute all of the queries before returning even the first update panel, so he won't save any time there. The trick to do this is to move each of your complex gridviews into a user control, in the user control, get rid of the Object DataSource crap, and do your binding in the code behind. Write your bind code so that it only binds in this situation: if (this.isPostBack && ScriptManager.IsInAsyncPostback) Then, in the page, programaticly refresh the update panel using javascript once the page has loaded, and you'll get each individual gridview rendering once its ready. A: Could you put the DataGrids inside panels that have their visibility set to false, then call a client-side javascript function from the body's onload event that calls a server side function that sets the visibility of the panels to true? If you combined this with an asp:updateProgress control and wrapped the whole thing in an UpdatePanel, you should get something close to what you're looking for - especially if you rigged the js function called in onload to only show one panel and call a return function that showed the next etc.
AJAX Partial Page Load?
I have a results page (you get there after submitting your search query elsewhere) with a whole bunch of gridviews for different types of data objects. Obviously, some of the queries take longer than the others. How can I make each gridview render as soon as it has the data it needs? This has been tricky for me because it must work on a postback as well as a page load. Also, the object data sources just fire automatically on page load/postback; I'm not calling any methods programmatically to get the data. Will I have to change this?
[ "@Gareth Jenkins\nThe page will execute all of the queries before returning even the first update panel, so he won't save any time there.\nThe trick to do this is to move each of your complex gridviews into a user control, in the user control, get rid of the Object DataSource crap, and do your binding in the code behind.\nWrite your bind code so that it only binds in this situation: \nif (this.isPostBack && ScriptManager.IsInAsyncPostback)\n\nThen, in the page, programaticly refresh the update panel using javascript once the page has loaded, and you'll get each individual gridview rendering once its ready.\n", "Could you put the DataGrids inside panels that have their visibility set to false, then call a client-side javascript function from the body's onload event that calls a server side function that sets the visibility of the panels to true?\nIf you combined this with an asp:updateProgress control and wrapped the whole thing in an UpdatePanel, you should get something close to what you're looking for - especially if you rigged the js function called in onload to only show one panel and call a return function that showed the next etc.\n" ]
[ 2, 0 ]
[]
[]
[ "ajax", "asp.net" ]
stackoverflow_0000059628_ajax_asp.net.txt
Q: Is there anyway to disable the client-side validation for dojo date text box? In my example below I'm using a dijit.form.DateTextBox: <input type="text" name="startDate" dojoType="dijit.form.DateTextBox" constraints="{datePattern:'MM/dd/yyyy'}" value='<c:out value="${sessionScope.adminMessageForm.startDate}"/>' /> So for example, if the user starts to enter "asdf" into the date the field turns yellow and a popup error message appears saying The value entered is not valid.. Even if I remove the constraints="{datePattern:'MM/dd/yyyy'}" it still validates. Without going into details as to why, I would like to be able keep the dojoType and still prevent validation in particular circumstances. A: Try overriding the validate method in your markup. This will work (just tested): <input type="text" name="startDate" dojoType="dijit.form.DateTextBox" constraints="{datePattern:'MM/dd/yyyy'}" value='<c:out value="${sessionScope.adminMessageForm.startDate}"/>' validate='return true;' /> A: My only suggestion is to programmatically remove the dojoType on the server-side or client-side. It is not possible to keep the dojoType and not have it validate. Unless you create your own type that has you logic in it. A: I had a similar problem, where the ValidationTextBox met all my needs but it was necessary to disable the validation routines until after the user had first pressed Submit. My solution was to clone this into a ValidationConditionalTextBox with a couple new methods: enableValidator:function() { this.validatorOn = true; }, disableValidator: function() { this.validatorOn = false; }, Then -- in the validator:function() I added a single check: if (this.validatorOn) { ... } Fairly straightforward, my default value for validatorOn is false (this appears right at the top of the javascript). When my form submits, simply call enableValidator(). You can view the full JavaScript here: http://lilawnsprinklers.com/js/dijit/form/ValidationTextBox.js
Is there anyway to disable the client-side validation for dojo date text box?
In my example below I'm using a dijit.form.DateTextBox: <input type="text" name="startDate" dojoType="dijit.form.DateTextBox" constraints="{datePattern:'MM/dd/yyyy'}" value='<c:out value="${sessionScope.adminMessageForm.startDate}"/>' /> So for example, if the user starts to enter "asdf" into the date the field turns yellow and a popup error message appears saying The value entered is not valid.. Even if I remove the constraints="{datePattern:'MM/dd/yyyy'}" it still validates. Without going into details as to why, I would like to be able keep the dojoType and still prevent validation in particular circumstances.
[ "Try overriding the validate method in your markup.\nThis will work (just tested):\n<input type=\"text\" name=\"startDate\" dojoType=\"dijit.form.DateTextBox\" \n constraints=\"{datePattern:'MM/dd/yyyy'}\" \n value='<c:out value=\"${sessionScope.adminMessageForm.startDate}\"/>'\n validate='return true;'\n/>\n\n", "My only suggestion is to programmatically remove the dojoType on the server-side or client-side. It is not possible to keep the dojoType and not have it validate. Unless you create your own type that has you logic in it.\n", "I had a similar problem, where the ValidationTextBox met all my needs but it was necessary to disable the validation routines until after the user had first pressed Submit.\nMy solution was to clone this into a ValidationConditionalTextBox with a couple new methods:\n enableValidator:function() {\n this.validatorOn = true;\n },\n\n disableValidator: function() {\n this.validatorOn = false;\n },\n\nThen -- in the validator:function() I added a single check:\n if (this.validatorOn)\n { ... }\n\nFairly straightforward, my default value for validatorOn is false (this appears right at the top of the javascript). When my form submits, simply call enableValidator(). You can view the full JavaScript here:\nhttp://lilawnsprinklers.com/js/dijit/form/ValidationTextBox.js\n" ]
[ 6, 1, 1 ]
[]
[]
[ "dojo", "javascript" ]
stackoverflow_0000015514_dojo_javascript.txt
Q: Why do all methods in the Google Analytics tracking code start with an underscore? Prefixing variable and method names with an underscore is a common convention for marking things as private. Why does all the methods on the page tracker class in the Google Analytics tracking code (ga.js) start with an underscore, even the ones that are clearly public, like _getTracker and _trackPageView? A: Because Google can't be bothered to follow the Module Pattern and therefore they don't want accidental collisions in the global namespace? A: Just in case you have a getTracker() function in your own code, or similar. In other words, to avoid naming conflicts with the page's javascript code, probably. @Theo: Didn't realize (ie, not read carefully enough) they were methods. Then maybe to encourage caution or discourage use? Dunno, really. A: I've always read this like so: If the property/method is prefixed with an underscore, it is for some "internal" workings. Therefore if you are about to use/call/alter this property/method, you had better darn well know what you are doing, and or expect it to possibly be renamed/removed in a future release.
Why do all methods in the Google Analytics tracking code start with an underscore?
Prefixing variable and method names with an underscore is a common convention for marking things as private. Why do all the methods on the page tracker class in the Google Analytics tracking code (ga.js) start with an underscore, even the ones that are clearly public, like _getTracker and _trackPageView?
[ "Because Google can't be bothered to follow the Module Pattern and therefore they don't want accidental collisions in the global namespace?\n", "Just in case you have a getTracker() function in your own code, or similar.\nIn other words, to avoid naming conflicts with the page's javascript code, probably. \n@Theo: Didn't realize (ie, not read carefully enough) they were methods. Then maybe to encourage caution or discourage use? Dunno, really.\n", "I've always read this like so:\nIf the property/method is prefixed with an underscore, it is for some \"internal\" workings. Therefore if you are about to use/call/alter this property/method, you had better darn well know what you are doing, and or expect it to possibly be renamed/removed in a future release.\n" ]
[ 5, 1, 0 ]
[]
[]
[ "google_analytics" ]
stackoverflow_0000059462_google_analytics.txt
Q: Wildcard Subdomain Exceptions I have a wildcard subdomain enabled and dynamically parse the URL by passing it as-is to my index.php (ex. somecity.domain.com). Now, I wish to create a few subdomains that are static where I can install different application and not co-mingle with my current one (ex. blog.domain.com). My .htaccess currently reads: RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] Can I manipulate this .htaccess to achieve what I need? Can it be done through Apache? A: Your .htaccess does nothing useful, as Apache is probably configured with DirectoryIndex index.php. Well, it does move domain.com/a to domain.com/index.php, but I doubt that is what you want. Your wildcard virtualhost works because you probably have ServerAlias *.domain.com in your configuration, or a single virtualhost and DNS pointing to the address of your server. (When you have a single virtualhost, it shows up for any request, and the first listed virtualhost is the default one) You have to create new VirtualHosts for the static domains, leaving the default one as, well, the default one :) Check these tutorials that explain it all. A: You'll have to configure apache for those static sub-domains. The "catch-all" site will be the default site configured, so that one will catch the other ones. A: I'm not sure I understand completely what you need to accomplish, but it might helpful to setup virtual domains within your Apache configuration file. You can map them to folders on the drive with different applications installed. Each virtual domain is treated much like a root directory. I have my development environment setup locally on my Windows machine a lot like this: NameVirtualHost *:80 # Begin virtual host directives. <VirtualHost *:80> # myblog.com virtual host. ServerAdmin [email protected] DocumentRoot "c:/apache_www/myblog.com/www" ServerName myblog.com ServerAlias *.myblog.com ErrorLog "c:/apache_www/myblog.com/logs/log" ScriptAlias /cgi-bin/ "c:/apache_www/myblog.com/cgi-bin/" <Directory "c:/apache_www/myblog.com/www"> Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> If this does not help get you on the right track, then try researching the VirtualHost directive to come up with a solution. I find trying to do all this in an .htaccess to be cumbersome and difficult to manage. A: I don't know if you have cPanel installed on your host, but I was able to do this by adding a new subdomain * and then sending all that traffic to a particular subdomain, for example: *.domain.com -> master.domain.com. Then you can read out which URL you are at in master.domain.com and go from there.
Wildcard Subdomain Exceptions
I have a wildcard subdomain enabled and dynamically parse the URL by passing it as-is to my index.php (ex. somecity.domain.com). Now, I wish to create a few subdomains that are static where I can install different application and not co-mingle with my current one (ex. blog.domain.com). My .htaccess currently reads: RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] Can I manipulate this .htaccess to achieve what I need? Can it be done through Apache?
[ "Your .htaccess does nothing useful, as Apache is probably configured with DirectoryIndex index.php. Well, it does move domain.com/a to domain.com/index.php, but I doubt that is what you want.\nYour wildcard virtualhost works because you probably have ServerAlias *.domain.com in your configuration, or a single virtualhost and DNS pointing to the address of your server. (When you have a single virtualhost, it shows up for any request, and the first listed virtualhost is the default one)\nYou have to create new VirtualHosts for the static domains, leaving the default one as, well, the default one :)\nCheck these tutorials that explain it all.\n", "You'll have to configure apache for those static sub-domains. The \"catch-all\" site will be the default site configured, so that one will catch the other ones.\n", "I'm not sure I understand completely what you need to accomplish, but it might helpful to setup virtual domains within your Apache configuration file. You can map them to folders on the drive with different applications installed. Each virtual domain is treated much like a root directory. I have my development environment setup locally on my Windows machine a lot like this:\nNameVirtualHost *:80\n\n# Begin virtual host directives.\n\n<VirtualHost *:80>\n\n# myblog.com virtual host.\n\nServerAdmin [email protected]\nDocumentRoot \"c:/apache_www/myblog.com/www\"\nServerName myblog.com\nServerAlias *.myblog.com\nErrorLog \"c:/apache_www/myblog.com/logs/log\"\nScriptAlias /cgi-bin/ \"c:/apache_www/myblog.com/cgi-bin/\"\n\n<Directory \"c:/apache_www/myblog.com/www\">\n Options Indexes FollowSymLinks\n AllowOverride All\n Order allow,deny\n Allow from all\n</Directory>\n\n</VirtualHost>\n\nIf this does not help get you on the right track, then try researching the VirtualHost directive to come up with a solution. I find trying to do all this in an .htaccess to be cumbersome and difficult to manage.\n", "I don't know if you have cPanel installed on your host, but I was able to do this by adding a new subdomain * and then sending all that traffic to a particular subdomain, for example: *.domain.com -> master.domain.com. Then you can read out which URL you are at in master.domain.com and go from there.\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ ".htaccess", "apache", "php", "wildcard_subdomain" ]
stackoverflow_0000059380_.htaccess_apache_php_wildcard_subdomain.txt
Q: How do you paste multiple tabbed lines into Vi? I want to paste something I have cut from my desktop into a file open in Vi. But if I paste the tabs embed on top of each other across the page. I think it is some sort of visual mode change but can't find the command. A: If you're using plain vi: You probably have autoindent on. To turn it off while pasting: <Esc> :set noai <paste all you want> <Esc> :set ai I have in my .exrc the following shortcuts: map ^P :set noai^M map ^N :set ai^M Note that these have to be the actual control characters - insert them using Ctrl-V Ctrl-P and so on. If you're using vim: Use the paste option. In addition to disabling autoindent it will also set other options such as textwidth and wrapmargin to paste-friendly defaults: <Esc> :set paste <paste all you want> <Esc> :set nopaste You can also set a key to toggle the paste mode. My .vimrc has the following line: set pastetoggle=<C-P> " Ctrl-P toggles paste mode A: If you are using VIM, you can use "*p (i.e. double quotes, asterisk, letter p). A: I found that if I copy tabbed lines first into a text editor and then recopy them from there to vim, then the tabs are correct.
How do you paste multiple tabbed lines into Vi?
I want to paste something I have cut from my desktop into a file open in Vi. But if I paste, the tabs embed on top of each other across the page. I think it is some sort of visual mode change but I can't find the command.
[ "If you're using plain vi:\nYou probably have autoindent on. To turn it off while pasting:\n<Esc> :set noai\n\n<paste all you want>\n\n<Esc> :set ai\n\nI have in my .exrc the following shortcuts:\nmap ^P :set noai^M\nmap ^N :set ai^M\n\nNote that these have to be the actual control characters - insert them using Ctrl-V Ctrl-P and so on.\nIf you're using vim:\nUse the paste option. In addition to disabling autoindent it will also set other options such as textwidth and wrapmargin to paste-friendly defaults:\n<Esc> :set paste\n\n<paste all you want>\n\n<Esc> :set nopaste\n\nYou can also set a key to toggle the paste mode. My .vimrc has the following line:\nset pastetoggle=<C-P> \" Ctrl-P toggles paste mode\n\n", "If you are using VIM, you can use \"*p (i.e. double quotes, asterisk, letter p).\n", "I found that if I copy tabbed lines first into a text editor and then recopy them from there to vim, then the tabs are correct.\n" ]
[ 52, 2, 0 ]
[]
[]
[ "vi", "vim" ]
stackoverflow_0000058774_vi_vim.txt
Q: Recover corrupt zip or gzip files? The most common method for corrupting compressed files is to inadvertently do an ASCII-mode FTP transfer, which causes a many-to-one trashing of CR and/or LF characters. Obviously, there is information loss, and the best way to fix this problem is to transfer again, in FTP binary mode. However, if the original is lost, and it's important, how recoverable is the data? [Actually, I already know what I think is the best answer (it's very difficult but sometimes possible - I'll post more later), and the common non-answers (lots of off-the-shelf programs for repairing CRCs without repairing data), but I thought it would be interesting to try out this question during the stackoverflow beta period, and see if anyone else has gone down the successful-recovery path or discovered tools I don't know about.] A: From Bukys Software Approximately 1 in 256 bytes is known to be corrupted, and the corruption is known to occur only in bytes with the value '\012'. So the byte error rate is 1/256 (0.39% of input), and 2/256 bytes (0.78% of input) are suspect. But since only three bits per smashed byte are affected, the bit error rate is only 3/(256*8): 0.15% is bad, 0.29% is suspect. ... An error in the compressed input disrupts the decompression process for all subsequent bytes...The fact that the decompressed output is recognizably bad so quickly is cause for hope -- a search for the correct answer can identify wrong answers quickly. Ultimately, several techniques were combined to successfully extract reasonable data from these files: Domain-specific parsing of fields and quoted strings Machine learning from previous data with low probability of damage Tolerance for file damage due to other causes (e.g. disk full while logging) Lookahead for guiding the search along the highest-probability paths These techniques identify 75% of the necessary repairs with certainty, and the remainder are explored highest-probability-first, so that plausible reconstructions are identified immediately. A: You could try writing a little script to replace all of the CRs with CRLFs (assuming the direction of trashing was CRLF to CR), swapping them randomly per block until you had the correct crc. Assuming that the data wasn't particularly large, I guess that might not use all of your CPU until the heat death of the universe to complete. As there is definite information loss, I don't know that there is a better way. Loss in the CR to CRLF direction might be slightly easier to roll back.
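The second answer above amounts to a brute-force CRC search: every CR byte in the damaged stream may originally have been a lone CR or a CR-LF pair, so try the combinations until the stored CRC-32 matches. Purely as an illustration of that idea — the class is hypothetical, and the search is exponential in the number of CR bytes, so it is only workable on small blocks, exactly the "heat death of the universe" caveat the answer makes — it could look like this in Java:

import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.CRC32;

// Try every way of re-expanding CR bytes to CR LF until the expected CRC-32 matches.
public class CrlfRepair {

    public static byte[] tryRepair(byte[] damaged, long expectedCrc) {
        List<Integer> crPositions = new ArrayList<>();
        for (int i = 0; i < damaged.length; i++) {
            if (damaged[i] == 0x0D) crPositions.add(i);
        }
        return search(damaged, crPositions, 0, new boolean[crPositions.size()], expectedCrc);
    }

    // Depth-first search over "does a LF follow this CR?" decisions.
    private static byte[] search(byte[] damaged, List<Integer> crs, int idx,
                                 boolean[] addLf, long expectedCrc) {
        if (idx == crs.size()) {
            byte[] candidate = rebuild(damaged, crs, addLf);
            CRC32 crc = new CRC32();
            crc.update(candidate);
            return crc.getValue() == expectedCrc ? candidate : null;
        }
        for (boolean choice : new boolean[] { false, true }) {
            addLf[idx] = choice;
            byte[] hit = search(damaged, crs, idx + 1, addLf, expectedCrc);
            if (hit != null) {
                return hit;
            }
        }
        return null;
    }

    // Rebuild a candidate stream, inserting a LF after each CR the search selected.
    private static byte[] rebuild(byte[] damaged, List<Integer> crs, boolean[] addLf) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int crIndex = 0;
        for (int i = 0; i < damaged.length; i++) {
            out.write(damaged[i]);
            if (crIndex < crs.size() && crs.get(crIndex) == i) {
                if (addLf[crIndex]) {
                    out.write(0x0A);
                }
                crIndex++;
            }
        }
        return out.toByteArray();
    }
}

A real recovery would prune the search with the decompressor itself (abandoning prefixes that fail to decompress sensibly), which is essentially what the techniques in the first answer describe.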
Recover corrupt zip or gzip files?
The most common method for corrupting compressed files is to inadvertently do an ASCII-mode FTP transfer, which causes a many-to-one trashing of CR and/or LF characters. Obviously, there is information loss, and the best way to fix this problem is to transfer again, in FTP binary mode. However, if the original is lost, and it's important, how recoverable is the data? [Actually, I already know what I think is the best answer (it's very difficult but sometimes possible - I'll post more later), and the common non-answers (lots of off-the-shelf programs for repairing CRCs without repairing data), but I thought it would be interesting to try out this question during the stackoverflow beta period, and see if anyone else has gone down the successful-recovery path or discovered tools I don't know about.]
[ "From Bukys Software\n\nApproximately 1 in 256 bytes is known\n to be corrupted, and the corruption is\n known to occur only in bytes with the\n value '\\012'. So the byte error rate\n is 1/256 (0.39% of input), and 2/256\n bytes (0.78% of input) are suspect.\n But since only three bits per smashed\n byte are affected, the bit error rate\n is only 3/(256*8): 0.15% is bad, 0.29%\n is suspect.\n...\nAn error in the compressed input\n disrupts the decompression process for\n all subsequent bytes...The fact that\n the decompressed output is\n recognizably bad so quickly is cause\n for hope -- a search for the correct\n answer can identify wrong answers\n quickly.\nUltimately, several techniques were\n combined to successfully extract\n reasonable data from these files:\n\nDomain-specific parsing of fields and quoted strings\nMachine learning from previous data with low probability of damage\nTolerance for file damage due to other causes (e.g. disk full while\n logging)\nLookahead for guiding the search along the highest-probability paths\n\nThese techniques identify 75% of the\n necessary repairs with certainty, and\n the remainder are explored\n highest-probability-first, so that\n plausible reconstructions are\n identified immediately.\n\n", "You could try writing a little script to replace all of the CRs with CRLFs (assuming the direction of trashing was CRLF to CR), swapping them randomly per block until you had the correct crc. Assuming that the data wasn't particularly large, I guess that might not use all of your CPU until the heat death of the universe to complete.\nAs there is definite information loss, I don't know that there is a better way. Loss in the CR to CRLF direction might be slightly easier to roll back.\n" ]
[ 4, 2 ]
[]
[]
[ "corrupt", "gzip", "recovery", "zip" ]
stackoverflow_0000059735_corrupt_gzip_recovery_zip.txt
Q: In SQL, what’s the difference between count(*) and count('x')? I have the following code: SELECT <column>, count(*) FROM <table> GROUP BY <column> HAVING COUNT(*) > 1; Is there any difference to the results or performance if I replace the COUNT(*) with COUNT('x')? (This question is related to a previous one) A: To say that SELECT COUNT(*) vs COUNT(1) results in your DBMS returning "columns" is pure bunk. That may have been the case long, long ago but any self-respecting query optimizer will choose some fast method to count the rows in the table - there is NO performance difference between SELECT COUNT(*), COUNT(1), COUNT('this is a silly conversation') Moreover, SELECT(1) vs SELECT(*) will NOT have any difference in INDEX usage -- most DBMS will actually optimize SELECT( n ) into SELECT(*) anyway. See the ASK TOM: Oracle has been optimizing SELECT(n) into SELECT(*) for the better part of a decade, if not longer: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1156151916789 problem is in count(col) to count() conversion **03/23/00 05:46 pm *** one workaround is to set event 10122 to turn off count(col) ->count() optimization. Another work around is to change the count(col) to count(), it means the same, when the col has a NOT NULL constraint. The bug number is 1215372. One thing to note - if you are using COUNT(col) (don't!) and col is marked NULL, then it will actually have to count the number of occurrences in the table (either via index scan, histogram, etc. if they exist, or a full table scan otherwise). Bottom line: if what you want is the count of rows in a table, use COUNT(*) A: The major performance difference is that COUNT(*) can be satisfied by examining the primary key on the table. i.e. in the simple case below, the query will return immediately, without needing to examine any rows. select count(*) from table I'm not sure if the query optimizer in SQL Server will do so, but in the example above, if the column you are grouping on has an index the server should be able to satisfy the query without hitting the actual table at all. To clarify: this answer refers specifically to SQL Server. I don't know how other DBMS products handle this. A: This question is slightly different that the other referenced. In the referenced question, it was asked what the difference was when using count(*) and count(SomeColumnName), and SQLMenace's answer was spot on. To address this question, essentially there is no difference in the result. Both count(*) and count('x') and say count(1) will return the same number. The difference is that when using " * " just like in a SELECT all columns are returned, then counted. When a constant is used (e.g. 'x' or 1) then a row with one column is returned and then counted. The performance difference would be seen when " * " returns many columns. Update: The above statement about performance is probably not quite right as discussed in other answers, but does apply to subselect queries when using EXISTS and NOT EXISTS A: MySQL: According to the MySQL website, COUNT(*) is faster for single table queries when using MyISAM: http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_count I'm guessing with a having clause with a count in it may change things.
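If you want to see for yourself that COUNT(*), COUNT(1) and COUNT('x') produce the same number, a quick JDBC check along these lines will do; the connection URL and table name are placeholders, so substitute whatever database and table you have handy:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Run the three COUNT variants against the same table and print the results,
// which should all be identical.
public class CountComparison {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:your-database-url");
             Statement stmt = conn.createStatement()) {
            String[] variants = { "COUNT(*)", "COUNT(1)", "COUNT('x')" };
            for (String variant : variants) {
                try (ResultSet rs = stmt.executeQuery("SELECT " + variant + " FROM some_table")) {
                    rs.next();
                    System.out.println(variant + " = " + rs.getLong(1));
                }
            }
        }
    }
}

The number returned is the same in each case; the discussion in the answers is about how the engine satisfies the count, not about the value it produces.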
In SQL, what’s the difference between count(*) and count('x')?
I have the following code: SELECT <column>, count(*) FROM <table> GROUP BY <column> HAVING COUNT(*) > 1; Is there any difference to the results or performance if I replace the COUNT(*) with COUNT('x')? (This question is related to a previous one)
[ "To say that SELECT COUNT(*) vs COUNT(1) results in your DBMS returning \"columns\" is pure bunk. That may have been the case long, long ago but any self-respecting query optimizer will choose some fast method to count the rows in the table - there is NO performance difference between SELECT COUNT(*), COUNT(1), COUNT('this is a silly conversation')\nMoreover, SELECT(1) vs SELECT(*) will NOT have any difference in INDEX usage -- most DBMS will actually optimize SELECT( n ) into SELECT(*) anyway. See the ASK TOM: Oracle has been optimizing SELECT(n) into SELECT(*) for the better part of a decade, if not longer:\nhttp://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1156151916789\n\nproblem is in count(col) to count()\n conversion\n **03/23/00 05:46 pm *** one workaround is to set event 10122 to\n turn off count(col) ->count()\n optimization. Another work around is\n to change the count(col) to count(),\n it means the same, when the col has a\n NOT NULL constraint. The bug number is\n 1215372.\n\nOne thing to note - if you are using COUNT(col) (don't!) and col is marked NULL, then it will actually have to count the number of occurrences in the table (either via index scan, histogram, etc. if they exist, or a full table scan otherwise). \nBottom line: if what you want is the count of rows in a table, use COUNT(*)\n", "The major performance difference is that COUNT(*) can be satisfied by examining the primary key on the table.\ni.e. in the simple case below, the query will return immediately, without needing to examine any rows.\nselect count(*) from table\n\nI'm not sure if the query optimizer in SQL Server will do so, but in the example above, if the column you are grouping on has an index the server should be able to satisfy the query without hitting the actual table at all.\nTo clarify: this answer refers specifically to SQL Server. I don't know how other DBMS products handle this.\n", "This question is slightly different that the other referenced. In the referenced question, it was asked what the difference was when using count(*) and count(SomeColumnName), and SQLMenace's answer was spot on.\nTo address this question, essentially there is no difference in the result. Both count(*) and count('x') and say count(1) will return the same number. The difference is that when using \" * \" just like in a SELECT all columns are returned, then counted. When a constant is used (e.g. 'x' or 1) then a row with one column is returned and then counted. The performance difference would be seen when \" * \" returns many columns.\nUpdate: The above statement about performance is probably not quite right as discussed in other answers, but does apply to subselect queries when using EXISTS and NOT EXISTS\n", "MySQL: According to the MySQL website, COUNT(*) is faster for single table queries when using MyISAM:\nhttp://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_count\nI'm guessing with a having clause with a count in it may change things.\n" ]
[ 18, 3, 3, 1 ]
[]
[]
[ "sql" ]
stackoverflow_0000059322_sql.txt
Q: Is there a real benefit of using J#? I just saw a comment of suggesting J#, and it made me wonder... is there a real, beneficial use of J# over Java? So, my feeling is that the only reason you would even consider using J# is that management has decreed that the company should jump on the Java bandwagon... and the .NET bandwagon. If you use J#, you are effectively losing the biggest benefit of picking Java... rich cross platform support. Sure there is Mono, but it's not as richly supported or as full featured right? I remember hearing Forms are not fully (perhaps at all) supported. I'm not trying to bash .NET here, I'm just saying, if you are going to go the Microsoft route, why not just use C#? If you are going to go the Java route, why would J# enter the picture? I'm hoping to find some real world cases here, so please especially respond if you've ACTUALLY used J# in a REAL project, and why. A: J# is no longer included in VS2008. Unless you already have J# code, you should probably stay away. From j# product page: Since customers have told us that the existing J# feature set largely meets their needs and usage of J# is declining, Microsoft is retiring the Visual J# product and Java Language Conversion Assistant tool to better allocate resources for other customer requirements. The J# language and JLCA tool will not be available in future versions of Visual Studio. To preserve existing customer investments in J#, Microsoft will continue to support the J# and JLCA technology that shipped with Visual Studio 2005 through to 2015 as per our product life-cycle strategy. For more information, see Expanded Microsoft Support Lifecycle Policy for Business & Development Products. A: The whole purpose of J# is to ease the transition of Java developers to the .NET environment which didn't work so well (I guessing here) so Microsoft dropped J# from Visual Studio 2008. For your question, "Is there a real benefit of using J#?".. in a nutshell... No.. A: Instead of J#, I would rather prefer IKVM (http://www.ikvm.net/) to convert my JARs to .NET assemblies as well as access Java APIs in C#. A: One of the killers I've found with J# in the past is that there is no built in support for referencing web services. That alone has been enough to deter me from it ever since. A: C# syntax is so close to Java (and better in some ways) that you might as well learn C# instead of J#. And since C# is more widely used, you can easily find Java --> C# tutorials on google or check out http://www.asp.net/learn and watch some videos. A: I don't think it's a matter of which language is better. In the .NET world there are some inconsistencies between the libraries different languages provide. There are certain functionality that is available in VB.NET that you might like to use from C# but can't. I remember I had to use J# to use some ZIP libraries that were not available in any other language in .NET. A: I have used J# as an easy interim step to port a java library into C#. It made for a good way to port code I don't plan to maintain from Java to .Net. However, all new development is being done in C#. A: Strongly agree that syntactically C# beats Java hands down, so there is really no reason to lament the demise of j#. Now trying to get c# compiling to Java bytecode might be an interesting move as Sun's hotspot jvm is great software. Or, for a bit of fun with what might well become the next generation of Java, how about Scala on the CLR...
Is there a real benefit of using J#?
I just saw a comment suggesting J#, and it made me wonder... is there a real, beneficial use of J# over Java? So, my feeling is that the only reason you would even consider using J# is that management has decreed that the company should jump on the Java bandwagon... and the .NET bandwagon. If you use J#, you are effectively losing the biggest benefit of picking Java... rich cross-platform support. Sure there is Mono, but it's not as richly supported or as full-featured, right? I remember hearing Forms are not fully (perhaps at all) supported. I'm not trying to bash .NET here, I'm just saying, if you are going to go the Microsoft route, why not just use C#? If you are going to go the Java route, why would J# enter the picture? I'm hoping to find some real-world cases here, so please especially respond if you've ACTUALLY used J# in a REAL project, and why.
[ "J# is no longer included in VS2008. Unless you already have J# code, you should probably stay away.\nFrom j# product page:\n\nSince customers have told us that the\n existing J# feature set largely meets\n their needs and usage of J# is\n declining, Microsoft is retiring the\n Visual J# product and Java Language\n Conversion Assistant tool to better\n allocate resources for other customer\n requirements. The J# language and JLCA\n tool will not be available in future\n versions of Visual Studio. To preserve\n existing customer investments in J#,\n Microsoft will continue to support the\n J# and JLCA technology that shipped\n with Visual Studio 2005 through to\n 2015 as per our product life-cycle\n strategy. For more information, see\n Expanded Microsoft Support Lifecycle\n Policy for Business & Development\n Products.\n\n", "The whole purpose of J# is to ease the transition of Java developers to the .NET environment which didn't work so well (I guessing here) so Microsoft dropped J# from Visual Studio 2008.\nFor your question, \"Is there a real benefit of using J#?\"..\nin a nutshell... No..\n", "Instead of J#, I would rather prefer IKVM (http://www.ikvm.net/) to convert my JARs to .NET assemblies as well as access Java APIs in C#.\n", "One of the killers I've found with J# in the past is that there is no built in support for referencing web services. That alone has been enough to deter me from it ever since.\n", "C# syntax is so close to Java (and better in some ways) that you might as well learn C# instead of J#. And since C# is more widely used, you can easily find Java --> C# tutorials on google or check out http://www.asp.net/learn and watch some videos.\n", "I don't think it's a matter of which language is better. In the .NET world there are some inconsistencies between the libraries different languages provide. There are certain functionality that is available in VB.NET that you might like to use from C# but can't. I remember I had to use J# to use some ZIP libraries that were not available in any other language in .NET.\n", "I have used J# as an easy interim step to port a java library into C#. It made for a good way to port code I don't plan to maintain from Java to .Net. However, all new development is being done in C#.\n", "Strongly agree that syntactically C# beats Java hands down, so there is really no reason to lament the demise of j#. Now trying to get c# compiling to Java bytecode might be an interesting move as Sun's hotspot jvm is great software.\nOr, for a bit of fun with what might well become the next generation of Java, how about Scala on the CLR...\n" ]
[ 22, 11, 2, 1, 1, 1, 1, 0 ]
[]
[]
[ "j#", "java" ]
stackoverflow_0000005527_j#_java.txt
Q: Should I use a dedicated network channel between the database and the application server? Should I use a dedicated network channel between the database and the application server? ...or... Connecting both in the switch along with all other computer nodes makes no diference at all? The matter is performance! A: It all depends on the throughput needs of your application. If you absolutely need the lowest latency possible, then it would make sense to optimize the routes. Aside from hugely scalable software, I would argue that this is rarely needed and you can just connect everything in a generic fashion. A: It depends on your non-functional requirements. Assuming the NICs are running at the same rate, keeping the database traffic away from the front-end traffic can only be a good thing from a bandwidth perspective - if bandwidth is an issue. Far more significant is that security is improved by keeping the front-side and data-sides on different networks as the only way to gain direct access to the database is to compromise the application server. A: Using the shared switch could give increased latency, especially if the switch is busy. Also, you may be able to hook up a faster dedicated network channel (e.g. gigabit ethernet, if your switch is 100Mbit). Whether any of this is worth doing or not depends on your application though. You may also want to use a dedicated channel for increased security (making your database server less accessible).
Should I use a dedicated network channel between the database and the application server?
Should I use a dedicated network channel between the database and the application server? ...or... Connecting both to the switch along with all other computer nodes makes no difference at all? The matter is performance!
[ "It all depends on the throughput needs of your application. If you absolutely need the lowest latency possible, then it would make sense to optimize the routes. Aside from hugely scalable software, I would argue that this is rarely needed and you can just connect everything in a generic fashion.\n", "It depends on your non-functional requirements. Assuming the NICs are running at the same rate, keeping the database traffic away from the front-end traffic can only be a good thing from a bandwidth perspective - if bandwidth is an issue.\nFar more significant is that security is improved by keeping the front-side and data-sides on different networks as the only way to gain direct access to the database is to compromise the application server.\n", "Using the shared switch could give increased latency, especially if the switch is busy. Also, you may be able to hook up a faster dedicated network channel (e.g. gigabit ethernet, if your switch is 100Mbit). Whether any of this is worth doing or not depends on your application though.\nYou may also want to use a dedicated channel for increased security (making your database server less accessible).\n" ]
[ 1, 1, 1 ]
[]
[]
[ "database", "latency", "networking", "performance" ]
stackoverflow_0000059857_database_latency_networking_performance.txt
Q: How do I get a custom application name and starting window name in Visual C# 2008 using WPF? I'm using Microsft Visual C# 2008 and am creating WPF applications. If you create a new solution and pick the WPF application template it lets you provide a single string to name the solution. It then automatically turns that string into a base project name and a namespace using underscores instead of spaces. It also generates a class that inherits from the application class named App and a starting window with a Grid control in it named Window1. I want to customize pretty much everything. What's the simplest method of renaming App, Window1, and the starting namespace which won't corrupt the Solution? A: Follow these steps: Rename the application and window .xaml's in the solution explorer. Edit the application's .xaml (App.xaml originally) so the StartupUri points to the new name of the starting window the line will be as follows: StartupUri="Window1.xaml" Edit in the original window's .cs codebehind window so Window1 becomes the new window's name. Use the mouse on the drop-down after the new window name to copy the changed name elsewhere. Edit the title of the window.
How do I get a custom application name and starting window name in Visual C# 2008 using WPF?
I'm using Microsoft Visual C# 2008 and am creating WPF applications. If you create a new solution and pick the WPF application template, it lets you provide a single string to name the solution. It then automatically turns that string into a base project name and a namespace using underscores instead of spaces. It also generates a class that inherits from the application class named App and a starting window with a Grid control in it named Window1. I want to customize pretty much everything. What's the simplest method of renaming App, Window1, and the starting namespace which won't corrupt the Solution?
[ "Follow these steps:\n\nRename the application and window .xaml's in the solution explorer.\nEdit the application's .xaml (App.xaml originally) so the StartupUri points to the new name of the starting window the line will be as follows: \n StartupUri=\"Window1.xaml\"\nEdit in the original window's .cs codebehind window so Window1 becomes the new window's name.\nUse the mouse on the drop-down after the new window name to copy the changed name elsewhere.\nEdit the title of the window.\n\n" ]
[ 2 ]
[]
[]
[ "c#", "visual_studio_2008" ]
stackoverflow_0000059786_c#_visual_studio_2008.txt
Q: Old-school SQL DB access versus ORM (NHibernate, EF, et al). Who wins? I've been successful with writing my own SQL access code with a combination of stored procedures and parameterized queries and a little wrapper library I've written to minimize the ADO.NET grunge. This has all worked very well for me in the past and I've been pretty productive with it. I'm heading into a new project--should I put my old school stuff behind me and dig into an ORM-based solution? (I know there are vast high-concepts differences between NHibernate and EF--I don't want to get into that here. For the sake of argument, let's even lump LINQ with the old-school alternatives.) I'm looking for advice on the real-world application of ORM type stuff against what I know (and know pretty well). Old-school ADO.NET code or ORM? I'm sure there is a curve--does the curve have an ROI that makes things worthwhile? I'm anxious and willing to learn, but do have a deadline. A: I find that LINQ to SQL is much, much faster when I'm prototyping code. It just blows away any other method when I need something now. But there is a cost. Compared to hand-rolled stored procs, LINQ is slow. Especially if you aren't very careful as seemingly minor changes can suddenly make a single turn into 1+N queries. My recommendation. Use LINQ to SQL at first, then swtich to procs if you aren't getting the performance you need. A: A good question but a very controversial topic. This blog post from Frans Bouma from a few years back citing the pros of dynamic SQL (implying ORMs) over stored procedures sparked quite the fiery flame war. A: There was a great discussion on this topic at DevTeach in Montreal. If you go to this URL: http://www.dotnetrocks.com/default.aspx?showNum=240 you will be able to hear two experts in the field (Ted Neward and Oren Eini) discuss the pros and cons of each approach. Probably the best answer you will find on a subject that has no real definite answer.
Old-school SQL DB access versus ORM (NHibernate, EF, et al). Who wins?
I've been successful with writing my own SQL access code with a combination of stored procedures and parameterized queries and a little wrapper library I've written to minimize the ADO.NET grunge. This has all worked very well for me in the past and I've been pretty productive with it. I'm heading into a new project--should I put my old school stuff behind me and dig into an ORM-based solution? (I know there are vast high-concepts differences between NHibernate and EF--I don't want to get into that here. For the sake of argument, let's even lump LINQ with the old-school alternatives.) I'm looking for advice on the real-world application of ORM type stuff against what I know (and know pretty well). Old-school ADO.NET code or ORM? I'm sure there is a curve--does the curve have an ROI that makes things worthwhile? I'm anxious and willing to learn, but do have a deadline.
[ "I find that LINQ to SQL is much, much faster when I'm prototyping code. It just blows away any other method when I need something now.\nBut there is a cost. Compared to hand-rolled stored procs, LINQ is slow. Especially if you aren't very careful as seemingly minor changes can suddenly make a single turn into 1+N queries.\nMy recommendation. Use LINQ to SQL at first, then swtich to procs if you aren't getting the performance you need.\n", "A good question but a very controversial topic.\nThis blog post from Frans Bouma from a few years back citing the pros of dynamic SQL (implying ORMs) over stored procedures sparked quite the fiery flame war.\n", "There was a great discussion on this topic at DevTeach in Montreal. If you go to this URL: http://www.dotnetrocks.com/default.aspx?showNum=240 you will be able to hear two experts in the field (Ted Neward and Oren Eini) discuss the pros and cons of each approach. Probably the best answer you will find on a subject that has no real definite answer. \n" ]
[ 2, 1, 0 ]
[]
[]
[ "ado.net", "orm" ]
stackoverflow_0000059972_ado.net_orm.txt
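To make the trade-off in the record above concrete, here is a minimal C# sketch of the same lookup written both ways: hand-rolled parameterized ADO.NET versus a LINQ query over an IQueryable. The Customer type, Customers table and column names are illustrative assumptions, not the poster's schema; with LINQ to SQL the IQueryable would come from a designer-generated DataContext rather than the in-memory sample used in Main.

    using System;
    using System.Data.SqlClient;
    using System.Linq;

    // Illustrative entity - the names are assumptions.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    class DataAccessComparison
    {
        // Old-school: explicit SQL, explicit parameter, manual result handling.
        public static string GetCustomerNameAdoNet(string connectionString, int customerId)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand("SELECT Name FROM Customers WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", customerId);
                conn.Open();
                return (string)cmd.ExecuteScalar();
            }
        }

        // ORM-style: the same intent expressed as a query; the provider emits the SQL.
        public static string GetCustomerNameLinq(IQueryable<Customer> customers, int customerId)
        {
            return customers
                .Where(c => c.Id == customerId)
                .Select(c => c.Name)
                .SingleOrDefault();
        }

        static void Main()
        {
            var sample = new[] { new Customer { Id = 1, Name = "Ada" } }.AsQueryable();
            Console.WriteLine(GetCustomerNameLinq(sample, 1)); // prints "Ada"
        }
    }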
Q: bug in linq Contains statement - is there a fix or workaround? I found a bug in the Contains statement in Linq (not sure if it is really in Linq or Linq to SQL) and want to know if anyone else has seen this and if there is a fix or workaround. If the querysource you do the contains with has more than 10 items in it, it does not pass the items correctly to the SQL query. It is hard to explain what it does, an example will show it best. If you look at the raw query, the parameters look like this: @P0 = 'aaa' @P1 = 'bbb' @P2 = 'ccc' ... [@P3 through @P9] @P10 = '111' @P11 = '222' ... [@p12 through @P19] @P20 = 'sss' ... [@P21 through @P99] @P100 = 'qqq' when the values are passed into the final query (all parameters resolved) it has resolved the parameters as if these were the values passed: @P0 = 'aaa' @P1 = 'bbb' @P2 = 'ccc' ... @P10 = 'bbb'0 @P11 = 'bbb'1 ... @P20 = 'ccc'0 ... @P100 = 'bbb'00 So it looks like the parameter resolving looks at the first digit only after the @P and resolves that, then adds on anything left at the end of the parameter name. At least that is what the Sql Server Query Visualizer plugin to Visual Studio shows the query doing. Really strange. So if any one has advice please share. Thanks! Update: I have rewritten the original linq statement to where I now use a join instead of the Contains, but would still like to know if there is a way around this issue. A: The more I look at it, and after running more tests, I'm thinking the bug may be in the Sql Server Query Visualizer plugin for Visual Studio, not actually in Linq to SQL itself. So it is not nearly as bad a situation as I thought - the query will return the right results, but you can't trust what the Visualizer is showing. Not great, but better than what I thought was going on. A: Try actually looking at the output from your datacontext before you pass judgement. DataContext.Log() will give you the generated SQL.
bug in linq Contains statement - is there a fix or workaround?
I found a bug in the Contains statement in Linq (not sure if it is really in Linq or Linq to SQL) and want to know if anyone else has seen this and if there is a fix or workaround. If the querysource you do the contains with has more than 10 items in it, it does not pass the items correctly to the SQL query. It is hard to explain what it does, an example will show it best. If you look at the raw query, the parameters look like this: @P0 = 'aaa' @P1 = 'bbb' @P2 = 'ccc' ... [@P3 through @P9] @P10 = '111' @P11 = '222' ... [@p12 through @P19] @P20 = 'sss' ... [@P21 through @P99] @P100 = 'qqq' when the values are passed into the final query (all parameters resolved) it has resolved the parameters as if these were the values passed: @P0 = 'aaa' @P1 = 'bbb' @P2 = 'ccc' ... @P10 = 'bbb'0 @P11 = 'bbb'1 ... @P20 = 'ccc'0 ... @P100 = 'bbb'00 So it looks like the parameter resolving looks at the first digit only after the @P and resolves that, then adds on anything left at the end of the parameter name. At least that is what the Sql Server Query Visualizer plugin to Visual Studio shows the query doing. Really strange. So if any one has advice please share. Thanks! Update: I have rewritten the original linq statement to where I now use a join instead of the Contains, but would still like to know if there is a way around this issue.
[ "The more I look at it, and after running more tests, I'm thinking the bug may be in the Sql Server Query Visualizer plugin for Visual Studio, not actually in Linq to SQL itself. So it is not nearly as bad a situation as I thought - the query will return the right results, but you can't trust what the Visualizer is showing. Not great, but better than what I thought was going on.\n", "Try actually looking at the output from your datacontext before you pass judgement.\nDataContext.Log() will give you the generated SQL.\n" ]
[ 1, 0 ]
[]
[]
[ ".net", "linq" ]
stackoverflow_0000059840_.net_linq.txt
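Following the advice in the record above to inspect what LINQ to SQL actually sends (rather than trusting the visualizer), a small self-contained sketch like this routes the generated T-SQL and parameter values to the console via DataContext.Log. The Item entity, table name and connection string are assumptions; load the list with more than ten values to reproduce the reported scenario.

    using System;
    using System.Collections.Generic;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    // Minimal hand-mapped entity so no .dbml designer file is needed; the names are assumptions.
    [Table(Name = "Items")]
    public class Item
    {
        [Column(IsPrimaryKey = true)] public int Id;
        [Column] public string Code;
    }

    class ContainsLogDemo
    {
        static void Main()
        {
            string connectionString = "Server=.;Database=TestDb;Integrated Security=true"; // placeholder

            using (DataContext db = new DataContext(connectionString))
            {
                db.Log = Console.Out; // dumps the generated SQL and every @p parameter value

                List<string> wanted = new List<string> { "aaa", "bbb", "ccc" }; // add 10+ values to match the report
                List<Item> matches = db.GetTable<Item>()
                                       .Where(i => wanted.Contains(i.Code))
                                       .ToList();

                Console.WriteLine("{0} rows matched", matches.Count);
            }
        }
    }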
Q: Implementing large system changes If you're familiar with the phrase "build one to throw away", well, we seem to have done that; we’re reaching the limits of version 1 of our online app. It's time to clean things up by: Re-organizing code and UI Unifying UI processes Adding more functionality Building for the future Modifying our database structure to handle all of the above What's the best way to make this transition happen? We want to avoid throwing all of our users over to a new system (once it's finished) ... they'd freak out and we couldn't handle the call load. Our users run the gamut, from technically proficient used-to-write-software types, to those that don't know what HTML is. Should we start a new "installation" of our system and move users over to it gradually after we ensure this new design sufficiently solves enough of the problems with version 1? Should we (somehow) change each module of our system incrementally, and phase? This may be difficult because the database layout will change, resulting in having to tweak the "core code" and the code for several surrounding modules. Is it common to have a set of trusted, patient, "beta tester" clients using a cutting edge version of an app? (The goal here would be to get feedback and test for bugs on a new system) Any other advice? First-hand experience? A: The answer, I'm afraid, is it depends. It depends on the kind of application and the kind of users you have. Without knowing what the system is and the scope of the changes in the version, it is difficult to offer an answer. That said, there are some rules of thumb. Firstly, avoid the big bang launch. Any launch of a system is going to have problems. The industry is littered with projects where people thought the bang-bang launch was a great idea, only for teething problems to bring the launch to its knees. Cuil was a recent high-profile causality of the big-bang launch. In order to make the teething problems manageable, you need to work with small numbers of users initially, then slowly ratchet up the number of users. Secondly, the thing that you must absolutely must positively do is put the user first. The user should have to do the least amount of work possible to use V2 of the system. The ideal amount of work would be zero. This means that if you pick to slowly migrate users from one system to the other, you are responsible for making sure all their data and settings are migrated. For example, don't do anything stupid like telling the user they must use V1 for all records before 12/09/2008 and V2 for all records after. The point of releasing V2 should be making the users' life easier, not making it needlessly more difficult. Thirdly, have a beta program. This applies even for Intranet applications. Developing an application is much like Newton-Raphson's method for finding the root of a polynomial. You make a guess of what the user wants, you deliver it to the user, the user provides feedback and slowly but surely each iteration takes you closer to the solution to the problem. A beta program will help you find the root much faster than just foisting new versions on to people without time for them to comment on the changes. Betas help get your users on-board earlier and make them feel included in the process; the importance of which I can not stress enough. A: We just finished plopping a brand new CRM system on our users, and let me tell you it was a TERRIBLE idea to do it that way: It was extremely painful for my team and for our customers. 
I'd go through every possible mean to do gradual releases, even if it means doing more work. You'll be grateful because you won't have to go through heroic efforts to get everything moved, and your customers will appreciate the ability to get introduced to the product a bit a a time. Hope that helps! A: I agree with Esteban that gradual releases are best. It's like remodeling a house: getting it over with seems like a good idea initially. But means you have to plan everything upfront, hire a bunch of contractors and move out. Then something changes in the plan or a contractor disappears, and all that time you hoped to save is gone. Meanwhile, gradual change gives everyone a chance to stop and think between steps. Sometimes you can avoid later changes when earlier changes work out better than you planned. I work on a system that had a huge scaling problem. We made a list of all the changes we thought we'd need and prioritized them by probable impact. Then we started making one change at a time. About half-way through the list, we found we'd solved the scaling problem. I still have the list, but I may never need to finish it. I'm free to add features and solve other problems. Of course, there are times when it's best to bit the bullet and tear the whole thing down. But that's a lot less common than people tend to believe. And for critical operational systems, the "tear-down" decision can be fatal. Look at the big government projects that everyone agrees have to be brought to the modern computing era, but can't because some vital service will be lost. If the philosophy had been gradual change, maybe they would have been modernized one piece at a time. A: It sounds like incremental re-architecture should be your agile buzz-phrase of choice. I've never done it on a web application, but I have been through some fairly radical client application changes that were done incrementally. If you invest a little bit of time up front to make sure that pieces of work are sequenced in a fairly sensible way it can work well. A small investment in good refactoring aids will be very helpful if you don't have them already. I can personally recommend jetBrains Resharper if you are using .NET, and if you are Java-based I believe IntelliJ IDEA includes similar functionality.
Implementing large system changes
If you're familiar with the phrase "build one to throw away", well, we seem to have done that; we’re reaching the limits of version 1 of our online app. It's time to clean things up by: Re-organizing code and UI Unifying UI processes Adding more functionality Building for the future Modifying our database structure to handle all of the above What's the best way to make this transition happen? We want to avoid throwing all of our users over to a new system (once it's finished) ... they'd freak out and we couldn't handle the call load. Our users run the gamut, from technically proficient used-to-write-software types, to those that don't know what HTML is. Should we start a new "installation" of our system and move users over to it gradually after we ensure this new design sufficiently solves enough of the problems with version 1? Should we (somehow) change each module of our system incrementally, and phase? This may be difficult because the database layout will change, resulting in having to tweak the "core code" and the code for several surrounding modules. Is it common to have a set of trusted, patient, "beta tester" clients using a cutting edge version of an app? (The goal here would be to get feedback and test for bugs on a new system) Any other advice? First-hand experience?
[ "The answer, I'm afraid, is it depends. It depends on the kind of application and the kind of users you have. Without knowing what the system is and the scope of the changes in the version, it is difficult to offer an answer. \nThat said, there are some rules of thumb.\nFirstly, avoid the big bang launch. Any launch of a system is going to have problems. The industry is littered with projects where people thought the bang-bang launch was a great idea, only for teething problems to bring the launch to its knees. Cuil was a recent high-profile causality of the big-bang launch.\nIn order to make the teething problems manageable, you need to work with small numbers of users initially, then slowly ratchet up the number of users. \nSecondly, the thing that you must absolutely must positively do is put the user first. The user should have to do the least amount of work possible to use V2 of the system. The ideal amount of work would be zero. \nThis means that if you pick to slowly migrate users from one system to the other, you are responsible for making sure all their data and settings are migrated. For example, don't do anything stupid like telling the user they must use V1 for all records before 12/09/2008 and V2 for all records after.\nThe point of releasing V2 should be making the users' life easier, not making it needlessly more difficult. \nThirdly, have a beta program. This applies even for Intranet applications. Developing an application is much like Newton-Raphson's method for finding the root of a polynomial. You make a guess of what the user wants, you deliver it to the user, the user provides feedback and slowly but surely each iteration takes you closer to the solution to the problem.\nA beta program will help you find the root much faster than just foisting new versions on to people without time for them to comment on the changes. Betas help get your users on-board earlier and make them feel included in the process; the importance of which I can not stress enough.\n", "We just finished plopping a brand new CRM system on our users, and let me tell you it was a TERRIBLE idea to do it that way: It was extremely painful for my team and for our customers.\nI'd go through every possible mean to do gradual releases, even if it means doing more work. You'll be grateful because you won't have to go through heroic efforts to get everything moved, and your customers will appreciate the ability to get introduced to the product a bit a a time.\nHope that helps!\n", "I agree with Esteban that gradual releases are best. It's like remodeling a house: getting it over with seems like a good idea initially. But means you have to plan everything upfront, hire a bunch of contractors and move out. Then something changes in the plan or a contractor disappears, and all that time you hoped to save is gone. Meanwhile, gradual change gives everyone a chance to stop and think between steps. Sometimes you can avoid later changes when earlier changes work out better than you planned.\nI work on a system that had a huge scaling problem. We made a list of all the changes we thought we'd need and prioritized them by probable impact. Then we started making one change at a time. About half-way through the list, we found we'd solved the scaling problem. I still have the list, but I may never need to finish it. I'm free to add features and solve other problems.\nOf course, there are times when it's best to bit the bullet and tear the whole thing down. But that's a lot less common than people tend to believe. 
And for critical operational systems, the \"tear-down\" decision can be fatal. Look at the big government projects that everyone agrees have to be brought to the modern computing era, but can't because some vital service will be lost. If the philosophy had been gradual change, maybe they would have been modernized one piece at a time.\n", "It sounds like incremental re-architecture should be your agile buzz-phrase of choice.\nI've never done it on a web application, but I have been through some fairly radical client application changes that were done incrementally. If you invest a little bit of time up front to make sure that pieces of work are sequenced in a fairly sensible way it can work well. A small investment in good refactoring aids will be very helpful if you don't have them already. I can personally recommend jetBrains Resharper if you are using .NET, and if you are Java-based I believe IntelliJ IDEA includes similar functionality.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "architecture", "database_design", "deployment", "testing" ]
stackoverflow_0000059974_architecture_database_design_deployment_testing.txt
Q: XMLSerialization in C# I have a simple type that explicitly implemets an Interface. public interface IMessageHeader { string FromAddress { get; set; } string ToAddress { get; set; } } [Serializable] public class MessageHeader:IMessageHeader { private string from; private string to; [XmlAttribute("From")] string IMessageHeade.FromAddress { get { return this.from;} set { this.from = value;} } [XmlAttribute("To")] string IMessageHeade.ToAddress { get { return this.to;} set { this.to = value;} } } Is there a way to Serialize and Deserialize objects of type IMessageHeader?? I got the following error when tried "Cannot serialize interface IMessageHeader" A: You cannot serialize IMessageHeader because you can't do Activator.CreateInstance(typeof(IMessageHeader)) which is what serialization is going to do under the covers. You need a concrete type. You can do typeof(MessageHeader) or you could say, have an instance of MessageHeader and do XmlSerializer serializer = new XmlSerializer(instance.GetType()) A: No, because the serializer needs a concrete class that it can instantiate. Given the following code: XmlSerializer ser = new XmlSerializer(typeof(IMessageHeader)); IMessageHeader header = (IMessageHeader)ser.Deserialize(data); What class does the serializer create to return from Deserialize()? In theory it's possible to serialize/deserialize an interface, just not with XmlSerializer. A: Try adding IXmlSerializable to your IMessageHeader declaration, although I don't think that will work. From what I recall, the .net xml serializer only works for concrete classes that have a default constructor. A: The issue stems from the fact that you can't deserialize an interface but need to instantiate a concrete class. The XmlInclude attribute can be used to tell the serializer what concrete classes implement the interface.
XMLSerialization in C#
I have a simple type that explicitly implements an Interface. public interface IMessageHeader { string FromAddress { get; set; } string ToAddress { get; set; } } [Serializable] public class MessageHeader:IMessageHeader { private string from; private string to; [XmlAttribute("From")] string IMessageHeader.FromAddress { get { return this.from;} set { this.from = value;} } [XmlAttribute("To")] string IMessageHeader.ToAddress { get { return this.to;} set { this.to = value;} } } Is there a way to Serialize and Deserialize objects of type IMessageHeader? I got the following error when I tried: "Cannot serialize interface IMessageHeader"
[ "You cannot serialize IMessageHeader because you can't do Activator.CreateInstance(typeof(IMessageHeader)) which is what serialization is going to do under the covers. You need a concrete type.\nYou can do typeof(MessageHeader) or you could say, have an instance of MessageHeader and do \nXmlSerializer serializer = new XmlSerializer(instance.GetType())\n\n", "No, because the serializer needs a concrete class that it can instantiate.\nGiven the following code:\nXmlSerializer ser = new XmlSerializer(typeof(IMessageHeader));\n\nIMessageHeader header = (IMessageHeader)ser.Deserialize(data);\n\nWhat class does the serializer create to return from Deserialize()?\nIn theory it's possible to serialize/deserialize an interface, just not with XmlSerializer.\n", "Try adding IXmlSerializable to your IMessageHeader declaration, although I don't think that will work.\nFrom what I recall, the .net xml serializer only works for concrete classes that have a default constructor.\n", "The issue stems from the fact that you can't deserialize an interface but need to instantiate a concrete class.\nThe XmlInclude attribute can be used to tell the serializer what concrete classes implement the interface.\n" ]
[ 3, 0, 0, 0 ]
[ "You can create an abstract base class the implements IMessageHeader and also inherits MarshalByRefObject\n" ]
[ -1 ]
[ ".net", "c#", "interface", "serialization" ]
stackoverflow_0000059986_.net_c#_interface_serialization.txt
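As a concrete follow-up to the accepted point above (serialize the concrete type, not the interface), here is a hedged C# sketch that round-trips a MessageHeader. Note it deliberately swaps the question's explicit interface implementation for ordinary public properties, because XmlSerializer only serializes public read/write members; that change is mine, not something stated in the original answers.

    using System;
    using System.IO;
    using System.Xml.Serialization;

    public interface IMessageHeader
    {
        string FromAddress { get; set; }
        string ToAddress { get; set; }
    }

    // Implicit (public) implementation so XmlSerializer can see the properties.
    public class MessageHeader : IMessageHeader
    {
        [XmlAttribute("From")] public string FromAddress { get; set; }
        [XmlAttribute("To")]   public string ToAddress { get; set; }
    }

    class Demo
    {
        static void Main()
        {
            IMessageHeader header = new MessageHeader { FromAddress = "a@example.com", ToAddress = "b@example.com" };

            // Serialize against the runtime (concrete) type, never typeof(IMessageHeader).
            XmlSerializer serializer = new XmlSerializer(header.GetType());
            StringWriter xml = new StringWriter();
            serializer.Serialize(xml, header);
            Console.WriteLine(xml.ToString());

            // Deserialization likewise needs the concrete type in order to instantiate it.
            IMessageHeader roundTripped = (IMessageHeader)serializer.Deserialize(new StringReader(xml.ToString()));
            Console.WriteLine(roundTripped.FromAddress);
        }
    }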
Q: Uninstall Command Fails Only in Release Mode I'm able to successfully uninstall a third-party application via the command line and via a custom Inno Setup installer. Command line Execution: MSIEXEC.exe /x {14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn Inno Setup Command: [Run] Filename: msiexec.exe; Flags: runhidden waituntilterminated; Parameters: "/x {{14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn"; StatusMsg: "Uninstalling Service..."; I am also able to uninstall the application programmatically when executing the following C# code in debug mode. C# Code: string fileName = "MSIEXEC.exe"; string arguments = "/x {14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn"; ProcessStartInfo psi = new ProcessStartInfo(fileName, arguments) { CreateNoWindow = true, UseShellExecute = false, RedirectStandardOutput = true }; Process process = Process.Start(psi); string errorMsg = process.StandardOutput.ReadToEnd(); process.WaitForExit(); The same C# code, however, produces the following failure output when run as a compiled, deployed Windows Service: "This action is only valid for products that are currently installed." Additional Comments: The Windows Service which is issuing the uninstall command is running on the same machine as the code being tested in Debug Mode. The Windows Service is running/logged on as the Local system account. I have consulted my application logs and I have validated that the executed command arguments are thhe same in both debug and release mode. I have consulted the Event Viewer but it doesn't offer any clues. Thoughts? Any help would be greatly appreciated. Thanks. A: Step 1: Check the MSI error log files I'm suspicious that your problem is due to running as LocalSystem. The Local System account is not the same as a normal user account which happens to have admin rights. It has no access to the network, and its interaction with the registry and file system is quite different. From memory any requests to read/write to your 'home directory' or HKCU under the registry actually go into either the default user profile, or in the case of temp dirs, c:\windows\temp A: I've come across similar problems in the past with installation, a customer was using the SYSTEM account to install and this was causing all sorts of permission problems for non-administrative users. MSI log files aren't really going to help if the application doesn't appear "installed", I'd suggest starting with capturing the output of MSIINV.EXE under the system account, that will get you an "Inventory" of the currently installed programs (or what that user sees installed) http://blogs.msdn.com/brada/archive/2005/06/24/432209.aspx I think you probably need to go back to the drawing board and see if you really need the windows service to do the uninstall. You'll probably come across all sorts of Vista UAC issues if you haven't already... A: Thanks to those offering help. This appears to be a permissions issue. I have updated my service to run under an Administrator account and it was able to successfully uninstall the third-party application. To Orion's point, though the Local System account is a powerful account that has full access to the system -- http://technet.microsoft.com/en-us/library/cc782435.aspx -- it doesn't seem to have the necessary rights to perform the uninstall. [See additional comments for full story regarding the LocalSystem being able to uninstall application for which it installed.] A: This is bizarre. 
LocalSystem definitely has the privileges to install applications (that's how Windows Update and software deployment in Active Directory work), so it should be able to uninstall as well. Perhaps the application is initially installed per-user instead of per-machine? A: @Paul Lalonde The app's installer is wrapped within a custom InnoSetup Installer. The InnoSetup installer, in turn, is manually executed by the logged-in user. That said, the uninstall is triggered by a service running under the Local System account. Apparently, you were on to something. I put together a quick test which had the service running under the LocalSystem account install as well as uninstall the application and everything worked flawlessly. You were correct. The LocalSystem account has the required uninstall permissions for applications that it installs. You saved the day. Thanks for the feedback!
Uninstall Command Fails Only in Release Mode
I'm able to successfully uninstall a third-party application via the command line and via a custom Inno Setup installer. Command line Execution: MSIEXEC.exe /x {14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn Inno Setup Command: [Run] Filename: msiexec.exe; Flags: runhidden waituntilterminated; Parameters: "/x {{14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn"; StatusMsg: "Uninstalling Service..."; I am also able to uninstall the application programmatically when executing the following C# code in debug mode. C# Code: string fileName = "MSIEXEC.exe"; string arguments = "/x {14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn"; ProcessStartInfo psi = new ProcessStartInfo(fileName, arguments) { CreateNoWindow = true, UseShellExecute = false, RedirectStandardOutput = true }; Process process = Process.Start(psi); string errorMsg = process.StandardOutput.ReadToEnd(); process.WaitForExit(); The same C# code, however, produces the following failure output when run as a compiled, deployed Windows Service: "This action is only valid for products that are currently installed." Additional Comments: The Windows Service which is issuing the uninstall command is running on the same machine as the code being tested in Debug Mode. The Windows Service is running/logged on as the Local system account. I have consulted my application logs and I have validated that the executed command arguments are thhe same in both debug and release mode. I have consulted the Event Viewer but it doesn't offer any clues. Thoughts? Any help would be greatly appreciated. Thanks.
[ "Step 1: Check the MSI error log files\nI'm suspicious that your problem is due to running as LocalSystem. \nThe Local System account is not the same as a normal user account which happens to have admin rights. It has no access to the network, and its interaction with the registry and file system is quite different.\nFrom memory any requests to read/write to your 'home directory' or HKCU under the registry actually go into either the default user profile, or in the case of temp dirs, c:\\windows\\temp\n", "I've come across similar problems in the past with installation, a customer was using the SYSTEM account to install and this was causing all sorts of permission problems for non-administrative users. \nMSI log files aren't really going to help if the application doesn't appear \"installed\", I'd suggest starting with capturing the output of MSIINV.EXE under the system account, that will get you an \"Inventory\" of the currently installed programs (or what that user sees installed) http://blogs.msdn.com/brada/archive/2005/06/24/432209.aspx\nI think you probably need to go back to the drawing board and see if you really need the windows service to do the uninstall. You'll probably come across all sorts of Vista UAC issues if you haven't already...\n", "Thanks to those offering help. This appears to be a permissions issue. I have updated my service to run under an Administrator account and it was able to successfully uninstall the third-party application. To Orion's point, though the Local System account is a powerful account that has full access to the system -- http://technet.microsoft.com/en-us/library/cc782435.aspx -- it doesn't seem to have the necessary rights to perform the uninstall.\n[See additional comments for full story regarding the LocalSystem being able to uninstall application for which it installed.]\n", "This is bizarre. LocalSystem definitely has the privileges to install applications (that's how Windows Update and software deployment in Active Directory work), so it should be able to uninstall as well.\nPerhaps the application is initially installed per-user instead of per-machine?\n", "@Paul Lalonde\nThe app's installer is wrapped within a custom InnoSetup Installer. The InnoSetup installer, in turn, is manually executed by the logged in user. That said, the uninstall is trigged by a service running under the Local System account. \nApparently, you were on to something. I put together a quick test which had the service running under the LocalSystem account install as well as uninstall the application and everything worked flawlessly. You were correct. The LocalSystem account has required uninstall permissions for applications in which it installs. You saved the day. Thanks for the feedback! \n" ]
[ 2, 2, 1, 0, 0 ]
[]
[]
[ "c#", "installation", "service" ]
stackoverflow_0000055482_c#_installation_service.txt
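One extra diagnostic that would likely have shortened the hunt above: when msiexec is launched from a service, capture its exit code and request a verbose log with the standard /l*v switch, so permission problems are written somewhere readable. The sketch below reuses the product code from the question; the log path is a placeholder and should be somewhere the service account can write.

    using System;
    using System.Diagnostics;

    class UninstallRunner
    {
        static void Main()
        {
            // Product code from the question; the log path is an assumption.
            string arguments = "/x {14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn /l*v \"C:\\Windows\\Temp\\uninstall.log\"";

            ProcessStartInfo psi = new ProcessStartInfo("msiexec.exe", arguments)
            {
                CreateNoWindow = true,
                UseShellExecute = false
            };

            using (Process process = Process.Start(psi))
            {
                process.WaitForExit();
                // 0 = success, 3010 = success but reboot required; anything else belongs in your service log.
                Console.WriteLine("msiexec exited with code {0}", process.ExitCode);
            }
        }
    }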
Q: Best method to obfuscate or secure .Net assemblies I'm looking for a technique or tool which we can use to obfuscate or somehow secure our compiled c# code. The goal is not for user/data security but to hinder reverse engineering of some of the technology in our software. This is not for use on the web, but for a desktop application. So, do you know of any tools available to do this type of thing? (They need not be free) What kind of performance implications do they have if any? Does this have any negative side effects when using a debugger during development? We log stack traces of problems in the field. How would obfuscation affect this? A: This is a pretty good list of obfuscators from Visual Studio Marketplace Obfuscators ArmDot Crypto Obfuscator Demeanor for .NET DeployLX CodeVeil Dotfuscator .NET Obfuscator Semantic Designs: C# Source Code Obfuscator Smartassembly Spices.Net Xenocode Postbuild 2006 .NET Reactor I have not observed any performance issues when obfuscating my code. If your just sending text basted stack traces you might have a problem translating the method names. A: There are tools that also 'deobfuscate' obfuscated DLLs - I'd suggest turning the piece that needs to be protected into an unmanaged component. A: http://xheo.com/products/code-protection Done the job for me in the past.
Best method to obfuscate or secure .Net assemblies
I'm looking for a technique or tool which we can use to obfuscate or somehow secure our compiled c# code. The goal is not for user/data security but to hinder reverse engineering of some of the technology in our software. This is not for use on the web, but for a desktop application. So, do you know of any tools available to do this type of thing? (They need not be free) What kind of performance implications do they have if any? Does this have any negative side effects when using a debugger during development? We log stack traces of problems in the field. How would obfuscation affect this?
[ "This is a pretty good list of obfuscators from Visual Studio Marketplace\nObfuscators\n\nArmDot\nCrypto Obfuscator\nDemeanor for .NET\nDeployLX CodeVeil\nDotfuscator .NET Obfuscator\nSemantic Designs: C# Source Code Obfuscator\nSmartassembly\nSpices.Net\nXenocode Postbuild 2006\n.NET Reactor\n\nI have not observed any performance issues when obfuscating my code. If your just sending text basted stack traces you might have a problem translating the method names.\n", "There are tools that also 'deobfuscate' obfuscated DLLs - I'd suggest turning the piece that needs to be protected into an unmanaged component.\n", "http://xheo.com/products/code-protection\nDone the job for me in the past.\n" ]
[ 36, 0, 0 ]
[ "You are wasting your time going down that path. If you have code that you don't want anyone to see, you need to keep it behind closed doors. For example, only execute that code on your own server using a web service interface.\nObfuscating your code only deters the most casual of people. As the video game industry leaned a long time ago, no code is safe from cracking.\n" ]
[ -8 ]
[ ".net", ".net_2.0", "c#" ]
stackoverflow_0000059893_.net_.net_2.0_c#.txt
Q: What does the const operator mean when used with a method in C++? Given a declaration like this: class A { public: void Foo() const; }; What does it mean? Google turns up this: Member functions should be declared with the const keyword after them if they can operate on a const (this) object. If the function is not declared const, in can not be applied to a const object, and the compiler will give an error message. But I find that somewhat confusing; can anyone out there put it in better terms? Thanks. A: Consider a variation of your class A. class A { public: void Foo() const; void Moo(); private: int m_nState; // Could add mutable keyword if desired int GetState() const { return m_nState; } void SetState(int val) { m_nState = val; } }; const A *A1 = new A(); A *A2 = new A(); A1->Foo(); // OK A2->Foo(); // OK A1->Moo(); // Error - Not allowed to call non-const function on const object instance A2->Moo(); // OK The const keyword on a function declaration indicates to the compiler that the function is contractually obligated not to modify the state of A. Thus you are unable to call non-const functions within A::Foo nor change the value of member variables. To illustrate, Foo() may not invoke A::SetState as it is declared non-const, A::GetState however is ok because it is explicitly declared const. The member m_nState may not be changed either unless declared with the keyword mutable. One example of this usage of const is for 'getter' functions to obtain the value of member variables. @1800 Information: I forgot about mutable! The mutable keyword instructs the compiler to accept modifications to the member variable which would otherwise cause a compiler error. It is used when the function needs to modify state but the object is considered logically consistent (constant) regardless of the modification. A: This is not an answer, just a side comment. It is highly recommended to declare variables and constants const as much as possible. This communicates your intent to users of your class (even/especially yourself). The compiler will keep you honest to those intentions. -- i.e., it's like compiler checked documentation. By definition, this prevents state changes you weren't expecting and can, possibly, allow you to make reasonable assumptions while in your methods. const has a funny way of propagating through your code. Thus, it's a really good idea to start using const as early and as often as possible. Deciding to start const-ifying your code late in the game can be painful (easy, but annoying). If you're using a language with static, compile time checks it's a great idea to make as much use of them as possible... it's just another kind of testing really. A: Functions with const qualifier are not allowed to modify any member variables. For example: class A { int x; mutable int y; void f() const { x = 1; // error y = 1; // ok because y is mutable } }; A: C++ objects can be declared to be const: const A obj = new A(); When an object is const, the only member functions that can be called on that object are functions declared to be const. Making an object const can be interpreted as making the object readonly. A const object cannot be changed, i.e. no data members of the object can be changed. Declaring a member function const means that the function is not allowed to make any changes to the data members of the object. A: Two suggested best practices from experience: (1) Declare const functions whenever possible. 
At first, I found this to be just extra work, but then I started passing my objects to functions with signatures like f(const Object& o), and suddenly the compiler barfed on a line in f such as o.GetAValue(), because I hadn't marked GetAValue as a const function. This can surprise you especially when you subclass something and don't mark your version of the virtual methods as const - in that case the compile could fail on some function you've never heard of before that was written for the base class. (2) Avoid mutable variables when it's practical. A tempting trap can be to allow read operations to alter state, such as if you're building a "smart" object that does lazy or asynchronous i/o operations. If you can manage this with only one small mutable variable (like a bool), then, in my experience, this makes sense. However, if you find yourself marking every member variable as mutable in order to keep some operations const, you're defeating the purpose of the const keyword. What can go wrong is that a function which thinks it's not altering your class (since it only calls const methods) my invoke a bug in your code, and it could take a lot of effort to even realize this bug is in your class, since the other coder (rightly) assumes your data is const because he or she is only calling const methods. A: const has a funny way of propagating through your code. Thus, it's a really good idea to start using const as early and as often as possible. Deciding to start const-ifying your code late in the game can be painful (easy, but annoying). Additionally, you will easily run into problems if methods that should be const aren't! This will creep through the code as well, and make it worse and worse. A: that will cause the method to not be able to alter any member variables of the object
What does the const operator mean when used with a method in C++?
Given a declaration like this: class A { public: void Foo() const; }; What does it mean? Google turns up this: Member functions should be declared with the const keyword after them if they can operate on a const (this) object. If the function is not declared const, it can not be applied to a const object, and the compiler will give an error message. But I find that somewhat confusing; can anyone out there put it in better terms? Thanks.
[ "Consider a variation of your class A.\nclass A {\npublic:\n void Foo() const;\n void Moo();\n\nprivate:\n int m_nState; // Could add mutable keyword if desired\n int GetState() const { return m_nState; }\n void SetState(int val) { m_nState = val; }\n};\n\nconst A *A1 = new A();\nA *A2 = new A();\n\nA1->Foo(); // OK\nA2->Foo(); // OK\n\nA1->Moo(); // Error - Not allowed to call non-const function on const object instance\nA2->Moo(); // OK\n\nThe const keyword on a function declaration indicates to the compiler that the function is contractually obligated not to modify the state of A. Thus you are unable to call non-const functions within A::Foo nor change the value of member variables.\nTo illustrate, Foo() may not invoke A::SetState as it is declared non-const, A::GetState however is ok because it is explicitly declared const. The member m_nState may not be changed either unless declared with the keyword mutable.\nOne example of this usage of const is for 'getter' functions to obtain the value of member variables.\n\n@1800 Information: I forgot about mutable!\n\nThe mutable keyword instructs the compiler to accept modifications to the member variable which would otherwise cause a compiler error. It is used when the function needs to modify state but the object is considered logically consistent (constant) regardless of the modification.\n", "This is not an answer, just a side comment. It is highly recommended to declare variables and constants const as much as possible. \n\nThis communicates your intent to users of your class (even/especially yourself).\nThe compiler will keep you honest to those intentions. -- i.e., it's like compiler checked documentation.\nBy definition, this prevents state changes you weren't expecting and can, possibly, allow you to make reasonable assumptions while in your methods.\nconst has a funny way of propagating through your code. Thus, it's a really good idea to start using const as early and as often as possible. Deciding to start const-ifying your code late in the game can be painful (easy, but annoying).\n\nIf you're using a language with static, compile time checks it's a great idea to make as much use of them as possible... it's just another kind of testing really.\n", "Functions with const qualifier are not allowed to modify any member variables. For example:\nclass A\n{\n int x;\n mutable int y;\n\n void f() const\n {\n x = 1; // error\n y = 1; // ok because y is mutable\n }\n};\n\n", "C++ objects can be declared to be const:\nconst A obj = new A();\n\nWhen an object is const, the only member functions that can be called on that object are functions declared to be const. Making an object const can be interpreted as making the object readonly. A const object cannot be changed, i.e. no data members of the object can be changed. Declaring a member function const means that the function is not allowed to make any changes to the data members of the object.\n", "Two suggested best practices from experience:\n(1) Declare const functions whenever possible. At first, I found this to be just extra work, but then I started passing my objects to functions with signatures like f(const Object& o), and suddenly the compiler barfed on a line in f such as o.GetAValue(), because I hadn't marked GetAValue as a const function. 
This can surprise you especially when you subclass something and don't mark your version of the virtual methods as const - in that case the compile could fail on some function you've never heard of before that was written for the base class.\n(2) Avoid mutable variables when it's practical. A tempting trap can be to allow read operations to alter state, such as if you're building a \"smart\" object that does lazy or asynchronous i/o operations. If you can manage this with only one small mutable variable (like a bool), then, in my experience, this makes sense. However, if you find yourself marking every member variable as mutable in order to keep some operations const, you're defeating the purpose of the const keyword. What can go wrong is that a function which thinks it's not altering your class (since it only calls const methods) my invoke a bug in your code, and it could take a lot of effort to even realize this bug is in your class, since the other coder (rightly) assumes your data is const because he or she is only calling const methods.\n", "\nconst has a funny way of propagating through your code. Thus, it's a really good idea to start using const as early and as often as possible. Deciding to start const-ifying your code late in the game can be painful (easy, but annoying).\n\nAdditionally, you will easily run into problems if methods that should be const aren't! This will creep through the code as well, and make it worse and worse.\n", "that will cause the method to not be able to alter any member variables of the object\n" ]
[ 12, 5, 3, 2, 2, 2, 1 ]
[]
[]
[ "c++" ]
stackoverflow_0000049035_c++.txt
Q: Algorithm to find a common multiplier to convert decimal numbers to whole numbers I have an array of numbers that potentially have up to 8 decimal places and I need to find the smallest common number I can multiply them by so that they are all whole numbers. I need this so all the original numbers can all be multiplied out to the same scale and be processed by a sealed system that will only deal with whole numbers, then I can retrieve the results and divide them by the common multiplier to get my relative results. Currently we do a few checks on the numbers and multiply by 100 or 1,000,000, but the processing done by the *sealed system can get quite expensive when dealing with large numbers so multiplying everything by a million just for the sake of it isn’t really a great option. As an approximation lets say that the sealed algorithm gets 10 times more expensive every time you multiply by a factor of 10. What is the most efficient algorithm, that will also give the best possible result, to accomplish what I need and is there a mathematical name and/or formula for what I’m need? *The sealed system isn’t really sealed. I own/maintain the source code for it but its 100,000 odd lines of proprietary magic and it has been thoroughly bug and performance tested, altering it to deal with floats is not an option for many reasons. It is a system that creates a grid of X by Y cells, then rects that are X by Y are dropped into the grid, “proprietary magic” occurs and results are spat out – obviously this is an extremely simplified version of reality, but it’s a good enough approximation. So far there are quiet a few good answers and I wondered how I should go about choosing the ‘correct’ one. To begin with I figured the only fair way was to create each solution and performance test it, but I later realised that pure speed wasn’t the only relevant factor – an more accurate solution is also very relevant. I wrote the performance tests anyway, but currently the I’m choosing the correct answer based on speed as well accuracy using a ‘gut feel’ formula. My performance tests process 1000 different sets of 100 randomly generated numbers. Each algorithm is tested using the same set of random numbers. Algorithms are written in .Net 3.5 (although thus far would be 2.0 compatible) I tried pretty hard to make the tests as fair as possible. Greg – Multiply by large number and then divide by GCD – 63 milliseconds Andy – String Parsing – 199 milliseconds Eric – Decimal.GetBits – 160 milliseconds Eric – Binary search – 32 milliseconds Ima – sorry I couldn’t figure out a how to implement your solution easily in .Net (I didn’t want to spend too long on it) Bill – I figure your answer was pretty close to Greg’s so didn’t implement it. I’m sure it’d be a smidge faster but potentially less accurate. So Greg’s Multiply by large number and then divide by GCD” solution was the second fastest algorithm and it gave the most accurate results so for now I’m calling it correct. I really wanted the Decimal.GetBits solution to be the fastest, but it was very slow, I’m unsure if this is due to the conversion of a Double to a Decimal or the Bit masking and shifting. 
There should be a similar usable solution for a straight Double using BitConverter.GetBytes and some knowledge contained here: http://blogs.msdn.com/bclteam/archive/2007/05/29/bcl-refresher-floating-point-types-the-good-the-bad-and-the-ugly-inbar-gazit-matthew-greig.aspx but my eyes just kept glazing over every time I read that article and I eventually ran out of time to try to implement a solution. I’m always open to other solutions if anyone can think of something better. A: I'd multiply by something sufficiently large (100,000,000 for 8 decimal places), then divide by the GCD of the resulting numbers. You'll end up with a pile of smallest integers that you can feed to the other algorithm. After getting the result, reverse the process to recover your original range. A: If you want to find some integer N so that N*x is an exact integer for every float x in a given set, then you have a basically unsolvable problem. Suppose x = the smallest positive float your type can represent, say it's 10^-30. If you multiply all your numbers by 10^30, and then try to represent them in binary (otherwise, why are you even trying so hard to make them ints?), then you'll lose basically all the information of the other numbers due to overflow. So here are two suggestions: If you have control over all the related code, find another approach. For example, if you have some function that takes only int's, but you have floats, and you want to stuff your floats into the function, just re-write or overload this function to accept floats as well. If you don't have control over the part of your system that requires int's, then choose a precision that you care about, accept that you will simply have to lose some information sometimes (but it will always be "small" in some sense), and then just multiply all your float's by that constant, and round to the nearest integer. By the way, if you're dealing with fractions, rather than float's, then it's a different game. If you have a bunch of fractions a/b, c/d, e/f; and you want a least common multiplier N such that N*(each fraction) = an integer, then N = abc / gcd(a,b,c); and gcd(a,b,c) = gcd(a, gcd(b, c)). You can use Euclid's algorithm to find the gcd of any two numbers. A: Multiply all the numbers by 10 until you have integers. Divide by 2,3,5,7 while you still have all integers. I think that covers all cases. 2.1 * 10/7 -> 3 0.008 * 10^3/2^3 -> 1 That's assuming your multiplier can be a rational fraction. A: What language are you programming in? Something like myNumber.ToString().Substring(myNumber.ToString().IndexOf(".")+1).Length would give you the number of decimal places for a double in C#. You could run each number through that and find the largest number of decimal places (x), then multiply each number by 10 to the power of x. Edit: Out of curiosity, what is this sealed system which you can pass only integers to? A: Greg: Nice solution but won't calculating a GCD that's common in an array of 100+ numbers get a bit expensive? And how would you go about that? It's easy to do GCD for two numbers but for 100 it becomes more complex (I think). Evil Andy: I'm programming in .Net and the solution you pose is pretty much a match for what we do now. I didn't want to include it in my original question cause I was hoping for some outside the box (or my box anyway) thinking and I didn't want to taint people's answers with a potential solution.
While I don't have any solid performance statistics (because I haven't had any other method to compare it against) I know the string parsing would be relatively expensive and I figured a purely mathematical solution could potentially be more efficient. To be fair the current string parsing solution is in production and there have been no complaints about its performance yet (its even in production in a separate system in a VB6 format and no complaints there either). It's just that it doesn't feel right, I guess it offends my programing sensibilities - but it may well be the best solution. That said I'm still open to any other solutions, purely mathematical or otherwise. A: In a loop get mantissa and exponent of each number as integers. You can use frexp for exponent, but I think bit mask will be required for mantissa. Find minimal exponent. Find most significant digits in mantissa (loop through bits looking for last "1") - or simply use predefined number of significant digits. Your multiple is then something like 2^(numberOfDigits-minMantissa). "Something like" because I don't remember biases/offsets/ranges, but I think idea is clear enough. A: So basically you want to determine the number of digits after the decimal point for each number. This would be rather easier if you had the binary representation of the number. Are the numbers being converted from rationals or scientific notation earlier in your program? If so, you could skip the earlier conversion and have a much easier time. Otherwise you might want to pass each number to a function in an external DLL written in C, where you could work with the floating point representation directly. Or you could cast the numbers to decimal and do some work with Decimal.GetBits. The fastest approach I can think of in-place and following your conditions would be to find the smallest necessary power-of-ten (or 2, or whatever) as suggested before. But instead of doing it in a loop, save some computation by doing binary search on the possible powers. Assuming a maximum of 8, something like: int NumDecimals( double d ) { // make d positive for clarity; it won't change the result if( d<0 ) d=-d; // now do binary search on the possible numbers of post-decimal digits to // determine the actual number as quickly as possible: if( NeedsMore( d, 10e4 ) ) { // more than 4 decimals if( NeedsMore( d, 10e6 ) ) { // > 6 decimal places if( NeedsMore( d, 10e7 ) ) return 10e8; return 10e7; } else { // <= 6 decimal places if( NeedsMore( d, 10e5 ) ) return 10e6; return 10e5; } } else { // <= 4 decimal places // etc... } } bool NeedsMore( double d, double e ) { // check whether the representation of D has more decimal points than the // power of 10 represented in e. return (d*e - Math.Floor( d*e )) > 0; } PS: you wouldn't be passing security prices to an option pricing engine would you? It has exactly the flavor...
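As a rough sketch of Greg's "multiply by a large number, then divide by the GCD" idea in .Net terms (the class and method names, and the fixed 10^8 scale factor, are assumptions for illustration rather than code from the thread):

    using System;
    using System.Linq;

    static class CommonMultiplier
    {
        // Scale everything up by 10^8 (the stated maximum of 8 decimal places),
        // then divide the whole set by the GCD of the scaled values and the scale
        // itself, so the integers fed to the sealed system stay as small as possible.
        // 'multiplier' is what the originals were effectively multiplied by;
        // divide the sealed system's results by it afterwards.
        public static long[] ToWholeNumbers(double[] values, out long multiplier)
        {
            const long scale = 100000000; // 10^8
            long[] scaled = values.Select(v => (long)Math.Round(v * scale)).ToArray();

            long gcd = scaled.Aggregate(scale, (acc, n) => Gcd(acc, Math.Abs(n)));
            multiplier = scale / gcd;
            return scaled.Select(n => n / gcd).ToArray();
        }

        private static long Gcd(long a, long b)
        {
            while (b != 0)
            {
                long t = a % b;
                a = b;
                b = t;
            }
            return a;
        }
    }

For example, { 0.25, 0.1 } scales to { 25000000, 10000000 }, the GCD (including the scale itself, which keeps the multiplier an exact power-of-ten divisor) is 5,000,000, so the call returns { 5, 2 } with a multiplier of 20.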
Algorithm to find a common multiplier to convert decimal numbers to whole numbers
I have an array of numbers that potentially have up to 8 decimal places and I need to find the smallest common number I can multiply them by so that they are all whole numbers. I need this so all the original numbers can all be multiplied out to the same scale and be processed by a sealed system that will only deal with whole numbers, then I can retrieve the results and divide them by the common multiplier to get my relative results. Currently we do a few checks on the numbers and multiply by 100 or 1,000,000, but the processing done by the *sealed system can get quite expensive when dealing with large numbers so multiplying everything by a million just for the sake of it isn’t really a great option. As an approximation let’s say that the sealed algorithm gets 10 times more expensive every time you multiply by a factor of 10. What is the most efficient algorithm, that will also give the best possible result, to accomplish what I need, and is there a mathematical name and/or formula for what I need? *The sealed system isn’t really sealed. I own/maintain the source code for it but it’s 100,000-odd lines of proprietary magic and it has been thoroughly bug and performance tested; altering it to deal with floats is not an option for many reasons. It is a system that creates a grid of X by Y cells, then rects that are X by Y are dropped into the grid, “proprietary magic” occurs and results are spat out – obviously this is an extremely simplified version of reality, but it’s a good enough approximation. So far there are quite a few good answers and I wondered how I should go about choosing the ‘correct’ one. To begin with I figured the only fair way was to create each solution and performance test it, but I later realised that pure speed wasn’t the only relevant factor – a more accurate solution is also very relevant. I wrote the performance tests anyway, but currently I’m choosing the correct answer based on speed as well as accuracy using a ‘gut feel’ formula. My performance tests process 1000 different sets of 100 randomly generated numbers. Each algorithm is tested using the same set of random numbers. Algorithms are written in .Net 3.5 (although thus far they would be 2.0 compatible) and I tried pretty hard to make the tests as fair as possible. Greg – Multiply by large number and then divide by GCD – 63 milliseconds Andy – String Parsing – 199 milliseconds Eric – Decimal.GetBits – 160 milliseconds Eric – Binary search – 32 milliseconds Ima – sorry, I couldn’t figure out how to implement your solution easily in .Net (I didn’t want to spend too long on it) Bill – I figure your answer was pretty close to Greg’s so didn’t implement it. I’m sure it’d be a smidge faster but potentially less accurate. So Greg’s “Multiply by large number and then divide by GCD” solution was the second fastest algorithm and it gave the most accurate results, so for now I’m calling it correct. I really wanted the Decimal.GetBits solution to be the fastest, but it was very slow; I’m unsure if this is due to the conversion of a Double to a Decimal or the bit masking and shifting. There should be a similar usable solution for a straight Double using BitConverter.GetBytes and some knowledge contained here: http://blogs.msdn.com/bclteam/archive/2007/05/29/bcl-refresher-floating-point-types-the-good-the-bad-and-the-ugly-inbar-gazit-matthew-greig.aspx but my eyes just kept glazing over every time I read that article and I eventually ran out of time to try to implement a solution. 
I’m always open to other solutions if anyone can think of something better.
[ "I'd multiply by something sufficiently large (100,000,000 for 8 decimal places), then divide by the GCD of the resulting numbers. You'll end up with a pile of smallest integers that you can feed to the other algorithm. After getting the result, reverse the process to recover your original range.\n", "If you want to find some integer N so that N*x is also an exact integer for a set of floats x in a given set are all integers, then you have a basically unsolvable problem. Suppose x = the smallest positive float your type can represent, say it's 10^-30. If you multiply all your numbers by 10^30, and then try to represent them in binary (otherwise, why are you even trying so hard to make them ints?), then you'll lose basically all the information of the other numbers due to overflow.\nSo here are two suggestions:\n\nIf you have control over all the related code, find another\napproach. For example, if you have some function that takes only\nint's, but you have floats, and you want to stuff your floats into\nthe function, just re-write or overload this function to accept\nfloats as well.\nIf you don't have control over the part of your system that requires\nint's, then choose a precision to which you care about, accept that\nyou will simply have to lose some information sometimes (but it will\nalways be \"small\" in some sense), and then just multiply all your\nfloat's by that constant, and round to the nearest integer.\n\nBy the way, if you're dealing with fractions, rather than float's, then it's a different game. If you have a bunch of fractions a/b, c/d, e/f; and you want a least common multiplier N such that N*(each fraction) = an integer, then N = abc / gcd(a,b,c); and gcd(a,b,c) = gcd(a, gcd(b, c)). You can use Euclid's algorithm to find the gcd of any two numbers.\n", "\nMultiple all the numbers by 10\nuntil you have integers.\nDivide\nby 2,3,5,7 while you still have all\nintegers.\n\nI think that covers all cases.\n2.1 * 10/7 -> 3\n0.008 * 10^3/2^3 -> 1\n\nThat's assuming your multiplier can be a rational fraction.\n", "What language are you programming in? Something like\nmyNumber.ToString().Substring(myNumber.ToString().IndexOf(\".\")+1).Length\n\nwould give you the number of decimal places for a double in C#. You could run each number through that and find the largest number of decimal places(x), then multiply each number by 10 to the power of x.\nEdit: Out of curiosity, what is this sealed system which you can pass only integers to?\n", "Greg: Nice solution but won't calculating a GCD that's common in an array of 100+ numbers get a bit expensive? And how would you go about that? Its easy to do GCD for two numbers but for 100 it becomes more complex (I think).\nEvil Andy: I'm programing in .Net and the solution you pose is pretty much a match for what we do now. I didn't want to include it in my original question cause I was hoping for some outside the box (or my box anyway) thinking and I didn't want to taint peoples answers with a potential solution. While I don't have any solid performance statistics (because I haven't had any other method to compare it against) I know the string parsing would be relatively expensive and I figured a purely mathematical solution could potentially be more efficient.\nTo be fair the current string parsing solution is in production and there have been no complaints about its performance yet (its even in production in a separate system in a VB6 format and no complaints there either). 
It's just that it doesn't feel right, I guess it offends my programing sensibilities - but it may well be the best solution.\nThat said I'm still open to any other solutions, purely mathematical or otherwise.\n", "In a loop get mantissa and exponent of each number as integers. You can use frexp for exponent, but I think bit mask will be required for mantissa. Find minimal exponent. Find most significant digits in mantissa (loop through bits looking for last \"1\") - or simply use predefined number of significant digits.\nYour multiple is then something like 2^(numberOfDigits-minMantissa). \"Something like\" because I don't remember biases/offsets/ranges, but I think idea is clear enough.\n", "So basically you want to determine the number of digits after the decimal point for each number.\nThis would be rather easier if you had the binary representation of the number. Are the numbers being converted from rationals or scientific notation earlier in your program? If so, you could skip the earlier conversion and have a much easier time. Otherwise you might want to pass each number to a function in an external DLL written in C, where you could work with the floating point representation directly. Or you could cast the numbers to decimal and do some work with Decimal.GetBits.\nThe fastest approach I can think of in-place and following your conditions would be to find the smallest necessary power-of-ten (or 2, or whatever) as suggested before. But instead of doing it in a loop, save some computation by doing binary search on the possible powers. Assuming a maximum of 8, something like:\nint NumDecimals( double d )\n{\n // make d positive for clarity; it won't change the result\n if( d<0 ) d=-d;\n\n // now do binary search on the possible numbers of post-decimal digits to \n // determine the actual number as quickly as possible:\n\n if( NeedsMore( d, 10e4 ) )\n {\n // more than 4 decimals\n if( NeedsMore( d, 10e6 ) )\n {\n // > 6 decimal places\n if( NeedsMore( d, 10e7 ) ) return 10e8;\n return 10e7;\n }\n else\n {\n // <= 6 decimal places\n if( NeedsMore( d, 10e5 ) ) return 10e6;\n return 10e5;\n }\n }\n else\n {\n // <= 4 decimal places\n // etc...\n }\n\n}\n\nbool NeedsMore( double d, double e )\n{\n // check whether the representation of D has more decimal points than the \n // power of 10 represented in e.\n return (d*e - Math.Floor( d*e )) > 0;\n}\n\nPS: you wouldn't be passing security prices to an option pricing engine would you? It has exactly the flavor...\n" ]
[ 6, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "algorithm", "math" ]
stackoverflow_0000058493_algorithm_math.txt
Q: The Google Calculator Glitch, could float vs. double be a possible reason? I did this just for kicks (so, not exactly a question, I can see the downmodding happening already) but, in light of Google's newfound inability to do math correctly (check it! according to Google 500,000,000,000,002 - 500,000,000,000,001 = 0), I figured I'd try the following in C to run a little theory. int main() { char* a = "399999999999999"; char* b = "399999999999998"; float da = atof(a); float db = atof(b); printf("%s - %s = %f\n", a, b, da-db); a = "500000000000002"; b = "500000000000001"; da = atof(a); db = atof(b); printf("%s - %s = %f\n", a, b, da-db); } When you run this program, you get the following 399999999999999 - 399999999999998 = 0.000000 500000000000002 - 500000000000001 = 0.000000 It would seem like Google is using simple 32 bit floating precision (the error here), if you switch float for double in the above code, you fix the issue! Could this be it? /mp A: For more of this kind of silliness see this nice article pertaining to Windows calculator. When you change the insides, nobody notices The innards of Calc - the arithmetic engine - was completely thrown away and rewritten from scratch. The standard IEEE floating point library was replaced with an arbitrary-precision arithmetic library. This was done after people kept writing ha-ha articles about how Calc couldn't do decimal arithmetic correctly, that for example computing 10.21 - 10.2 resulted in 0.0100000000000016. A: It would seem like Google is using simple 32 bit floating precision (the error here), if you switch float for double in the above code, you fix the issue! Could this be it? No, you just defer the issue. doubles still exhibit the same issue, just with larger numbers. A: in C#, try (double.maxvalue == (double.maxvalue - 100)) , you'll get true ... but that's what it is supposed to be: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems thinking about it, you have 64 bit representing a number greater than 2^64 (double.maxvalue), so inaccuracy is expected. A: @ebel thinking about it, you have 64 bit representing a number greater than 2^64 (double.maxvalue), so inaccuracy is expected. 2^64 is not the maximum value of a double. 2^64 is the number of unique values that a double (or any other 64-bit type) can hold. Double.MaxValue is equal to 1.79769313486232e308. Inaccuracy with floating point numbers doesn't come from representing values larger than Double.MaxValue (which is impossible, excluding Double.PositiveInfinity). It comes from the fact that the desired range of values is simply too large to fit into the datatype. So we give up precision in exchange for a larger effective range. In essence, we are dropping significant digits in return for a larger exponent range. @DrPizza Not even; the IEEE encodings use multiple encodings for the same values. Specifically, NaN is represented by an exponent of all-bits-1, and then any non-zero value for the mantissa. As such, there are 2^52 NaNs for doubles, 2^23 NaNs for singles. True. I didn't account for duplicate encodings. There are actually 2^52-1 NaNs for doubles and 2^23-1 NaNs for singles, though. :p A: 2^64 is not the maximum value of a double. 2^64 is the number of unique values that a double (or any other 64-bit type) can hold. Double.MaxValue is equal to 1.79769313486232e308. Not even; the IEEE encodings use multiple encodings for the same values. Specifically, NaN is represented by an exponent of all-bits-1, and then any non-zero value for the mantissa. 
As such, there are 2^52 NaNs for doubles, 2^23 NaNs for singles. A: True. I didn't account for duplicate encodings. There are actually 2^52-1 NaNs for doubles and 2^23-1 NaNs for singles, though. :p Doh, forgot to subtract the infinities. A: The rough estimate version of this issue that I learned is that 32-bit floats give you 5 digits of precision and 64-bit floats give you 15 digits of precision. This will of course vary depending on how the floats are encoded, but it's a pretty good starting point.
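The same float-versus-double behaviour is easy to reproduce in C# as well; this is a small illustrative sketch, not code from the original post:

    using System;

    class FloatVsDouble
    {
        static void Main()
        {
            float fa = 500000000000002f, fb = 500000000000001f;
            double da = 500000000000002d, db = 500000000000001d;

            // A 32-bit float carries roughly 7 significant decimal digits, so both
            // 15-digit literals round to the same value and the difference is 0.
            Console.WriteLine(fa - fb); // 0

            // A 64-bit double carries roughly 15-16 significant digits, enough to
            // keep the two values distinct, so the difference comes out as 1.
            Console.WriteLine(da - db); // 1
        }
    }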
The Google Calculator Glitch, could float vs. double be a possible reason?
I did this just for kicks (so, not exactly a question, I can see the downmodding happening already) but, in light of Google's newfound inability to do math correctly (check it! according to Google 500,000,000,000,002 - 500,000,000,000,001 = 0), I figured I'd try the following in C to run a little theory. int main() { char* a = "399999999999999"; char* b = "399999999999998"; float da = atof(a); float db = atof(b); printf("%s - %s = %f\n", a, b, da-db); a = "500000000000002"; b = "500000000000001"; da = atof(a); db = atof(b); printf("%s - %s = %f\n", a, b, da-db); } When you run this program, you get the following 399999999999999 - 399999999999998 = 0.000000 500000000000002 - 500000000000001 = 0.000000 It would seem like Google is using simple 32 bit floating precision (the error here), if you switch float for double in the above code, you fix the issue! Could this be it? /mp
[ "For more of this kind of silliness see this nice article pertaining to Windows calculator.\nWhen you change the insides, nobody notices\n\nThe innards of Calc - the arithmetic\n engine - was completely thrown away\n and rewritten from scratch. The\n standard IEEE floating point library\n was replaced with an\n arbitrary-precision arithmetic\n library. This was done after people\n kept writing ha-ha articles about how\n Calc couldn't do decimal arithmetic\n correctly, that for example computing\n 10.21 - 10.2 resulted in 0.0100000000000016.\n\n", "\nIt would seem like Google is using simple 32 bit floating precision (the error here), if you switch float for double in the above code, you fix the issue! Could this be it?\n\nNo, you just defer the issue. doubles still exhibit the same issue, just with larger numbers.\n", "in C#, try (double.maxvalue == (double.maxvalue - 100)) , you'll get true ...\nbut thats what it is supposed to be:\nhttp://en.wikipedia.org/wiki/Floating_point#Accuracy_problems \nthinking about it, you have 64 bit representing a number greater than 2^64 (double.maxvalue), so inaccuracy is expected. \n", "@ebel\n\nthinking about it, you have 64 bit representing a number greater than 2^64 (double.maxvalue), so inaccuracy is expected. \n\n2^64 is not the maximum value of a double. 2^64 is the number of unique values that a double (or any other 64-bit type) can hold. Double.MaxValue is equal to 1.79769313486232e308.\nInaccuracy with floating point numbers doesn't come from representing values larger than Double.MaxValue (which is impossible, excluding Double.PositiveInfinity). It comes from the fact that the desired range of values is simply too large to fit into the datatype. So we give up precision in exchange for a larger effective range. In essense, we are dropping significant digits in return for a larger exponent range.\n@DrPizza\n\nNot even; the IEEE encodings use multiple encodings for the same values. Specifically, NaN is represented by an exponent of all-bits-1, and then any non-zero value for the mantissa. As such, there are 252 NaNs for doubles, 223 NaNs for singles.\n\nTrue. I didn't account for duplicate encodings. There are actually 252-1 NaNs for doubles and 223-1 NaNs for singles, though. :p\n", "\n2^64 is not the maximum value of a double. 2^64 is the number of unique values that a double (or any other 64-bit type) can hold. Double.MaxValue is equal to 1.79769313486232e308.\n\nNot even; the IEEE encodings use multiple encodings for the same values. Specifically, NaN is represented by an exponent of all-bits-1, and then any non-zero value for the mantissa. As such, there are 252 NaNs for doubles, 223 NaNs for singles.\n", "\nTrue. I didn't account for duplicate encodings. There are actually 252-1 NaNs for doubles and 223-1 NaNs for singles, though. :p\n\nDoh, forgot to subtract the infinities.\n", "The rough estimate version of this issue that I learned is that 32-bit floats give you 5 digits of precision and 64-bit floats give you 15 digits of precision. This will of course vary depending on how the floats are encoded, but it's a pretty good starting point. \n" ]
[ 4, 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "c", "google_search", "math" ]
stackoverflow_0000027095_c_google_search_math.txt
Q: Select rows in dataset table based on other dataset table I have a dataset that has two tables in it. I want to do the following (or something like it). Is it possible, and do I have it correct? dsTabData.Tables("FilingTabs").Select("fs_ID not in (select fsp_fsid from ParentTabs)") How do you reference data from another table in the same dataset? A: ok ok before y'all flame me! ;) I did some more looking around online and found what looks like the stuff I need, now off to read some more from here: Navigating a Relationship Between Tables
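For reference, a rough C# sketch of the relationship-based approach the asker found (the table and column names come from the question; the relation name and the rest of the code are assumptions for illustration, and the two ID columns are assumed to share a data type):

    using System;
    using System.Data;
    using System.Linq;

    static class TabQueries
    {
        // Returns the FilingTabs rows whose fs_ID does not appear in ParentTabs.fsp_fsid,
        // i.e. the "fs_ID not in (select fsp_fsid from ParentTabs)" rows.
        public static DataRow[] TabsWithoutParents(DataSet dsTabData)
        {
            DataTable filingTabs = dsTabData.Tables["FilingTabs"];
            DataTable parentTabs = dsTabData.Tables["ParentTabs"];

            // Relate the two tables inside the DataSet; createConstraints is false so
            // orphaned fsp_fsid values won't throw.
            if (!dsTabData.Relations.Contains("FilingToParent"))
            {
                dsTabData.Relations.Add("FilingToParent",
                    filingTabs.Columns["fs_ID"],
                    parentTabs.Columns["fsp_fsid"],
                    false);
            }

            return filingTabs.Rows.Cast<DataRow>()
                .Where(row => row.GetChildRows("FilingToParent").Length == 0)
                .ToArray();
        }
    }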
Select rows in dataset table based on other dataset table
I have a dataset that has two tables in it. I want to do the following (or something like it). Is it possible, and do I have it correct? dsTabData.Tables("FilingTabs").Select("fs_ID not in (select fsp_fsid from ParentTabs)") How do you reference data from another table in the same dataset?
[ "ok ok before y'all flame me! ;)\nI did some more looking around online and found what looks like the stuff I need, now off to read some more from here:\nNavigating a Relationship Between Tables\n" ]
[ 1 ]
[]
[]
[ ".net", "dataset", "select", "vb.net" ]
stackoverflow_0000060122_.net_dataset_select_vb.net.txt
Q: Change priority of the current process in C On Windows I can do: HANDLE hCurrentProcess = GetCurrentProcess(); SetPriorityClass(hCurrentProcess, ABOVE_NORMAL_PRIORITY_CLASS); How can I do the same thing on *nix? A: Try: #include <sys/time.h> #include <sys/resource.h> int main(){ setpriority(PRIO_PROCESS, 0, -20); } Note that you must be running as superuser for this to work. (for more info, type 'man setpriority' at a prompt.) A: If doing something like this under unix you want to (as root) chmod your task and set the s bit. Then you can change who you are running as, what your priority is, your thread scheduling, etc. at run time. It is great as long as you are not writing a massively multithreaded app with a bug in it so that you take over a 48 CPU box and nobody can shut you down because you have each CPU spinning at 100% with all threads set to SCHED_FIFO (runs to completion) running as root. Nah .. I wouldn't be speaking from experience .... A: @ allain Can you lower your own process' priority without being superuser? Sure. Be aware, however, that this is a one-way street. You can't even get back to where you started. And even fairly small reductions in priority can have startlingly large effects on running time when there is significant load on the system.
Change priority of the current process in C
On Windows I can do: HANDLE hCurrentProcess = GetCurrentProcess(); SetPriorityClass(hCurrentProcess, ABOVE_NORMAL_PRIORITY_CLASS); How can I do the same thing on *nix?
[ "Try:\n#include <sys/time.h>\n#include <sys/resource.h>\n\nint main(){\n setpriority(PRIO_PROCESS, 0, -20);\n}\n\nNote that you must be running as superuser for this to work.\n(for more info, type 'man setpriority' at a prompt.)\n", "If doing something like this under unix your want to (as root) chmod you task and set the s bit. Then you can change who you are running as, what your priority is, your thread scheduling, etc. at run time. \nIt is great as long as you are not writing a massively multithreaded app with a bug in it so that you take over a 48 CPU box and nobody can shut you down because your have each CPU spinning at 100% with all thread set to SHED_FIFO (runs to completion) running as root. \nNah .. I wouldn't be speaking from experience .... \n", "@ allain Can you lower your own process' priority without being superuser?\nSure. Be aware, however, that this is a one way street. You can't even get back to where you started. And even fairly small reductions in priority can have startlingly large effects on running time when there is significant load on the system.\n" ]
[ 25, 5, 2 ]
[]
[]
[ "c", "cross_platform", "process_management", "unix" ]
stackoverflow_0000029621_c_cross_platform_process_management_unix.txt
Q: What's the best way to do a throbber in C#? Specifically what I am looking to do is make the icons for the Nodes in my System.Windows.Forms.TreeView control throb while a long loading operation is taking place. A: If you load each frame into an ImageList, you can use a loop to update to each frame. Example: bool runThrobber = true; private void AnimateThrobber(TreeNode animatedNode) { BackgroundWorker bg = new BackgroundWorker(); bg.DoWork += new DoWorkEventHandler(delegate { while (runThrobber) { this.Invoke((MethodInvoker)delegate { animatedNode.SelectedImageIndex++; if (animatedNode.SelectedImageIndex >= imageList1.Images.Count) animatedNode.SelectedImageIndex = 0; }); Thread.Sleep(100); } }); bg.RunWorkerAsync(); } Obviously there's more than a few ways to implement this, but here's the basic idea.
What's the best way to do a throbber in C#?
Specifically what I am looking to do is make the icons for the Nodes in my System.Windows.Forms.TreeView control throb while a long loading operation is taking place.
[ "If you load each frame into an ImageList, you can use a loop to update to each frame. \nExample:\n\n bool runThrobber = true;\n private void AnimateThrobber(TreeNode animatedNode)\n {\n BackgroundWorker bg = new BackgroundWorker();\n bg.DoWork += new DoWorkEventHandler(delegate\n {\n while (runThrobber)\n {\n this.Invoke((MethodInvoker)delegate\n {\n animatedNode.SelectedImageIndex++;\n if (animatedNode.SelectedImageIndex >= imageList1.Images.Count) > animatedNode.SelectedImageIndex = 0;\n });\n Thread.Sleep(100);\n }\n });\n bg.RunWorkerAsync();\n }\n\n\nObviously there's more than a few ways to implement this, but here's the basic idea.\n" ]
[ 4 ]
[]
[]
[ "c#", "treeview" ]
stackoverflow_0000060151_c#_treeview.txt
Q: Is it possible to embed and use a portable executable in a .net DLL? The easiest way to think of my question is to think of a single, simple unix command (albeit, this is for windows) and I need programmatic access to run it. I have a single command-line based executable that performs some unit of work. I want to call that executable with the .net process library, as I can do with any other executable. However, it dawned on me that there is potential for the dll to become useless or break with unintended updates to the executable or a non-existent executable. Is it possible to run the executable from the Process object in the .net framework, as I would an external executable file? A: No, you can't execute it directly. You could probably unpack it to a temporary directory and execute it from there. A: Is this where PInvoke can help? A: Depending on the functionality of the command line program you want to run, it might be possible to duplicate the functionality in PowerShell, where you can embed the PowerShell runtime in your .NET application.
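A rough C# sketch of the "unpack it to a temporary directory and execute it from there" suggestion; the resource name is a placeholder (default namespace plus folder plus file name) and the error handling is minimal:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Reflection;

    static class EmbeddedTool
    {
        // Copies an executable stored as an embedded resource in this DLL out to a
        // temporary file, runs it with the given arguments, and returns its exit code.
        public static int Run(string arguments)
        {
            const string resourceName = "MyLib.Tools.worker.exe"; // placeholder name
            string exePath = Path.Combine(Path.GetTempPath(), "worker.exe");

            Assembly asm = Assembly.GetExecutingAssembly();
            using (Stream resource = asm.GetManifestResourceStream(resourceName))
            {
                if (resource == null)
                    throw new InvalidOperationException("Embedded executable not found: " + resourceName);
                using (FileStream file = File.Create(exePath))
                    resource.CopyTo(file);
            }

            using (Process p = Process.Start(new ProcessStartInfo(exePath, arguments) { UseShellExecute = false }))
            {
                p.WaitForExit();
                return p.ExitCode;
            }
        }
    }

Since the worker ships inside the DLL itself, the "non-existent executable" and "unintended updates" concerns go away; the trade-offs are the temp-file extraction step and having to rebuild the DLL to update the tool.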
Is it possible to embed and use a portable executable in a .net DLL?
The easiest way to think of my question is to think of a single, simple unix command (albeit, this is for windows) and I need programmatic access to run it. I have a single command-line based executable that performs some unit of work. I want to call that executable with the .net process library, as I can do with any other executable. However, it dawned on me that there is potential for the dll to become useless or break with unintended updates to the executable or a non-existent executable. Is it possible to run the executable from the Process object in the .net framework, as I would an external executable file?
[ "No, you can't execute it directly. You could probably unpack it to a temporary directory and execute it from there.\n", "Is this where PInvoke can help?\n", "Depending on the functionality of the command line program you want to run, it might be possible to duplicate the functionality in PowerShell, where you can embed the PowerShell runtime in your .NET application.\n" ]
[ 1, 0, 0 ]
[]
[]
[ ".net", "command_line", "dll", "resources" ]
stackoverflow_0000060143_.net_command_line_dll_resources.txt
Q: How do I bind a regular expression to a key combination in emacs? For context, I am something of an emacs newbie. I haven't used it for very long, but have been using it more and more (I like it a lot). Also I'm comfortable with lisp, but not super familiar with elisp. What I need to do is bind a regular expression to a keyboard combination because I use this particular regex so often. What I've been doing: M-C-s ^.*Table\(\(.*\n\)*?GO\) Note, I used newline above, but I've found that for isearch-forward-regexp, you really need to replace the \n in the regular expression with the result of C-q Q-j. This inserts a literal newline (without ending the command) enabling me to put a newline into the expression and match across lines. How can I bind this to a key combination? I vaguely understand that I need to create an elisp function which executes isearch-forward-regexp with the expression, but I'm fuzzy on the details. I've searched google and found most documentation to be a tad confusing. How can I bind a regular expression to a key combination in emacs? Mike Stone had the best answer so far -- not exactly what I was looking for but it worked for what I needed Edit - this sort of worked, but after storing the macro, when I went back to use it later, I couldn't use it with C-x e. (i.e., if I reboot emacs and then type M-x macro-name, and then C-x e, I get a message in the minibuffer like 'no last kbd macro' or something similar) @Mike Stone - Thanks for the information. I tried creating a macro like so: C-x( M-C-s ^.*Table\(\(.*C-q C-J\)*?GO\) C-x) This created my macro, but when I executed my macro I didn't get the same highlighting that I ordinarily get when I use isearch-forward-regexp. Instead it just jumped to the end of the next match of the expression. So that doesn't really work for what I need. Any ideas? Edit: It looks like I can use macros to do what I want, I just have to think outside the box of isearch-forward-regexp. I'll try what you suggested. A: You can use macros, just do C-x ( then do everything for the macro, then C-x ) to end the macro, then C-x e will execute the last defined macro. Then, you can name it using M-x name-last-kbd-macro which lets you assign a name to it, which you can then invoke with M-x TESTIT, then store the definition using M-x insert-kbd-macro which will put the macro into your current buffer, and then you can store it in your .emacs file. Example: C-x( abc *return* C-x) Will define a macro to type "abc" and press return. C-xeee Executes the above macro immediately, 3 times (first e executes it, then following 2 e's will execute it twice more). M-x name-last-kbd-macro testit Names the macro to "testit" M-x testit Executes the just named macro (prints "abc" then return). M-x insert-kbd-macro Puts the following in your current buffer: (fset 'testit [?a ?b ?c return]) Which can then be saved in your .emacs file to use the named macro over and over again after restarting emacs. A: I've started with solving your problem literally, (defun search-maker (s) `(lambda () (interactive) (let ((regexp-search-ring (cons ,s regexp-search-ring)) ;add regexp to history (isearch-mode-map (copy-keymap isearch-mode-map))) (define-key isearch-mode-map (vector last-command-event) 'isearch-repeat-forward) ;make last key repeat (isearch-forward-regexp)))) ;` (global-set-key (kbd "C-. t") (search-maker "^.*Table\\(\\(.*\\n\\)*?GO\\)")) (global-set-key (kbd "<f6>") (search-maker "HELLO WORLD")) The keyboard sequence from (kbd ...) starts a new blank search. 
To actually search for your string, you press last key again as many times as you need. So C-. t t t or <f6> <f6> <f6>. The solution is basically a hack, but I'll leave it here if you want to experiment with it. The following is probably the closest to what you need, (defmacro define-isearch-yank (key string) `(define-key isearch-mode-map ,key (lambda () (interactive) (isearch-yank-string ,string)))) ;` (define-isearch-yank (kbd "C-. t") "^.*Table\\(\\(.*\\n\\)*?GO\\)") (define-isearch-yank (kbd "<f6>") "HELLO WORLD") The key combos now only work in isearch mode. You start the search normally, and then press key combos to insert your predefined string. A: @Justin: When executing a macro, it's a little different... incremental searches will just happen once, and you will have to execute the macro again if you want to search again. You can do more powerful and complex things though, such as search for a keyword, jump to the beginning of the line, mark, go to end of the line, M-w (to copy), then jump to another buffer, then C-y (paste), then jump back to the other buffer and end your macro. Then, each time you execute the macro you will be copying a line to the next buffer. The really cool thing about emacs macros is it will stop when it sees the bell... which happens when you fail to match an incremental search (among other things). So the above macro, you can do C-u 1000 C-x e which will execute the macro 1000 times... but since you did a search, it will only copy 1000 lines, OR UNTIL THE SEARCH FAILS! Which means if there are 100 matches, it will only execute the macro 100 times. EDIT: Check out C-hf highlight-lines-matching-regexp which will show the help of a command that highlights everything matching a regex... I don't know how to undo the highlighting though... anyways you could use a stored macro to highlight all matching the regex, and then another macro to find the next one...? FURTHER EDIT: M-x unhighlight-regexp will undo the highlighting, you have to enter the last regex though (but it defaults to the regex you used to highlight) A: In general, to define a custom keybinding in Emacs, you'd write (define-key global-map (kbd "C-c C-f") 'function-name) define-key is, unsurprisingly, the function to define a new key. global-map is the global keymap, as opposed to individual maps for each mode. (kbd "C-c C-f") returns a string representing the key sequence C-c C-f. There are other ways of doing this, including inputting the string directly, but this is usually the most straightforward since it takes the normal written representation. 'function-name is a symbol that's the name of the function. Now, unless your function is already defined, you'll want to define it before you use this. To do that, write (defun function-name (args) (interactive) stuff ...) defun defines a function - use C-h f defun for more specific information. The (interactive) there isn't really a function call; it tells the compiler that it's okay for the function to be called by the user using M-x function-name and via keybindings. Now, for interactive searching in particular, this is tricky; the isearch module doesn't really seem to be set up for what you're trying to do. But you can use this to do something similar.
How do I bind a regular expression to a key combination in emacs?
For context, I am something of an emacs newbie. I haven't used it for very long, but have been using it more and more (I like it a lot). Also I'm comfortable with lisp, but not super familiar with elisp. What I need to do is bind a regular expression to a keyboard combination because I use this particular regex so often. What I've been doing: M-C-s ^.*Table\(\(.*\n\)*?GO\) Note, I used newline above, but I've found that for isearch-forward-regexp, you really need to replace the \n in the regular expression with the result of C-q Q-j. This inserts a literal newline (without ending the command) enabling me to put a newline into the expression and match across lines. How can I bind this to a key combination? I vaguely understand that I need to create an elisp function which executes isearch-forward-regexp with the expression, but I'm fuzzy on the details. I've searched google and found most documentation to be a tad confusing. How can I bind a regular expression to a key combination in emacs? Mike Stone had the best answer so far -- not exactly what I was looking for but it worked for what I needed Edit - this sort of worked, but after storing the macro, when I went back to use it later, I couldn't use it with C-x e. (i.e., if I reboot emacs and then type M-x macro-name, and then C-x e, I get a message in the minibuffer like 'no last kbd macro' or something similar) @Mike Stone - Thanks for the information. I tried creating a macro like so: C-x( M-C-s ^.*Table\(\(.*C-q C-J\)*?GO\) C-x) This created my macro, but when I executed my macro I didn't get the same highlighting that I ordinarily get when I use isearch-forward-regexp. Instead it just jumped to the end of the next match of the expression. So that doesn't really work for what I need. Any ideas? Edit: It looks like I can use macros to do what I want, I just have to think outside the box of isearch-forward-regexp. I'll try what you suggested.
[ "You can use macros, just do C-x ( then do everything for the macro, then C-x ) to end the macro, then C-x e will execute the last defined macro. Then, you can name it using M-x name-last-kbd-macro which lets you assign a name to it, which you can then invoke with M-x TESTIT, then store the definition using M-x insert-kbd-macro which will put the macro into your current buffer, and then you can store it in your .emacs file.\nExample:\nC-x( abc *return* C-x)\n\nWill define a macro to type \"abc\" and press return.\nC-xeee\n\nExecutes the above macro immediately, 3 times (first e executes it, then following 2 e's will execute it twice more).\nM-x name-last-kbd-macro testit\n\nNames the macro to \"testit\"\nM-x testit\n\nExecutes the just named macro (prints \"abc\" then return).\nM-x insert-kbd-macro\n\nPuts the following in your current buffer:\n(fset 'testit\n [?a ?b ?c return])\n\nWhich can then be saved in your .emacs file to use the named macro over and over again after restarting emacs.\n", "I've started with solving your problem literally,\n(defun search-maker (s)\n `(lambda ()\n (interactive)\n (let ((regexp-search-ring (cons ,s regexp-search-ring)) ;add regexp to history\n (isearch-mode-map (copy-keymap isearch-mode-map)))\n (define-key isearch-mode-map (vector last-command-event) 'isearch-repeat-forward) ;make last key repeat\n (isearch-forward-regexp)))) ;`\n\n(global-set-key (kbd \"C-. t\") (search-maker \"^.*Table\\\\(\\\\(.*\\\\n\\\\)*?GO\\\\)\"))\n(global-set-key (kbd \"<f6>\") (search-maker \"HELLO WORLD\"))\n\nThe keyboard sequence from (kbd ...) starts a new blank search. To actually search for your string, you press last key again as many times as you need. So C-. t t t or <f6> <f6> <f6>. The solution is basically a hack, but I'll leave it here if you want to experiment with it.\nThe following is probably the closest to what you need,\n(defmacro define-isearch-yank (key string)\n `(define-key isearch-mode-map ,key \n (lambda ()\n (interactive) \n (isearch-yank-string ,string)))) ;`\n\n(define-isearch-yank (kbd \"C-. t\") \"^.*Table\\\\(\\\\(.*\\\\n\\\\)*?GO\\\\)\")\n(define-isearch-yank (kbd \"<f6>\") \"HELLO WORLD\")\n\nThe key combos now only work in isearch mode. You start the search normally, and then press key combos to insert your predefined string.\n", "@Justin:\nWhen executing a macro, it's a little different... incremental searches will just happen once, and you will have to execute the macro again if you want to search again. You can do more powerful and complex things though, such as search for a keyword, jump to the beginning of the line, mark, go to end of the line, M-w (to copy), then jump to another buffer, then C-y (paste), then jump back to the other buffer and end your macro. Then, each time you execute the macro you will be copying a line to the next buffer.\nThe really cool thing about emacs macros is it will stop when it sees the bell... which happens when you fail to match an incremental search (among other things). So the above macro, you can do C-u 1000 C-x e which will execute the macro 1000 times... but since you did a search, it will only copy 1000 lines, OR UNTIL THE SEARCH FAILS! Which means if there are 100 matches, it will only execute the macro 100 times.\nEDIT: Check out C-hf highlight-lines-matching-regexp which will show the help of a command that highlights everything matching a regex... I don't know how to undo the highlighting though... 
anyways you could use a stored macro to highlight all matching the regex, and then another macro to find the next one...?\nFURTHER EDIT: M-x unhighlight-regexp will undo the highlighting, you have to enter the last regex though (but it defaults to the regex you used to highlight)\n", "In general, to define a custom keybinding in Emacs, you'd write\n(define-key global-map (kbd \"C-c C-f\") 'function-name)\n\ndefine-key is, unsurprisingly, the function to define a new key. global-map is the global keymap, as opposed to individual maps for each mode. (kbd \"C-c C-f\") returns a string representing the key sequence C-c C-f. There are other ways of doing this, including inputting the string directly, but this is usually the most straightforward since it takes the normal written representation. 'function-name is a symbol that's the name of the function.\nNow, unless your function is already defined, you'll want to define it before you use this. To do that, write\n(defun function-name (args)\n (interactive)\n stuff\n ...)\n\ndefun defines a function - use C-h f defun for more specific information. The (interactive) there isn't really a function call; it tells the compiler that it's okay for the function to be called by the user using M-x function-name and via keybindings.\nNow, for interactive searching in particular, this is tricky; the isearch module doesn't really seem to be set up for what you're trying to do. But you can use this to do something similar.\n" ]
[ 6, 2, 1, 1 ]
[]
[]
[ "emacs", "lisp", "regex" ]
stackoverflow_0000010149_emacs_lisp_regex.txt
Q: How do you resize an IE browser window to 1024 x 768 In Firefox you can enter the following into the awesome bar and hit enter: javascript:self.resizeTo(1024,768); How do you do the same thing in IE? A: javascript:resizeTo(1024,768); vbscript:resizeto(1024,768) Will work in IE7, But consider using something like javascript:moveTo(0,0);resizeTo(1024,768); because IE7 doesn't allow the window to "resize" beyond the screen borders. If you work on a 1024,768 desktop, this is what happens...Firefox: 1024x768 Window, going behind the taskbar. If you drop the moveTo part, the top left corner of the window won't change position.(You still get a 1024x768 window) IE7: As close as possible to the requested size without obscuring the taskbar or allowing any part of the window to lie beyond the screen borders. safari: As close as possible to the requested size without obscuring the taskbar or allowing any part of the window to lie beyond the screen borders, but you can ommit the moveTo part. Safari will move the top left corner of the window for you. Opera: Nothing happens. Chrome: Nothing happens. A: Maybe not directly related if you were looking for only a JavaScript solution but you can use the free Windows utility Sizer to automatically resize any (browser) window to a predefined size like 800x600, 1024,768, etc. A: Your code works in IE, you just need to "Allow blocked Content" in the Security Toolbar A: Try: javascript:resizeTo(1024,768); This works in IE7 at least. A: It works in IE6, but I think IE7 added some security around this?
How do you resize an IE browser window to 1024 x 768
In Firefox you can enter the following into the awesome bar and hit enter: javascript:self.resizeTo(1024,768); How do you do the same thing in IE?
[ "javascript:resizeTo(1024,768);\nvbscript:resizeto(1024,768)\nWill work in IE7, But consider using something like\njavascript:moveTo(0,0);resizeTo(1024,768);\nbecause IE7 doesn't allow the window to \"resize\" beyond the screen borders. If you work on a 1024,768 desktop, this is what happens...Firefox: 1024x768 Window, going behind the taskbar. If you drop the moveTo part, the top left corner of the window won't change position.(You still get a 1024x768 window)\nIE7: As close as possible to the requested size without obscuring the taskbar or allowing any part of the window to lie beyond the screen borders.\nsafari: As close as possible to the requested size without obscuring the taskbar or allowing any part of the window to lie beyond the screen borders, but you can ommit the moveTo part. Safari will move the top left corner of the window for you.\nOpera: Nothing happens.\nChrome: Nothing happens.\n\n", "Maybe not directly related if you were looking for only a JavaScript solution but you can use the free Windows utility Sizer to automatically resize any (browser) window to a predefined size like 800x600, 1024,768, etc.\n\n", "Your code works in IE, you just need to \"Allow blocked Content\" in the Security Toolbar \n\n", "Try:\njavascript:resizeTo(1024,768);\n\nThis works in IE7 at least.\n", "It works in IE6, but I think IE7 added some security around this?\n" ]
[ 12, 10, 3, 2, 0 ]
[]
[]
[ "internet_explorer", "javascript" ]
stackoverflow_0000060030_internet_explorer_javascript.txt
Q: Should I use one big SQL Select statement or several small ones? I'm building a PHP page with data sent from MySQL. Is it better to have 1 SELECT query with 4 table joins, or 4 small SELECT queries with no table join; I do select from an ID Which is faster and what is the pro/con of each method? I only need one row from each tables. A: You should run a profiling tool if you're truly worried cause it depends on many things and it can vary but as a rule its better to have fewer queries being compiled and fewer round trips to the database. Make sure you filter things as well as you can using your where and join on clauses. But honestly, it usually doesn't matter since you're probably not going to be hit all that hard compared to what the database can do, so unless optimization is your spec you should not do it prematurely and do whats simplest. A: Generally, it's better to have one SELECT statement. One of the main reasons to have databases is that they are fast at processing information, particularly if it is in the format of query. If there is any drawback to this approach, it's that there are some kinds of analysis that you can't do with one big SELECT statement. RDBMS purists will insist that this is a database design problem, in which case you are back to my original suggestion. A: When you use JOINs instead of multiple queries, you allow the database to apply its optimizations. You also are potentially retrieving rows that you don't need (if you were to replace an INNER join with multiple selects), which increases the network traffic between your app server and database server. Even if they're on the same box, this matters. A: It might depend on what you do with the data after you fetch it from the DB. If you use each of the four results independently, then it would be more logical and clear to have four separate SELECT statements. On the other hand, if you use all the data together, like to create a unified row in a table or something, then I would go with the single SELECT and JOINs. I've done a bit of PHP/MySQL work, and I find that even for queries on huge tables with tons of JOINs, the database is pretty good at optimizing - if you have smart indexes. So if you are serious about performance, start reading up on query optimization and indexing. A: I would say 1 query with the join. This way you need to hit the server only once. And if your tables are joined with indexes, it should be fast. A: Well under Oracle you'd want to take advantage of the query caching, and if you have a lot of small queries you are doing in your sequential processing, it would suck if the last query pushed the first one out of the cache...just in time for you to loop around and run that first query again (with different parameter values obviously) on the next pass. We were building an XML output file using Java stored procedures and definitely found the round trip times for each individual query were eating us alive. We found it was much faster to get all the data in as few queries as possible, then plug those values into the XML DOM as needed. The only downside is that the Java code was a bit less elegant, as the data fetch was now remote from its usage. But we had to generate a large complex XML file in as close to zero time as possible, so we had to optimize for speed. A: Be careful when dealing with a merge table however. It has been my experience that although a single join can be good in most situations, when merge tables are involved you can run into strange situations.
Should I use one big SQL Select statement or several small ones?
I'm building a PHP page with data sent from MySQL. Is it better to have 1 SELECT query with 4 table joins, or 4 small SELECT queries with no table join; I do select from an ID Which is faster and what is the pro/con of each method? I only need one row from each tables.
[ "You should run a profiling tool if you're truly worried cause it depends on many things and it can vary but as a rule its better to have fewer queries being compiled and fewer round trips to the database.\nMake sure you filter things as well as you can using your where and join on clauses.\nBut honestly, it usually doesn't matter since you're probably not going to be hit all that hard compared to what the database can do, so unless optimization is your spec you should not do it prematurely and do whats simplest.\n", "Generally, it's better to have one SELECT statement. One of the main reasons to have databases is that they are fast at processing information, particularly if it is in the format of query.\nIf there is any drawback to this approach, it's that there are some kinds of analysis that you can't do with one big SELECT statement. RDBMS purists will insist that this is a database design problem, in which case you are back to my original suggestion.\n", "When you use JOINs instead of multiple queries, you allow the database to apply its optimizations. You also are potentially retrieving rows that you don't need (if you were to replace an INNER join with multiple selects), which increases the network traffic between your app server and database server. Even if they're on the same box, this matters.\n", "It might depend on what you do with the data after you fetch it from the DB. If you use each of the four results independently, then it would be more logical and clear to have four separate SELECT statements. On the other hand, if you use all the data together, like to create a unified row in a table or something, then I would go with the single SELECT and JOINs.\nI've done a bit of PHP/MySQL work, and I find that even for queries on huge tables with tons of JOINs, the database is pretty good at optimizing - if you have smart indexes. So if you are serious about performance, start reading up on query optimization and indexing.\n", "I would say 1 query with the join. This way you need to hit the server only once. And if your tables are joined with indexes, it should be fast.\n", "Well under Oracle you'd want to take advantage of the query caching, and if you have a lot of small queries you are doing in your sequential processing, it would suck if the last query pushed the first one out of the cache...just in time for you to loop around and run that first query again (with different parameter values obviously) on the next pass.\nWe were building an XML output file using Java stored procedures and definitely found the round trip times for each individual query were eating us alive. We found it was much faster to get all the data in as few queries as possible, then plug those values into the XML DOM as needed.\nThe only downside is that the Java code was a bit less elegant, as the data fetch was now remote from its usage. But we had to generate a large complex XML file in as close to zero time as possible, so we had to optimize for speed.\n", "Be careful when dealing with a merge table however. It has been my experience that although a single join can be good in most situations, when merge tables are involved you can run into strange situations.\n" ]
[ 19, 5, 5, 4, 2, 2, 1 ]
[]
[]
[ "mysql", "optimization", "performance", "php" ]
stackoverflow_0000055463_mysql_optimization_performance_php.txt
Q: Multiple permission types (roles) stored in database as single decimal I was going to ask a question here about whether or not my design for some users/roles database tables was acceptable, but after some research I came across this question: What is the best way to handle multiple permission types? It sounds like an innovative approach, so instead of a many-to-many relationship users_to_roles table, I have multiple permissions defined as a single decimal (int data type I presume). That means all permissions for a single user are in one row. It probably won't make sense until you read the other question and answer I can't get my brain around this one. Can someone please explain the conversion process? It sounds "right", but I'm just not getting how I convert the roles to a decimal before it goes in the db, and how it gets converted back when it comes out of the db. I'm using Java, but if you stubbed it out, that would be cool as well. Here is the original answer in the off chance the other question gets deleted: "Personally, I sometimes use a flagged enumeration of permissions. This way you can use AND, OR, NOT and XOR bitwise operations on the enumeration's items. [Flags] public enum Permission { VIEWUSERS = 1, // 2^0 // 0000 0001 EDITUSERS = 2, // 2^1 // 0000 0010 VIEWPRODUCTS = 4, // 2^2 // 0000 0100 EDITPRODUCTS = 8, // 2^3 // 0000 1000 VIEWCLIENTS = 16, // 2^4 // 0001 0000 EDITCLIENTS = 32, // 2^5 // 0010 0000 DELETECLIENTS = 64, // 2^6 // 0100 0000 } Then, you can combine several permissions using the AND bitwise operator. For example, if a user can view & edit users, the binary result of the operation is 0000 0011 which converted to decimal is 3. You can then store the permission of one user into a single column of your DataBase (in our case it would be 3). Inside your application, you just need another bitwise operation (OR) to verify if a user has a particular permission or not." A: You use bitwise operations. The pseudo-code would be something like: bool HasPermission(User user, Permission permission) { return (user.Permission & permission) != 0; } void SetPermission(User user, Permission permission) { user.Permission |= permission; } void ClearPermission(User user, Permission permission) { user.Permission &= ~permission; } Permission is the enum type defined in your post, though whatever type it is needs to be based on an integer-like type. The same applies to the User.Permission field. If those operators (&, |=, and &=) don't make sense to you, then read up on bitwise operations (bitwise AND and bitwise OR). A: Actually, this is how we determine authority within a fairly large web application that I'm the DBA for. If you are going to do something like this, you'll really benefit from having a numbers table. It will make your calculations much faster. The basic setup includes the following tables: Groups - for doing many to many of users and security points Security points - which contain a value for anonymous authorization and one for authenticated users who are not part of a separate group Group security point join table A special BitMask numbers table that contains entries for the ^2 values. Thus there is one entry for 2 (2) and two entries for three (2 and 1). This keeps us from having to calculate values each time. First we determine if the user is logged in. If they aren't we return the anonymous authorization for the security point. Next we determine if the user is a member of any groups associated with the security point through a simple EXISTS using a JOIN. 
If they aren't, we return the value associated with an authenticated user. Most of the anonymous and authenticated defaults are set to 1 on our system because we require you to belong to specific groups. Note: If an anonymous user gets no access, the interface throws them over to a log-in box to allow them to log in and try again. If the user is a member of one or more groups, then we select distinct values from the BitMask table for each of the values defined for the groups. For example, if you belonged to three groups and had one authorization level of 8, one with 12 and the last with 36, our select against the BitMask table would return 8, then 8 and 4, then 4 and 32 respectively. By doing a distinct we get the numbers 4, 8 and 32, which OR together to the bit mask 101100 (decimal 44). That value is returned as the user's authorization level and processed by the web site. Make sense?
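To make the conversion the asker is struggling with concrete, here is a minimal C# sketch of the round trip (the asker mentioned Java; the | and & operators behave the same way on an int there, so this translates almost line for line). The enum mirrors the one in the quoted answer; the PermissionConverter helper and its method names are invented for illustration and are not part of any existing API. The key points: combine flags with OR before the value goes into the int column, and test with AND after it comes back out.

[Flags]
public enum Permission
{
    None          = 0,
    ViewUsers     = 1,   // 2^0
    EditUsers     = 2,   // 2^1
    ViewProducts  = 4,   // 2^2
    EditProducts  = 8,   // 2^3
    ViewClients   = 16,  // 2^4
    EditClients   = 32,  // 2^5
    DeleteClients = 64   // 2^6
}

public static class PermissionConverter
{
    // Collapse any number of flags into the single integer stored in the DB column.
    public static int ToStoredValue(params Permission[] roles)
    {
        Permission combined = Permission.None;
        foreach (Permission role in roles)
        {
            combined |= role;           // OR switches on that flag's bit
        }
        return (int)combined;
    }

    // Rebuild the flags from the stored integer and check a single permission.
    public static bool HasPermission(int storedValue, Permission required)
    {
        Permission granted = (Permission)storedValue;
        return (granted & required) == required;   // AND isolates the bit being tested
    }
}

With that sketch, ToStoredValue(Permission.ViewUsers, Permission.EditUsers) returns 3 (binary 0000 0011), and HasPermission(3, Permission.EditUsers) returns true, which matches the worked example in the quoted answer.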
Multiple permission types (roles) stored in database as single decimal
I was going to ask a question here about whether or not my design for some users/roles database tables was acceptable, but after some research I came across this question: What is the best way to handle multiple permission types? It sounds like an innovative approach, so instead of a many-to-many relationship users_to_roles table, I have multiple permissions defined as a single decimal (int data type I presume). That means all permissions for a single user are in one row. It probably won't make sense until you read the other question and answer I can't get my brain around this one. Can someone please explain the conversion process? It sounds "right", but I'm just not getting how I convert the roles to a decimal before it goes in the db, and how it gets converted back when it comes out of the db. I'm using Java, but if you stubbed it out, that would be cool as well. Here is the original answer in the off chance the other question gets deleted: "Personally, I sometimes use a flagged enumeration of permissions. This way you can use AND, OR, NOT and XOR bitwise operations on the enumeration's items. [Flags] public enum Permission { VIEWUSERS = 1, // 2^0 // 0000 0001 EDITUSERS = 2, // 2^1 // 0000 0010 VIEWPRODUCTS = 4, // 2^2 // 0000 0100 EDITPRODUCTS = 8, // 2^3 // 0000 1000 VIEWCLIENTS = 16, // 2^4 // 0001 0000 EDITCLIENTS = 32, // 2^5 // 0010 0000 DELETECLIENTS = 64, // 2^6 // 0100 0000 } Then, you can combine several permissions using the AND bitwise operator. For example, if a user can view & edit users, the binary result of the operation is 0000 0011 which converted to decimal is 3. You can then store the permission of one user into a single column of your DataBase (in our case it would be 3). Inside your application, you just need another bitwise operation (OR) to verify if a user has a particular permission or not."
[ "You use bitwise operations. The pseudo-code would be something like:\nbool HasPermission(User user, Permission permission) {\n return (user.Permission & permission) != 0;\n}\n\nvoid SetPermission(User user, Permission permission) {\n user.Permission |= permission;\n}\n\nvoid ClearPermission(User user, Permission permission) {\n user.Permission &= ~permission;\n}\n\nPermission is the enum type defined in your post, though whatever type it is needs to be based on an integer-like type. The same applies to the User.Permission field.\nIf those operators (&, |=, and &=) don't make sense to you, then read up on bitwise operations (bitwise AND and bitwise OR).\n", "Actually, this is how we determine authority within a fairly large web application that I'm the DBA for. \nIf you are going to do something like this, you'll really benefit from having a numbers table. It will make your calculations much faster.\nThe basic setup includes the following tables:\n\nGroups - for doing many to many of users and security points\nSecurity points - which contain a value for anonymous authorization and one for authenticated users who are not part of a separate group\nGroup security point join table \nA special BitMask numbers table that contains entries for the ^2 values. Thus there is one entry for 2 (2) and two entries for three (2 and 1). This keeps us from having to calculate values each time.\n\nFirst we determine if the user is logged in. If they aren't we return the anonymous authorization for the security point.\nNext we determine if the user is a member of any groups associated with the security point through a simple EXISTS using a JOIN. If they aren't we return the value associated with authenticated user. Most of the anonymous and authenticated defaults are set to 1 on our system because we require you to belong to specific groups.\n\nNote: If an anonymous user gets a no access, the interface throws them over to a log in box to allow them to log in and try again.\n\nIf the user is a member of one or more groups, then we select distinct values from the BitMask table for each of the values defined for the groups. For example, if you belonged to three groups and had one authorization level of 8, one with 12 and the last with 36, our select against the Bit Mask table would return 8, 8 and 4, and 4 and 32 respectively. By doing a distinct we get the number 4, 8 and 32 which correctly bit masks to 101100.\nThat value is returned as the users authorization level and processed by the web site.\nMake sense?\n" ]
[ 6, 3 ]
[]
[]
[ "database", "permissions", "roles" ]
stackoverflow_0000060204_database_permissions_roles.txt
Q: Are unit-test names important? If unit-test names can become outdated over time and if you consider that the test itself is the most important thing, then is it important to choose wise test names? ie [Test] public void ShouldValidateUserNameIsLessThan100Characters() {} versus [Test] public void UserNameTestValidation1() {} A: The name of any method should make it clear what it does. IMO, your first suggestion is a bit long and the second one isn't informative enough. Also it's probably a bad idea to put "100" in the name, as that's very likely to change. What about: public void validateUserNameLength() If the test changes, the name should be updated accordingly. A: Yes, the names are totally important, especially when you are running the tests in a console or on continuous integration servers. Jay Fields wrote a post about it. Moreover, put good test names with one assertion per test and your suite will give you great reports when a test fails. A: Very. Equally important as choosing good method and variable names. Much more so if your test suite is going to be referred to by new devs in the future. As for your original question, definitely Answer1. Typing in a few more characters is a small price to pay for the readability, for you and others. It'll eliminate the 'what was I thinking here?' as well as 'WTF is this guy getting at in this test?'. It gives a quick zoom-in when you're in to fix something someone else wrote, and an instant update for any test-suite visitor. If done correctly, just going over the names of the test cases will inform the reader of the specs for the unit. A: Yes. [Test] public void UsernameValidator_LessThanLengthLimit_ShouldValidate() {} Put the test subject first, the test statement next, and the expected result last. That way, you get a clear indication of what it is doing, and you can easily sort by name :) A: In Clean Code, page 124, Robert C. Martin writes: The moral of the story is simple: Test code is just as important as production code. It is not a second-class citizen. It requires thought, design, and care. It must be kept as clean as production code. A: I think if one cannot find a good concise name for a test method, it's a sign that the design of this test is incorrect. Also, a good method name helps you find out what happened in less time. A: Yes, the whole point of the test name is that it tells you what doesn't work when the test fails. A: I wouldn't put conditions that the test needs to meet in the name, because conditions may change over time. In your example, I'd recommend naming like UserNameLengthValidate() or UserNameLengthTest() or something similar to explain what the test does, but not presuming the testing/validation parameters. A: Yes, the names of the code under test (methods, properties, whatever) can change, but I contend your existing tests should fail if the expectations change. That is the true value of having well-constructed tests, not perusing a list of test names. That being said, well named test methods are great tools for getting new developers on board, helping them locate "executable documentation" with which they can kick the tires of existing code -- so I would keep the names of test methods up to date just as I would keep the assertions made by the test methods up to date. I name my tests using the following pattern. Each test fixture attempts to focus on one class and is usually named {ClassUnderTest}Test. I name each test method {MemberUnderTest}_{Assertion}.
[TestFixture] public class IndexableFileTest { [Test] public void Connect_InitializesReadOnlyProperties() { // ... } [Test,ExpectedException(typeof(NotInitializedException))] public void IsIndexable_ErrorWhenNotConnected() { // ... } [Test] public void IsIndexable_True() { // ... } [Test] public void IsIndexable_False() { // ... } } A: Having a very descriptive name helps to instantly see what is not working correctly, so that you don't actually need to look at the unit test code. Also, a list of all the unit tests describes the intended behavior of the unit, and can be used (more or less) as documentation of the behavior of the unit under test. Note: this only works when unit tests are very specific and do not validate too much within one unit test. So for example: [Test] void TestThatExceptionIsRaisedWhenStringLengthLargerThan100() [Test] void TestThatStringLengthOf99IsAccepted() A: The name needs to matter within reason. I don't want an email from the build saying that test 389fb2b5-28ad3 failed, but just knowing that it was a UserName test as opposed to something else would help ensure the right person gets to do the diagnosis. A: [RowTest] [Row("GoodName")] [Row("GoodName2")] public void Should_validate_username() { } [RowTest] [Row("BadUserName")] [Row("Bad%!Name")] public void Should_invalidate_username() { } This might make more sense for more complex types of validation really. A: Yes, they are. I'd personally recommend looking at SSW's rules to better unit tests. It contains some very helpful naming guidelines.
Are unit-test names important?
If unit-test names can become outdated over time and if you consider that the test itself is the most important thing, then is it important to choose wise test names? ie [Test] public void ShouldValidateUserNameIsLessThan100Characters() {} verse [Test] public void UserNameTestValidation1() {}
[ "The name of any method should make it clear what it does.\nIMO, your first suggestion is a bit long and the second one isn't informative enough. Also it's probably a bad idea to put \"100\" in the name, as that's very likely to change. What about:\npublic void validateUserNameLength()\n\nIf the test changes, the name should be updated accordingly. \n", "Yes, the names are totally important, specially when you are running the tests in console or continuous integration servers. Jay Fields wrote a post about it.\nMoreover, put good test names with one assertion per test and your suite will give you great reports when a test fails.\n", "Very. Equally important as choosing good method and variable names.\nMuch more if your test suite is going to referred to by new devs in the future.\nAs for your original question, definitely Answer1. Typing in a few more characters is a small price to pay for \n\nthe readability. For you and others. It'll eliminate the 'what was I thinking here?' as well as 'WTF is this guy getting at in this test?'\nQuick zoom in when you're in to fix something someone else wrote\ninstant update for any test-suite visitor. If done correctly, just going over the names of the test cases will inform the reader of the specs for the unit.\n\n", "Yes.\n [Test]\n public void UsernameValidator_LessThanLengthLimit_ShouldValidate() {}\n\nPut the test subject first, the test statement next, and the expected result last.\nThat way, you get a clear indication of what it is doing, and you can easily sort by name :)\n", "In Clean Code, page 124, Robert C. Martin writes:\n\nThe moral of the story is simple: Test code is just as important as production code. It is not a second-class citizen. It requires thought, design, and care. It must be kept as clean as production code.\n\n", "I think if one can not find a good concise name for a test method it's a sign that design of this test is incorrect. Also good method name helps you to find out what happened in less time.\n", "Yes, the whole point of the test name is that it tells you what doesn't work when the test fails.\n", "i wouldn't put conditions that test needs to meet in the name, because conditions may change in time. in your example, i'd recommend naming like\nUserNameLengthValidate()\n\nor\nUserNameLengthTest()\n\nor something similar to explain what the test does, but not presuming the testing/validation parameters.\n", "Yes, the names of the code under test (methods, properties, whatever) can change, but I contend your existing tests should fail if the expectations change. That is the true value of having well-constructed tests, not perusing a list of test names. That being said, well named test methods are great tools for getting new developers on board, helping them locate \"executable documentation\" with which they can kick the tires of existing code -- so I would keep the names of test methods up to date just as I would keep the assertions made by the test methods up to date.\nI name my test using the following pattern. Each test fixture attempts to focus on one class and is usually name {ClassUnderTest}Test. 
I name each test method {MemberUnderTest}_{Assertion}.\n[TestFixture]\npublic class IndexableFileTest\n{\n [Test]\n public void Connect_InitializesReadOnlyProperties()\n {\n // ...\n }\n\n [Test,ExpectedException(typeof(NotInitializedException))]\n public void IsIndexable_ErrorWhenNotConnected()\n {\n // ...\n }\n\n [Test]\n public void IsIndexable_True()\n {\n // ...\n }\n\n [Test]\n public void IsIndexable_False()\n {\n // ...\n }\n}\n\n", "Having a very descriptive name helps to instantly see what is not working correctly, so that you don't actually need to look at the unit test code.\nAlso, a list of all the unit tests describes the intended behavior of the unit, and can be used (more or less) as documentation to the behavior of the unit under test.\nNote, this only works when unit tests are very specific and do not validate too much within one unit test. \nSo for example:\n[Test]\nvoid TestThatExceptionIsRaisedWhenStringLengthLargerThen100()\n\n[Test]\nvoid TestThatStringLengthOf99IsAccepted()\n\n", "The name needs to matter within reason. I don't want an email from the build saying that test 389fb2b5-28ad3 failed, but just knowing that it was a UserName test as opposed to something else would help ensure the right person gets to do the diagnosis.\n", "[RowTest]\n[Row(\"GoodName\")]\n[Row(\"GoodName2\")]\npublic void Should_validate_username()\n{\n}\n\n[RowTest]\n[Row(\"BadUserName\")]\n[Row(\"Bad%!Name\")]\npublic void Should_invalidate_username()\n{\n}\n\nThis might make more sense for more complex types of validation really.\n", "Yes, they are. I'd personally recommend looking at SSW's rules to better unit tests. It contains some very helpful naming guidelines.\n" ]
[ 18, 10, 7, 6, 6, 2, 2, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "unit_testing" ]
stackoverflow_0000047475_unit_testing.txt
Q: ARMV4i (Windows Mobile 6) Native Code disassembler Does anyone know of a disassembler for ARMV4i executables and DLLs? I've got a plug-in DLL I'm writing with a very rare data abort (<5% of the time) that I have narrowed down to a specific function (via dumpbin and the address output by the data abort). However, it is a fairly large function and I would like to narrow it down a little. I know it's happening in a memset() call, but that particular function has about 35 of them, so I was hoping that by looking at the disassembly I could figure out where about the problem actually is. A: I believe that IDA Pro will do what you want. It was mentioned in the O'Reilly Security Warrior book and I've seen it recommended on Windows Mobile developer forums. A: IDA Pro will definitely do ARM disassembly. And they (Datarescue) once arranged me a licence at about 11PM local time, so I like to recommend them... I see from http://www.datarescue.com/idabase/ that there's been some rearrangement of the company, but I guess it's still a good product. Here's the link to the new publisher: http://www.hex-rays.com/idapro/ A: ChARMeD is a Windows Mobile / Pocket PC / Win CE (for ARM CPUs) Disassembler and Assembler You might also look at BDASM, a shareware disassembler - later versions have ARM plugins. The website seems to be down, but if you search for it you'll find the shareware distribution. The source code for the simple ARM disassembler, DISARM, is available as well. The binutils (linux compiler tools) objdump can be used to produce disassembly, "objdump -b binary -m arm7tdmi -D file_name" -Adam A: A couple of years ago I found an ARM disassembler I used while doing some embedded work. However, I don't remember its name - though I think it was part of a larger package like an emulator or something. In your case, could you ask your compiler to generate an assembly listing of the compiled code? That might help give you some scope. Failing that, you could break up your function into one or more new functions, if all you can get is the stack trace. Then break up the new function into one or more again. This is the tried-and-true "divide and conquer" method. And if you have 35 calls to memset() in one function, it might be a good idea from a design standpoint too! Update: I found the package I used: ARMphetamine. It worked for the ARM9 code I was developing, but it looks like it hasn't been updated in quite some time.
ARMV4i (Windows Mobile 6) Native Code disassembler
Does anyone know of a disassembler for ARMV4i executables and DLLs? I've got a plug-in DLL I'm writing with a very rare data abort (<5% of the time) that I have narrowed down to a specific function (via dumpbin and the address output by the data abort). However, it is a fairly large function and I would like to narrow it down a little. I know it's happening in a memset() call, but that particular function has about 35 of them, so I was hoping that by looking at the disassembly I could figure out where about the problem actually is.
[ "I believe that IDA Pro will do what you want. It was mentioned in the O'Reilly Security Warrior book and I've seen it recommended on Windows Mobile developer forums.\n", "IDA Pro will definitely do ARM disassembly. And they (Datarescue) once arranged me a licence at about 11PM local time, so I like to recommend them...\nI see from http://www.datarescue.com/idabase/ that there's been some rearrangement of the company, but I guess it's still a good product.\nHere's the link to the new publisher: http://www.hex-rays.com/idapro/\n", "ChARMeD is a Windows Mobile / Pocket PC / Win CE (for ARM CPUs) Disassembler and Assembler\nYou might also look at BDASM, a shareware disassembler - later versions have ARM plugins. The website seems to be down, but if you search for it you'll find the shareware distribution.\nThe source code for the simple ARM disassembler, DISARM, is available as well.\nThe binutils (linux compiler tools) objdump can be used to produce disassembly, \"objdump -b binary -m arm7tdmi -D file_name\"\n-Adam\n", "A couple of years ago I found an ARM disassembler I used while doing some embedded work. However, I don't remember its name - though I think it was part of a larger package like an emulator or something.\nIn your case, could you ask your compiler to generate an assembly listing of the compiled code? That might help give you some scope.\nFailing that, you could break up your function into one or more new functions, if all you can get is the stack trace. Then break up the new function into one or more again. This is the tried-and-true \"divide and conquer\" method. And if you have 35 calls to memset() in one function, it might be a good idea from a design standpoint too!\nUpdate: I found the package I used: ARMphetamine. It worked for the ARM9 code I was developing, but it looks like it hasn't been updated in quite some time.\n" ]
[ 3, 3, 1, 0 ]
[]
[]
[ "arm", "disassembly", "windows_mobile" ]
stackoverflow_0000022309_arm_disassembly_windows_mobile.txt
Q: In C# (or any language) what is/are your favourite way of removing repetition? I've just coded a 700 line class. Awful. I hang my head in shame. It's as opposite to DRY as a British summer. It's full of cut and paste with minor tweaks here and there. This makes it a prime candidate for refactoring. Before I embark on this, I thought I'd ask: when you have lots of repetition, what are the first refactoring opportunities you look for? For the record, mine are probably using: Generic classes and methods Method overloading/chaining. What are yours? A: I like to start refactoring when I need to, rather than the first opportunity that I get. You might say this is somewhat of an agile approach to refactoring. When do I feel I need to? Usually when I feel that the ugly parts of my code are starting to spread. I think ugliness is okay as long as it is contained, but the moment it starts having the urge to spread, that's when you need to take care of business. The techniques you use for refactoring should start with the simplest. I would strongly recommend Martin Fowler's book. Combining common code into functions, removing unneeded variables, and other simple techniques get you a lot of mileage. For list operations, I prefer using functional programming idioms. That is to say, I use internal iterators, map, filter and reduce (in Python speak; there are corresponding things in Ruby, Lisp and Haskell) whenever I can; this makes code a lot shorter and more self-contained. A: #region I made a 1,000 line class only one line with it! In all seriousness, the best way to avoid repetition is the things covered in your list, as well as fully utilizing polymorphism: examine your class and discover what would best be done in a base class, and how different components of it can be broken away as subclasses. A: Sometimes by the time you "complete functionality" using copy and paste code, you've come to a point where it is maimed and mangled enough that any attempt at refactoring will actually take much, much longer than refactoring it at the point where it was obvious. In my personal experience my favorite "way of removing repetition" has been the "Extract Method" functionality of Resharper (although this is also available in vanilla Visual Studio). Many times I would see repeated code (some legacy app I'm maintaining) not as whole methods but in chunks within completely separate methods. That gives a perfect opportunity to turn those chunks into methods. Monster classes also tend to reveal that they contain more than one functionality. That in turn becomes an opportunity to separate each distinct functionality into its own (hopefully smaller) class. I have to reiterate that doing all of these is not a pleasurable experience at all (for me), so I really would rather do it right while it's a small ball of mud, rather than let the big ball of mud roll and then try to fix that. A: First of all, I would recommend refactoring much sooner than when you are done with the first version of the class. Anytime you see duplication, eliminate it ASAP. This may take a little longer initially, but I think the results end up being a lot cleaner, and it helps you rethink your code as you go to ensure you are doing things right. As for my favorite way of removing duplication.... Closures, especially in my favorite language (Ruby). They tend to be a really concise way of taking 2 pieces of code and merging the similarities. Of course (like any "best practice" or tip), this cannot be blindly done...
I just find them really fun to use when I can use them. A: One of the things I do is try to make small and simple methods that I can see on a single page in my editor (Visual Studio). I've learnt from experience that making code simple makes it easier for the compiler to optimise it. The larger the method, the harder the compiler has to work! I've also recently seen a problem where large methods have caused a memory leak. Basically I had a loop very much like the following: while (true) { var smallObject = WaitForSomethingToTurnUp(); var largeObject = DoSomethingWithSmallObject(); } I was finding that my application was keeping a large amount of data in memory because even though 'largeObject' wasn't in scope until smallObject returned something, the garbage collector could still see it. I easily solved this by moving the 'DoSomethingWithSmallObject()' and other associated code to another method. Also, if you make small methods, your reuse within a class will become significantly higher. I generally try to make sure that none of my methods look like any others! Hope this helps. Nick A: "cut and paste with minor tweaks here and there" is the kind of code repetition I usually solve with an entirely non-exotic approach: take the similar chunk of code and extract it out to a separate method. The little bit that is different in every instance of that block of code, change that to a parameter. There are also some easy techniques for removing repetitive-looking if/else if and switch blocks, courtesy of Scott Hanselman: http://www.hanselman.com/blog/CategoryView.aspx?category=Source+Code&page=2 A: I might go something like this: Create custom (private) types for data structures and put all the related logic in there. Dictionary<string, List<int>> etc. Make inner functions or properties that guarantee behaviour. If you're continually checking conditions from a publicly accessible property then create a private getter method with all of the checking baked in. Split methods apart that have too much going on. If you can't describe what a method does succinctly or give it a good name, then start breaking the function apart until you can (even if these "child" functions aren't used anywhere else). If all else fails, slap a [SuppressMessage("Microsoft.Maintainability", "CA1502:AvoidExcessiveComplexity")] on it and comment why.
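To illustrate the "extract the repeated chunk, turn the differing bit into a parameter" advice above, here is a minimal, hypothetical C# sketch; the field names and the RequireLength helper are invented for the example and are not taken from the question's class:

using System;

public static class ValidationExample
{
    // Before: the same check copy-pasted with minor tweaks for each field.
    public static void ValidateBefore(string name, string city)
    {
        if (string.IsNullOrEmpty(name) || name.Length > 100)
            throw new ArgumentException("Name must be 1-100 characters.");
        if (string.IsNullOrEmpty(city) || city.Length > 100)
            throw new ArgumentException("City must be 1-100 characters.");
        // ...and so on, repeated for every other field
    }

    // After: the repeated chunk becomes one method; the parts that varied
    // (the value and its label) become parameters.
    public static void ValidateAfter(string name, string city)
    {
        RequireLength(name, "Name");
        RequireLength(city, "City");
    }

    private static void RequireLength(string value, string label)
    {
        if (string.IsNullOrEmpty(value) || value.Length > 100)
            throw new ArgumentException(string.Format("{0} must be 1-100 characters.", label));
    }
}

The same move scales to dozens of near-identical blocks, which is usually where most of a 700-line class's bulk comes from.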
In C# (or any language) what is/are your favourite way of removing repetition?
I've just coded a 700 line class. Awful. I hang my head in shame. It's as opposite to DRY as a British summer. It's full of cut and paste with minor tweaks here and there. This makes it's a prime candidate for refactoring. Before I embark on this, I'd thought I'd ask when you have lots of repetition, what are the first refactoring opportunities you look for? For the record, mine are probably using: Generic classes and methods Method overloading/chaining. What are yours?
[ "I like to start refactoring when I need to, rather than the first opportunity that I get. You might say this is somewhat of an agile approach to refactoring. When do I feel I need to? Usually when I feel that the ugly parts of my codes are starting to spread. I think ugliness is okay as long as they are contained, but the moment when they start having the urge to spread, that's when you need to take care of business. \nThe techniques you use for refactoring should start with the simplest. I would strongly recommand Martin Fowler's book. Combining common code into functions, removing unneeded variables, and other simple techniques gets you a lot of mileage. For list operations, I prefer using functional programming idioms. That is to say, I use internal iterators, map, filter and reduce(in python speak, there are corresponding things in ruby, lisp and haskell) whenever I can, this makes code a lot shorter and more self-contained.\n", "#region\nI made a 1,000 line class only one line with it!\nIn all seriousness, the best way to avoid repetition is the things covered in your list, as well as fully utilizing polymorphism, examine your class and discover what would best be done in a base class, and how different components of it can be broken away a subclasses.\n", "Sometimes by the time you \"complete functionality\" using copy and paste code, you've come to a point that it is maimed and mangled enough that any attempt at refactoring will actually take much, much longer than refactoring it at the point where it was obvious.\nIn my personal experience my favorite \"way of removing repetition\" has been the \"Extract Method\" functionality of Resharper (although this is also available in vanilla Visual Studio).\nMany times I would see repeated code (some legacy app I'm maintaining) not as whole methods but in chunks within completely separate methods. That gives a perfect opportunity to turn those chunks into methods.\nMonster classes also tend to reveal that they contain more than one functionality. That in turn becomes an opportunity to separate each distinct functionality into its own (hopefully smaller) class.\nI have to reiterate that doing all of these is not a pleasurable experience at all (for me), so I really would rather do it right while it's a small ball of mud, rather than let the big ball of mud roll and then try to fix that.\n", "First of all, I would recommend refactoring much sooner than when you are done with the first version of the class. Anytime you see duplication, eliminate it ASAP. This may take a little longer initially, but I think the results end up being a lot cleaner, and it helps you rethink your code as you go to ensure you are doing things right.\nAs for my favorite way of removing duplication.... Closures, especially in my favorite language (Ruby). They tend to be a really concise way of taking 2 pieces of code and merging the similarities. Of course (like any \"best practice\" or tip), this can not be blindly done... I just find them really fun to use when I can use them.\n", "One of the things I do, is try to make small and simple methods that I can see on a single page in my editor (visual studio).\nI've learnt from experience that making code simple makes it easier for the compiler to optimise it. The larger the method, the harder the compiler has to work!\nI've also recently seen a problem where large methods have caused a memory leak. 
Basically I had a loop very much like the following:\n\nwhile (true)\n{\n var smallObject = WaitForSomethingToTurnUp();\n var largeObject = DoSomethingWithSmallObject();\n}\n\nI was finding that my application was keeping a large amount of data in memory because even though 'largeObject' wasn't in scope until smallObject returned something, the garbage collector could still see it.\nI easily solved this by moving the 'DoSomethingWithSmallObject()' and other associated code to another method.\nAlso, if you make small methods, your reuse within a class will become significantly higher. I generally try to make sure that none of my methods look like any others!\nHope this helps.\nNick\n", "\"cut and paste with minor tweaks here and there\" is the kind of code repetition I usually solve with an entirely non-exotic approach- Take the similar chunk of code, extract it out to a seperate method. The little bit that is different in every instance of that block of code, change that to a parameter.\nThere's also some easy techniques for removing repetitive-looking if/else if and switch blocks, courtesy of Scott Hanselman:\nhttp://www.hanselman.com/blog/CategoryView.aspx?category=Source+Code&page=2\n", "I might go something like this:\nCreate custom (private) types for data structures and put all the related logic in there. Dictionary<string, List<int>> etc.\nMake inner functions or properties that guarantee behaviour. If you’re continually checking conditions from a publically accessible property then create an private getter method with all of the checking baked in.\nSplit methods apart that have too much going on. If you can’t put something succinct into the or give it a good name, then start breaking the function apart until the code is (even if these “child” functions aren’t used anywhere else). \nIf all else fails, slap a [SuppressMessage(\"Microsoft.Maintainability\", \"CA1502:AvoidExcessiveComplexity\")] on it and comment why.\n" ]
[ 4, 2, 2, 1, 1, 1, 1 ]
[]
[]
[ "c#", "coding_style", "refactoring" ]
stackoverflow_0000060100_c#_coding_style_refactoring.txt
Q: Native Tongue as Default Language For an Application When downloading both Firefox and Chrome, I've noticed that the default version I got was in my native tongue of Hebrew. I personally don't like my applications in Hebrew, since I'm used to the English UI conventions embedded in me since long ago by: The lack of choice: Most programs don't offer interfaces in multiple languages and when they do, those languages are usually English and the developer's native tongue. Programming languages which are almost completely bound to the English language. My question then is this: If you translate your applications, would you limit the UI to the user's native tongue or give them the choice by enabling more than one language pack by default? Which language would your application default to (which is interesting mostly if you only install one language pack with your application)? And also, generally, I'd like to know how much value you put into translating your applications on the whole. A: I've helped develop an application that was used by Dutch, English, Spanish and Portuguese speaking users. Because the application was installed from CD, we just added all the language packs, mostly because it saved us a lot of work not having to maintain 4 different versions. If your application is distributed from a website and you have to support more than only 4 languages, I can imagine you don't want to let everyone download every language pack. But only distributing the native languages of people downloading the application seems a bit restrictive. Most people I know actually like their software in English. So at least adding the English language to all the versions makes sense. A: I've never written an application for use by a large number of people, and never for anyone that didn't use English as their language, but if I did, I would probably take a route that installs all available language packs at install (unless the user did a custom install, where I would allow them to choose language packs) and then switch between languages as an option inside the program. If I had to only choose one language, I would choose English if I was doing all of the work, or the native language of the users if I had a translator. A: When writing an application for multilingual use, I use Microsoft's Best Practices for Developing World-Ready Applications, which includes retrieving the current CultureInfo from the OS and using that as the default language pack. A: I usually try to ship products with all available sets of localized resources. Upon a user's first launch of the product, the UI is presented in the localization most closely matching the OS on their machine. Once within the app, the user has the option of switching the UI to one of the other available localizations. I think it is very important to provide localizations that match one's target markets. Most "normal" people (not software developers!) prefer by far to have a UI in their native language.
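The "default to the OS culture, let the user override it" pattern from the last two answers is straightforward in .NET. Here is a minimal, hypothetical sketch; the UiLanguage class and the savedPreference parameter are invented for illustration and are not from any of the answers:

using System.Globalization;
using System.Threading;

public static class UiLanguage
{
    // Apply a UI culture: the user's saved preference if there is one,
    // otherwise whatever culture the operating system reports.
    public static void Apply(string savedPreference)   // e.g. "he-IL", "en-US", or null
    {
        CultureInfo culture = string.IsNullOrEmpty(savedPreference)
            ? CultureInfo.CurrentUICulture               // OS / user-profile default
            : new CultureInfo(savedPreference);          // explicit user choice

        Thread.CurrentThread.CurrentUICulture = culture; // which satellite resources get loaded
        Thread.CurrentThread.CurrentCulture = culture;   // number and date formatting
    }
}

ResourceManager then falls back to the neutral (usually English) resources whenever no matching satellite assembly is installed, which also covers the "at least ship English everywhere" point made above.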
Native Tongue as Default Language For an Application
When downloading both Firefox and Chrome, I've noticed that the default version I got was in my native tongue of Hebrew. I personally don't like my applications in Hebrew, since I'm used to the English UI conventions embedded in me since long ago by: The lack of choice: Most programs don't offer interfaces in multiple languages and when they do, those languages are usually English and the developer's native tongue. Programming languages which are almost completely bound to the English language. My question then is this: If you translate your applications, would you limit the UI to the user's native tongue or give them the choice by enabling more than one language pack by default? Which language would your application default to (which is interesting mostly if you only install one language pack with your application)? And also generally I'd like to know how much value do you put into translating your applications on a whole.
[ "I've helped develop an application that was used by Dutch, English, Spanish and Portuguese speaking users. Because the application installed from CD we just added all the language packs. Mostly because it saved us a lot of work not having to maintain 4 different versions.\nIf your application distributed from a website and you have to support more than only 4 languages I can imagine you don't want to let everyone download every language pack. But only distributing the native languages of people downloading the application seems a bit restrictive. Most people I know actually like their software in english. So at least adding the english language to all the versions makes sense.\n", "I've never written an application for use by a large number of people, and never for anyone that didn't use English as their language, but if I did, I would probably take a route that installs all available language packs at install (unless the user did a custom install, where I would allow them to choose language packs) and then switch between languages as an option inside the program. If I had to only choose one language, I would choose English if I was doing all of the work, or the native language of the users if I had a translator.\n", "When writing an application for multilingual use, I use Microsoft's Best Practices for Developing World-Ready Applications, which includes retrieving the current CultureInfo from the OS and using that as the default language pack.\n", "I usually try to ship products with all available sets of localized resources. Upon a user's first launch of the product, the UI is presented in the localization most closely matching the OS on their machine. Once within the app, the user has the option of switching the UI to one of the other available localizations.\nI think it is very important to provide localizations that match one's target markets. Most \"normal\" people (not software developers!) prefer by far to have a UI in their native language.\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "user_experience", "user_interface" ]
stackoverflow_0000059549_user_experience_user_interface.txt
Q: How do you troubleshoot character encoding problems? If all you see is the ugly no-char boxes, what tools or strategies do you use to figure out what went wrong? (The specific scenario I'm facing is no-char boxes within a <select> when it should be showing Japanese chars.) A: Firstly, "ugly no-char boxes" might not be an encoding problem, they might just be a sign you don't have a font installed that can display the glyphs in the page. Most character encoding problems happen when strings are being passed from one system to another. For webapps, this is usually between the browser and the application, between the application and the filesystem and between the application and the database. So you need to check where the mis-encoded data is coming from, what character encoding it has at the source, and what encoding it is being received as. The best way is to send through characters you know the system is having problems with, and examine them at each level of the app. What do they look like inside the app? In the database? When you get them back from the database? When they're displayed in the browser? Sorry to be so general, but the question doesn't give much more to work with. A: If the data you send to the browser becomes mangled (moji-bake) you will get trash characters. Also, if you specify the wrong character set in your META headers, your browser will render the page incorrectly, causing moji-bake again, sometimes in random places on the page. When handling CJK character sets, you must be sure to use UTF8 character encoding throughout the lifetime of your program (data storage, retrieval, data manipulation in your code, displaying in the browser etc...) What is UTF8? UTF8 is a variable-length encoding: it works on streams of bytes, not fixed-size characters. ASCII characters take a single byte, while other UTF8 characters can take 2, 3 or 4 bytes. When such multibyte text is read with the wrong character set, you get what the Japanese call "mojibake". As a coder, from database to codebase to browser, you should try and use UTF8 completely. For email you can use UTF8, but you will probably find most mail servers and clients are still old and use a mishmash of different character sets (e.g. ISO2022X). Database Settings If you are a mysql user, then make sure all connections to the DB use UTF8, and that all tables/fields use UTF8. By default mysql uses Latin (Swedish) character sets. Those kooky Swedes love their sense of humour!! Checking your Codebase In my experience editors like Notepad++, Notepad2, UltraEdit, e, etc... all have UTF8 support problems. They mostly work, but since their developers don't use CJK languages themselves, they are not perfected. Issues like turning off the BOM (Byte Order Mark), mangled tabs, poor character set conversion, etc... all present problems. I highly recommend using a proven UTF8 editor like Maruo. This is made by a Japanese company, but there is an English version (and a trial version) at http://www.hidemaru.interlink.or.jp/software/ Lastly, you may need to convert your source files into UTF8, especially if the codebase itself has CJK language strings contained therein. Manipulating Strings Any string function needs to be multibyte safe. Notice I didn't say double-byte: UTF8 is not double-byte but multibyte, since the number of bytes per character varies. In PHP you need to call the MB string functions specifically.
Ruby and other languages have more transparent support, but you need to check the docs for your flavour of application server! META Tags Check out google.co.jp or yahoo.co.jp for their META headers. These are sites that know how to do it properly. Basically, include the following META tag in the document <HEAD>: <meta http-equiv="content-type" content="text/html; charset=utf-8"> It is usually safe to mix English HTML document type attributes with the above character set too. So adding the META tag above seems to work in an HTML document that has: <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> Email This is a wholly different can of worms. UTF8 works a lot of the time, but many older Japanese clients use ISO2022X more. This is not worth covering here. Debugging UTF8 Issues Once you have a reliable UTF8 editor like Maruo, you can create static pages and resolve your issues. Hope that helps A: Redirect the data to disk and use a Hex Editor. Most text editors / viewers do their own conversions behind the scenes, so it is difficult to be sure you are seeing the data in its true form.
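In the spirit of the "examine the suspect characters at each hop" and "look at the raw bytes" advice above, here is a small, hypothetical C# sketch for probing where text gets mangled; the EncodingProbe class and Dump method are invented for illustration:

using System;
using System.Text;

public static class EncodingProbe
{
    // Show the raw bytes a string produces under one encoding, and what those
    // bytes turn into if a consumer decodes them with a different encoding.
    // Run this with the encodings used at each hop (browser -> app -> DB) to
    // see where the mojibake is introduced.
    public static void Dump(string text, Encoding writtenAs, Encoding readAs)
    {
        byte[] bytes = writtenAs.GetBytes(text);
        Console.WriteLine("{0} bytes: {1}", writtenAs.WebName, BitConverter.ToString(bytes));
        Console.WriteLine("decoded as {0}: {1}", readAs.WebName, readAs.GetString(bytes));
    }
}

For example, Dump("日本語", Encoding.UTF8, Encoding.GetEncoding("shift_jis")) prints the UTF8 byte sequence and the garbage a Shift-JIS consumer would display for it, which is the same signature you would hunt for with the hex editor suggested in the last answer.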
How do you troubleshoot character encoding problems?
If all you see is the ugly no-char boxes, what tools or strategies do you use to figure out what went wrong? (The specific scenario I'm facing is no-char boxes within a <select> when it should be showing Japanese chars.)
[ "Firstly, \"ugly no-char boxes\" might not be an encoding problem, they might just be a sign you don't have a font installed that can display the glyphs in the page.\nMost character encoding problems happen when strings are being passed from one system to another. For webapps, this is usually between the browser and the application, between the application and the filesystem and between the application and the database.\nSo you need to check where the mis-encoded data is coming from, what character encoding it has at the source, and what encoding it is being received as. The best way is to send through characters you know the system is having problems with, and examine them at each level of the app. What do they look like inside the app? In the database? When you get them back from the database? When they're displayed in the browser?\nSorry to be so general, but the question doesn't give much more to work with.\n", "If the data you send to the browser becomes mangled (moji-bake) you will get trash characters. Also, if you specify the wrong character set in your META headers, your browser will render the page incorrectly, causing moji-bake again, sometimes in random places on the page.\nWhen handling CJK character sets, you must be sure to use UTF8 character encoding throughout the lifetime of your program (data storage, retrieval, data manipulation in your code, displaying in the browsser etc...)\nWhat is UTF8?\nUTF8 handles binary streams of data, not strings. This means the bit combinations can have variable length. ASCII characters have a fixed length of 8 bits representing 1 byte, however UTF8 characters can be composed of 6bits, 8bits, 12bits, etc... As such, UTF8 is prone to what Japanese call \"mojibake\".\nAs a coder, from database to codebase to browser, you should try and use UTF8 completely. For email you can use UTF8, but you will probably find most mail servers and clients are still old and use a mishmash of different character sets (e.g. ISO9022X).\nDatabase Settings\nIf you are a mysql user, then make sure you have to ensure all connections to the DB use UTF8, and that all tables/fields use UTF8. By default mysql uses Latin (Swedish) character sets. Those kooky swedes love their sense of humour!!\nChecking your Codebase\nIn my experience editors like Notepad++, Notepad2, UltraEdit, e, etc... all have UTF8 support problems. They mostly work, but since their developers don't use CJK languages themselves, they are not perfected. Issues like turning off BOM (Byte Order Mark), mangled tabs, poor character set conversion, etc ... all present problems.\nI highly recommend using a proven UTF8 editor like Maruo. This is made by a Japanese company, but there is an English version (and a trial version) at http://www.hidemaru.interlink.or.jp/software/\nLastly, you may need to convert your source files into UTF8. Especially if the codebase itself has CJK language strings contained therein.\nManipulating Strings\nAny string function need to multibyte safe. Notice I didn't say double-byte. UTF8 is not a double byte but multibyte, depending on the total number of bits used to represent a character. In PHP you need to call the MB string functions specifically. Ruby and other languages have more transparent support, but you need to check the docs for your flavour of application server!\nMETA Tags\nCheck out google.co.jp or yahoo.co.jp for their META headers. These are sites that know how to to it properly. 
Basically include the following META tag the doucment <HEAD>\n<meta http-equiv=\"content-type\" content=\"text/html; charset=utf-8\">\nIt is usually safe to mix English HTML document type attributes with the above character too. So adding the META tag above seems to work in a HTML document that has:\n<html xmlns=\"http://www.w3.org/1999/xhtml\" xml:lang=\"en\" lang=\"en\">\nEmail\nThis is a wholly different can of worms. UTF8 works a lot, but many older Japanese clients use ISO2022X more. This is not worth covering here.\nDebugging UTF8 Issues\nOnce you have a reliable UTF8 editor like Maruo, you can create static pages and resolve your issues.\nHope that helps\n", "Redirect the data to disk and use a Hex Editor. Most text editors / viewers do their own conversions behind the scenes, so it is difficult to be sure you are seeing the data in it's true form.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "encoding", "localization" ]
stackoverflow_0000029499_encoding_localization.txt
Q: Can I change the appearance of an html image during hover without a second image? Is there a way to change the appearance of an icon (ie. contrast / luminosity) when I hover the cursor, without requiring a second image file (or without requiring a hidden portion of the image)? A: Here's some good information about image opacity and transparency with CSS. So to make an image with opacity 50%, you'd do this: <img src="image.png" style="opacity: 0.5; filter: alpha(opacity=50)" /> The opacity: part is how Firefox does it, and it's a value between 0.0 and 1.0. filter: is how IE does it, and it's a value from 0 to 100. A: You don't use an img tag, but an element with a background-image css attribute and set the background-position on hover. IE requires an 'a' tag as a parent element for the :hover selector. They are called css sprites. A great article explaining how to use CSS sprites. A: Here's some code to play with. Basic idea: put all possible states of the picture into one big image, set a "window size", that's smaller than the image; move the window around using background-position. #test { display: block; width: 250px; /* window */ height: 337px; /* size */ background: url(http://vi.sualize.us/thumbs/08/09/01/fashion,indie,inspiration,portrait-f825c152cc04c3dbbb6a38174a32a00f_h.jpg) no-repeat; /* put the image */ border: 1px solid red; /* for debugging */ text-indent: -1000px; /* hide the text */ } #test:hover { background-position: -250px 0; /* on mouse over move the window to a different part of the image */ } <a href="#" id="test">a button</a> A: The way I usually see things done with smaller images such as buttons it that only a certain portion of the image is shown. Then many states of the picture will make up a larger picture which gets shifted around behind the visible port. I'll delete this when someone has code.
Can I change the appearance of an html image during hover without a second image?
Is there a way to change the appearance of an icon (ie. contrast / luminosity) when I hover the cursor, without requiring a second image file (or without requiring a hidden portion of the image)?
[ "Here's some good information about image opacity and transparency with CSS.\nSo to make an image with opacity 50%, you'd do this:\n<img src=\"image.png\" style=\"opacity: 0.5; filter: alpha(opacity=50)\" />\n\nThe opacity: part is how Firefox does it, and it's a value between 0.0 and 1.0. filter: is how IE does it, and it's a value from 0 to 100.\n", "You don't use an img tag, but an element with a background-image css attribute and set the background-position on hover. IE requires an 'a' tag as a parent element for the :hover selector. They are called css sprites.\nA great article explaining how to use CSS sprites.\n", "Here's some code to play with. Basic idea: put all possible states of the picture into one big image, set a \"window size\", that's smaller than the image; move the window around using background-position.\n\n\n#test {\r\n display: block;\r\n width: 250px; /* window */\r\n height: 337px; /* size */\r\n background: url(http://vi.sualize.us/thumbs/08/09/01/fashion,indie,inspiration,portrait-f825c152cc04c3dbbb6a38174a32a00f_h.jpg) no-repeat; /* put the image */\r\n border: 1px solid red; /* for debugging */\r\n text-indent: -1000px; /* hide the text */\r\n}\r\n\r\n#test:hover {\r\n background-position: -250px 0; /* on mouse over move the window to a different part of the image */\r\n}\n<a href=\"#\" id=\"test\">a button</a>\n\n\n\n", "The way I usually see things done with smaller images such as buttons it that only a certain portion of the image is shown. Then many states of the picture will make up a larger picture which gets shifted around behind the visible port. I'll delete this when someone has code.\n" ]
[ 9, 9, 6, 1 ]
[]
[]
[ "css", "html", "image" ]
stackoverflow_0000060290_css_html_image.txt
Q: lock keyword in C# I understand the main function of the lock keyword from MSDN lock Statement (C# Reference) The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement, and then releasing the lock. When should the lock be used? For instance it makes sense with multi-threaded applications because it protects the data. But is it necessary when the application does not spin off any other threads? Are there performance issues with using lock? I have just inherited an application that is using lock everywhere, and it is single threaded, and I want to know whether I should leave them in; are they even necessary? Please note this is more of a general knowledge question; the application speed is fine. I want to know if that is a good design pattern to follow in the future, or should this be avoided unless absolutely needed. A: When should the lock be used? A lock should be used to protect shared resources in multithreaded code. Not for anything else. But is it necessary when the application does not spin off any other threads? Absolutely not. It's just a time waster. However, do be sure that you're not implicitly using system threads. For example if you use asynchronous I/O you may receive callbacks from a random thread, not your original thread. Are there performance issues with using lock? Yes. They're not very big in a single-threaded application, but why make calls you don't need? ...if that is a good design pattern to follow in the future[?] Locking everything willy-nilly is a terrible design pattern. If your code is cluttered with random locking and then you do decide to use a background thread for some work, you're likely to run into deadlocks. Sharing a resource between multiple threads requires careful design, and the more you can isolate the tricky part, the better. A: All the answers here seem right: locks' usefulness is to block threads from accessing locked code concurrently. However, there are many subtleties in this field, one of which is that locked blocks of code are automatically marked as critical regions by the Common Language Runtime. The effect of code being marked as critical is that, if the entire region cannot be entirely executed, the runtime may consider that your entire Application Domain is potentially jeopardized and, therefore, unload it from memory. To quote MSDN: For example, consider a task that attempts to allocate memory while holding a lock. If the memory allocation fails, aborting the current task is not sufficient to ensure stability of the AppDomain, because there can be other tasks in the domain waiting for the same lock. If the current task is terminated, other tasks could be deadlocked. Therefore, even though your application is single-threaded, this may be a hazard for you. Consider that one method in a locked block throws an exception that is eventually not handled within the block. Even if the exception is dealt with as it bubbles up through the call stack, your critical region of code didn't finish normally. And who knows how the CLR will react? For more info, read this article on the perils of Thread.Abort(). A: Bear in mind that there might be reasons why your application is not as single-threaded as you think. Async I/O in .NET may well call back on a pool thread, for example, as do some of the various timer classes (not the Windows Forms Timer, though). A: Generally speaking if your application is single threaded, you're not going to get much use out of the lock statement.
Not knowing your application exactly, I don't know if they're useful or not - but I suspect not. Further, if your application is using lock everywhere, I don't know that I would feel all that confident about it working in a multi-threaded environment anyway - did the original developer actually know how to develop multi-threaded code, or did they just add lock statements everywhere in the vague hope that that would do the trick? A: lock should be used around the code that modifies shared state, state that is modified by other threads concurrently, and those other threads must take the same lock. A lock is actually a memory access serializer; the threads (that take the lock) will wait on the lock to enter until the current thread exits the lock, so memory access is serialized. To answer your question, lock is not needed in a single-threaded application, and it does have performance side effects, because locks in C# are based on kernel sync objects and a contended lock causes a transition from user mode to kernel mode. If you're interested in multithreading performance, a good place to start is MSDN's threading guidelines. A: You can have performance issues with locking variables, but normally you'd construct your code to minimize the lengths of time that are spent inside a 'locked' block of code. As far as removing the locks goes, it'll depend on what exactly the code is doing. Even though it's single threaded, if your object is implemented as a Singleton, it's possible that you'll have multiple clients using an instance of it (in memory, on a server) at the same time. A: Yes, there will be some performance penalty when using lock, but it is generally negligible enough to not matter. Using locks (or any other mutual-exclusion statement or construct) is generally only needed in multi-threaded scenarios where multiple threads (either of your own making or from your caller) have the opportunity to interact with the object and change the underlying state or data maintained. For example, if you have a collection that can be accessed by multiple threads you don't want one thread changing the contents of that collection by removing an item while another thread is trying to read it. A: Lock(token) is only used to mark one or more blocks of code that should not run simultaneously in multiple threads. If your application is single-threaded, it's protecting against a condition that can't exist. And locking does incur a performance hit, adding instructions to check for simultaneous access before code is executed. It should only be used where necessary. A: See the question about 'Mutex' in C#. And then look at these two questions regarding use of the 'lock(Object)' statement specifically. A: There is no point in having locks in the app if there is only one thread and yes, it is a performance hit, although it does take a fair number of calls for that hit to stack up into something significant.
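To ground the answers above, here is a minimal, hypothetical C# sketch of where lock actually earns its keep: a piece of shared state that more than one thread can touch at the same time. The OrderCache class and its members are invented for illustration; in a genuinely single-threaded application none of this locking is needed:

using System.Collections.Generic;

public class OrderCache
{
    // Lock on a private object dedicated to the job, not on 'this' or any public object.
    private readonly object _sync = new object();
    private readonly Dictionary<int, string> _orders = new Dictionary<int, string>();

    public void Add(int id, string description)
    {
        lock (_sync)    // only one thread at a time may mutate the dictionary
        {
            _orders[id] = description;
        }
    }

    public bool TryGet(int id, out string description)
    {
        lock (_sync)    // readers take the same lock so they never see a half-updated state
        {
            return _orders.TryGetValue(id, out description);
        }
    }
}

Keeping the locked sections this small also addresses the performance point made above: the cost of an uncontended lock is tiny, and short critical sections keep contention (the expensive case) rare.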
lock keyword in C#
I understand the main function of the lock key word from MSDN lock Statement (C# Reference) The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement, and then releasing the lock. When should the lock be used? For instance it makes sense with multi-threaded applications because it protects the data. But is it necessary when the application does not spin off any other threads? Is there performance issues with using lock? I have just inherited an application that is using lock everywhere, and it is single threaded and I want to know should I leave them in, are they even necessary? Please note this is more of a general knowledge question, the application speed is fine, I want to know if that is a good design pattern to follow in the future or should this be avoided unless absolutely needed.
[ "\nWhen should the lock be used?\n\nA lock should be used to protect shared resources in multithreaded code. Not for anything else.\n\nBut is it necessary when the application does not spin off any other threads?\n\nAbsolutely not. It's just a time waster. However do be sure that you're not implicitly using system threads. For example if you use asynchronous I/O you may receive callbacks from a random thread, not your original thread.\n\nIs there performance issues with using lock?\n\nYes. They're not very big in a single-threaded application, but why make calls you don't need?\n\n...if that is a good design pattern to follow in the future[?]\n\nLocking everything willy-nilly is a terrible design pattern. If your code is cluttered with random locking and then you do decide to use a background thread for some work, you're likely to run into deadlocks. Sharing a resource between multiple threads requires careful design, and the more you can isolate the tricky part, the better.\n", "All the answers here seem right: locks' usefulness is to block threads from acessing locked code concurrently. However, there are many subtleties in this field, one of which is that locked blocks of code are automatically marked as critical regions by the Common Language Runtime.\nThe effect of code being marked as critical is that, if the entire region cannot be entirely executed, the runtime may consider that your entire Application Domain is potentially jeopardized and, therefore, unload it from memory. To quote MSDN:\n\nFor example, consider a task that attempts to allocate memory while holding a lock. If the memory allocation fails, aborting the current task is not sufficient to ensure stability of the AppDomain, because there can be other tasks in the domain waiting for the same lock. If the current task is terminated, other tasks could be deadlocked.\n\nTherefore, even though your application is single-threaded, this may be a hazard for you. Consider that one method in a locked block throws an exception that is eventually not handled within the block. Even if the exception is dealt as it bubbles up through the call stack, your critical region of code didn't finish normally. And who knows how the CLR will react?\nFor more info, read this article on the perils of Thread.Abort().\n", "Bear in mind that there might be reasons why your application is not as single-threaded as you think. Async I/O in .NET may well call-back on a pool thread, for example, as do some of the various timer classes (not the Windows Forms Timer, though).\n", "Generally speaking if your application is single threaded, you're not going to get much use out of the lock statement. Not knowing your application exactly, I don't know if they're useful or not - but I suspect not. Further, if you're application is using lock everywhere I don't know that I would feel all that confident about it working in a multi-threaded environment anyways - did the original developer actually know how to develop multi-threaded code, or did they just add lock statements everywhere in the vague hope that that would do the trick?\n", "lock should be used around the code that modifies shared state, state that is modified by other threads concurrently, and those other treads must take the same lock. 
\nA lock is actually a memory access serializer, the threads (that take the lock) will wait on the lock to enter until the current thread exits the lock, so memory access is serialized.\nTo answer you question lock is not needed in a single threaded application, and it does have performance side effects. because locks in C# are based on kernel sync objects and every lock you take creates a transition to kernel mode from user mode. \nIf you're interested in multithreading performance a good place to start is MSDN threading guidelines\n", "You can have performance issues with locking variables, but normally, you'd construct your code to minimize the lengths of time that are spent inside a 'locked' block of code.\nAs far as removing the locks. It'll depend on what exactly the code is doing. Even though it's single threaded, if your object is implemented as a Singleton, it's possible that you'll have multiple clients using an instance of it (in memory, on a server) at the same time..\n", "Yes, there will be some performance penalty when using lock but it is generally neglible enough to not matter.\nUsing locks (or any other mutual-exclusion statement or construct) is generally only needed in multi-threaded scenarios where multiple threads (either of your own making or from your caller) have the opportunity to interact with the object and change the underlying state or data maintained. For example, if you have a collection that can be accessed by multiple threads you don't want one thread changing the contents of that collection by removing an item while another thread is trying to read it.\n", "Lock(token) is only used to mark one or more blocks of code that should not run simultaneously in multiple threads. If your application is single-threaded, it's protecting against a condition that can't exist.\nAnd locking does invoke a performance hit, adding instructions to check for simultaneous access before code is executed. It should only be used where necessary.\n", "See the question about 'Mutex' in C#. And then look at these two questions regarding use of the 'lock(Object)' statement specifically.\n", "There is no point in having locks in the app if there is only one thread and yes, it is a performance hit although it does take a fair number of calls for that hit to stack up into something significant. \n" ]
[ 58, 7, 6, 2, 2, 1, 1, 1, 1, 0 ]
[]
[]
[ "c#", "design_patterns", "locking", "multithreading" ]
stackoverflow_0000059590_c#_design_patterns_locking_multithreading.txt
Q: Replace huge Case statement in Classic ASP I have a case statement with 200+ cases in Classic ASP which is getting out of control. It sets 5 variables, based on which case is selected. Address, Phone, Name, etc. Just sets the variables and nothing else in each case. Databases or files are not an option for this application. Coming from the Perl world I would use a hash to store this info. A: Getting out of control? I think it's already out of control! Can you not categorise the cases into 'x' general areas and split down into helper routines? A: Brian, the classic ASP equivalent of a Perl hash is the Scripting.Dictionary object. A: Depends on what you want for performance. The case statement is ugly but does not consume memory that would need to be allocated. However, you could create a class for your fields and load instances of them into a Dictionary. Perform this operation in the global.asa script so it only happens once. Store the dictionary in the global ASP collection such that it is only allocated once but used with each page call. My apologies for not getting too specific here... it's been a while. A: A lot of people use VBScript for Classic ASP, but you can use JavaScript / JScript on the server as an alternative. As a matter of fact, this is my preferred way of doing Classic ASP before finally moving to .NET (except in some cases, you will have to mix in VBScript for special cases, i.e. Disconnected Recordset, ExecuteNoRecords, etc.). It will provide you with better OOP support vs VBScript. Maybe you can try refactoring that into some sort of Strategy pattern afterward. Worth looking into, I guess, for better maintenance in the long run. A: The fact that you can't migrate this over to a database or a text file is a bit of an issue, as they would be the best solution for this type of data. However, if you have to have it in the code you could always try putting it into a matrix that you predefine. Then you could provide a function that returns the data from a given row in the matrix. A: This should be done with a database, but since you said that is not an option, nothing you will write will be any less complex than a switch statement, since it's all required to live in your code (according to your terms of no db and no files). I mean, you could use an Excel Spreadsheet if the idea of a database is too complicated, but technically that would be a file as well! A: Scripting.Dictionary is the best option IMHO.
Replace huge Case statement in Classic ASP
I have a 200+ case statement in Classic ASP which is getting out of control. It sets 5 variables, based on which case is selected. Address, Phone, Name etc. Just sets the variables and nothing else in each case. Databases or files are not an option for this application. Coming from the Perl world I would use a hash to store this info.
[ "Getting out of control? I think it's already out of control!\nCan you not categorise the cases into 'x' general areas and split down into helper routines?\n", "Brian, the classic ASP equivalent of a Perl hash is the Scripting.Dictionary object.\n", "Depends on what you want for performance. \nThe case statement is ugly but does not consume memory that would need to be allocated.\nHowever, you could create a class for your fields and load instances of them into a Dictionary. Perform this operation in the global.asp script so it only happens once. Store the dictionary in the global asp collection such that it is only allocated once but used with each page call.\nMy appologies for not getting too specific here... it's been a while.\n", "A lot of people use VBScript for Classic ASP, but you can use JavaScript / JScript on the server as an alternative. As a matter of fact, this is my preferred way of doing Classic ASP before finally moving to .NET (except in some cases, you will have to mix in VBScript for special cases, i.e. Disconnected Recordset, ExecuteNoRecords, etc.). It will provide you with better OOP support vs VBScript. Maybe you can try refactor that to.some sort of Strategy pattern afterward. Worth looking into I guess for better maintenance in the long run.\n", "The fact that you can't migrate this over to a database or a text file is a bit of an issue as they would be the best solution for this type of data. However, if you have to have it in the code you could always try putting it into a matrix that you predefine. Then you could provide a function that returns the data from a given row in the matrix.\n", "This should be done with a database, but since you said that is not an option, nothing you will write will be any less complex than a switch statement, since it's all required to live in your code (according to your terms of no db and no files).\nI mean, you could use an Excel Spreadsheet if the idea of a database is too complicated but technically that would be a file as well!\n", "Scripting dictionary is the best option IMHO.\n" ]
[ 3, 3, 1, 1, 1, 0, 0 ]
[]
[]
[ "asp_classic" ]
stackoverflow_0000050140_asp_classic.txt
Q: How to connect to PostgreSQL from .NET using TLS with both client and server authentication? I want to connect a C# .NET application to a PostgreSQL database, using TLS with client and server authentication: in other words, if the certificate from the client can not be verified against the certificate of the server, the client should get access denied, and if the client can not verify the certificate of the server, the client should immediately abort connecting. I tried this using Npgsql 1.0, but I can not find any way in Npgsql to specify the client certificate that should be used for the connection. I did manage to get server certificate verification on the client working, and I also did get all the verification working using the commandline db admin tool psql, but this did not help me further in getting it to work with Npgsql. So, how would I connect my .NET app using TLS client & server authentication to a PostgreSQL database? Is there maybe a newer/other data provider that does support this? Is there actually anyone who did get this to work one way or another? A: Try version 2.0 RC2 - it's pretty stable. (As a note, support for server certificate validation wasn't added in CVS until 2009. See http://pgfoundry.org/tracker/?func=detail&atid=592&aid=1010558&group_id=1000140. I am editing this into the reply because the advice to upgrade, while premature, is correct.)
How to connect to PostgreSQL from .NET using TLS with both client and server authentication?
I want to connect a C# .NET application to a PostgreSQL database, using TLS with client and server authentication: in other words, if the certificate from the client can not be verified against the certificate of the server, the client should get access denied, and if the client can not verify the certificate of the server, the client should immediately abort connecting. I tried this using Npgsql 1.0, but I can not find any way in Npgsql to specify the client certificate that should be used for the connection. I did manage to get server certificate verification on the client working, and I also did get all the verification working using the commandline db admin tool psql, but this did not help me further in getting it to work with Npgsql. So, how would I connect my .NET app using TLS client & server authentication to a PostgreSQL database? Is there maybe a newer/other data provider that does support this? Is there actually anyone who did get this to work one way or another?
[ "Try version 2.0 RC2 - it's pretty stable.\n(As a note, support for server certificate validation wasn't added in CVS until 2009. See http://pgfoundry.org/tracker/?func=detail&atid=592&aid=1010558&group_id=1000140. I am editing this into the reply because the advice to upgrade, while premature, is correct.)\n" ]
[ 1 ]
[]
[]
[ ".net", "database_connection", "postgresql", "ssl" ]
stackoverflow_0000057261_.net_database_connection_postgresql_ssl.txt
Q: C++ Quiz - Singletons I'll soon be posting an article on my blog, but I'd like to verify I haven't missed anything first. Find an example I've missed, and I'll cite you on my post. The topic is failed Singleton implementations: In what cases can you accidentally get multiple instances of a singleton? So far, I've come up with: Race Condition on first call to instance() Incorporation into multiple DLLs or DLL and executable Template definition of a singleton - actually separate classes Any other ways I'm missing - perhaps with inheritance? A: If you use a static instance field that you initialize in your cpp file, you can get multiple instances (and even worse behavior) if the initialization of some static/global tries to get an instance of your singleton. This is because the order of static initialization across compilation units is undefined. A: Inheritance shouldn't be an issue as long as the ctor is private. However, if you don't disallow the copy constructor, users may [un]intentionally copy the singleton instance. Privately inheriting from boost::noncopyable is the easiest way to prevent this.
C++ Quiz - Singletons
I'll soon be posting an article on my blog, but I'd like to verify I haven't missed anything first. Find an example I've missed, and I'll cite you on my post. The topic is failed Singleton implementations: In what cases can you accidentally get multiple instances of a singleton? So far, I've come up with: Race Condition on first call to instance() Incorporation into multiple DLLs or DLL and executable Template definition of a singleton - actually separate classes Any other ways I'm missing - perhaps with inheritance?
[ "If you use a static instance field that you initialize in your cpp file, you can get multiple instances (and even worse behavior) if the initialization of some static/global tries to get an instance of your singleton. This is because the order of static initialization across compilation units is undefined.\n", "Inheritance shouldn't be an issue as long as the ctor is private.\nHowever, if you don't disallow the copy constructor, users may [un]intentionally copy the singleton instance. Privately inheriting from boost::noncopyable is the easiest way to prevent this.\n" ]
[ 3, 1 ]
[]
[]
[ "c++", "design_patterns", "oop" ]
stackoverflow_0000060331_c++_design_patterns_oop.txt
Q: SQL Error OLE.INTEROP I'm getting an error whenever I load Management Studio or open a folder in the server explorer, etc. Additionally, if I try to create a new database it constantly updates and does not finish. I have attached a screenshot of the error. Please let me know what I can do to fix this because it's really aggravating. Error Screen http://frickinsweet.com/databaseError.gif A: From the MSDN forum: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=120476&SiteID=1 A: My first guess would be Client Tool corruption. I've occasionally had to uninstall my client tools and reinstall them. Reboot after the uninstall. A: I had to add the registry file AND re-run "regsvr32 actxprxy.dll". This was a really odd and painful error. It only seemed to come into existence after installing VS SP1, but I really don't see why that would have happened.
SQL Error OLE.INTEROP
I'm getting an error whenever I load Management Studio or open a folder in the server explorer, etc. Additionally, If I try to create a new database it constantly is updating and does not finish. I have attached a screenshot of the error. Please let me know what I can do to fix this because it's really aggravating. Error Screen http://frickinsweet.com/databaseError.gif
[ "From MSDN forum http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=120476&SiteID=1\n", "My first guess would be Client Tool corruption.\nI've occasionally had to uninstall my client tools and reinstall them. Boot after uninstall.\n", "I had to add the registry file AND re-run \"regsvr32 actxprxy.dll\" This was a really odd and painful error. It only seemed to come into existence after installing VS SP1 but I really don't see why that would have happened.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "sql_server" ]
stackoverflow_0000060076_sql_server.txt
Q: What function does a tag cloud serve? I see them all the time and always ignore them. Can someone explain to me why they have become so prevalent? If I'm using a site that allows me to explore it via tags (e.g., this one, del.icio.us, etc.) that's what I will do. Why would I need a "cloud" of tags upon which to click? I can just type that tag(s) into a search box. What am I missing? A: It's more of a browse assist than a search assist. If you see a large or bold tag in a tag cloud that interests you, it may lead to some knowledge discovery that wouldn't have otherwise been sought out with a deliberate search. When I am browsing del.icio.us or stackoverflow I appreciate the tags as they sometimes lead me to discover related topics. Wikipedia has an interesting definition: A tag cloud or word cloud (or weighted list in visual design) is a visual depiction of user-generated tags, or simply the word content of a site, used typically to describe the content of web sites. Tags are usually single words and are typically listed alphabetically, and the importance of a tag is shown with font size or color. Thus both finding a tag by alphabet and by popularity is possible. The tags are usually hyperlinks that lead to a collection of items that are associated with a tag. A: It's an easy mechanism to determine which tags are most popular and how heavily a given tag is used (the number of items carrying it). It's just an intuitive interface; I'm fairly certain that's one of the bigger reasons why they are so popular, that and they are very Web 2.0 also. A: Why would I need a "cloud" of tags upon which to click? I can just type that tag(s) into a search box. What am I missing? How do you know what tags are available to type without a lot of trial and error? Even if you know what tags are available, how do you know which are most popular without a bunch more trial and error? A: The thing that makes a tag cloud really useful (at least a well implemented tag cloud IMO) is the ability to drill into a topic deeper and deeper. For example, I could click "Topic A" and then I can see the items in the tag cloud for all tags within the "Topic A" items. I can then drill into one of those sub topics and narrow the items even further. The stackoverflow tag cloud doesn't do this (which is too bad), but if it did, I could click something like "visualstudio" to drill into the threads tagged visualstudio, then click "asp.net" to drill into that, then "javascript". The end result would be a list of all items tagged with all three "visualstudio", "asp.net" and "javascript". This is where a tag cloud becomes really useful. Unfortunately, not all tag clouds work this way (but IMO they should). A: Because searching for php is not the same as viewing all posts that the owner has tagged as php. Try it. A: It helps you understand the focus of the page or site that you're looking at. What topics are being discussed the most? What kinds of information will I find here? If you search for something related to Java and land on two sites, one with a tag cloud showing 'Java' is prominent, and one where Java is almost invisible but 'C#' is prominent, it's pretty easy to quickly decide which site is most valuable to you. A: Tags give a way of explicitly labelling something with what it is about instead of relying on computers to extract this information. For example, you might be interested in questions about stackoverflow. If you search for "stackoverflow" you will get all kinds of questions that are not about stackoverflow at all (e.g.
they only contain the word "stackoverflow" because there is some link to another question). By selecting questions that are tagged with "stackoverflow" you get only those posts that people have explicitly identified as being about stackoverflow.
What function does a tag cloud serve?
I see them all the time and always ignore them. Can someone explain to me why they have become so prevalent? If I'm using a site that allows me to explore it via tags (e.g., this one, del.icio.us, etc.) that's what I will do. Why would I need a "cloud" of tags upon which to click? I can just type that tag(s) into a search box. What am I missing?
[ "It's more of a browse assist than a search assist. If you see a large or bold tag in a tag cloud that interests you it my lead to some knowledge discovery that wouldn't have otherwise been sought out with a deliberate search. When I am browsing del.ico.us or stackoverflow I appreciate the tags as they sometimes lead me to discover related topics.\nWikipedia has an interesting definition:\n\nA tag cloud or word cloud (or weighted list in visual design) is a visual depiction of user-generated tags, or simply the word content of a site, used typically to describe the content of web sites. Tags are usually single words and are typically listed alphabetically, and the importance of a tag is shown with font size or color. 1 Thus both finding a tag by alphabet and by popularity is possible. The tags are usually hyperlinks that lead to a collection of items that are associated with a tag.\n\n", "It's a easy mechanism to determine which tags are most popular or how dense that tag is populated ( amount of tags). \nIt's just a intuative interface, I'm fairly certain that's one of the bigger reason's why they are so popular, that and they are very Web 2.0 also. \n", "\nWhy would I need a \"cloud\" of tags upon which to click? I can just type that tag(s) into a search box. What am I missing?\n\nHow do you know what tags are available to type without a lot of trial and error? Even if you know what tags are available, how do you know which are most popular without a bunch more trial and error?\n", "The thing that makes a tag cloud really useful (at least a well implemented tag cloud IMO) is the ability to drill into a topic deeper and deeper. \nFor example, I could click \"Topic A\" and then I can see the items in the tag cloud for all tags within the \"Topic A\" items. I can then drill into one of those sub topic and narrow the items even further. \nThe stackoverflow tag cloud doesn't do this (which is too bad), but if it did, I could click something like \"visualstudio\" to drill into the threads tagged visualstudio then click \"asp.net\" to drill into that, then \"javascript\". The end result would be a list of all items tagged all three \"visualstudio\", \"asp.net\" and \"javascript\". This is where a tag cloud becomes really useful. Unfortunately, not all tag clouds work this way (but IMO they should).\n", "Because searching for php is not the same as viewing all posts that the owner has tagged as php. Try it.\n", "It helps you understand the focus of the page or site that you're looking at. What topics being discussed the most? What kinds of information will I find here?\nIf you search for something related to Java and land on two sites, one with a tag cloud showing 'Java' is prominent, and one where Java is almost invisible but 'C#' is prominent it's pretty easy to quickly decide which site is most valuable to you.\n", "Tags give a way of explicitly labelling something with what it is about instead of relying on computers to extract this information.\nFor example, you might be interested in on questions about stackoverflow. If you search for \"stackoverflow\" you will get all kinds of questions that are not about stackoverflow at all (e.g. they only contain the word \"stackoverflow\" because there is some link to another question). By selecting questions that are tagged with \"stackoverflow\" you get only those post that people have explicitly identified as being about stackoverflow.\n" ]
[ 15, 4, 2, 2, 1, 1, 0 ]
[]
[]
[ "tags" ]
stackoverflow_0000060330_tags.txt
Q: How do you balance fun feature creep with time constraints? I enjoy programming, usually. Tedious stuff is easy to get done as quickly and correctly as possible so I can get through it and not have to see it again. But a lot of my coding is fun, and when I get in the 'zone' I just really enjoy myself. Which is where I make the mistake of spending too much time, perhaps adding features, perhaps writing it in a cool or elegant manner, or just doing neat prototypes. How do you recognize this is happening before it exceeds your time frame? What do you do before starting a potentially fun piece of code, or during, to get back on track? When is it ok to let yourself go "hog wild" and just enjoy it without worrying about consequences? -Adam A: Keep a detailed prioritized feature list/bug list. Review it often, then balance the fun work with bugs/features that need to get done. A: Give yourself a hard deadline--even for your own projects. Otherwise, you'll just keep tweaking and adding features ad infinitum. A: Always have a working release (snapshot) ready. Treat it like the way SQL Server implements snapshot isolation. :) Continue adding new cool stuff to a separate copy of the project. Once it is stable, overwrite your release folder and that is your new snapshot. That way, whenever somebody asks for a demo or release, you can always switch to the stable application and will have something to show anytime. A: With a backlog. That way you'll always have in mind what needs to be done before you can start doing what you want to do. A: Justify any "fun" features you insert by regarding them as marketable eye-candy. Unless, of course, they're not visible ;)
How do you balance fun feature creep with time constraints?
I enjoy programming, usually. Tedious stuff is easy to get done as quickly and correctly as possible so I can get through it and not have to see it again. But a lot of my coding is fun and when I get in the 'zone' I just really enjoy myself. Which is where I make the mistake of spending too much time, perhaps adding features, perhaps writing it in a cool or elegant manner, or just doing neat prototypes. How do you recognize this is happening before it exceeds your time frame? What do you do before starting a potentially fun piece of code, or during, to get back on track? When is it ok to let yourself go "hog wild" and just enjoy it without worrying about consequences? -Adam
[ "Keep a detailed prioritized feature list/bug list. review it often then balance the fun work with bugs/features that need to get done.\n", "Give yourself a hard deadline--even for your own projects. Otherwise, you'll just keep tweaking and adding features ad infinitum.\n", "Always have a working release (snapshot) ready. Treat it like the way SQL server implement snapshot isolation. :) \nContinue adding new cool stuffs to a separate copy of the project. Once it is stable, overwrite your release folder and that is your new snapshot. Whenever somebody ask for a demo or release, that way you can always switch to the stable application and will have something to show anytime.\n", "With a backlog. That way you'll always have in mind what needs to be done before you can start doing what you want to do.\n", "Justify any \"fun\" features you insert by regarding them as marketable eye-candy.\nUnless, of course, they're not visible ;)\n" ]
[ 6, 4, 4, 2, 1 ]
[]
[]
[ "project_management" ]
stackoverflow_0000060256_project_management.txt
Q: SQL Server 2005 Encryption, asp.net and stored procedures I need to write a web application using SQL Server 2005, asp.net, and ado.net. Much of the user data stored in this application must be encrypted (read HIPAA). In the past for projects that required encryption, I encrypted/decrypted in the application code. However, this was generally for encrypting passwords or credit card information, so only a handful of columns in a couple of tables. For this application, far more columns in several tables need to be encrypted, so I suspect pushing the encryption responsibilities into the data layer will be better performing, especially given SQL Server 2005's native support for several encryption types. (I could be convinced otherwise if anyone has real, empirical evidence.) I've consulted BOL, and I'm fairly adept at using Google. So I don't want links to online articles or MSDN documentation (it's likely I've already read it). One approach I've wrapped my head around so far is to use a symmetric key which is opened using a certificate. So the one-time setup steps are (performed by a DBA in theory): Create a Master Key. Back up the Master Key to a file, burn to CD and store off site. Open the Master Key and create a certificate. Back up the certificate to a file, burn to CD and store off site. Create the Symmetric Key with the encryption algorithm of choice using the certificate. Then anytime a stored procedure (or a human user via Management Studio) needs to access encrypted data, you have to first open the symmetric key, execute any tsql statements or batches, and then close the symmetric key. Then as far as the asp.net application is concerned, and in my case the application code's data access layer, the data encryption is entirely transparent. So my questions are: Do I want to open, execute tsql statements/batches, and then close the symmetric key all within the sproc? The danger I see is: what if something goes wrong with the tsql execution and the sproc execution never reaches the statement that closes the key? I assume this means the key will remain open until SQL Server kills the SPID the sproc executed on. Should I instead consider making three database calls for any given procedure I need to execute (only when encryption is necessary)? One database call to open the key, a second call to execute the sproc, and a third call to close the key. (Each call wrapped in its own try/catch block in order to maximize the odds that an open key ultimately is closed.) Are there any considerations if I need to use client-side transactions (meaning my code is the client, and initiates a transaction, executes several sprocs, and then commits the transaction assuming success)? A: 1) Look into using TRY..CATCH in SQL 2005. Unfortunately there is no FINALLY, so you'll have to handle both the success and error cases individually. 2) Not necessary if (1) handles the cleanup. 3) There isn't really a difference between client and server transactions with SQL Server. Connection.BeginTransaction() more or less executes "BEGIN TRANSACTION" on the server (and System.Transactions/TransactionScope does the same, until it's promoted to a distributed transaction). As for concerns with opening/closing the key multiple times inside a transaction, I don't know of any issues to be aware of. A: I'm a big fan of option 3. Pretend for a minute you were going to set up transaction infrastructure anyways, where: Whenever a call to the datastore was about to be made, if an existing transaction hadn't been started, then one was created.
If a transaction is already in place, then calls to the data store hook into that transaction. This is often useful for business rules that are raised by save/going-to-the-database events. For example, if you had a rule that whenever you sold a widget you needed to update a WidgetAudit table, you'd probably want to wrap the widget audit insert call in the same transaction as the one telling the datastore a widget has been sold. Whenever the original caller to the datastore (from step 1) is finished, it commits/rolls back the transaction, which affects all the database actions which happened during its call (using a try/catch/finally). Once this type of transactioning is created, it becomes simple to tack on an open-key call at the beginning (when the transaction opens) and a close-key call at the end (just before the transaction ends). Making "calls" to the datastore isn't nearly as expensive as opening a connection to the database. It's really things like SqlConnection.Open() that burn resources (even if ADO.NET is pooling them for you). If you want an example of this type of code, I would consider looking at NetTiers. It has quite an elegant solution for the transactioning that we just described (assuming you don't already have something in mind). Just 2 cents. Good luck. A: You can use @@ERROR to see if any errors occurred during the call to a sproc in SQL. Not too complicated. You can, but I prefer to use transactions in SQL Server itself.
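As a rough sketch of what the three-call approach can look like from the ADO.NET side, here is one hedged illustration. The key name (PatientDataKey), certificate name (PatientDataCert), stored procedure name (dbo.GetPatientData) and parameter are placeholders invented for the example, not names from the actual schema, and error handling is reduced to the bare minimum; the point is only that the key is always closed on the same connection, even when the sproc throws.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public static class EncryptedReadExample
    {
        public static DataTable GetPatientData(string connectionString, int patientId)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                try
                {
                    // Call 1: open the symmetric key for this session/connection.
                    using (var open = new SqlCommand(
                        "OPEN SYMMETRIC KEY PatientDataKey DECRYPTION BY CERTIFICATE PatientDataCert", conn))
                    {
                        open.ExecuteNonQuery();
                    }

                    // Call 2: run the stored procedure that decrypts and returns the columns.
                    var table = new DataTable();
                    using (var cmd = new SqlCommand("dbo.GetPatientData", conn))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.Parameters.AddWithValue("@PatientId", patientId);
                        using (var reader = cmd.ExecuteReader())
                        {
                            table.Load(reader);
                        }
                    }
                    return table;
                }
                finally
                {
                    // Call 3: always close keys before the connection goes back to the pool.
                    // CLOSE ALL SYMMETRIC KEYS is harmless even if the OPEN above failed.
                    using (var close = new SqlCommand("CLOSE ALL SYMMETRIC KEYS", conn))
                    {
                        close.ExecuteNonQuery();
                    }
                }
            }
        }
    }

The same shape works with an explicit SqlTransaction when several sprocs need to commit or roll back together; the open and close commands then simply get the transaction assigned like every other command on that connection.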
SQL Server 2005 Encryption, asp.net and stored procedures
I need to write a web application using SQL Server 2005, asp.net, and ado.net. Much of the user data stored in this application must be encrypted (read HIPAA). In the past for projects that required encryption, I encrypted/decrypted in the application code. However, this was generally for encrypting passwords or credit card information, so only a handful of columns in a couple tables. For this application, far more columns in several tables need to be encrypted, so I suspect pushing the encryption responsibilities into the data layer will be better performing, especially given SQL Server 2005's native support for several encryption types. (I could be convinced otherwise if anyone has real, empirical evidence.) I've consulted BOL, and I'm fairly adept at using google. So I don't want links to online articles or MSDN documentation (its likely I've already read it). One approach I've wrapped my head around so far is to use a symmetric key which is opened using a certificate. So the one time setup steps are (performed by a DBA in theory): Create a Master Key Backup the Master Key to a file, burn to CD and store off site. Open the Master Key and create a certificate. Backup the certificate to a file, burn to CD and store off site. Create the Symmetric key with encryption algorithm of choice using the certificate. Then anytime a stored procedure (or a human user via Management Studio) needs to access encrypted data you have to first open the symmetric key, execute any tsql statements or batches, and then close the symmetric key. Then as far as the asp.net application is concerned, and in my case the application code's data access layer, the data encryption is entirely transparent. So my questions are: Do I want to open, execute tsql statements/batches, and then close the symmetric key all within the sproc? The danger I see is, what if something goes wrong with the tsql execution, and code sproc execution never reaches the statement that closes the key. I assume this means the key will remain open until sql kills the SPID that sproc executed on. Should I instead consider making three database calls for any given procedure I need to execute (only when encryption is necessary)? One database call to open the key, a second call to execute the sproc, and a third call to close the key. (Each call wrapped in its own try catch loop in order to maximize the odds that an open key ultimately is closed.) Any considerations should I need to use client side transactions (meaning my code is the client, and initiates a transaction, executes several sprocs, and then commits the transaction assuming success)?
[ "1) Look into using TRY..CATCH in SQL 2005. Unfortunately there is no FINALLY, so you'll have to handle both the success and error cases individually.\n2) Not necessary if (1) handles the cleanup.\n3) There isn't really a difference between client and server transactions with SQL Server. Connection.BeginTransaction() more or less executes \"BEGIN TRANSACTION\" on the server (and System.Transactions/TransactionScope does the same, until it's promoted to a distributed transaction). As for concerns with open/closing the key multiple times inside a transaction, I don't know of any issues to be aware of.\n", "I'm a big fan of option 3.\nPretend for a minute you were going to set up transaction infrastructure anyways where:\n\nWhenever a call to the datastore was about to be made if an existing transaction hadn't been started then one was created. \nIf a transaction is already in place then calls to the data store hook into that transaction. This is often useful for business rules that are raised by save/going-to-the-database events. IE. If you had a rule that whenever you sold a widget you needed to update a WidgetAudit table, you'd probably want to wrap the widget audit insert call in the same transaction as that which is telling the datastore a widget has been sold. \nWhenever a the original caller to the datastore (from step 1) is finished it commits/rollbacks the transaction, which affects all the database actions which happened during its call (using a try/catch/finally).\n\nOnce this type of transactioning is created then it becomes simple to tack on a open key at the beginning (when the transaction opens) and close the key at the end (just before the transaction ends). Making \"calls\" to the datastore isn't nearly as expensive as opening a connection to the database. It's really things like SQLConnection.Open() that burns resources (even if ADO.NET is pooling them for you).\nIf you want an example of these types of codes I would consider looking at NetTiers. It has quite an elegant solution for the transactioning that we just described (assuming you don't already have something in mind).\nJust 2 cents. Good luck.\n", "\nyou can use @@error to see if any errors occured during the call to a sproc in SQL.\nNo to complicated.\nYou can but I prefer to use transactions in SQL Server itself.\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "asp.net", "encryption", "sql_server_2005" ]
stackoverflow_0000059926_asp.net_encryption_sql_server_2005.txt
Q: continuous integration web service I am in a position where I could become a team leader of a team distributed over two countries. This team would be the tech team for a start-up company that we plan to bootstrap on limited funds. So I am trying to find ways to minimize upfront expenses. Right now we are planning to use Java and will have a lot of JUnit tests. I am planning on using GitHub for VCS and Lighthouse for a bug tracker. In addition, I want to add a continuous integration server, but I do not know of any continuous integration servers that are offered as a web service. Does anybody know if there are continuous integration servers available in a software-as-a-service model? P.S. If anybody knows where I can get all three services in one location, that would be great to know too. A: I am assuming you are talking about continuous integration. You can run CruiseControl on a virtual machine or an old machine, but if it needs to be up on the Internet, you can try virtual dedicated server hosting services. You can save money by picking Linux here, but I'd go for a Windows server if your target platform is Windows. A: Note: This is an outdated answer from 2008. There are now plenty of such services thanks to things like Amazon's Elastic Compute Cloud service (for example, travis-ci). I rather doubt you'll find a service to build stuff for you. Building requires a lot of CPU power, and if you're having to rebuild every time someone commits, it would be hard to scale such a service. And I'm sure there are probably security issues and the like as well. As @eed3si9n said, you could run CruiseControl on a spare (virtual) machine and use that. Then set up port forwarding, and something like http://dyndns.com or http://no-ip.info to make it publicly accessible. It's not ideal. I've never used CruiseControl before, but I imagine there will be a way to take the build results and upload them to a public web server (as a dumb HTML file). That way it would sit on your home machine, watching GitHub, building new versions and sending the results to a reliable web host (so no "Connection Timeout" every time your home connection isn't accessible). In fact, I just looked at the CruiseControl documentation - the build results are stored as a set of XML files, so it'd be trivial to transfer/display them on another machine. Basically, my suggestion is: run the continuous integration server on a spare machine, and have it upload the results to a public web server somehow.
continuous integration web service
I am in a position where I could become a team leader of a team distributed over two countries. This team would be the tech. team for a start up company that we plan to bootstrap on limited funds. So I am trying to find out ways to minimize upfront expenses. Right now we are planning to use Java and will have a lot of junit tests. I am planing on using github for VCS and lighthouse for a bug tracker. In addition I want to add a continuous integration server but I do not know of any continuous integration servers that are offered as a web service. Does anybody know if there are continuous integration servers available in a software as a service model? P.S. if anybody knows were I can get these three services at one location that would be great to know to.
[ "I am assuming you are talking about continuous integration.\nYou can run CruiseControl on a virtual machine or an old machine, but if it needs to be up in the Internet, you can try virtual dedicated server hosting services. You can save money by picking Linux here, but I'd go for a Windows server if your target platform is Windows.\n", "Note: This is an outdated answer from 2008. There are now plenty of such services thanks to things like Amazon's Elastic Cloud Compute service (for example, travis-ci)\n\nI rather doubt you'll find a service to build stuff for you. Building requires a lot of CPU power, and if you're having to rebuild every time someone commits, it would be hard to scale such a service.. And I'm sure there's probably security issues and the likes as well..\nAs @eed3si9n said, you could run CruiseControl on a spare (virtual-)machine and use that. Then setup port forwarding, and something like http://dyndns.com or http://no-ip.info to make it publicly accessible. It's not ideal..\nI've never used CruiseControl before, but I imagine there will be a way to take the build results, and upload them to a public web-server (as a dumb HTML file). That way it would sit on your home machine, watching github, building new versions and sending the results to a reliable web-host (so no \"Connection Timeout\" every time your home connection isn't accessible)\nIn fact, I just looked at the CruiseControl documentation - the build results are stored as a set of XML files, so it'd be trivial to transfer/display them on another machine.\nBasically, my suggestion is: run the continuous integration server on a spare machine, have it upload the results to a public web server somehow.\n" ]
[ 1, 0 ]
[]
[]
[ "continuous_integration" ]
stackoverflow_0000060368_continuous_integration.txt
Q: What do I need to know to globalize an asp.net application? I'm writing an asp.net application that will need to be localized to several regions other than North America. What do I need to do to prepare for this globalization? What are your top 1 to 2 resources for learning how to write a world-ready application? A: A couple of things that I've learned: Absolutely and brutally minimize the number of images you have that contain text. Doing so will make your life a billion percent easier since you won't have to get a new set of images for every friggin' language. Be very wary of CSS positioning that relies on things always remaining the same size. If those things contain text, they will not remain the same size, and you will then need to go back and fix your designs. If you use character types in your SQL tables, make sure that any of those that might receive international input are unicode (nchar, nvarchar, ntext). For that matter, I would just standardize on using the unicode versions. If you're building SQL queries dynamically, make sure that you include the N prefix before any quoted text if there's any chance that text might be unicode. If you end up putting garbage in a SQL table, check whether that prefix is missing. Make sure that all your web pages definitively state that they are in a unicode format. See Joel's article, mentioned above. You're going to be using resource files a lot for this project. That's good - ASP.NET 2.0 has great support for such. You'll want to look into the App_LocalResources and App_GlobalResources folders as well as GetLocalResourceObject, GetGlobalResourceObject, and the concept of meta:resourceKey. Chapter 30 of Professional ASP.NET 2.0 has some great content regarding that. The 3.5 version of the book may well have good content there as well, but I don't own it. Think about fonts. Many of the standard fonts you might want to use aren't unicode capable. I've always had luck with Arial Unicode MS, MS Gothic, MS Mincho. I'm not sure about how cross-platform these are, though. Also, note that not all fonts support all of the Unicode character set. Again, test, test, test. Start thinking now about how you're going to get translations into this system. Go talk to whoever is your translation vendor about how they want data passed back and forth for translation. Think about the fact that, through your local resource files, you will likely be repeating some commonly used strings through the system. Do you normalize those into global resource files, or do you have some sort of database layer where only one copy of each text used is generated? In our recent project, we used resource files which were generated from a database table that contained all the translations and the original, English version of the resource files. Test. Generally speaking I will test in German, Polish, and an Asian language (Japanese, Chinese, Korean). German and Polish are wordy and nearly guaranteed to stretch text areas; Asian languages use an entirely different set of characters, which tests your unicode support. A: Learn about the System.Globalization namespace: System.Globalization Also, a good book is .NET Internationalization: The Developer's Guide to Building Global Windows and Web Applications. A: It would be good to refresh a bit on Unicode if you are targeting other cultures and languages. The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) A: This is a hard problem.
I live in Canada, so multilingualism is a big issue. In all my years of doing software development, I've never seen a solution that I liked. I've seen a lot of solutions that worked, and got the job done, but they've always felt like a big kludge. I would go with @harriyott, and make sure that none of your strings are actually in code. A resource file works well for desktop applications. However, in ASP.NET, I'd recommend using the database. @John Christensen also has some good pointers. A: Make sure you're compiling with Code Analysis turned on, and pay attention to the Globalization warnings that it gives you. Keep data in an invariant format (CultureInfo.InvariantCulture) until you display it to the user (then use CultureInfo.CurrentCulture). A: I would seriously consider reading the following CodeProject article: Globalization and localization demystified in ASP.NET 2.0 It covers everything from Cultures and Locales, setting the thread's current culture, resource files, encodings, you name it! And of course it's loaded with pretty pictures and examples :-). Good luck! A: I would suggest: Put all strings in either the database or resource files. Allow extra space for translated text, as some languages (e.g. German) are wordier.
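To tie a few of these suggestions together, here is a small, hedged C# sketch of the per-request plumbing in ASP.NET 2.0. The resource class name ("Messages"), the key ("Welcome") and the idea of reading the language from a query string are assumptions made up for the illustration; a real application would more likely take the culture from a user profile or the Accept-Language header.

    using System;
    using System.Globalization;
    using System.Threading;
    using System.Web.UI;

    public partial class ProductPage : Page
    {
        // ASP.NET 2.0 calls this early in the page lifecycle, before resources are resolved.
        protected override void InitializeCulture()
        {
            // Assumption for the example: culture arrives as ?lang=de-DE (no validation here).
            string lang = Request.QueryString["lang"];
            if (!string.IsNullOrEmpty(lang))
            {
                CultureInfo culture = CultureInfo.GetCultureInfo(lang);
                Thread.CurrentThread.CurrentUICulture = culture; // selects Messages.de.resx, etc.
                Thread.CurrentThread.CurrentCulture = culture;   // drives date/number/currency formatting
            }
            base.InitializeCulture();
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            // Looks up App_GlobalResources/Messages.resx (or the culture-specific variant).
            string welcome = (string)GetGlobalResourceObject("Messages", "Welcome");

            // Keep data invariant internally, format it with the user's culture for display.
            decimal price = decimal.Parse("1299.95", CultureInfo.InvariantCulture);
            string display = price.ToString("C", CultureInfo.CurrentCulture);

            Response.Write(Server.HtmlEncode(welcome + " " + display));
        }
    }

The same lookups are also available from markup via expressions such as <%$ Resources: Messages, Welcome %>, which is usually preferable to code-behind assignments once the meta:resourceKey machinery mentioned above is in place.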
What do I need to know to globalize an asp.net application?
I'm writing an asp.net application that will need to be localized to several regions other than North America. What do I need to do to prepare for this globalization? What are your top 1 to 2 resources for learning how to write a world ready application.
[ "A couple of things that I've learned:\n\nAbsolutely and brutally minimize the number of images you have that contain text. Doing so will make your life a billion percent easier since you won't have to get a new set of images for every friggin' language.\nBe very wary of css positioning that relies on things always remaining the same size. If those things contain text, they will not remain the same size, and you will then need to go back and fix your designs.\nIf you use character types in your sql tables, make sure that any of those that might receive international input are unicode (nchar, nvarchar, ntext). For that matter, I would just standardize on using the unicode versions.\nIf you're building SQL queries dynamically, make sure that you include the N prefix before any quoted text if there's any chance that text might be unicode. If you end up putting garbage in a SQL table, check to see if that's there.\nMake sure that all your web pages definitively state that they are in a unicode format. See Joel's article, mentioned above.\nYou're going to be using resource files a lot for this project. That's good - ASP.NET 2.0 has great support for such. You'll want to look into the App_LocalResources and App_GlobalResources folder as well as GetLocalResourceObject, GetGlobalResourceObject, and the concept of meta:resourceKey. Chapter 30 of Professional ASP.NET 2.0 has some great content regarding that. The 3.5 version of the book may well have good content there as well, but I don't own it.\nThink about fonts. Many of the standard fonts you might want to use aren't unicode capable. I've always had luck with Arial Unicode MS, MS Gothic, MS Mincho. I'm not sure about how cross-platform these are, though. Also, note that not all fonts support all of the Unicode character definition. Again, test, test, test.\nStart thinking now about how you're going to get translations into this system. Go talk to whoever is your translation vendor about how they want data passed back and forth for translation. Think about the fact that, through your local resource files, you will likely be repeating some commonly used strings through the system. Do you normalize those into global resource files, or do you have some sort of database layer where only one copy of each text used is generated. In our recent project, we used resource files which were generated from a database table that contained all the translations and the original, english version of the resource files. \nTest. Generally speaking I will test in German, Polish, and an Asian language (Japanese, Chinese, Korean). German and Polish are wordy and nearly guaranteed to stretch text areas, Asian languages use an entirely different set of characters which tests your unicode support.\n\n", "Learn about the System.Globalization namespace:\nSystem.Globalization\nAlso, a good book is NET Internationalization: The Developer's Guide to Building Global Windows and Web Applications \n", "Would be good to refresh a bit on Unicodes if you are targeting other cultures,languages.\nThe Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)\n", "This is a hard problem. I live in Canada, so multilingualism is a big issue. In all my years of doing software development, I've never seen a solution that I liked. I've seen a lot of solutions that worked, and got the job done, but they've always felt like a big kludge. I would go with @harriyott, and make sure that none of your strings are actually in code. 
A resource file works well for desktop applications. However in ASP.Net, I'd recommend using the database. @John Christensen also has some good pointers.\n", "Make sure you're compiling with Code Analysis turned on, and pay attention to the Globalization warnings that it gives you. Keep data in an invariant format (CultureInfo.InvariantCulture) until you display it to the user (then use CultureInfo.CurrentCulture).\n", "I would seriously consider reading the following code project article:\nGlobalization and localization demystified in ASP.NET 2.0\nIt covers everything from Cultures and Locales, setting the threads current culture, resource files, encodings, you name it!\nAnd of course it's loaded with pretty pictures and examples :-). Good luck!\n", "I would suggest:\n\nPut all strings in either the database or resource files. \nAllow extra space for translated text, as some (e.g. German) are wordier.\n\n" ]
[ 23, 3, 2, 2, 2, 1, 0 ]
[]
[]
[ "asp.net", "globalization", "localization", "unicode" ]
stackoverflow_0000059130_asp.net_globalization_localization_unicode.txt
Q: .NET serialization class design issue We have a rather large object graph that needs to be serialized and deserialized in a lot of different ways (modes). In some modes we want certain properties to be deserialized and in some we don't. In future modes it might also be possible that there are more options for properties than yes or no. The problem is now how we implement these modes. Approach A (use deserialization constructor and ISerializable.GetObjectData): If we let each object of the graph serialize itself using a deserialization constructor, we get a lot of switches for all the different modes of deserialization. The advantage of this approach, however, is that all the deserialization logic is in one location, and if we add new properties we just need to modify ISerializable.GetObjectData and the deserialization constructor. Another advantage is that the object might take internal state into account that might not be exposed publicly. The most important disadvantage is that the dataobject itself needs to know about all possible serialization modes. If we need a new mode, we need to modify the dataobjects. Approach B (Deserialization Factory Classes/Methods): Another approach would be to have some sort of Deserialization Factory Classes/Methods, one for each mode, that do the serialization and deserialization externally (e.g. GraphSerializer.SerializeObjectTypeX(ObjectTypeX objectToSerialize)). The advantage here is that whenever we want a new mode, we just add a new Factory Class/Method, and our dataobjects do not get cluttered with all the serialization modes that get introduced. The main disadvantage here is that I would have to write the same serialization code over and over for all the different modes. If two modes differ in just one or two properties, I would still have to implement the complete logic for the whole graph again. When I add a new property to a dataobject, I need to update all the factory classes. So I wonder if there is a better approach to this IMHO general problem. Or even a best practice in .NET? Or maybe I am just approaching the whole thing from the wrong perspective? A: Make separate serializer classes (a la XmlSerializer) for each mode; inherit or encapsulate to avoid duplication. Use attributes on properties to mark whether and how they should be serialised in a specific mode.
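The answer above is terse, so here is one hedged C# sketch of what the attribute idea could look like. The attribute name, the mode names, the Person type and the flat name/value output are all invented for illustration; a real implementation would plug into whatever formatter the graph already uses and would still have to handle nesting, collections and versioning.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;

    // Marks which serialization modes a property participates in.
    [AttributeUsage(AttributeTargets.Property)]
    public sealed class SerializeInModeAttribute : Attribute
    {
        public string[] Modes { get; private set; }
        public SerializeInModeAttribute(params string[] modes) { Modes = modes; }
    }

    public class Person
    {
        [SerializeInMode("Summary", "Full")]
        public string Name { get; set; }

        [SerializeInMode("Full")]                  // only serialized in "Full" mode
        public string Address { get; set; }

        public string TransientNotes { get; set; } // never serialized
    }

    // One serializer per mode; the per-property decision lives in the attribute,
    // so adding a property never means touching every mode-specific factory.
    public class ModeSerializer
    {
        private readonly string _mode;
        public ModeSerializer(string mode) { _mode = mode; }

        public IDictionary<string, object> Serialize(object graph)
        {
            var bag = new Dictionary<string, object>();
            foreach (PropertyInfo p in graph.GetType().GetProperties())
            {
                var attr = (SerializeInModeAttribute)Attribute.GetCustomAttribute(
                    p, typeof(SerializeInModeAttribute));
                if (attr != null && attr.Modes.Contains(_mode))
                    bag[p.Name] = p.GetValue(graph, null);
            }
            return bag;
        }
    }

    class Demo
    {
        static void Main()
        {
            var person = new Person { Name = "Jane Roe", Address = "1 Example St", TransientNotes = "n/a" };
            IDictionary<string, object> summary = new ModeSerializer("Summary").Serialize(person);
            Console.WriteLine(string.Join(", ", summary.Select(kv => kv.Key + "=" + kv.Value).ToArray()));
            // Prints: Name=Jane Roe
        }
    }

The same idea can also be combined with Approach A: a mode value can be passed through the StreamingContext.Context property so that GetObjectData consults the attributes instead of a hand-written switch per mode.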
.NET serialization class design issue
We have a rather large object graph that needs to be serialized and deserialized in a lot of different ways (modes). In some modes we want certain properties to be deserialized and in some we don't. In future modes it might also be possible that there are more options for properties than yes or no. The problem is now how we implement these modes. Approach A (use deserialization constructor and ISerializable.GetObjectData): If we let each object of the graph serialize itself using a deserialization constructor we get a lot switches for all the different modes of deserialization. The advantage of this approach however is that all the deserialization logic is at one location and if we add new properties we just need to modify the ISerializable.GetObjectData and the deserialization constructor. Another advantage is that the object might take internal states into account that might be exposed publically. The most important disadvantage is that we dataobject itself needs to know about all possible serialization modes. If we need a new mode we need to modify the dataobjects. Approach B (Deserialization Factory Classes/Methods ): Another approach would be to have some sort Deserialization Factory Classes/Methods one for each mode that does the serialization and deserialization externally (e.g. GraphSerializer.SerializeObjectTypeX(ObjectTypeX objectToSerialze). The advantage here is that whenever we want a new mode we just add a new Factory Class/Method and our Dataobject do not get cluttered with all the serialization modes that get introduced. The main disadvantage here is that I would have to write the same serialization code over and over for all the different modes. If two modes differ just in one or two properties but I would have to implement the complete logic for the whole graph again. When I add a new property to a dataobject I need to update all the factory classes. So I wonder if there is a better approach to this IMHO general problem. Or even a best practise in .NET? Or maybe I am just approaching the whole thing from a wrong perspective?
[ "Make separate serializer classes (a-la XmlSerializer) for each mode, inherit or incapsulate to avoid duplication.\nUse attributes on properties to mark whether and how they should be serialised in specific mode\n" ]
[ 2 ]
[]
[]
[ ".net", "class_design", "serialization" ]
stackoverflow_0000056005_.net_class_design_serialization.txt
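The accepted answer above stays at the design level, so here is one possible sketch of the "attributes per mode" idea. It is written in Python only to keep it short (the question itself is about .NET and ISerializable), and the class, field, and mode names are invented for illustration. The point is just that the per-mode rules live as metadata next to the properties, while the traversal logic is written once and reused by every mode.

# Mode-driven serialization sketch: each field declares which modes
# include it, and one generic serializer consults that metadata.
# Class, field, and mode names are illustrative assumptions.
class Order:
    SERIALIZATION_MODES = {
        "order_id": {"summary", "full", "audit"},
        "items": {"full", "audit"},
        "internal_notes": {"audit"},
    }

    def __init__(self, order_id, items, internal_notes):
        self.order_id = order_id
        self.items = items
        self.internal_notes = internal_notes


def serialize(obj, mode):
    """Generic walker shared by every mode."""
    rules = getattr(type(obj), "SERIALIZATION_MODES", {})
    return {field: getattr(obj, field)
            for field, modes in rules.items()
            if mode in modes}


if __name__ == "__main__":
    order = Order(42, ["widget"], "credit check pending")
    print(serialize(order, "summary"))  # only order_id
    print(serialize(order, "audit"))    # all three fields

Adding a new mode then means adding a mode name to the metadata, not touching the data objects' logic or duplicating the graph-walking code, which is the trade-off the question weighs between approaches A and B.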
Q: Connecting Team Explorer to Codeplex anonymously I was using Codeplex and tried connecting to their source control using Team Explorer, with no joy. I also tried connecting with HTTPS or HTTP, using the server name and the project name. As I do not have a user account on Codeplex I could not login. I am just trying to check out some code without changing it. My question is: How can I connect Team Explorer to a Codeplex server anonymously? A: As the person primarily responsible for making anonymous access work against the TFS CodePlex servers, I can tell you that it isn't possible with Team Explorer. We tried to make it happen, but the way you get anonymous to work would've caused a pretty stellar-sized security hole with Team Explorer. So, as others have mentioned, the custom-written clients (CPC and SvnBridge) do support anonymous. I know the Teamprise guys were talking about adding it to Teamprise for a while, but not sure if they ever got around to it. It would've been a pretty big change in the way they work (since it basically has to be Workspace-less). Edit: Brannon helped, too. Wrote all the horrible C++ that I refuse to write. He just bugged me on IM, so I better amend my previous remarks. :-p A: I think you have to use the CodePlex Source Control Client. In includes cpc.exe which supports the anonymous access features of CodePlex TFS servers for non-coordinator/developer access. But according to the site: The CodePlex Client is not currently being maintained. The focus of the CodePlex team now is on the SvnBridge. I'm using TortoiseSVN with SvnBridge with no problems. A: I have used SVNBridge with TortoiseSVN, which workes like a charm. What I was looking for here is a way for anonymous access that is directly integrated with VS. Guess that's not possible at the moment. Also just found out you can connect directly via TortoiseSVN, without SVNBridge. Look for the "SvnBridge on the CodePlex servers?" heading A: I think it's not possible with Team Explorer. But you can with CodePlex Source Control Client or Tortoise
Connecting Team Explorer to Codeplex anonymously
I was using Codeplex and tried connecting to their source control using Team Explorer, with no joy. I also tried connecting with HTTPS or HTTP, using the server name and the project name. As I do not have a user account on Codeplex I could not login. I am just trying to check out some code without changing it. My question is: How can I connect Team Explorer to a Codeplex server anonymously?
[ "As the person primarily responsible for making anonymous access work against the TFS CodePlex servers, I can tell you that it isn't possible with Team Explorer. We tried to make it happen, but the way you get anonymous to work would've caused a pretty stellar-sized security hole with Team Explorer.\nSo, as others have mentioned, the custom-written clients (CPC and SvnBridge) do support anonymous. I know the Teamprise guys were talking about adding it to Teamprise for a while, but not sure if they ever got around to it. It would've been a pretty big change in the way they work (since it basically has to be Workspace-less).\nEdit: Brannon helped, too. Wrote all the horrible C++ that I refuse to write. He just bugged me on IM, so I better amend my previous remarks. :-p\n", "I think you have to use the CodePlex Source Control Client. In includes cpc.exe which supports the anonymous access features of CodePlex TFS servers for non-coordinator/developer access. But according to the site:\n\nThe CodePlex Client is not currently\n being maintained. The focus of the\n CodePlex team now is on the SvnBridge.\n\nI'm using TortoiseSVN with SvnBridge with no problems. \n", "I have used SVNBridge with TortoiseSVN, which workes like a charm.\nWhat I was looking for here is a way for anonymous access that is directly integrated with VS. Guess that's not possible at the moment.\nAlso just found out you can connect directly via TortoiseSVN, without SVNBridge. Look for the \"SvnBridge on the CodePlex servers?\" heading\n", "I think it's not possible with Team Explorer. But you can with CodePlex Source Control Client or Tortoise\n" ]
[ 3, 2, 2, 1 ]
[]
[]
[ "codeplex", "tfs", "version_control" ]
stackoverflow_0000037614_codeplex_tfs_version_control.txt
Q: WPF - Programmatic Binding on a BitmapEffect I would like to be able to programmatically bind some data to the dependency properties on a BitmapEffect. With a FrameworkElement like TextBlock there is a SetBinding method where you can programmatically do these bindings like: myTextBlock.SetBinding(TextBlock.TextProperty, new Binding("SomeProperty")); And I know you can do it in straight XAML (as seen below) <TextBlock Width="Auto" Text="Some Content" x:Name="MyTextBlock" TextWrapping="Wrap" > <TextBlock.BitmapEffect> <BitmapEffectGroup> <OuterGlowBitmapEffect x:Name="MyGlow" GlowColor="White" GlowSize="{Binding Path=MyValue}" /> </BitmapEffectGroup> </TextBlock.BitmapEffect> </TextBlock> But I can't figure out how to accomplish this with C# because BitmapEffect doesn't have a SetBinding method. I've tried: myTextBlock.SetBinding(OuterGlowBitmapEffect.GlowSize, new Binding("SomeProperty") { Source = someObject }); But it doesn't work. A: You can use BindingOperation.SetBinding: Binding newBinding = new Binding(); newBinding.ElementName = "SomeObject"; newBinding.Path = new PropertyPath(SomeObjectType.SomeProperty); BindingOperations.SetBinding(MyGlow, OuterGlowBitmapEffect.GlowSizeProperty, newBinding); I think that should do what you want.
WPF - Programmatic Binding on a BitmapEffect
I would like to be able to programmatically bind some data to the dependency properties on a BitmapEffect. With a FrameworkElement like TextBlock there is a SetBinding method where you can programmatically do these bindings like: myTextBlock.SetBinding(TextBlock.TextProperty, new Binding("SomeProperty")); And I know you can do it in straight XAML (as seen below) <TextBlock Width="Auto" Text="Some Content" x:Name="MyTextBlock" TextWrapping="Wrap" > <TextBlock.BitmapEffect> <BitmapEffectGroup> <OuterGlowBitmapEffect x:Name="MyGlow" GlowColor="White" GlowSize="{Binding Path=MyValue}" /> </BitmapEffectGroup> </TextBlock.BitmapEffect> </TextBlock> But I can't figure out how to accomplish this with C# because BitmapEffect doesn't have a SetBinding method. I've tried: myTextBlock.SetBinding(OuterGlowBitmapEffect.GlowSize, new Binding("SomeProperty") { Source = someObject }); But it doesn't work.
[ "You can use BindingOperation.SetBinding:\nBinding newBinding = new Binding();\nnewBinding.ElementName = \"SomeObject\";\nnewBinding.Path = new PropertyPath(SomeObjectType.SomeProperty);\nBindingOperations.SetBinding(MyGlow, OuterGlowBitmapEffect.GlowSizeProperty, newBinding);\n\nI think that should do what you want.\n" ]
[ 11 ]
[]
[]
[ "bitmapeffect", "data_binding", "wpf" ]
stackoverflow_0000059958_bitmapeffect_data_binding_wpf.txt
Q: Can distutils create empty __init__.py files? If all of my __init__.py files are empty, do I have to store them into version control, or is there a way to make distutils create empty __init__.py files during installation? A: In Python, __init__.py files actually have a meaning! They mean that the folder they are in is a Python module. As such, they have a real role in your code and should most probably be stored in Version Control. You could well imagine a folder in your source tree that is NOT a Python module, for example a folder containing only resources (e.g. images) and no code. That folder would not need to have a __init__.py file in it. Now how do you make the difference between folders where distutils should create those files and folders where it should not ? A: Is there a reason you want to avoid putting empty __init__.py files in version control? If you do this you won't be able to import your packages from the source directory wihout first running distutils. If you really want to, I suppose you can create __init__.py in setup.py. It has to be before running distutils.setup, so setup itself is able to find your packages: from distutils import setup import os for path in [my_package_directories]: filename = os.path.join(pagh, '__init__.py') if not os.path.exists(filename): init = open(filename, 'w') init.close() setup( ... ) but... what would you gain from this, compared to having the empty __init__.py files there in the first place?
Can distutils create empty __init__.py files?
If all of my __init__.py files are empty, do I have to store them into version control, or is there a way to make distutils create empty __init__.py files during installation?
[ "In Python, __init__.py files actually have a meaning! They mean that the folder they are in is a Python module. As such, they have a real role in your code and should most probably be stored in Version Control.\nYou could well imagine a folder in your source tree that is NOT a Python module, for example a folder containing only resources (e.g. images) and no code. That folder would not need to have a __init__.py file in it. Now how do you make the difference between folders where distutils should create those files and folders where it should not ?\n", "Is there a reason you want to avoid putting empty __init__.py files in version control? If you do this you won't be able to import your packages from the source directory wihout first running distutils.\nIf you really want to, I suppose you can create __init__.py in setup.py. It has to be before running distutils.setup, so setup itself is able to find your packages:\nfrom distutils import setup\nimport os\n\nfor path in [my_package_directories]:\n filename = os.path.join(pagh, '__init__.py')\n if not os.path.exists(filename):\n init = open(filename, 'w')\n init.close()\n\nsetup(\n...\n)\n\nbut... what would you gain from this, compared to having the empty __init__.py files there in the first place? \n" ]
[ 7, 4 ]
[]
[]
[ "distutils", "python", "version_control" ]
stackoverflow_0000060352_distutils_python_version_control.txt
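For reference, a cleaned-up, runnable version of the setup.py idea sketched in the second answer above: the in-thread snippet writes pagh where it means path, and setup is actually imported from distutils.core. The package directory names and version below are placeholders, and the loop assumes the package directories already exist in the source tree.

# Create any missing (empty) __init__.py files before calling setup(),
# so distutils can find the packages. Directory names are placeholders
# and must already exist in the source tree.
import os
from distutils.core import setup

PACKAGE_DIRS = ["mypackage", os.path.join("mypackage", "subpackage")]

for package_dir in PACKAGE_DIRS:
    init_path = os.path.join(package_dir, "__init__.py")
    if not os.path.exists(init_path):
        open(init_path, "w").close()  # touch an empty __init__.py

setup(
    name="mypackage",
    version="0.1",
    packages=["mypackage", "mypackage.subpackage"],
)

As the answers point out, though, keeping the empty __init__.py files in version control is usually simpler than generating them.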
Q: Pass functions in F# Is it possible to pass a reference to a function to another function in F#? Specifically, I'd like to pass lambda functions like foo(fun x -> x ** 3) More specifically, I need to know how I would refer to the passed function in a function that I wrote myself. A: Yes, it is possible. The manual has this example: > List.map (fun x -> x % 2 = 0) [1 .. 5];; val it : bool list = [false; true; false; true; false] A: Functions are first class citizens in F#. You can therefore pass them around just like you want to. If you have a function like this: let myFunction f = f 1 2 3 and f is function then the return value of myFunction is f applied to 1,2 and 3. A: Passing a lambda function to another function works like this: Suppose we have a trivial function of our own as follows: let functionThatTakesaFunctionAndAList f l = List.map f l Now you can pass a lambda function and a list to it: functionThatTakesaFunctionAndAList (fun x -> x ** 3.0) [1.0;2.0;3.0] Inside our own function functionThatTakesaFunctionAndAList you can just refer to the lambda function as f because you called your first parameter f. The result of the function call is of course: float list = [1.0; 8.0; 27.0]
Pass functions in F#
Is it possible to pass a reference to a function to another function in F#? Specifically, I'd like to pass lambda functions like foo(fun x -> x ** 3) More specifically, I need to know how I would refer to the passed function in a function that I wrote myself.
[ "Yes, it is possible. The manual has this example:\n> List.map (fun x -> x % 2 = 0) [1 .. 5];;\n\nval it : bool list\n= [false; true; false; true; false]\n\n", "Functions are first class citizens in F#. You can therefore pass them around just like you want to.\nIf you have a function like this:\nlet myFunction f =\n f 1 2 3\n\nand f is function then the return value of myFunction is f applied to 1,2 and 3.\n", "Passing a lambda function to another function works like this:\nSuppose we have a trivial function of our own as follows:\nlet functionThatTakesaFunctionAndAList f l = List.map f l\n\nNow you can pass a lambda function and a list to it:\nfunctionThatTakesaFunctionAndAList (fun x -> x ** 3.0) [1.0;2.0;3.0]\n\nInside our own function functionThatTakesaFunctionAndAList you can just refer to the lambda function as f because you called your first parameter f.\nThe result of the function call is of course:\nfloat list = [1.0; 8.0; 27.0]\n\n" ]
[ 8, 3, 3 ]
[]
[]
[ "f#", "functional_programming", "lambda" ]
stackoverflow_0000044066_f#_functional_programming_lambda.txt
Q: Using the DLR for (primarily) static language compilation I'm building a compiler that targets .NET and I've previously generated CIL directly, but generating DLR trees will make my life a fair amount easier. I'm supporting a few dynamic features, namely runtime function creation and ducktyping, but the vast majority of the code is completely static. So now that that's been explained, I have the following questions: Has the DLR been used for static compilation, outside of small examples on MSDN blogs? If so, what sort of performance was achieved? If not, is there anything fundamentally preventing this? Are there any better mechanisms of generating code than either using the DLR or emitting IL directly? Any insight into this or references to blogs/code/talks would be greatly appreciated. A: I'm not aware of anyone using the DLR in quite this fashion yet, though this is definitely one of its intended use cases. One interesting thing to consider is that the DLR's expression trees have been merged with LINQ expression trees, so the IL being produced for LINQ in some as-yet-unannounced future version of Visual Studio will be using the DLR code. A neat aspect of releasing the DLR as open source is that we have no idea what kinds of interesting things people outside the company might be doing with it :).
Using the DLR for (primarily) static language compilation
I'm building a compiler that targets .NET and I've previously generated CIL directly, but generating DLR trees will make my life a fair amount easier. I'm supporting a few dynamic features, namely runtime function creation and ducktyping, but the vast majority of the code is completely static. So now that that's been explained, I have the following questions: Has the DLR been used for static compilation, outside of small examples on MSDN blogs? If so, what sort of performance was achieved? If not, is there anything fundamentally preventing this? Are there any better mechanisms of generating code than either using the DLR or emitting IL directly? Any insight into this or references to blogs/code/talks would be greatly appreciated.
[ "I'm not aware of anyone using the DLR in quite this fashion yet, though this is definitely one of its intended use cases. One interesting thing to consider is that the DLR's expression trees have been merged with LINQ expression trees, so the IL being produced for LINQ in some as-yet-unannounced future version of Visual Studio will be using the DLR code.\nA neat aspect of releasing the DLR as open source is that we have no idea what kinds of interesting things people outside the company might be doing with it :).\n" ]
[ 7 ]
[]
[]
[ ".net", "cil", "compiler_construction", "dynamic_language_runtime" ]
stackoverflow_0000060474_.net_cil_compiler_construction_dynamic_language_runtime.txt
Q: Automatically checking for a new version of my application Trying to honor a feature request from our customers, I'd like that my application, when Internet is available, check on our website if a new version is available. The problem is that I have no idea about what have to be done on the server side. I can imagine that my application (developped in C++ using Qt) has to send a request (HTTP ?) to the server, but what is going to respond to this request ? In order to go through firewalls, I guess I'll have to use port 80 ? Is this correct ? Or, for such a feature, do I have to ask our network admin to open a specific port number through which I'll communicate ? @pilif : thanks for your detailed answer. There is still something which is unclear for me : like http://www.example.com/update?version=1.2.4 Then you can return what ever you want, probably also the download-URL of the installer of the new version. How do I return something ? Will it be a php or asp page (I know nothing about PHP nor ASP, I have to confess) ? How can I decode the ?version=1.2.4 part in order to return something accordingly ? A: I would absolutely recommend to just do a plain HTTP request to your website. Everything else is bound to fail. I'd make a HTTP GET request to a certain page on your site containing the version of the local application. like http://www.example.com/update?version=1.2.4 Then you can return what ever you want, probably also the download-URL of the installer of the new version. Why not just put a static file with the latest version to the server and let the client decide? Because you may want (or need) to have control over the process. Maybe 1.2 won't be compatible with the server in the future, so you want the server to force the update to 1.3, but the update from 1.2.4 to 1.2.6 could be uncritical, so you might want to present the client with an optional update. Or you want to have a breakdown over the installed base. Or whatever. Usually, I've learned it's best to keep as much intelligence on the server, because the server is what you have ultimate control over. Speaking here with a bit of experience in the field, here's a small preview of what can (and will - trust me) go wrong: Your Application will be prevented from making HTTP-Requests by the various Personal Firewall applications out there. A considerable percentage of users won't have the needed permissions to actually get the update process going. Even if your users have allowed the old version past their personal firewall, said tool will complain because the .EXE has changed and will recommend the user not to allow the new exe to connect (users usually comply with the wishes of their security tool here). In managed environments, you'll be shot and hanged (not necessarily in that order) for loading executable content from the web and then actually executing it. So to keep the damage as low as possible, fail silently when you can't connect to the update server before updating, make sure that you have write-permission to the install directory and warn the user if you do not, or just don't update at all. Provide a way for administrators to turn the auto-update off. It's no fun to do what you are about to do - especially when you deal with non technically inclined users as I had to numerous times. A: Pilif answer was good, and I have lots of experience with this too, but I'd like to add something more: Remember that if you start yourapp.exe, then the "updater" will try to overwrite yourapp.exe with the newest version. 
Depending upon your operating system and programming environment (you've mentioned C++/QT, I have no experience with those), you will not be able to overwrite yourapp.exe because it will be in use. What I have done is create a launcher. I have a MyAppLauncher.exe that uses a config file (xml, very simple) to launch the "real exe". Should a new version exist, the Launcher can update the "real exe" because it's not in use, and then relaunch the new version. Just keep that in mind and you'll be safe. A: Martin, you are absolutely right of course. But I would deliver the launcher with the installer. Or just download the installer, launch it and quit myself as soon as possible. The reason is bugs in the launcher. You would never, ever, want to be dependent on a component you cannot update (or forget to include in the initial drop). So the payload I distribute with the updating process of my application is just the standard installer, but devoid of any significant UI. Once the client has checked that the installer has a chance of running successfully and once it has downloaded the updater, it runs that and quits itself. The updater than runs, installs its payload into the original installation directory and restarts the (hopefully updated) application. Still: The process is hairy and you better think twice before implementing an Auto Update functionality on the Windows Platform when your application has a wide focus of usage. A: in php, the thing is easy: <?php if (version_compare($_GET['version'], "1.4.0") < 0){ echo "http://www.example.com/update.exe"; }else{ echo "no update"; } ?> if course you could extend this so the currently available version isn't hard-coded inside the script, but this is just about illustrating the point. In your application you would have this pseudo code: result = makeHTTPRequest("http://www.example.com/update?version=" + getExeVersion()); if result != "no update" then updater = downloadUpdater(result); ShellExecute(updater); ExitApplication; end; Feel free to extend the "protocol" by specifying something the PHP script could return to tell the client whether it's an important, mandatory update or not. Or you can add some text to display to the user - maybe containing some information about what's changed. Your possibilities are quite limitless. A: My Qt app just uses QHttp to read tiny XML file off my website that contains the latest version number. If this is greater than the current version number it gives the option to go to the download page. Very simple. Works fine. A: I would agree with @Martin and @Pilif's answer, but add; Consider allowing your end-users to decide if they want to actually install the update there and then, or delay the installation of the update until they've finished using the program. I don't know the purpose/function of your app but many applications are launched when the user needs to do something specific there and then - nothing more annoying than launching an app and then being told it's found a new version, and you having to wait for it to download, shut down the app and relaunch itself. If your program has other resources that might be updated (reference files, databases etc) the problem gets worse. We had an EPOS system running in about 400 shops, and initially we thought it would be great to have the program spot updates and download them (using a file containing a version number very similar to the suggestions you have above)... great idea. 
Until all of the shops started up their systems at around the same time (8:45-8:50am), and our server was hit serving a 20+Mb download to 400 remote servers, which would then update the local software and cause a restart. Chaos - with nobody able to trade for about 10 minutes. Needless to say that this caused us to subsequently turn off the 'check for updates' feature and redesign it to allow the shops to 'delay' the update until later in the day. :-) EDIT: And if anyone from ADOBE is reading - for god's sake why does the damn acrobat reader insist on trying to download updates and crap when I just want to fire-it-up to read a document? Isn't it slow enough at starting, and bloated enough, as it is, without wasting a further 20-30 seconds of my life looking for updates every time I want to read a PDF? DONT THEY USE THEIR OWN SOFTWARE??!!! :-) A: On the server you could just have a simple file "latestversion.txt" which contains the version number (and maybe download URL) of the latest version. The client then just needs to read this file using a simple HTTP request (yes, to port 80) to retrieve http://your.web.site/latestversion.txt, which you can then parse to get the version number. This way you don't need any fancy server code --- you just need to add a simple file to your existing website. A: if you keep your files in the update directory on example.com, this PHP script should download them for you given the request previously mentioned. (your update would be yourprogram.1.2.4.exe $version = $_GET['version']; $filename = "yourprogram" . $version . ".exe"; $filesize = filesize($filename); header("Pragma: public"); header("Expires: 0"); header("Cache-Control: post-check=0, pre-check=0"); header("Content-type: application-download"); header('Content-Length: ' . $filesize); header('Content-Disposition: attachment; filename="' . basename($filename).'"'); header("Content-Transfer-Encoding: binary"); This makes your web browser think it's downloading an application. A: The simplest way to make this happen is to fire an HTTP request using a library like libcurl and make it download an ini or xml file which contains the online version and where a new version would be available online. After parsing the xml file you can determine if a new version is needed and download the new version with libcurl and install it. A: Just put an (XML) file on your server with the version number of the latest version, and a URL to the download the new version from. Your application can then request the XML file, look if the version differs from its own, and take action accordingly. A: I think that simple XML file on the server would be sufficient for version checking only purposes. You would need then only an ftp account on your server and build system that is able to send a file via ftp after it has built a new version. That build system could even put installation files/zip on your website directly! A: If you want to keep it really basic, simply upload a version.txt to a webserver, that contains an integer version number. Download that check against the latest version.txt you downloaded and then just download the msi or setup package and run it. More advanced versions would be to use rss, xml or similar. It would be best to use a third-party library to parse the rss and you could include information that is displayed to your user about changes if you wish to do so. Basically you just need simple download functionality. Both these solutions will only require you to access port 80 outgoing from the client side. 
This should normally not require any changes to firewalls or networking (on the client side) and you simply need to have a internet facing web server (web hosting, colocation or your own server - all would work here). There are a couple of commercial auto-update solutions available. I'll leave the recommendations for those to others answerers, because I only have experience on the .net side with Click-Once and Updater Application Block (the latter is not continued any more).
Automatically checking for a new version of my application
Trying to honor a feature request from our customers, I'd like that my application, when Internet is available, check on our website if a new version is available. The problem is that I have no idea about what have to be done on the server side. I can imagine that my application (developped in C++ using Qt) has to send a request (HTTP ?) to the server, but what is going to respond to this request ? In order to go through firewalls, I guess I'll have to use port 80 ? Is this correct ? Or, for such a feature, do I have to ask our network admin to open a specific port number through which I'll communicate ? @pilif : thanks for your detailed answer. There is still something which is unclear for me : like http://www.example.com/update?version=1.2.4 Then you can return what ever you want, probably also the download-URL of the installer of the new version. How do I return something ? Will it be a php or asp page (I know nothing about PHP nor ASP, I have to confess) ? How can I decode the ?version=1.2.4 part in order to return something accordingly ?
[ "I would absolutely recommend to just do a plain HTTP request to your website. Everything else is bound to fail.\nI'd make a HTTP GET request to a certain page on your site containing the version of the local application.\nlike\nhttp://www.example.com/update?version=1.2.4\n\nThen you can return what ever you want, probably also the download-URL of the installer of the new version. \nWhy not just put a static file with the latest version to the server and let the client decide? Because you may want (or need) to have control over the process. Maybe 1.2 won't be compatible with the server in the future, so you want the server to force the update to 1.3, but the update from 1.2.4 to 1.2.6 could be uncritical, so you might want to present the client with an optional update.\nOr you want to have a breakdown over the installed base.\nOr whatever. Usually, I've learned it's best to keep as much intelligence on the server, because the server is what you have ultimate control over.\nSpeaking here with a bit of experience in the field, here's a small preview of what can (and will - trust me) go wrong:\n\nYour Application will be prevented from making HTTP-Requests by the various Personal Firewall applications out there.\nA considerable percentage of users won't have the needed permissions to actually get the update process going.\nEven if your users have allowed the old version past their personal firewall, said tool will complain because the .EXE has changed and will recommend the user not to allow the new exe to connect (users usually comply with the wishes of their security tool here).\nIn managed environments, you'll be shot and hanged (not necessarily in that order) for loading executable content from the web and then actually executing it.\n\nSo to keep the damage as low as possible, \n\nfail silently when you can't connect to the update server\nbefore updating, make sure that you have write-permission to the install directory and warn the user if you do not, or just don't update at all.\nProvide a way for administrators to turn the auto-update off.\n\nIt's no fun to do what you are about to do - especially when you deal with non technically inclined users as I had to numerous times.\n", "Pilif answer was good, and I have lots of experience with this too, but I'd like to add something more:\nRemember that if you start yourapp.exe, then the \"updater\" will try to overwrite yourapp.exe with the newest version. Depending upon your operating system and programming environment (you've mentioned C++/QT, I have no experience with those), you will not be able to overwrite yourapp.exe because it will be in use. \nWhat I have done is create a launcher. I have a MyAppLauncher.exe that uses a config file (xml, very simple) to launch the \"real exe\". Should a new version exist, the Launcher can update the \"real exe\" because it's not in use, and then relaunch the new version.\nJust keep that in mind and you'll be safe. \n", "Martin, \nyou are absolutely right of course. But I would deliver the launcher with the installer. Or just download the installer, launch it and quit myself as soon as possible. The reason is bugs in the launcher. You would never, ever, want to be dependent on a component you cannot update (or forget to include in the initial drop).\nSo the payload I distribute with the updating process of my application is just the standard installer, but devoid of any significant UI. 
Once the client has checked that the installer has a chance of running successfully and once it has downloaded the updater, it runs that and quits itself.\nThe updater than runs, installs its payload into the original installation directory and restarts the (hopefully updated) application.\nStill: The process is hairy and you better think twice before implementing an Auto Update functionality on the Windows Platform when your application has a wide focus of usage.\n", "in php, the thing is easy:\n<?php\n if (version_compare($_GET['version'], \"1.4.0\") < 0){\n echo \"http://www.example.com/update.exe\";\n }else{\n echo \"no update\";\n }\n?>\n\nif course you could extend this so the currently available version isn't hard-coded inside the script, but this is just about illustrating the point.\nIn your application you would have this pseudo code:\nresult = makeHTTPRequest(\"http://www.example.com/update?version=\" + getExeVersion());\nif result != \"no update\" then\n updater = downloadUpdater(result);\n ShellExecute(updater);\n ExitApplication;\nend;\n\nFeel free to extend the \"protocol\" by specifying something the PHP script could return to tell the client whether it's an important, mandatory update or not. \nOr you can add some text to display to the user - maybe containing some information about what's changed.\nYour possibilities are quite limitless.\n", "My Qt app just uses QHttp to read tiny XML file off my website that contains the latest version number. If this is greater than the current version number it gives the option to go to the download page. Very simple. Works fine.\n", "I would agree with @Martin and @Pilif's answer, but add;\nConsider allowing your end-users to decide if they want to actually install the update there and then, or delay the installation of the update until they've finished using the program. \nI don't know the purpose/function of your app but many applications are launched when the user needs to do something specific there and then - nothing more annoying than launching an app and then being told it's found a new version, and you having to wait for it to download, shut down the app and relaunch itself. If your program has other resources that might be updated (reference files, databases etc) the problem gets worse.\nWe had an EPOS system running in about 400 shops, and initially we thought it would be great to have the program spot updates and download them (using a file containing a version number very similar to the suggestions you have above)... great idea. Until all of the shops started up their systems at around the same time (8:45-8:50am), and our server was hit serving a 20+Mb download to 400 remote servers, which would then update the local software and cause a restart. Chaos - with nobody able to trade for about 10 minutes.\nNeedless to say that this caused us to subsequently turn off the 'check for updates' feature and redesign it to allow the shops to 'delay' the update until later in the day. :-)\nEDIT: And if anyone from ADOBE is reading - for god's sake why does the damn acrobat reader insist on trying to download updates and crap when I just want to fire-it-up to read a document? Isn't it slow enough at starting, and bloated enough, as it is, without wasting a further 20-30 seconds of my life looking for updates every time I want to read a PDF?\nDONT THEY USE THEIR OWN SOFTWARE??!!! 
:-)\n", "On the server you could just have a simple file \"latestversion.txt\" which contains the version number (and maybe download URL) of the latest version. The client then just needs to read this file using a simple HTTP request (yes, to port 80) to retrieve http://your.web.site/latestversion.txt, which you can then parse to get the version number. This way you don't need any fancy server code --- you just need to add a simple file to your existing website.\n", "if you keep your files in the update directory on example.com, this PHP script should download them for you given the request previously mentioned. (your update would be yourprogram.1.2.4.exe\n$version = $_GET['version']; \n$filename = \"yourprogram\" . $version . \".exe\";\n$filesize = filesize($filename);\nheader(\"Pragma: public\");\nheader(\"Expires: 0\");\nheader(\"Cache-Control: post-check=0, pre-check=0\");\nheader(\"Content-type: application-download\");\nheader('Content-Length: ' . $filesize);\nheader('Content-Disposition: attachment; filename=\"' . basename($filename).'\"');\nheader(\"Content-Transfer-Encoding: binary\");\n\nThis makes your web browser think it's downloading an application. \n", "The simplest way to make this happen is to fire an HTTP request using a library like libcurl and make it download an ini or xml file which contains the online version and where a new version would be available online.\nAfter parsing the xml file you can determine if a new version is needed and download the new version with libcurl and install it.\n", "Just put an (XML) file on your server with the version number of the latest version, and a URL to the download the new version from. Your application can then request the XML file, look if the version differs from its own, and take action accordingly.\n", "I think that simple XML file on the server would be sufficient for version checking only purposes.\nYou would need then only an ftp account on your server and build system that is able to send a file via ftp after it has built a new version. That build system could even put installation files/zip on your website directly!\n", "If you want to keep it really basic, simply upload a version.txt to a webserver, that contains an integer version number. Download that check against the latest version.txt you downloaded and then just download the msi or setup package and run it.\nMore advanced versions would be to use rss, xml or similar. It would be best to use a third-party library to parse the rss and you could include information that is displayed to your user about changes if you wish to do so.\nBasically you just need simple download functionality.\nBoth these solutions will only require you to access port 80 outgoing from the client side. This should normally not require any changes to firewalls or networking (on the client side) and you simply need to have a internet facing web server (web hosting, colocation or your own server - all would work here).\nThere are a couple of commercial auto-update solutions available. I'll leave the recommendations for those to others answerers, because I only have experience on the .net side with Click-Once and Updater Application Block (the latter is not continued any more).\n" ]
[ 38, 21, 7, 6, 4, 2, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "c++", "qt" ]
stackoverflow_0000056391_c++_qt.txt
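To make the client half of the pseudo-code above concrete, here is a minimal sketch in Python (the thread's own examples are PHP and pseudo-code, and the question is about a C++/Qt client; the idea is the same in any language). The example.com URL and the "no update" sentinel are the conventions invented in the answers, not a real service, and a production updater would still need the installer-launching, permission, and firewall handling the answers warn about.

# Minimal client-side update check following the "no update" / URL
# protocol sketched in the answers. example.com and the version string
# are placeholders from the thread, not a real endpoint.
import urllib.parse
import urllib.request

CURRENT_VERSION = "1.2.4"
UPDATE_URL = "http://www.example.com/update"

def check_for_update(current_version):
    """Return the installer URL if the server offers one, else None."""
    query = urllib.parse.urlencode({"version": current_version})
    try:
        with urllib.request.urlopen(UPDATE_URL + "?" + query, timeout=5) as resp:
            answer = resp.read().decode("utf-8").strip()
    except OSError:
        return None  # fail silently when the update server is unreachable
    return None if answer == "no update" else answer

if __name__ == "__main__":
    installer_url = check_for_update(CURRENT_VERSION)
    if installer_url:
        print("New version available:", installer_url)
    else:
        print("Up to date (or the update check failed).")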
Q: Terminal emulation in Flex I need to do some emulation of some old DOS or mainframe terminals in Flex. Something like the image below for example. The different coloured text is easy enough, but the ability to do different background colours, such as the yellow background is beyond the capabilities of the standard Flash text. I may also need to be able to enter text at certain places and scroll text up the "terminal". Any idea how I'd attack this? Or better still, any existing code/components for this sort of thing? A: Use TextField.getCharBoundaries to get a rectangle of the first and last characters in the areas where you want a background. From these rectangles you can construct a rectangle that spans the whole area. Use this to draw the background in a Shape placed behind the text field, or in the parent of the text field. Update you asked for an example, here is how to get a rectangle from a range of characters: var firstCharBounds : Rectangle = textField.getCharBoundaries(firstCharIndex); var lastCharBounds : Rectangle = textField.getCharBoundaries(lastCharIndex); var rangeBounds : Rectangle = new Rectangle(); rangeBounds.topLeft = firstCharBounds.topLeft; rangeBounds.bottomRight = lastCharBounds.bottomRight; If you want to find a rectangle for a whole line you can do this instead: var charBounds : Rectangle = textField.getCharBoundaries(textField.getLineOffset(lineNumber)); var lineBounds : Rectangle = new Rectangle(0, charBounds.y, textField.width, firstCharBounds.height); When you have the bounds of the text range you want to paint a background for, you can do this in the updateDisplayList method of the parent of the text field (assuming the text field is positioned at [0, 0] and has white text, and that textRangesWithYellowBackground is an array of rectangles that represent the text ranges that should have yellow backgrounds): graphics.clear(); // this draws the black background graphics.beginFill(0x000000); graphics.drawRect(0, 0, textField.width, textField.height); graphics.endFill(); // this draws yellow text backgrounds for each ( var r : Rectangle in textRangesWithYellowBackground ) graphics.beginFill(0xFFFF00); graphics.drawRect(r.x, r.y, r.width, r.height); graphics.endFill(); } A: The font is fixed width and height, so making a background bitmap dynamically isn't difficult, and is probably the quickest and easiest solution. In fact, if you size it correctly there will only be one stretched pixel per character. Color the pixel (or pixels) according to the background of the character. -Adam
Terminal emulation in Flex
I need to do some emulation of some old DOS or mainframe terminals in Flex. Something like the image below for example. The different coloured text is easy enough, but the ability to do different background colours, such as the yellow background is beyond the capabilities of the standard Flash text. I may also need to be able to enter text at certain places and scroll text up the "terminal". Any idea how I'd attack this? Or better still, any existing code/components for this sort of thing?
[ "Use TextField.getCharBoundaries to get a rectangle of the first and last characters in the areas where you want a background. From these rectangles you can construct a rectangle that spans the whole area. Use this to draw the background in a Shape placed behind the text field, or in the parent of the text field.\nUpdate you asked for an example, here is how to get a rectangle from a range of characters:\nvar firstCharBounds : Rectangle = textField.getCharBoundaries(firstCharIndex);\nvar lastCharBounds : Rectangle = textField.getCharBoundaries(lastCharIndex);\n\nvar rangeBounds : Rectangle = new Rectangle();\n\nrangeBounds.topLeft = firstCharBounds.topLeft;\nrangeBounds.bottomRight = lastCharBounds.bottomRight;\n\nIf you want to find a rectangle for a whole line you can do this instead:\nvar charBounds : Rectangle = textField.getCharBoundaries(textField.getLineOffset(lineNumber));\n\nvar lineBounds : Rectangle = new Rectangle(0, charBounds.y, textField.width, firstCharBounds.height);\n\nWhen you have the bounds of the text range you want to paint a background for, you can do this in the updateDisplayList method of the parent of the text field (assuming the text field is positioned at [0, 0] and has white text, and that textRangesWithYellowBackground is an array of rectangles that represent the text ranges that should have yellow backgrounds):\ngraphics.clear();\n\n// this draws the black background\ngraphics.beginFill(0x000000);\ngraphics.drawRect(0, 0, textField.width, textField.height);\ngraphics.endFill();\n\n// this draws yellow text backgrounds\nfor each ( var r : Rectangle in textRangesWithYellowBackground )\n graphics.beginFill(0xFFFF00);\n graphics.drawRect(r.x, r.y, r.width, r.height);\n graphics.endFill();\n}\n\n", "The font is fixed width and height, so making a background bitmap dynamically isn't difficult, and is probably the quickest and easiest solution. In fact, if you size it correctly there will only be one stretched pixel per character.\nColor the pixel (or pixels) according to the background of the character.\n-Adam\n" ]
[ 2, 1 ]
[]
[]
[ "apache_flex" ]
stackoverflow_0000060558_apache_flex.txt
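Adam's "one stretched pixel per character" idea is easy to prototype outside Flash. The sketch below uses Python with Pillow only because it is compact; in Flex you would draw into a BitmapData and scale it the same way. The 80x25 grid, the 8x16 cell size, and the yellow highlight band are made-up values for illustration.

# Terminal background layer prototype: one pixel per character cell,
# scaled with nearest-neighbour so each pixel becomes a solid cell.
# Grid contents and the 8x16 cell size are illustrative assumptions.
from PIL import Image

COLS, ROWS = 80, 25
CELL_W, CELL_H = 8, 16

BLACK = (0, 0, 0)
YELLOW = (255, 255, 0)

# Background colour per cell; here a yellow band like a highlighted field.
grid = [[YELLOW if row == 3 and 10 <= col <= 40 else BLACK
         for col in range(COLS)] for row in range(ROWS)]

tiny = Image.new("RGB", (COLS, ROWS))
for row in range(ROWS):
    for col in range(COLS):
        tiny.putpixel((col, row), grid[row][col])

# Nearest-neighbour scaling keeps each cell a flat block of colour.
background = tiny.resize((COLS * CELL_W, ROWS * CELL_H), Image.NEAREST)
background.save("terminal_background.png")

The character glyphs would then be drawn on top of this layer, which is also roughly how the getCharBoundaries approach in the first answer layers the text field over a background shape.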
Q: How can I make the browser see CSS and Javascript changes? CSS and Javascript files don't change very often, so I want them to be cached by the web browser. But I also want the web browser to see changes made to these files without requiring the user to clear their browser cache. Also want a solution that works well with a version control system such as Subversion. Some solutions I have seen involve adding a version number to the end of the file in the form of a query string. Could use the SVN revision number to automate this for you: ASP.NET Display SVN Revision Number Can you specify how you include the Revision variable of another file? That is in the HTML file I can include the Revision number in the URL to the CSS or Javascript file. In the Subversion book it says about Revision: "This keyword describes the last known revision in which this file changed in the repository". Firefox also allows pressing CTRL+R to reload everything on a particular page. To clarify I am looking for solutions that don't require the user to do anything on their part. A: I found that if you append the last modified timestamp of the file onto the end of the URL the browser will request the files when it is modified. For example in PHP: function urlmtime($url) { $parsed_url = parse_url($url); $path = $parsed_url['path']; if ($path[0] == "/") { $filename = $_SERVER['DOCUMENT_ROOT'] . "/" . $path; } else { $filename = $path; } if (!file_exists($filename)) { // If not a file then use the current time $lastModified = date('YmdHis'); } else { $lastModified = date('YmdHis', filemtime($filename)); } if (strpos($url, '?') === false) { $url .= '?ts=' . $lastModified; } else { $url .= '&ts=' . $lastModified; } return $url; } function include_css($css_url, $media='all') { // According to Yahoo, using link allows for progressive // rendering in IE where as @import url($css_url) does not echo '<link rel="stylesheet" type="text/css" media="' . $media . '" href="' . urlmtime($css_url) . '">'."\n"; } function include_javascript($javascript_url) { echo '<script type="text/javascript" src="' . urlmtime($javascript_url) . '"></script>'."\n"; } A: Some solutions I have seen involve adding a version number to the end of the file in the form of a query string. <script type="text/javascript" src="funkycode.js?v1"> You could use the SVN revision number to automate this for you by including the word LastChangedRevision in your html file after where v1 appears above. You must also setup your repository to do this. I hope this further clarifies my answer? Firefox also allows pressing CTRL + R to reload everything on a particular page. A: In my opinion, it is better to make the version number part of the file itself e.g. myscript.1.2.3.js. You can set your webserver to cache this file forever, and just add a new js file when you have a new version. A: When you release a new version of your CSS or JS libraries, cause the following to occur: modify the filename to include a unique version string modify the HTML files which reference the library to point at the versioned file (this is usually a pretty simple matter for a release script) Now you can set the Expires for the CSS/JS to be years in the future. Whenever you change the content, if the referencing HTML points to a new URI, browsers will no longer use the old cached copy. This causes the caching behavior you want without requiring anything of the user. A: I was also wondering how to do this, when I found grom's answer. Thanks for the code. 
I struggled with understanding how the code was supposed to be used. (I don't use a version control system.) In summary, you include the timestamp (ts) when you call the stylesheet. You're not planning on changing the stylesheet often: <?php include ('grom_file.php'); // timestamp on the filename has to be updated manually include_css('_stylesheets/style.css?ts=20080912162813', 'all'); ?>
How can I make the browser see CSS and Javascript changes?
CSS and Javascript files don't change very often, so I want them to be cached by the web browser. But I also want the web browser to see changes made to these files without requiring the user to clear their browser cache. Also want a solution that works well with a version control system such as Subversion. Some solutions I have seen involve adding a version number to the end of the file in the form of a query string. Could use the SVN revision number to automate this for you: ASP.NET Display SVN Revision Number Can you specify how you include the Revision variable of another file? That is in the HTML file I can include the Revision number in the URL to the CSS or Javascript file. In the Subversion book it says about Revision: "This keyword describes the last known revision in which this file changed in the repository". Firefox also allows pressing CTRL+R to reload everything on a particular page. To clarify I am looking for solutions that don't require the user to do anything on their part.
[ "I found that if you append the last modified timestamp of the file onto the end of the URL the browser will request the files when it is modified. For example in PHP:\nfunction urlmtime($url) {\n $parsed_url = parse_url($url);\n $path = $parsed_url['path'];\n\n if ($path[0] == \"/\") {\n $filename = $_SERVER['DOCUMENT_ROOT'] . \"/\" . $path;\n } else {\n $filename = $path;\n }\n\n if (!file_exists($filename)) {\n // If not a file then use the current time\n $lastModified = date('YmdHis');\n } else {\n $lastModified = date('YmdHis', filemtime($filename));\n }\n\n if (strpos($url, '?') === false) {\n $url .= '?ts=' . $lastModified;\n } else {\n $url .= '&ts=' . $lastModified;\n }\n\n return $url;\n}\n\nfunction include_css($css_url, $media='all') {\n // According to Yahoo, using link allows for progressive \n // rendering in IE where as @import url($css_url) does not\n echo '<link rel=\"stylesheet\" type=\"text/css\" media=\"' .\n $media . '\" href=\"' . urlmtime($css_url) . '\">'.\"\\n\";\n}\n\nfunction include_javascript($javascript_url) {\n echo '<script type=\"text/javascript\" src=\"' . urlmtime($javascript_url) .\n '\"></script>'.\"\\n\";\n}\n\n", "Some solutions I have seen involve adding a version number to the end of the file in the form of a query string.\n<script type=\"text/javascript\" src=\"funkycode.js?v1\">\n\nYou could use the SVN revision number to automate this for you by including the word LastChangedRevision in your html file after where v1 appears above. You must also setup your repository to do this.\nI hope this further clarifies my answer?\nFirefox also allows pressing CTRL + R to reload everything on a particular page.\n", "In my opinion, it is better to make the version number part of the file itself e.g. myscript.1.2.3.js. You can set your webserver to cache this file forever, and just add a new js file when you have a new version.\n", "When you release a new version of your CSS or JS libraries, cause the following to occur:\n\nmodify the filename to include a unique version string\nmodify the HTML files which reference the library to point at the versioned file\n\n(this is usually a pretty simple matter for a release script)\nNow you can set the Expires for the CSS/JS to be years in the future. Whenever you change the content, if the referencing HTML points to a new URI, browsers will no longer use the old cached copy.\nThis causes the caching behavior you want without requiring anything of the user.\n", "I was also wondering how to do this, when I found grom's answer. Thanks for the code.\nI struggled with understanding how the code was supposed to be used. (I don't use a version control system.) In summary, you include the timestamp (ts) when you call the stylesheet. You're not planning on changing the stylesheet often:\n<?php \n include ('grom_file.php');\n // timestamp on the filename has to be updated manually\n include_css('_stylesheets/style.css?ts=20080912162813', 'all');\n?>\n\n" ]
[ 29, 10, 8, 6, 6 ]
[]
[]
[ "caching", "css", "http", "javascript" ]
stackoverflow_0000003224_caching_css_http_javascript.txt
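The accepted answer's PHP helper translates directly to other server stacks; here is a rough Python equivalent for comparison. The document root, helper names, and paths are invented for the example; the idea is identical: stamp the asset URL with the file's last-modified time so browsers re-fetch only when the file actually changes.

# Append a last-modified timestamp to asset URLs so cached CSS/JS is
# re-requested only when the file changes. Paths here are placeholders.
import os
import time

DOCUMENT_ROOT = "/var/www/html"  # assumption: set to your web root

def url_with_mtime(url_path):
    """Return url_path with a ?ts=... (or &ts=...) cache-busting token."""
    filename = os.path.join(DOCUMENT_ROOT, url_path.lstrip("/"))
    if os.path.exists(filename):
        stamp = time.strftime("%Y%m%d%H%M%S",
                              time.localtime(os.path.getmtime(filename)))
    else:
        stamp = time.strftime("%Y%m%d%H%M%S")  # fall back to "now"
    separator = "&" if "?" in url_path else "?"
    return url_path + separator + "ts=" + stamp

def css_link(url_path, media="all"):
    return ('<link rel="stylesheet" type="text/css" media="%s" href="%s">'
            % (media, url_with_mtime(url_path)))

if __name__ == "__main__":
    print(css_link("/_stylesheets/style.css"))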
Q: Is it possible to display the entity ⇓ in IE6 is it possible to display ⇓ entity in ie6? It is being display in every browser but not IE 6.I am writing markup such as: <span>&#8659;</span> A: According to this page, that symbol doesn't show in IE6 at all. Symbol Character Numeric Description ⇓ &dArr; &#8659; Down double arrow - - * Doesn't show with MS IE6 If you really need that particular symbol, you may just have to go for a small graphic of the arrow - not an ideal solution, but if you need it to display in IE6 then that may be your only option. A: Yes, it is possible... But you'll need to explicitly tell IE which font to find it in. For instance: <span style="font-family:Arial Unicode MS"> &#8659; </span> should produce ⇓ in most browsers.
Is it possible to display the entity ⇓ in IE6
is it possible to display ⇓ entity in ie6? It is being display in every browser but not IE 6.I am writing markup such as: <span>&#8659;</span>
[ "According to this page, that symbol doesn't show in IE6 at all. \nSymbol Character Numeric Description\n⇓ &dArr; &#8659; Down double arrow - - * Doesn't show with MS IE6\n\nIf you really need that particular symbol, you may just have to go for a small graphic of the arrow - not an ideal solution, but if you need it to display in IE6 then that may be your only option.\n", "Yes, it is possible... But you'll need to explicitly tell IE which font to find it in. For instance:\n<span style=\"font-family:Arial Unicode MS\"> &#8659; </span>\n\nshould produce ⇓ in most browsers.\n" ]
[ 5, 4 ]
[]
[]
[ "internet_explorer_6" ]
stackoverflow_0000060664_internet_explorer_6.txt
Q: Is global memory initialized in C++? Is global memory initialized in C++? And if so, how? (Second) clarification: When a program starts up, what is in the memory space which will become global memory, prior to primitives being initialized? I'm trying to understand if it is zeroed out, or garbage for example. The situation is: can a singleton reference be set - via an instance() call, prior to its initialization: MySingleton* MySingleton::_instance = NULL; and get two singleton instances as a result? See my C++ quiz on on multiple instances of a singleton... A: From the standard: Objects with static storage duration (3.7.1) shall be zero-initialized (8.5) before any other initialization takes place. Zero-initialization and initialization with a constant expression are collectively called static initialization; all other initialization is dynamic initialization. Objects of POD [plain old data] types (3.9) with static storage duration initialized with constant expressions (5.19) shall be initialized before any dynamic initialization takes place. Objects with static storage duration defined in namespace scope in the same translation unit and dynamically initialized shall be initialized in the order in which their definition appears in the translation unit. [Note:8.5.1 describes the order in which aggregate members are initialized. The initial- ization of local static objects is described in 6.7.] So yes, globals which have static storage duration will be initialized. Globals allocated, e.g., on the heap will of course not be initialized automatically. A: Yes global primitives are initialized to NULL. Example: int x; int main(int argc, char**argv) { assert(x == 0); int y; //assert(y == 0); <-- wrong can't assume this. } You cannot make any assumptions about classes, structs, arrays, blocks of memory on the heap... It's safest just to always initialize everything. A: Coming from the embedded world... Your code gets compiled into three types of memory: 1. .data: initialized memory 2. .text: constants and code 3. .bss: uninitialized memory (initialized to 0 in C++ if not explicitly initialized) Globals go in .data if initialized. If not they are placed in .bss and zero'ed in premain code. A: Variables declared with static/global scope are always initialized under VC++ at least. Under some circumstances there can actually be a difference in behaviour between: int x = 0; int main() { ... } and int x; int main() { ... } If you are using shared data segments then VC++ at least uses the presence of an explicit initialization along with a #pragma data_seg to determine whether a particular variable should go in the shared data segment or the private data segment for a process. For added fun consider what happens if you have a static C++ object with constructor/destructor declared in a shared data segment. The constructor/destructor is called every time the exe/dll attaches to the data segment which is almost certainly not what you want. More details in this KB article
Is global memory initialized in C++?
Is global memory initialized in C++? And if so, how? (Second) clarification: When a program starts up, what is in the memory space which will become global memory, prior to primitives being initialized? I'm trying to understand if it is zeroed out or garbage, for example. The situation is: can a singleton reference be set - via an instance() call, prior to its initialization: MySingleton* MySingleton::_instance = NULL; and get two singleton instances as a result? See my C++ quiz on multiple instances of a singleton...
[ "From the standard:\n\nObjects with static storage duration (3.7.1) shall be zero-initialized (8.5) before any other initialization takes place. Zero-initialization and initialization with a constant expression are collectively called static initialization; all other initialization is dynamic initialization. Objects of POD [plain old data] types (3.9) with static storage duration initialized with constant expressions (5.19) shall be initialized before any dynamic initialization takes place. Objects with static storage duration defined in namespace scope in the same translation unit and dynamically initialized shall be initialized in the order in which their definition appears in the translation unit. [Note:8.5.1 describes the order in which aggregate members are initialized. The initial- \n ization of local static objects is described in 6.7.] \n\nSo yes, globals which have static storage duration will be initialized. Globals allocated, e.g., on the heap will of course not be initialized automatically.\n", "Yes global primitives are initialized to NULL.\nExample:\nint x;\n\nint main(int argc, char**argv)\n{\n assert(x == 0);\n int y;\n //assert(y == 0); <-- wrong can't assume this.\n}\n\nYou cannot make any assumptions about classes, structs, arrays, blocks of memory on the heap...\nIt's safest just to always initialize everything. \n", "Coming from the embedded world...\nYour code gets compiled into three types of memory:\n 1. .data: initialized memory\n 2. .text: constants and code\n 3. .bss: uninitialized memory (initialized to 0 in C++ if not explicitly initialized)\nGlobals go in .data if initialized. If not they are placed in .bss and zero'ed in premain code.\n", "Variables declared with static/global scope are always initialized under VC++ at least.\nUnder some circumstances there can actually be a difference in behaviour between:\nint x = 0;\n\nint main() { ... }\n\nand\nint x;\n\nint main() { ... }\n\nIf you are using shared data segments then VC++ at least uses the presence of an explicit initialization along with a #pragma data_seg to determine whether a particular variable should go in the shared data segment or the private data segment for a process.\nFor added fun consider what happens if you have a static C++ object with constructor/destructor declared in a shared data segment. The constructor/destructor is called every time the exe/dll attaches to the data segment which is almost certainly not what you want.\nMore details in this KB article\n" ]
[ 19, 9, 7, 0 ]
[]
[]
[ "c++", "memory", "memory_management" ]
stackoverflow_0000060653_c++_memory_memory_management.txt
Q: MySQL replication for fallback scenario When I have two MySQL servers that have different jobs (holding different databases) but want to be able to use one of them to slip in when the other one fails, how would you suggest I keep the data on both of them equal, "close to realtime"? Obviously it's not possible to make a full database dump every x minutes. I've read about the Binary Log; is that the way I need to go? Will that not slow down the fallback server a lot? Is there a way to not include some tables in the binary log, where it doesn't matter that the data has changed? A: You may want to consider the master-master replication scenario, but with a slight twist. You can specify which databases to replicate and limit the replication for each server. For server1 I would add --replicate-do-db=server_2_db and on server2 --replicate-do-db=server_1_db to your my.cnf (or my.ini on Windows). This would mean that only statements for the server_1_db would be replicated to server2 and vice versa. Please also make sure that you perform full backups on a regular basis and not just rely on replication, as it does not provide safety from accidental DROP DATABASE statements or the like. A: Binary log is definitely the way to go. However, you should be aware that with MySQL you can't just flip back and forth between servers like that. One server will be the master and the other will be the slave. You write/read to the master, but can only read from the slave server. If you ever write to the slave, they'll be out of sync and there's no easy way to get them to sync up again (basically, you have to swap them so the master is the new slave, but this is a tedious manual process). If you need true hot-swappable backup databases you might have to go to a system other than MySQL. If all you want is a read-only live backup that you can use instantly in the worst-case scenario (master is permanently destroyed), Binary Log will suit you just fine.
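As a rough sketch only (the server IDs are placeholders; the database names are taken from the answer above, not from a real configuration), the my.cnf fragments for that limited master-master setup might look something like this:

# server1's my.cnf
[mysqld]
server-id       = 1
log-bin         = mysql-bin
replicate-do-db = server_2_db   # only apply replicated statements for server2's database

# server2's my.cnf
[mysqld]
server-id       = 2
log-bin         = mysql-bin
replicate-do-db = server_1_db   # only apply replicated statements for server1's database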
MySQL replication for fallback scenario
When I have two MySQL servers that have different jobs (holding different databases) but want to be able to use one of them to slip in when the other one fails, how would you suggest I keep the data on both of them equal, "close to realtime"? Obviously it's not possible to make a full database dump every x minutes. I've read about the Binary Log; is that the way I need to go? Will that not slow down the fallback server a lot? Is there a way to not include some tables in the binary log, where it doesn't matter that the data has changed?
[ "You may want to consider the master-master replication scenario, but with a slight twist. You can specify which databases to replicate and limit the replication for each server.\nFor server1 I would add --replicate-do-db=server_2_db and on server2 --replicate-do-db=server_1_db to your my.cnf (or my.ini on Windows). This would mean that only statements for the server_1_db would be replicated to server2 and vice verse. \nPlease also make sure that you perform full backups on a regular basis and not just rely on replication as it does not provide safety from accidental DROP DATABASE statements or their like.\n", "Binary log is definitely the way to go. However, you should be aware that with MySQL you can't just flip back and forth between servers like that.\nOne server will be the master and the other will be the slave. You write/read to the master, but can only read from the slave server. If you ever write to the slave, they'll be out of sync and there's no easy way to get them to sync up again (basically, you have to swap them so the master is the new slave, but this is a tedious manual process).\nIf you need true hot-swappable backup databases you might have to go to a system other than MySQL. If all you want is a read-only live backup that you can use instantly in the worst-case scenario (master is permanently destroyed), Binary Log will suit you just fine.\n" ]
[ 3, 2 ]
[]
[]
[ "binary_log", "fallback", "mysql", "replication" ]
stackoverflow_0000008107_binary_log_fallback_mysql_replication.txt
Q: Common memory optimization What are the most common memory optimizations in C#/.NET 2.0? I wanted to see if there are common things that people may not be doing by default in a WinForms app. A: use structs for small wrapper objects to avoid heap fragmentation; think carefully about object lifetimes, especially for large objects, so they do not end up on the LOH unless you intend them to; think about allocations inside of a loop; make sure dynamically sized arrays will be of a reasonable size, otherwise partition the problem A: Use StringBuilder instead of directly modifying a string if you're performing many modifications to the same string. A: Sealing as many classes as possible should also help. AFAIK this is one trick that SmartAssembly uses to reduce memory consumption.
Common memory optimization
What are the most common memory optimizations in C#/.NET 2.0? I wanted to see if there are common things that people may not be doing by default in a WinForms app.
[ "\nuse structs for small wrapper objects to avoid heap fragmentation\nthink carefully about object lifetimes, especially for large objects so they do not end up on the LOH unless you intend them to\nthink about allocations inside of a loop\nmake sure dynamically sized array will be of reasonable size, otherwise partition the problem\n\n", "Use StringBuilder instead of directly modifying a string if you're performing many modifications to the same string.\n", "Sealing as much classes as possible should also help. AFAIK this is one trick that SmartAssembly uses to reduce memory consumption.\n" ]
[ 9, 3, 0 ]
[]
[]
[ "c#", "memory_management", "optimization" ]
stackoverflow_0000060652_c#_memory_management_optimization.txt
Q: python regex to match multi-line preprocessor macro What follows is a regular expression I have written to match multi-line pre-processor macros in C / C++ code. I'm by no means a regular expressions guru, so I'd welcome any advice on how I can make this better. Here's the regex: \s*#define(.*\\\n)+[\S]+(?!\\) It should match all of this: #define foo(x) if(x) \ doSomething(x) But only some of this (shouldn't match the next line of code: #define foo(x) if(x) \ doSomething(x) normalCode(); And also shouldn't match single-line preprocessor macros. I'm pretty sure that the regex above works - but as I said, there probably a better way of doing it, and I imagine that there are ways of breaking it. Can anyone suggest any? A: This is a simple test program I knocked up: #!/usr/bin/env python TEST1=""" #include "Foo.h" #define bar foo\\ x #include "Bar.h" """ TEST2=""" #define bar foo #define x 1 \\ 12 \\ 2 \\\\ 3 Foobar """ TEST3=""" #define foo(x) if(x) \\ doSomething(x) """ TEST4=""" #define foo(x) if(x) \\ doSomething(x) normalCode(); """ import re matcher = re.compile(r"^[ \t]*#define(.*\\\n)+.*$",re.MULTILINE) def extractDefines(s): mo = matcher.search(s) if not mo: print mo return print mo.group(0) extractDefines(TEST1) extractDefines(TEST2) extractDefines(TEST3) extractDefines(TEST4) The re I used: r"^[ \t]*#define(.*\\\n)+.*$" Is very similar to the one use used, the changes: [ \t] To avoid newlines at the start of the define. I rely on + being greedy, so I can use a simple .*$ at the end to get the first line of the define that doesn't end with \ A: start = r"^\s*#define\s+" continuation = r"(?:.*\\\n)+" lastline = r".*$" re_multiline_macros = re.compile(start + continuation + lastline, re.MULTILINE)
python regex to match multi-line preprocessor macro
What follows is a regular expression I have written to match multi-line pre-processor macros in C / C++ code. I'm by no means a regular expressions guru, so I'd welcome any advice on how I can make this better. Here's the regex: \s*#define(.*\\\n)+[\S]+(?!\\) It should match all of this: #define foo(x) if(x) \ doSomething(x) But only some of this (it shouldn't match the next line of code): #define foo(x) if(x) \ doSomething(x) normalCode(); And it also shouldn't match single-line preprocessor macros. I'm pretty sure that the regex above works - but as I said, there's probably a better way of doing it, and I imagine that there are ways of breaking it. Can anyone suggest any?
[ "This is a simple test program I knocked up:\n#!/usr/bin/env python\n\nTEST1=\"\"\"\n#include \"Foo.h\"\n#define bar foo\\\\\n x\n#include \"Bar.h\"\n\"\"\"\n\nTEST2=\"\"\"\n#define bar foo\n#define x 1 \\\\\n 12 \\\\\n 2 \\\\\\\\ 3\nFoobar\n\"\"\"\n\nTEST3=\"\"\"\n#define foo(x) if(x) \\\\\ndoSomething(x)\n\"\"\"\n\nTEST4=\"\"\"\n#define foo(x) if(x) \\\\\ndoSomething(x)\nnormalCode();\n\"\"\"\n\nimport re\nmatcher = re.compile(r\"^[ \\t]*#define(.*\\\\\\n)+.*$\",re.MULTILINE)\n\ndef extractDefines(s):\n mo = matcher.search(s)\n if not mo:\n print mo\n return\n print mo.group(0)\n\nextractDefines(TEST1)\nextractDefines(TEST2)\nextractDefines(TEST3)\nextractDefines(TEST4)\n\nThe re I used:\nr\"^[ \\t]*#define(.*\\\\\\n)+.*$\"\n\nIs very similar to the one use used, the changes:\n\n[ \\t] To avoid newlines at the start\nof the define.\nI rely on + being\ngreedy, so I can use a simple .*$ at\nthe end to get the first line of the\ndefine that doesn't end with \\\n\n", "start = r\"^\\s*#define\\s+\"\ncontinuation = r\"(?:.*\\\\\\n)+\"\nlastline = r\".*$\"\n\nre_multiline_macros = re.compile(start + continuation + lastline, \n re.MULTILINE)\n\n" ]
[ 6, 4 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0000060685_python_regex.txt
Q: How do you change the displayed order of ActiveScaffold "actions"? I am using ActiveScaffold in a Ruby on Rails app, and have replaced the default "actions" text in the table (ie. "edit", "delete", "show") with icons using CSS. I have also added a couple of custom actions with action_link.add ("move" and "copy"). For clarity, I would like to have the icons displayed in a different order than they are. Specifically, I would like "edit" to be the first icon displayed. I seem to be able to change the order of the action_links by the changing the order of definition in the controller. I have also been able to change the order of the default actions by first config.actions.excluding everything, and then adding them with config.actions.add in a specific order. However, my custom actions always seem to appear before the default actions in the list. Ideally I would like them to display "edit" "copy" "move" "delete" (ie - built-in, custom, custom, built-in). Can anyone suggest how I might do this? One idea I had was to re-define "edit" as a custom action (with the default functionality), but I don't know how to go about this either. A: Caveat: I don't know ActiveScaffold. This answer is based on me reading its source code. It looks like the action_links variable is a custom data structure, called ActionLinks. It's defined in ActiveScaffold::DataStructures. Internally, it has a @set variable, which is not a Set at all, but an Array. ActionLinks has an add, delete, and each methods that serve as gatekeepers of this @set variable. When displaying the links, ActiveScaffold does this (in _list_actions.rhtml): <% active_scaffold_config.action_links.each :record do |link| -%> # Displays the link (code removed for brevity) <% end -%> So, short of extending ActiveScaffold::DataStructures::ActionLinks to add a method to sort the values in @set differently, there doesn't seem to be a way to do it, at least not generally. If I were you, I'd add something called order_by!, where you pass it an array of symbols, with the proper order, and it resorts @set. That way, you can call it after you're done adding your custom actions.
How do you change the displayed order of ActiveScaffold "actions"?
I am using ActiveScaffold in a Ruby on Rails app, and have replaced the default "actions" text in the table (i.e. "edit", "delete", "show") with icons using CSS. I have also added a couple of custom actions with action_link.add ("move" and "copy"). For clarity, I would like to have the icons displayed in a different order than they are. Specifically, I would like "edit" to be the first icon displayed. I seem to be able to change the order of the action_links by changing the order of definition in the controller. I have also been able to change the order of the default actions by first config.actions.excluding everything, and then adding them with config.actions.add in a specific order. However, my custom actions always seem to appear before the default actions in the list. Ideally I would like them to display "edit" "copy" "move" "delete" (i.e. built-in, custom, custom, built-in). Can anyone suggest how I might do this? One idea I had was to re-define "edit" as a custom action (with the default functionality), but I don't know how to go about this either.
[ "Caveat: I don't know ActiveScaffold. This answer is based on me reading its source code.\nIt looks like the action_links variable is a custom data structure, called ActionLinks. It's defined in ActiveScaffold::DataStructures.\nInternally, it has a @set variable, which is not a Set at all, but an Array. ActionLinks has an add, delete, and each methods that serve as gatekeepers of this @set variable.\nWhen displaying the links, ActiveScaffold does this (in _list_actions.rhtml):\n<% active_scaffold_config.action_links.each :record do |link| -%>\n # Displays the link (code removed for brevity)\n<% end -%>\n\nSo, short of extending ActiveScaffold::DataStructures::ActionLinks to add a method to sort the values in @set differently, there doesn't seem to be a way to do it, at least not generally.\nIf I were you, I'd add something called order_by!, where you pass it an array of symbols, with the proper order, and it resorts @set. That way, you can call it after you're done adding your custom actions.\n" ]
[ 1 ]
[]
[]
[ "activescaffold", "ruby", "ruby_on_rails" ]
stackoverflow_0000059207_activescaffold_ruby_ruby_on_rails.txt
Q: OpenGL And Flickering When objects from a CallList intersect the near plane I get a flicker. What can I do? I'm using OpenGL and SDL. Yes, it is double buffered. A: It sounds like you're getting z-fighting. "Z-fighting is a phenomenon in 3D rendering that occurs when two or more primitives have similar values in the z-buffer, and is particularly prevalent with coplanar polygons. The effect causes pseudo-random pixels to be rendered with the color of one polygon or another in a non-deterministic manner, varying as the scene is animated, causing one polygon to "win" the z test, then another, and so on." (From Wikipedia) You can get more information about the problem in the OpenGL FAQ. glPolygonOffset might help, but you can also get yourself into trouble with it. Tom Forsyth has a good explanation in his FAQ. Note: it talks about ZBIAS, but that's just the DirectX equivalent. A: The problem was that my rotation function had some floating point errors which screwed up my model_view matrix. None of you could have guessed it, sorry for the waste of your time. Although I don't think that moving the near plane should even be considered a solution to this kind of problem - usually something else is wrong - because OpenGL does support polygon intersection with the near plane. A: Try to put the near clipping plane a little bit further out: for example with gluPerspective -> third parameter zNear http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/glu/perspective.html A: Ah, you meant the near plane. :) Well...another thing when drawing polygons in the same plane is to use glPolygonOffset. From the description: glPolygonOffset is useful for rendering hidden-line images, for applying decals to surfaces, and for rendering solids with highlighted edges.
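A rough sketch (not the original poster's code; function names and numeric values are placeholders) of the two mitigations mentioned in the answers - pushing zNear further out to regain depth-buffer precision, and using glPolygonOffset for coplanar geometry:

#include <GL/gl.h>
#include <GL/glu.h>

void setup_projection(int width, int height)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Moving zNear from e.g. 0.01 to 1.0 greatly improves depth-buffer precision,
    // which reduces z-fighting; zFar has far less influence.
    gluPerspective(60.0, (double)width / (double)height, 1.0, 1000.0);
    glMatrixMode(GL_MODELVIEW);
}

void draw_coplanar_decal()
{
    // Bias the depth values of the second pass so coplanar polygons
    // stop fighting over identical z values.
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-1.0f, -1.0f);
    /* ... draw the decal / highlighted polygons here ... */
    glDisable(GL_POLYGON_OFFSET_FILL);
}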
OpenGL And Flickering
When objects from a CallList intersect the near plane I get a flicker. What can I do? I'm using OpenGL and SDL. Yes, it is double buffered.
[ "It sounds like you're getting z-fighting.\n\"Z-fighting is a phenomenon in 3D rendering that occurs when two or more primitives have similar values in the z-buffer, and is particularly prevalent with coplanar polygons. The effect causes pseudo-random pixels to be rendered with the color of one polygon or another in a non-deterministic manner, varying as the scene is animated, causing one polygon to \"win\" the z test, then another, and so on.\"\n(From wikipedia)\nYou can get more information about the problem in the OpenGL FAQ.\nglPolygonOffset might help, but you can also get yourself into trouble with it. Tom Forsyth has a good explanation in his FAQ Note: It talks about ZBIAS, but that's just the DirectX equivilent.\n", "The problem was that my rotation function had some floating point errors which screwed up my model_view matrix.\nNone of you could have guessed it, sorry for the waste of your time.\nAlthough I don't think that moving the near plane should be even considered a solution to any kind of problem usually something else is wrong, because openGL does support polygon intersection with the near plane.\n", "Try to put the near clipping plane a little bit further :\nfor example with gluPerspective -> third parameter zNear\nhttp://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/glu/perspective.html\n", "Ah, you meant the near plane. :)\nWell...another thing when drawing polygons in the same plane is to use glPolygonOffset\nFrom the description\n glPolygonOffset is useful for rendering hidden-line images,\n for applying decals to surfaces, and for rendering solids\n with highlighted edges.\n\n" ]
[ 5, 3, 2, 0 ]
[]
[]
[ "opengl" ]
stackoverflow_0000055317_opengl.txt
Q: How can I listen in on shortcuts when the app is in the task bar in C# An example of an app that does this is Enso, which pops up when you press Caps Lock. A: You can act on global hotkeys by calling the Win32 API function RegisterHotKey. Also see https://www.codeproject.com/Articles/4345/NET-system-wide-hotkey-component and https://www.codeproject.com/Articles/3055/System-Hotkey-Component for example. You cannot use all key combinations as hotkeys. For those that don't work you might try a global keyboard hook (SetWindowsHookEx). A: You need to install a hook via user32.dll. Look up the Win32 API call SetWindowsHookEx. You can call it from C# via the stuff in System.Runtime.InteropServices. This article discusses the topic nicely. Edit: Lars Truijens' answer looks like a nicer approach actually.
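The CodeProject components linked above ultimately wrap the same Win32 call; a bare-bones Win32/C++ sketch of that call (the hotkey id and the Ctrl+Shift+K combination are arbitrary choices for illustration) looks roughly like this, and a C# app would P/Invoke the same functions:

#include <windows.h>

int main()
{
    // Ask Windows to post WM_HOTKEY to this thread whenever Ctrl+Shift+K
    // is pressed, even while the application sits minimized in the task bar.
    if (!RegisterHotKey(NULL, 1, MOD_CONTROL | MOD_SHIFT, 'K'))
        return 1;

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        if (msg.message == WM_HOTKEY)
        {
            // React to the global shortcut here, e.g. restore and show the window.
        }
    }

    UnregisterHotKey(NULL, 1);
    return 0;
}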
How can I listen in on shortcuts when the app is in the task bar in C#
An example of an app that does this is Enso, which pops up when you press Caps Lock.
[ "You can act on global hotkeys by calling the winapi function RegisterHotKey. Also see https://www.codeproject.com/Articles/4345/NET-system-wide-hotkey-component and https://www.codeproject.com/Articles/3055/System-Hotkey-Component for example. You can not use all key combination as hotkeys. For those that don't work you might try a global keyboard hook (SetWindowsHookEx) \n", "You need to install a hook in user32.dll. Lookup the Win32-API call SetWindowsHookEx. You can call it from C# via the stuff in System.Runtime.InteropServices.\nThis article discusses the topic nicely.\nEdit: Lars Truijens answer looks like a nicer approach actually.\n" ]
[ 3, 0 ]
[]
[]
[ "c#", "keyboard_shortcuts" ]
stackoverflow_0000060788_c#_keyboard_shortcuts.txt
Q: How do you make a build that includes only one of many pending changes? In my current environment, we have a "clean" build machine, which has an exact copy of all committed changes, nothing more, nothing less. And of course I have my own machine, with dozens of files in an "in-progress" state. Often I need to build my application with only one change in place. For example, I've finished task ABC, and I want to build an EXE with only that change. But of course I can't commit the change to the repository until it's tested. Branching seems like overkill for this. What do you do in your environment to isolate changes for test builds and releases? @Matt b: So while you wait for feedback on your change, what do you do? Are you always working on exactly one thing? A: So you are asking how to handle working on multiple "tasks" at once, right? Except branching. You can have multiple checkouts of the source on the local machine, suffixing the directory name with the name of the ticket you are working on. Just make sure to make changes in the right directory, depending on the task... Mixing multiple tasks in one working copy / commit can get very confusing, especially if somebody needs to review your work later. A: I prefer to make and test builds on my local machine/environment before committing or promoting any changes. For your specific example, I would have checked out a clean copy of the source before starting task ABC, and after implementing ABC, created a build locally with that in it. A: Something like that: git stash && ./bootstrap.sh && make tests :) A: I try hard to make each "commit" operation represent a single, cohesive change. Sometimes it's a whole bug fix or whole feature, and sometimes it's a single small refactoring on the way to something bigger. There's no simple way to decide what a unit is here, just by gut feel. I also ask (beg!) my teammates to do the same. When this is done well, you get a number of benefits: You can write a high quality, detailed description for the change. Reading the first line of the description of each change gives you a sense of the flow of the code. The diffs of a change are easy to read & understand. If a change introduces a bug / build break / other problem, it's easy to isolate, understand, and back out if necessary. If I'm half-way through a change and decide to abort, I don't lose much. If I'm not sure how to proceed next, I can spend a few minutes on each of several approaches, and then pick the one I like, discarding the others. My coworkers pick up most of my changes sooner, dramatically simplifying the merge problem. When I'm feeling stuck about a big problem, I can take a few small steps that I'm confident in, checking them in as I go, thereby making the big problem a little smaller. Working like this can help reduce the need for small branches, since you take a small, confident step, validate it, and commit it, then repeat. I've talked about how to make the step small & confident, but for this to work, you also need to make validation phase go quickly. Having a strong battery of fast, fine-grained unit tests + high quality, fast application tests is key. Teams that I have worked on before required code reviews before checking in; that adds latency, which interferes with my small-step work style. Making code reviews a high-urgency interrupt works; so does switching to pair programming. Still, my brain seems to like heavy multitasking. To make that work, I still want multiple in-progress changes. 
I've used multiple branches, multiple local copies, multiple computers, and tools that make backups of pending changes. All of them can work. (And all of them are equivalent, implemented in different ways.) I think that multiple branches is my favorite, although you need a source control system that is good at spinning up new branches quickly & easily, without being a burden on the server. I've heard BitKeeper is good at this, but I haven't had a chance to check it out yet.
How do you make a build that includes only one of many pending changes?
In my current environment, we have a "clean" build machine, which has an exact copy of all committed changes, nothing more, nothing less. And of course I have my own machine, with dozens of files in an "in-progress" state. Often I need to build my application with only one change in place. For example, I've finished task ABC, and I want to build an EXE with only that change. But of course I can't commit the change to the repository until it's tested. Branching seems like overkill for this. What do you do in your environment to isolate changes for test builds and releases? @Matt b: So while you wait for feedback on your change, what do you do? Are you always working on exactly one thing?
[ "So you are asking how to handle working on multiple \"tasks\" at once, right? Except branching.\nYou can have multiple checkouts of the source on the local machine, suffixing the directory name with the name of the ticket you are working on. Just make sure to make changes in the right directory, depending on the task...\nMixing multiple tasks in one working copy / commit can get very confusing, especially if somebody needs to review your work later.\n", "I prefer to make and test builds on my local machine/environment before committing or promoting any changes. \nFor your specific example, I would have checked out a clean copy of the source before starting task ABC, and after implementing ABC, created a build locally with that in it.\n", "Something like that: git stash && ./bootstrap.sh && make tests :)\n", "I try hard to make each \"commit\" operation represent a single, cohesive change. Sometimes it's a whole bug fix or whole feature, and sometimes it's a single small refactoring on the way to something bigger. There's no simple way to decide what a unit is here, just by gut feel. I also ask (beg!) my teammates to do the same.\nWhen this is done well, you get a number of benefits:\n\nYou can write a high quality, detailed description for the change.\nReading the first line of the description of each change gives you a sense of the flow of the code.\nThe diffs of a change are easy to read & understand.\nIf a change introduces a bug / build break / other problem, it's easy to isolate, understand, and back out if necessary.\nIf I'm half-way through a change and decide to abort, I don't lose much.\nIf I'm not sure how to proceed next, I can spend a few minutes on each of several approaches, and then pick the one I like, discarding the others.\nMy coworkers pick up most of my changes sooner, dramatically simplifying the merge problem.\nWhen I'm feeling stuck about a big problem, I can take a few small steps that I'm confident in, checking them in as I go, thereby making the big problem a little smaller.\n\nWorking like this can help reduce the need for small branches, since you take a small, confident step, validate it, and commit it, then repeat. I've talked about how to make the step small & confident, but for this to work, you also need to make validation phase go quickly. Having a strong battery of fast, fine-grained unit tests + high quality, fast application tests is key. \nTeams that I have worked on before required code reviews before checking in; that adds latency, which interferes with my small-step work style. Making code reviews a high-urgency interrupt works; so does switching to pair programming.\nStill, my brain seems to like heavy multitasking. To make that work, I still want multiple in-progress changes. I've used multiple branches, multiple local copies, multiple computers, and tools that make backups of pending changes. All of them can work. (And all of them are equivalent, implemented in different ways.) I think that multiple branches is my favorite, although you need a source control system that is good at spinning up new branches quickly & easily, without being a burden on the server. I've heard BitKeeper is good at this, but I haven't had a chance to check it out yet.\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "build_process", "testing", "version_control" ]
stackoverflow_0000052778_build_process_testing_version_control.txt
Q: Find out how much memory is being used by an object in C#? Does anyone know of a way to find out how much memory an instance of an object is taking? For example, if I have an instance of the following object: TestClass tc = new TestClass(); Is there a way to find out how much memory the instance tc is taking? The reason for asking, is that although C# has built in memory management, I often run into issues with not clearing an instance of an object (e.g. a List that keeps track of something). There are couple of reasonably good memory profilers (e.g. ANTS Profiler) but in a multi-threaded environment is pretty hard to figure out what belongs where, even with those tools. A: If you are not trying to do it in code itself, which I'm assuming based on your ANTS reference, try taking a look at CLRProfiler (currently v2.0). It's free and if you don't mind the rather simplistic UI, it can provide valuable information. It will give you a in-depth overview of all kinds of stats. I used it a while back as one tool for finding a memory leek. Download here: https://github.com/MicrosoftArchive/clrprofiler If you do want to do it in code, the CLR has profiling APIs you could use. If you find the information in CLRProfiler, since it uses those APIs, you should be able to do it in code too. More info here: http://msdn.microsoft.com/de-de/magazine/cc300553(en-us).aspx (It's not as cryptic as using WinDbg, but be prepared to do mighty deep into the CLR.) A: The CLR Profiler, which is provide free by Microsoft does a very good job at this type of thing. An introduction to the whole profiler can be downloaded here. Also the Patterns & Practices team put something together a while back detailing how to use the profiler. It does a fairly reasonable job at showing you the different threads and objects created in those threads. Hope this sheds some light. Happy profiling! A: I have good experiences with MemProfiler. It gives you stack traces of when the object was created and all the graphs of why the object is still not garbage collected.
Find out how much memory is being used by an object in C#?
Does anyone know of a way to find out how much memory an instance of an object is taking? For example, if I have an instance of the following object: TestClass tc = new TestClass(); Is there a way to find out how much memory the instance tc is taking? The reason for asking is that although C# has built-in memory management, I often run into issues with not clearing an instance of an object (e.g. a List that keeps track of something). There are a couple of reasonably good memory profilers (e.g. ANTS Profiler), but in a multi-threaded environment it is pretty hard to figure out what belongs where, even with those tools.
[ "If you are not trying to do it in code itself, which I'm assuming based on your ANTS reference, try taking a look at CLRProfiler (currently v2.0). It's free and if you don't mind the rather simplistic UI, it can provide valuable information. It will give you a in-depth overview of all kinds of stats. I used it a while back as one tool for finding a memory leek.\nDownload here: https://github.com/MicrosoftArchive/clrprofiler\nIf you do want to do it in code, the CLR has profiling APIs you could use. If you find the information in CLRProfiler, since it uses those APIs, you should be able to do it in code too. More info here:\nhttp://msdn.microsoft.com/de-de/magazine/cc300553(en-us).aspx\n(It's not as cryptic as using WinDbg, but be prepared to do mighty deep into the CLR.)\n", "The CLR Profiler, which is provide free by Microsoft does a very good job at this type of thing. \nAn introduction to the whole profiler can be downloaded here. Also the Patterns & Practices team put something together a while back detailing how to use the profiler.\nIt does a fairly reasonable job at showing you the different threads and objects created in those threads.\nHope this sheds some light. Happy profiling!\n", "I have good experiences with MemProfiler. It gives you stack traces of when the object was created and all the graphs of why the object is still not garbage collected.\n" ]
[ 7, 4, 2 ]
[]
[]
[ "c#", "memory", "performance", "profiler" ]
stackoverflow_0000060820_c#_memory_performance_profiler.txt
Q: version control practice In my current job the supervisor's practice is to only check in production ready code. Most recently the project I was on involved work by 3 different developers with some file overlap. This meant manually integrating changes despite the fact that some changes took a day and then it was done. I wanted to see if this was a common practice and get suggestions on how to change this practice with the knowledge that many times my opinion means little in the grand scheme of things. A: You can use various ways to handle this situation, depending on your source control system. Private branches: Allow you to check in and work on code while you go, merging back and forth at appropriate times. Shelvesets/pacakaged changesets: Allow you to store changesets and send them around for review - ensuring they're production ready before check in. As to whether this is an appropriate way to work, we don't allow check-in to main branches without prior review. To pass review your code must pass various automated tools, and then must be acceptable to your peer reviewer. For some definitions of "production ready" - this is it. Therefore, we do something like what you do. However, we use private branches to ensure that check-ins can still be made while this is in progress, and that other check-ins don't have to interfere. If production ready means tested in an integration environment, then it sounds like you may need staging branches or something similar. A: Code that is checked in should be unit tested, but, to me, "production ready" implies that it's gone through integration and system testing. You can't do that until a code freeze, so I don't see how you can do that before every check in. A: Start by switching away from VSS to something more reliable & feature-rich. See How to convince a company to switch their Source Control Then apply known-good practices: Check in often Pick up others' changes often, to simplify merging Use fast unit tests to make sure each change meets a minimum bar Require that that the checked-in code always builds, and always passes tests. Now you won't be "production ready" at this point: you will still need a couple weeks to test & fix before you can deploy. Getting that time down is awesome for you, and awesome for your customer, so invest in: High quality automated acceptance tests. A: wouldn't it be a good idea to have a testing branch of the repo that can have the non "production ready code" checked in after the changes are done and tested? the main trunk should never have code checked in that breaks the build and doesn't pass unit tests, but branches don't have to have all those restrictions in place. A: I would personally not approve of this because sometimes that's the best way to catch problem code with less experienced developers (by seeing it as they are working on it) and when you "check in early and often" you can rollback to earlier changes you made (as you were developing) if you decide that some changes you made earlier was actually a better idea. A: I think it may be the version control we user, VSS in combination with a lack of time to learn the branching. I really like the idea of nightly check ins to help with development and avoid 'Going Dark'. I can see him being resistant to the trunks but perhaps building a development SS and when the code is production ready move it to production SS. 
A: From the practices I have seen the term production quality is used as a 'frightener' to ensure that people are scared of breaking top of tree, not a bad thing to be honest because top of tree should always work if possible. I would say that best practice is that you should only be merging distinct (i.e. seperate) functional components on the top of tree. If you have a significant overlap on deltas to the same source files I think this 'might' indicate that somewhere along the line the project management has broken down, and that those developers should have merged their changes to seperate integration branch before going in to the main line sources. An individual developer saying that they unit tested their stuff is irrelevant, because the thing they tested has changed! Trying to solve integration problems on your main line codeline will inevitably stall other unrelated submissions. A: Assuming that you are working in a centralized version control system (such as Subversion), and assuming that you have a concept of "the trunk" (where the latest well-working code lives): If you work on new features in "features branches"/"experimental branches", then it's OK to commit code which is far from finished. (When the feature is done, you commit the well-behaving result into the "trunk".) But you will not win a popularity contest if committing non-compiling/obviously non-working code into the "trunk" or a "release branch". The Pragmatic Programmers have a book called Pragmatic Version Control using Subversion which includes a section with advice about branches. A: Check in early and check in often for two main reasons - 1 - it might make it easier to integrate code 2 - in case your computer explodes your weeks of work isn't gone A: @bpapa Nightly backups of work folders to servers will prevent losing more than a days work. @tonyo Let's see the requirement documents were completed the day after we finished coding. Does that tell you about our project management? We are a small shop so while you would think change is easy there are some here that are unbending to the old ways. A: An approach I particularly like is to have different life cycle versions in the depot. That is,for example, have a dev version of the code that is where the developers check in code that is in being worked on; then you could have a beta version, where you could add beta fixes to your code; and then a production version. There is obvious overhead in this approach, such as the fact that you will have a larger workspace on you local machine, the fact that you will need need to have a migration process into place to move code from one stage to the next (which means a code freeze when doing the integration testing that goes with the migration), and that depending on the complexity of the project(s) you might need to have tools that change settings, environment variables, registry entries, etc. All of this is a pain to set up, but you only do it once, and once you have it all in place, makes working on different stages of the code a breeze.
version control practice
In my current job the supervisor's practice is to only check in production ready code. Most recently the project I was on involved work by 3 different developers with some file overlap. This meant manually integrating changes despite the fact that some changes took a day and then it was done. I wanted to see if this was a common practice and get suggestions on how to change this practice with the knowledge that many times my opinion means little in the grand scheme of things.
[ "You can use various ways to handle this situation, depending on your source control system. \nPrivate branches: Allow you to check in and work on code while you go, merging back and forth at appropriate times.\nShelvesets/pacakaged changesets: Allow you to store changesets and send them around for review - ensuring they're production ready before check in.\nAs to whether this is an appropriate way to work, we don't allow check-in to main branches without prior review. To pass review your code must pass various automated tools, and then must be acceptable to your peer reviewer. For some definitions of \"production ready\" - this is it. Therefore, we do something like what you do. However, we use private branches to ensure that check-ins can still be made while this is in progress, and that other check-ins don't have to interfere. \nIf production ready means tested in an integration environment, then it sounds like you may need staging branches or something similar.\n", "Code that is checked in should be unit tested, but, to me, \"production ready\" implies that it's gone through integration and system testing. You can't do that until a code freeze, so I don't see how you can do that before every check in.\n", "Start by switching away from VSS to something more reliable & feature-rich. See How to convince a company to switch their Source Control\nThen apply known-good practices:\n\nCheck in often\nPick up others' changes often, to simplify merging\nUse fast unit tests to make sure each change meets a minimum bar\nRequire that that the checked-in code always builds, and always passes tests.\n\nNow you won't be \"production ready\" at this point: you will still need a couple weeks to test & fix before you can deploy. Getting that time down is awesome for you, and awesome for your customer, so invest in:\n\nHigh quality automated acceptance tests.\n\n", "wouldn't it be a good idea to have a testing branch of the repo that can have the non \"production ready code\" checked in after the changes are done and tested?\nthe main trunk should never have code checked in that breaks the build and doesn't pass unit tests, but branches don't have to have all those restrictions in place.\n", "I would personally not approve of this because sometimes that's the best way to catch problem code with less experienced developers (by seeing it as they are working on it) and when you \"check in early and often\" you can rollback to earlier changes you made (as you were developing) if you decide that some changes you made earlier was actually a better idea. \n", "I think it may be the version control we user, VSS in combination with a lack of time to learn the branching. I really like the idea of nightly check ins to help with development and avoid 'Going Dark'. I can see him being resistant to the trunks but perhaps building a development SS and when the code is production ready move it to production SS.\n", "From the practices I have seen the term production quality is used as a 'frightener' to ensure that people are scared of breaking top of tree, not a bad thing to be honest because top of tree should always work if possible.\nI would say that best practice is that you should only be merging distinct (i.e. seperate) functional components on the top of tree. 
If you have a significant overlap on deltas to the same source files I think this 'might' indicate that somewhere along the line the project management has broken down, and that those developers should have merged their changes to seperate integration branch before going in to the main line sources. An individual developer saying that they unit tested their stuff is irrelevant, because the thing they tested has changed!\nTrying to solve integration problems on your main line codeline will inevitably stall other unrelated submissions.\n", "Assuming that you are working in a centralized version control system (such as Subversion), and assuming that you have a concept of \"the trunk\" (where the latest well-working code lives):\nIf you work on new features in \"features branches\"/\"experimental branches\", then it's OK to commit code which is far from finished. (When the feature is done, you commit the well-behaving result into the \"trunk\".)\nBut you will not win a popularity contest if committing non-compiling/obviously non-working code into the \"trunk\" or a \"release branch\".\nThe Pragmatic Programmers have a book called Pragmatic Version Control using Subversion which includes a section with advice about branches.\n", "Check in early and check in often for two main reasons - \n1 - it might make it easier to integrate code\n2 - in case your computer explodes your weeks of work isn't gone\n", "@bpapa\nNightly backups of work folders to servers will prevent losing more than a days work. \n@tonyo\nLet's see the requirement documents were completed the day after we finished coding. Does that tell you about our project management? \nWe are a small shop so while you would think change is easy there are some here that are unbending to the old ways.\n", "An approach I particularly like is to have different life cycle versions in the depot. That is,for example, have a dev version of the code that is where the developers check in code that is in being worked on; then you could have a beta version, where you could add beta fixes to your code; and then a production version. \nThere is obvious overhead in this approach, such as the fact that you will have a larger workspace on you local machine, the fact that you will need need to have a migration process into place to move code from one stage to the next (which means a code freeze when doing the integration testing that goes with the migration), and that depending on the complexity of the project(s) you might need to have tools that change settings, environment variables, registry entries, etc.\nAll of this is a pain to set up, but you only do it once, and once you have it all in place, makes working on different stages of the code a breeze.\n" ]
[ 4, 2, 2, 1, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "version_control" ]
stackoverflow_0000044630_version_control.txt
Q: SQL Select Bottom Records I have a query where I wish to retrieve the oldest X records. At present my query is something like the following: SELECT Id, Title, Comments, CreatedDate FROM MyTable WHERE CreatedDate > @OlderThanDate ORDER BY CreatedDate DESC I know that normally I would remove the 'DESC' keyword to switch the order of the records, however in this instance I still want to get records ordered with the newest item first. So I want to know if there is any means of performing this query such that I get the oldest X items sorted such that the newest item is first. I should also add that my database exists on SQL Server 2005. A: Why not just use a subquery? SELECT T1.* FROM (SELECT TOP X Id, Title, Comments, CreatedDate FROM MyTable WHERE CreatedDate > @OlderThanDate ORDER BY CreatedDate) T1 ORDER BY CreatedDate DESC A: Embed the query. You take the top x when sorted in ascending order (i.e. the oldest) and then re-sort those in descending order ... select * from ( SELECT top X Id, Title, Comments, CreatedDate FROM MyTable WHERE CreatedDate > @OlderThanDate ORDER BY CreatedDate ) a order by createddate desc
SQL Select Bottom Records
I have a query where I wish to retrieve the oldest X records. At present my query is something like the following: SELECT Id, Title, Comments, CreatedDate FROM MyTable WHERE CreatedDate > @OlderThanDate ORDER BY CreatedDate DESC I know that normally I would remove the 'DESC' keyword to switch the order of the records, however in this instance I still want to get records ordered with the newest item first. So I want to know if there is any means of performing this query such that I get the oldest X items sorted such that the newest item is first. I should also add that my database exists on SQL Server 2005.
[ "Why not just use a subquery?\nSELECT T1.* \nFROM\n(SELECT TOP X Id, Title, Comments, CreatedDate\nFROM MyTable\nWHERE CreatedDate > @OlderThanDate\nORDER BY CreatedDate) T1\nORDER BY CreatedDate DESC\n\n", "Embed the query. You take the top x when sorted in ascending order (i.e. the oldest) and then re-sort those in descending order ... \nselect * \nfrom \n(\n SELECT top X Id, Title, Comments, CreatedDate\n FROM MyTable\n WHERE CreatedDate > @OlderThanDate\n ORDER BY CreatedDate \n) a\norder by createddate desc \n\n" ]
[ 22, 1 ]
[]
[]
[ "sql", "sql_server", "tsql" ]
stackoverflow_0000060877_sql_sql_server_tsql.txt
Q: How to convince a company to switch their Source Control My current place of employment is currently in a transition, new ownership has taken over, things are finally getting standardized and proper guidelines are being enforced. But we are still using VSS, there really isn't any reason for using it other then that's what whats initially setup. We don't use Visual Studio, or any tool really that specifically requires it. What would be the absolute best argument I can bring up to help convince them that going to something like Subversion would be a much better solution, in the long run. A: VSS totally relies on the clients to manage the database. If a client drops connection in the middle of a write over the network at just the wrong time, your file is trashed on the server. Not just the tip, but all the history. Hope you have a good backup. I've been through it. It's bad news. VSS usage over VPN or other remote connections is abysmal. It's using SMB to transfer the data, and you have to retrieve the file and all of its deltas just to get the tip. Nasty. I've seen VSS start to act up at 1GB of data. Database errors, etc. MS (somewhere in a FAQ or KB) says that 2GB is really the max safe limit. There are no good management tools (the clients run the asylum), so you don't really get any warning about this. Anything with a server process to provide some level of transactions and integrity control is a superior solution. A: The best argument would have to be the reason why you want them to switch to subversion. :) I know absolutely nothing about VSS, but the phrase "if it ain't broken don't fix it" comes to mind. You have to show your managers that VSS is broken and needs fixing. Even better if you can show management how it would save them money. A: @Adam Davis: Uhhh actually Adam, VSS is a horrible source control system. It has a long history of corrupting history and losing data. It is terrible at merging, doesn't handle multiple developers well and is very slow. Also the history is poor. Microsoft don't really support it any more, you'll note that they never used it for their own internal development and now they don't even sell it in favour of a more modern solution (VSTS). In short, if you have to choose between VSS and any other type of source control, go with the alternative. A: By just going over the features good source control brings: ability to easily see logs of who did what, when, and in what order, to which files keep a history of past versions of everything easily go back and reproduce a specific version of your files from any past version, to more easily reproduce bugs reported in older versions ability go retrieve deleted code, or remove unwanted changes, without having to worry about losing data in the process A: Any document that proves switching will lower costs. Failing that, multi-colored graphs and charts. Maybe a power-point presentation. A: The internet is littered with well written articles on the flaws of VSS. I would collect this as a body of evidence for moving away from VSS. Find a key requirement that VSS can't support (remote working, support on other OSs, tools integration) and use it to drive your issue. You then need to find a source control system that is a good match for your organisation's requirements - are you sure Subversion is that system? Set up a demonstration of your chosen system, and use this to prove its worth. 
I implemented this change at a previous employer (first to CVS, and then to SVN), and while it was successful we had to build a lot of bits around the edge and rely on a lot of (sometimes unreliable) open source projects to get all the tools we needed. With hindsight I should have considered trying to evaluate professional tools such as Perforce, Vault or even Team System. Having evaluated these, I could have made a proper value judgement on whether CVS/SVN were worth their "free" price tag. A: being able to handle branching and forking is a start. Try using subversion for a while in parallel to vss you will most likely find many arguments to convince your boss. If you don't, your boss is right, no reason to switch. A: Get them to google for 'vss problem', 'source safe corruption' or simply look at the Wiki page for it. That ought to convince them that it's probably not a long-term viable thing for you to be betting such a vital part of your business on. How big is your team? (ie, I mean how many members, not whether or not you're salad dodgers) Once you start to get more than half a dozen quite active users, VSS is going to give you headaches. I seriously doubt that Microsoft use it (in fact, don't they use a customised Subversion or CVS variant?) and you've got to ask yourself - if the company don't eat their own dogfood, why would you eat it? A: Basic answer is that you have to make the case that switching meets the needs of the business. For example: lower cost of development shorter schedule (another shade of #1) more apt for meeting process requirements (like software requirements traceability, or build reproducibility, etc). Making the case on these things also requires something quantitative, not just "we will lower costs because this is the right way to do it!". One thing to watch out for is that it's too easy for a developer to convince themselves that it would be beneficial to make the change without first going through the basic business filters. Once that happens, you end up with developers who are unhappy with their tools and are doubly frustrated because they think management won't listen. If you can't check off one of the things above, them you'll have no chance of persuading management of anything (unless management is incompetent, but that's for another question). A: Why Subversion over VSS? Free software Easier to manage "check-ins" are atomic! Easy to Branch and Merge Continued development (i.e. VSS is dead end) Better tools for tracking changes and viewing logs Toolset and platform agnostic, but also integrates with many tools I made the proposal to my manager, and it was a pretty easy sell. I've found it to be much easier to use, especially for branching (our project took 5 hours to "share and pin" in VSS, and then each operation took extra time to complete!). A: I've previously written about why VSS is not a good idea. You might be able to gain some information from that. Also this article and this one contain further information. VSS 2005 has papered over some of the cracks in 6.0, but not in a particularly convincing way. The same brain-dead foundation remains. A: Even if it ain't broke, there's a potential benefit to migrating from VSS. First and most trivially, you won't have to buy new VSS licenses. Second, there are many examples of deficiencies in the VSS product (some also acknowledged by MS). The learning curve for SVN is at least as low as for VSS, and if you have devs happier with their source control system, they're more likely to use it early and often. 
That will translate to lots less risk for your company, and that's a good benefit. A: @Jason: VSS is broken. I think the most powerful method for motivating a change away from VSS is to point out how critical an asset your source code is. Taking risks with its integrity is not a wise business choice. Add that your programmers are the creators of this asset, and that making it easier for them to be productive means more value in your source code asset. Joel on Software often talks about how investing in his programmers is a big win for his company. The other answers here all describe specific reasons that you can point to when making your case. A: In addition to the technical points given in other answers, there may be non-technical reasons lurking that you should be prepared to respond to: You should investigate whether your company has any sort of policy against (or misguided fear of) open source software. If the company or its lawyers don’t understand the ins and outs of which licenses “infect” proprietary code and which don’t, as well as what you can do with open source code that doesn’t affect your proprietary code, you will have a hard time getting them to switch from a proprietary to an an open source tool. (And you may have a bigger education job on your hands.) In arguing for the switch from proprietary (e.g. VSS) to open source (e.g. subversion) you’ll also need to be prepared to defend the quality of the code and the lack of any need for a warranty or other contract rights regarding the code.
How to convince a company to switch their Source Control
My current place of employment is currently in a transition: new ownership has taken over, things are finally getting standardized, and proper guidelines are being enforced. But we are still using VSS; there really isn't any reason for using it other than that's what was initially set up. We don't use Visual Studio, or any tool really that specifically requires it. What would be the absolute best argument I can bring up to help convince them that going to something like Subversion would be a much better solution in the long run?
[ "VSS totally relies on the clients to manage the database. If a client drops connection in the middle of a write over the network at just the wrong time, your file is trashed on the server. Not just the tip, but all the history. Hope you have a good backup. I've been through it. It's bad news.\nVSS usage over VPN or other remote connections is abysmal. It's using SMB to transfer the data, and you have to retrieve the file and all of its deltas just to get the tip. Nasty.\nI've seen VSS start to act up at 1GB of data. Database errors, etc. MS (somewhere in a FAQ or KB) says that 2GB is really the max safe limit. There are no good management tools (the clients run the asylum), so you don't really get any warning about this.\nAnything with a server process to provide some level of transactions and integrity control is a superior solution.\n", "The best argument would have to be the reason why you want them to switch to subversion. :)\nI know absolutely nothing about VSS, but the phrase \"if it ain't broken don't fix it\" comes to mind. You have to show your managers that VSS is broken and needs fixing. Even better if you can show management how it would save them money.\n", "@Adam Davis: Uhhh actually Adam, VSS is a horrible source control system. It has a long history of corrupting history and losing data. It is terrible at merging, doesn't handle multiple developers well and is very slow. Also the history is poor. Microsoft don't really support it any more, you'll note that they never used it for their own internal development and now they don't even sell it in favour of a more modern solution (VSTS). In short, if you have to choose between VSS and any other type of source control, go with the alternative.\n", "By just going over the features good source control brings:\n\nability to easily see logs of who did what, when, and in what order, to which files\nkeep a history of past versions of everything\neasily go back and reproduce a specific version of your files from any past version, to more easily reproduce bugs reported in older versions\nability go retrieve deleted code, or remove unwanted changes, without having to worry about losing data in the process\n\n", "Any document that proves switching will lower costs. Failing that, multi-colored graphs and charts. Maybe a power-point presentation.\n", "The internet is littered with well written articles on the flaws of VSS. I would collect this as a body of evidence for moving away from VSS. Find a key requirement that VSS can't support (remote working, support on other OSs, tools integration) and use it to drive your issue. You then need to find a source control system that is a good match for your organisation's requirements - are you sure Subversion is that system? Set up a demonstration of your chosen system, and use this to prove its worth.\nI implemented this change at a previous employer (first to CVS, and then to SVN), and while it was successful we had to build a lot of bits around the edge and rely on a lot of (sometimes unreliable) open source projects to get all the tools we needed. With hindsight I should have considered trying to evaluate professional tools such as Perforce, Vault or even Team System. Having evaluated these, I could have made a proper value judgement on whether CVS/SVN were worth their \"free\" price tag.\n", "being able to handle branching and forking is a start. \nTry using subversion for a while in parallel to vss you will most likely find many arguments to convince your boss. 
If you don't, your boss is right, no reason to switch.\n", "Get them to google for 'vss problem', 'source safe corruption' or simply look at the Wiki page for it. That ought to convince them that it's probably not a long-term viable thing for you to be betting such a vital part of your business on.\nHow big is your team? (ie, I mean how many members, not whether or not you're salad dodgers) Once you start to get more than half a dozen quite active users, VSS is going to give you headaches.\nI seriously doubt that Microsoft use it (in fact, don't they use a customised Subversion or CVS variant?) and you've got to ask yourself - if the company don't eat their own dogfood, why would you eat it?\n", "Basic answer is that you have to make the case that switching meets the needs of the business. For example:\n\nlower cost of development\nshorter schedule (another shade of #1)\nmore apt for meeting process requirements (like software requirements traceability, or build reproducibility, etc).\n\nMaking the case on these things also requires something quantitative, not just \"we will lower costs because this is the right way to do it!\".\nOne thing to watch out for is that it's too easy for a developer to convince themselves that it would be beneficial to make the change without first going through the basic business filters. Once that happens, you end up with developers who are unhappy with their tools and are doubly frustrated because they think management won't listen. If you can't check off one of the things above, them you'll have no chance of persuading management of anything (unless management is incompetent, but that's for another question).\n", "Why Subversion over VSS?\n\nFree software\nEasier to manage\n\"check-ins\" are atomic!\nEasy to Branch and Merge\nContinued development (i.e. VSS is dead end)\nBetter tools for tracking changes and viewing logs\nToolset and platform agnostic, but also integrates with many tools\n\nI made the proposal to my manager, and it was a pretty easy sell. I've found it to be much easier to use, especially for branching (our project took 5 hours to \"share and pin\" in VSS, and then each operation took extra time to complete!).\n", "I've previously written about why VSS is not a good idea. You might be able to gain some information from that. Also this article and this one contain further information.\nVSS 2005 has papered over some of the cracks in 6.0, but not in a particularly convincing way. The same brain-dead foundation remains.\n", "Even if it ain't broke, there's a potential benefit to migrating from VSS. First and most trivially, you won't have to buy new VSS licenses. Second, there are many examples of deficiencies in the VSS product (some also acknowledged by MS). The learning curve for SVN is at least as low as for VSS, and if you have devs happier with their source control system, they're more likely to use it early and often. That will translate to lots less risk for your company, and that's a good benefit.\n", "@Jason: VSS is broken.\nI think the most powerful method for motivating a change away from VSS is to point out how critical an asset your source code is. Taking risks with its integrity is not a wise business choice.\nAdd that your programmers are the creators of this asset, and that making it easier for them to be productive means more value in your source code asset. 
Joel on Software often talks about how investing in his programmers is a big win for his company.\nThe other answers here all describe specific reasons that you can point to when making your case.\n", "In addition to the technical points given in other answers, there may be non-technical reasons lurking that you should be prepared to respond to:\nYou should investigate whether your company has any sort of policy against (or misguided fear of) open source software. If the company or its lawyers don’t understand the ins and outs of which licenses “infect” proprietary code and which don’t, as well as what you can do with open source code that doesn’t affect your proprietary code, you will have a hard time getting them to switch from a proprietary to an open source tool. (And you may have a bigger education job on your hands.)\nIn arguing for the switch from proprietary (e.g. VSS) to open source (e.g. subversion) you’ll also need to be prepared to defend the quality of the code and the lack of any need for a warranty or other contract rights regarding the code.\n" ]
[ 16, 8, 4, 2, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "svn", "version_control", "visual_sourcesafe" ]
stackoverflow_0000044588_svn_version_control_visual_sourcesafe.txt
Q: Installing Curl IDE/RTE on AMD processors Trying to move my development environment to Linux. And new to Curl. Can't get it to install the IDE & RTE packages on an AMD HP PC running Ubuntu x64. I tried to install the Debian package via the package installer and get "Error: Wrong architecture - i386". Tried using the --force-architecture switch but it errors out. I'm assuming Curl IDE will just run under Intel processors? Anyone have any luck with this issue and can advise? A: It's been a while since I ran linux, but try looking for the x64 version. There are also x64 to x86 compatibility libraries available that should make 32 bit programs work for most situations. The ubuntu forums are a much better place for this question, however.
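As a rough illustration of the x64-compatibility suggestion above (untested sketch: the .deb filename here is made up, and ia32-libs is assumed to be the 32-bit compatibility package available on Ubuntu at the time), the command-line route might look something like this:

# Confirm which architecture the package really targets (filename is hypothetical)
dpkg --info curl-ide_i386.deb | grep Architecture

# Pull in the 32-bit compatibility libraries before forcing the install
sudo apt-get install ia32-libs

# Force the architecture from the command line instead of the GUI package installer
sudo dpkg -i --force-architecture curl-ide_i386.deb

Forcing the architecture can still fail on unmet dependencies, so the native x64 build (if one exists) remains the cleaner option.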
Installing Curl IDE/RTE on AMD processors
Trying to move my development environment to Linux. And new to Curl. Can't get it to install the IDE & RTE packages on an AMD HP PC running Ubuntu x64. I tried to install the Debian package via the package installer and get "Error: Wrong architecture - i386". Tried using the --force-architecture switch but it errors out. I'm assuming Curl IDE will just run under Intel processors? Anyone have any luck with this issue and can advise?
[ "It's been a while since I ran linux, but try looking for the x64 version. There are also x64 to x86 compatibility libraries available that should make 32 bit programs work for most situations. \nThe ubuntu forums are a much better place for this question, however. \n" ]
[ 1 ]
[]
[]
[ "amd_processor", "curl_language", "ide" ]
stackoverflow_0000060800_amd_processor_curl_language_ide.txt
Q: ASP.NET Convert Invalid String to Null In my application I have TextBox in a FormView bound to a LinqDataSource like so: <asp:TextBox ID="MyTextBox" runat="server" Text='<%# Bind("MyValue") %>' AutoPostBack="True" ontextchanged="MyTextBox_TextChanged" /> protected void MyTextBox_TextChanged(object sender, EventArgs e) { MyFormView.UpdateItem(false); } This is inside an UpdatePanel so any change to the field is immediately persisted. Also, the value of MyValue is decimal?. This works fine unless I enter any string which cannot be converted to decimal into the field. In that case, the UpdateItem call throws: LinqDataSourceValidationException - Failed to set one or more properties on type MyType. asdf is not a valid value for Decimal. I understand the problem, ASP.NET does not know how to convert from 'asdf' to decimal?. What I would like it to do is convert all these invalid values to null. What is the best way to do this? A: I think you should handle the Updating event of the LinqDataSource on your page. Do your check for invalid strings (use a TryParse method or something) and then continue with the base class update. (Edit: My intuition lines up with what's recommended here) A: Not familiar with ASP, but in .net, couldn't you just do something along the lines of protected void MyTextBox_TextChanged(object sender, EventArgs e) { Decimal d = null; TextBox tb = sender as TextBox; if(!Decimal.TryParse(tb.Text, out d)) { tb.Text = String.Empty; } MyFormView.UpdateItem(false); }
ASP.NET Convert Invalid String to Null
In my application I have TextBox in a FormView bound to a LinqDataSource like so: <asp:TextBox ID="MyTextBox" runat="server" Text='<%# Bind("MyValue") %>' AutoPostBack="True" ontextchanged="MyTextBox_TextChanged" /> protected void MyTextBox_TextChanged(object sender, EventArgs e) { MyFormView.UpdateItem(false); } This is inside an UpdatePanel so any change to the field is immediately persisted. Also, the value of MyValue is decimal?. This works fine unless I enter any string which cannot be converted to decimal into the field. In that case, the UpdateItem call throws: LinqDataSourceValidationException - Failed to set one or more properties on type MyType. asdf is not a valid value for Decimal. I understand the problem, ASP.NET does not know how to convert from 'asdf' to decimal?. What I would like it to do is convert all these invalid values to null. What is the best way to do this?
[ "I think you should handle the Updating event of the LinqDataSource on your page. Do your check for invalid strings (use a TryParse method or something) and then continue with the base class update.\n(Edit: My intuition lines up with what's recommended here)\n", "Not familiar with ASP, but in .net, couldn't you just do something along the lines of\nprotected void MyTextBox_TextChanged(object sender, EventArgs e)\n{ \n Decimal d = null;\n TextBox tb = sender as TextBox;\n\n if(!Decimal.TryParse(tb.Text, out d))\n {\n tb.Text = String.Empty;\n }\n MyFormView.UpdateItem(false);\n}\n\n" ]
[ 1, 1 ]
[]
[]
[ "asp.net", "data_binding", "linq", "linq_to_sql", "validation" ]
stackoverflow_0000060893_asp.net_data_binding_linq_linq_to_sql_validation.txt
Q: Controlling which Network Card TCP/IP message are sent on The system I'm currently working on consists of a controller PC running XP with .Net 2 connected to a set of embedded systems. All these components communicate with each other over an ethernet network. I'm currently using TcpClient.Connect on the XP computer to open a connection to the embedded systems to send TCP/IP messages. I now have to connect the XP computer to an external network to send processing data to, so there are now two network cards on the XP computer. However, the messages sent to the external network mustn't appear on the network connecting the embedded systems together (don't want to consume the bandwidth) and the messages to the embedded systems mustn't appear on the external network. So, the assertion I'm making is that messages sent to a defined IP address are sent out on both network cards when using the TcpClient.Connect method. How do I specify which physical network card messages are sent via, ideally using the .Net networking API. If no such method exists in .Net, then I can always P/Invoke the Win32 API. Skizz A: Try using a Socket for your client instead of the TcpClient Class. Then you can use Socket.Bind to target your local network adapter int port = 1234; IPHostEntry entry = Dns.GetHostEntry(Dns.GetHostName()); //find ip address for your adapter here IPAddress localAddress = entry.AddressList.FirstOrDefault(); IPEndPoint localEndPoint = new IPEndPoint(localAddress, port); //use socket instead of a TcpClient Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); //binds client to the local end point client.Bind(localEndPoint); http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.bind.aspx A: If you have two network cards on the machine, then there shouldn't be a problem. Normal IP behaviour should ensure that traffic for your 'private' network (embedded systems in this case) is separate from your public network, without you having to do anything in your code. All that is required is for the two networks to be on different IP subnets, and for your 'public' NIC to be the default. Assuming your two NICs are configured as follows: NIC A (Public): 192.168.1.10 mask 255.255.255.0 NIC B (Private): 192.168.5.10 mask 255.255.255.0 The only configuration you need to verify is that NIC A is your default. When you try to send packets to any address in your private network (192.168.50.0 - 192.168.50.255), your IP stack will look in the routing table and see a directly connected network, and forward traffic via the private NIC. Any traffic to the (directly connected) public network will be sent to NIC A, as will traffic to any address for which you do not have a more specific route in your routing table. 
Your routing table (netstat -rn) should look something like this: IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.10 266 <<-- 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 169.254.0.0 255.255.0.0 On-link 192.168.1.10 286 169.254.255.255 255.255.255.255 On-link 192.168.1.10 266 192.168.1.0 255.255.255.0 On-link 192.168.1.10 266 192.168.1.10 255.255.255.255 On-link 192.168.1.10 266 192.168.1.255 255.255.255.255 On-link 192.168.1.10 266 192.168.5.0 255.255.255.0 On-link 192.168.5.10 266 192.168.5.10 255.255.255.255 On-link 192.168.5.10 266 192.168.5.255 255.255.255.255 On-link 192.168.5.10 266 255.255.255.255 255.255.255.255 On-link 192.168.1.10 276 255.255.255.255 255.255.255.255 On-link 192.168.5.10 276 =========================================================================== There will also be some multicast routes (starting with 224) which have been omitted for brevity. The '<<--' indicates the default route, which should be using the public interface. A: Basically, once the TcpClient.Connect method has been successful, it will have created a mapping between the physical MAC address of the embedded system and the route it should take to that address (i.e. which network card to use). I don't believe that all messages then sent over the TcpClient connection will be sent out via both network cards. Do you have any data to suggest otherwise, or are you mealy guessing? A: Xp maintains a routing table where it maps ranges of ip-adresses to networks and gateways. you can view the table using "route print", with "route add" you can add a route to your embedded device.
Controlling which Network Card TCP/IP message are sent on
The system I'm currently working on consists of a controller PC running XP with .Net 2 connected to a set of embedded systems. All these components communicate with each other over an ethernet network. I'm currently using TcpClient.Connect on the XP computer to open a connection to the embedded systems to send TCP/IP messages. I now have to connect the XP computer to an external network to send processing data to, so there are now two network cards on the XP computer. However, the messages sent to the external network mustn't appear on the network connecting the embedded systems together (don't want to consume the bandwidth) and the messages to the embedded systems mustn't appear on the external network. So, the assertion I'm making is that messages sent to a defined IP address are sent out on both network cards when using the TcpClient.Connect method. How do I specify which physical network card messages are sent via, ideally using the .Net networking API. If no such method exists in .Net, then I can always P/Invoke the Win32 API. Skizz
[ "Try using a Socket for your client instead of the TcpClient Class.\nThen you can use Socket.Bind to target your local network adapter\n int port = 1234;\n\n IPHostEntry entry = Dns.GetHostEntry(Dns.GetHostName());\n\n //find ip address for your adapter here\n IPAddress localAddress = entry.AddressList.FirstOrDefault();\n\n IPEndPoint localEndPoint = new IPEndPoint(localAddress, port);\n\n //use socket instead of a TcpClient\n Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);\n\n //binds client to the local end point\n client.Bind(localEndPoint);\n\nhttp://msdn.microsoft.com/en-us/library/system.net.sockets.socket.bind.aspx\n", "If you have two network cards on the machine, then there shouldn't be a problem. Normal IP behaviour should ensure that traffic for your 'private' network (embedded systems in this case) is separate from your public network, without you having to do anything in your code. All that is required is for the two networks to be on different IP subnets, and for your 'public' NIC to be the default.\nAssuming your two NICs are configured as follows:\nNIC A (Public): 192.168.1.10 mask 255.255.255.0\nNIC B (Private): 192.168.5.10 mask 255.255.255.0\n\nThe only configuration you need to verify is that NIC A is your default. When you try to send packets to any address in your private network (192.168.50.0 - 192.168.50.255), your IP stack will look in the routing table and see a directly connected network, and forward traffic via the private NIC. Any traffic to the (directly connected) public network will be sent to NIC A, as will traffic to any address for which you do not have a more specific route in your routing table.\nYour routing table (netstat -rn) should look something like this:\nIPv4 Route Table\n===========================================================================\nActive Routes:\nNetwork Destination Netmask Gateway Interface Metric\n 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.10 266 <<--\n 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306\n 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306\n 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306\n 169.254.0.0 255.255.0.0 On-link 192.168.1.10 286\n 169.254.255.255 255.255.255.255 On-link 192.168.1.10 266\n 192.168.1.0 255.255.255.0 On-link 192.168.1.10 266\n 192.168.1.10 255.255.255.255 On-link 192.168.1.10 266\n 192.168.1.255 255.255.255.255 On-link 192.168.1.10 266\n 192.168.5.0 255.255.255.0 On-link 192.168.5.10 266\n 192.168.5.10 255.255.255.255 On-link 192.168.5.10 266\n 192.168.5.255 255.255.255.255 On-link 192.168.5.10 266\n 255.255.255.255 255.255.255.255 On-link 192.168.1.10 276\n 255.255.255.255 255.255.255.255 On-link 192.168.5.10 276\n===========================================================================\n\nThere will also be some multicast routes (starting with 224) which have been omitted for brevity. The '<<--' indicates the default route, which should be using the public interface.\n", "Basically, once the TcpClient.Connect method has been successful, it will have created a mapping between the physical MAC address of the embedded system and the route it should take to that address (i.e. which network card to use).\nI don't believe that all messages then sent over the TcpClient connection will be sent out via both network cards.\nDo you have any data to suggest otherwise, or are you mealy guessing?\n", "Xp maintains a routing table where it maps ranges of ip-adresses to networks and gateways. 
\nYou can view the table using \"route print\"; with \"route add\" you can add a route to your embedded device.\n" ]
[ 7, 2, 1, 1 ]
[]
[]
[ ".net", "c#", "networking" ]
stackoverflow_0000049507_.net_c#_networking.txt
Q: Java sound recording and mixer settings I'm using the javax.sound.sampled package in a radio data mode decoding program. To use the program the user feeds audio from their radio receiver into their PC's line input. The user is also required to use their mixer program to select the line in as the recording input. The trouble is some users don't know how to do this and also sometimes other programs alter the recording input setting. So my question is how can my program detect if the line in is set as the recording input ? Also is it possible for my program to change the recording input setting if it detects it is incorrect ? Thanks for your time. Ian A: To answer your first question, you can check if the Line.Info object for your recording input matches Port.Info.LINE_IN like this: public static boolean isLineIn(Line.Info lineInfo) { Line.Info[] detected = AudioSystem.getSourceLineInfo(Port.Info.LINE_IN); for (Line.Info lineIn : detected) { if (lineIn.matches(lineInfo)) { return true; } } return false; } However, this doesn't work with operating systems or soundcard driver APIs that don't provide the type of each available mixer channel. So when I test it on Windows it works, but not on Linux or Mac. For more information and recommendations, see this FAQ. Regarding your second question, you can try changing the recording input settings through a Control class. In particular, see FloatControl.Type for some common settings. Keep in mind that the availability of these controls depends on the operating system and soundcard drivers, just like line-in detection.
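As a very rough sketch of the Port/Control approach described above (whether any usable control is actually exposed depends entirely on the operating system and soundcard driver, so treat this as a starting point rather than a guaranteed recipe), the detection and adjustment might look something like this:

import javax.sound.sampled.*;

public class LineInCheck {
    public static void main(String[] args) throws Exception {
        // Does this system report a line-in port at all?
        if (!AudioSystem.isLineSupported(Port.Info.LINE_IN)) {
            System.out.println("No line-in port reported by this system.");
            return;
        }

        Port lineIn = (Port) AudioSystem.getLine(Port.Info.LINE_IN);
        lineIn.open();
        try {
            // List whatever controls the driver exposes; availability varies widely.
            for (Control control : lineIn.getControls()) {
                System.out.println(control.getType() + " -> " + control);

                // Example: nudge a volume control if one happens to be present.
                if (control instanceof FloatControl
                        && control.getType().equals(FloatControl.Type.VOLUME)) {
                    FloatControl volume = (FloatControl) control;
                    volume.setValue(volume.getMaximum());
                }
            }
        } finally {
            lineIn.close();
        }
    }
}

Some drivers only expose these settings nested inside a CompoundControl, so in practice you may have to walk the member controls as well.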
Java sound recording and mixer settings
I'm using the javax.sound.sampled package in a radio data mode decoding program. To use the program the user feeds audio from their radio receiver into their PC's line input. The user is also required to use their mixer program to select the line in as the recording input. The trouble is some users don't know how to do this and also sometimes other programs alter the recording input setting. So my question is how can my program detect if the line in is set as the recording input ? Also is it possible for my program to change the recording input setting if it detects it is incorrect ? Thanks for your time. Ian
[ "To answer your first question, you can check if the Line.Info object for your recording input matches Port.Info.LINE_IN like this:\npublic static boolean isLineIn(Line.Info lineInfo) {\n Line.Info[] detected = AudioSystem.getSourceLineInfo(Port.Info.LINE_IN);\n for (Line.Info lineIn : detected) {\n if (lineIn.matches(lineInfo)) {\n return true;\n }\n }\n return false;\n}\n\nHowever, this doesn't work with operating systems or soundcard driver APIs that don't provide the type of each available mixer channel. So when I test it on Windows it works, but not on Linux or Mac. For more information and recommendations, see this FAQ.\nRegarding your second question, you can try changing the recording input settings through a Control class. In particular, see FloatControl.Type for some common settings. Keep in mind that the availability of these controls depends on the operating system and soundcard drivers, just like line-in detection.\n" ]
[ 3 ]
[]
[]
[ "java", "javasound" ]
stackoverflow_0000060049_java_javasound.txt
Q: Real life examples of methodologies and lifecycles Choosing the correct lifecycle and methodology isn't as easy as it was before when there weren't so many methodologies, this days a new one emerges every day. I've found that most projects require a certain level of evolution and that each project is different from the rest. That way, extreme programming works with for a project for a given company with 15 employees but doesn't quite work with a 100 employee company or doesn't work for a given project type (for example real time application, scientific application, etc). I'd like to have a list of experiences, mostly stating the project type, the project size (number of people working on it), the project time (real or planned), the project lifecycle and methodology and if the project succeded or failed. Any other data will be appreciated, I think we might find some patterns if there's enough data. Of course, comments are welcomed. PS: Very large, PT: Very long, LC: Incremental-CMMI, PR: Success PS: Very large, PT: Very long, LC: Waterfall-CMMI, PR: Success Edit: I'll be constructing a "summary" with the stats of all answers. A: My personal experience: Project size: Very large (150+ persons) Project time: Very long (+6 years) Project income (estimated): 40 Million $ (Military is paying) Project life cycle: Incremental lyfetime. Main milestones every year. Project structure: Traditional at first (system department, development department, etc) not so good. Process based later (the process establish a flow of work, requirements, design, implementation, test, feedback, metrics): quite good so far. Project result: success (so far) A: Here you go: Project size: about 1 million lines of code, 30 people Project time: 9 years Project life cycle: good old waterfall, due to big customers requirements, but with staggered delivery to the QA team - it is very difficult to be agile when you have customers commitments to large clients Project structure: we are organized in departments but we use CMMI to keep them in sync - we have stakeholders, work products, deviance procedures, etc. Project result: we've really improved with the implementation of CMMI and have delivered our last few releases on time every time -C.
Real life examples of methodologies and lifecycles
Choosing the correct lifecycle and methodology isn't as easy as it was before when there weren't so many methodologies; these days a new one emerges every day. I've found that most projects require a certain level of evolution and that each project is different from the rest. That way, extreme programming works for a project for a given company with 15 employees but doesn't quite work with a 100-employee company, or doesn't work for a given project type (for example, real-time applications, scientific applications, etc.). I'd like to have a list of experiences, mostly stating the project type, the project size (number of people working on it), the project time (real or planned), the project lifecycle and methodology, and whether the project succeeded or failed. Any other data will be appreciated; I think we might find some patterns if there's enough data. Of course, comments are welcome. PS: Very large, PT: Very long, LC: Incremental-CMMI, PR: Success PS: Very large, PT: Very long, LC: Waterfall-CMMI, PR: Success Edit: I'll be constructing a "summary" with the stats of all answers.
[ "My personal experience:\n\nProject size: Very large (150+\npersons)\nProject time: Very long (+6 years)\nProject income (estimated): 40\nMillion $ (Military is paying)\nProject life cycle: Incremental\nlyfetime. Main milestones every\nyear.\nProject structure: Traditional at\nfirst (system department,\ndevelopment department, etc) not so\ngood. Process based later (the\nprocess establish a flow of work,\nrequirements, design,\nimplementation, test, feedback,\nmetrics): quite good so far.\nProject result: success (so far)\n\n", "Here you go:\n\nProject size: about 1 million lines of code, 30 people\nProject time: 9 years\nProject life cycle: good old waterfall, due to big customers requirements, but with staggered delivery to the QA team - it is very difficult to be agile when you have customers commitments to large clients\nProject structure: we are organized in departments but we use CMMI to keep them in sync - we have stakeholders, work products, deviance procedures, etc.\nProject result: we've really improved with the implementation of CMMI and have delivered our last few releases on time every time\n\n-C.\n" ]
[ 1, 1 ]
[]
[]
[ "methodology" ]
stackoverflow_0000060932_methodology.txt
Q: What are the CSS secrets to a flexible/fluid HTML form? The below HTML/CSS/Javascript (jQuery) code displays the #makes select box. Selecting an option displays the #models select box with relevant options. The #makes select box sits off-center and the #models select box fills the empty space when it is displayed. How do you style the form so that the #makes select box is centered when it is the only form element displayed, but when both select boxes are displayed, they are both centered within the container? var cars = [ { "makes" : "Honda", "models" : ['Accord','CRV','Pilot'] }, { "makes" :"Toyota", "models" : ['Prius','Camry','Corolla'] } ]; $(function() { vehicles = [] ; for(var i = 0; i < cars.length; i++) { vehicles[cars[i].makes] = cars[i].models ; } var options = ''; for (var i = 0; i < cars.length; i++) { options += '<option value="' + cars[i].makes + '">' + cars[i].makes + '</option>'; } $("#make").html(options); // populate select box with array $("#make").bind("click", function() { $("#model").children().remove() ; // clear select box var options = ''; for (var i = 0; i < vehicles[this.value].length; i++) { options += '<option value="' + vehicles[this.value][i] + '">' + vehicles[this.value][i] + '</option>'; } $("#model").html(options); // populate select box with array $("#models").addClass("show"); }); // bind end }); .hide { display: none; } .show { display: inline; } fieldset { border: #206ba4 1px solid; } fieldset legend { margin-top: -.4em; font-size: 20px; font-weight: bold; color: #206ba4; } fieldset fieldset { position: relative; margin-top: 25px; padding-top: .75em; background-color: #ebf4fa; } body { margin: 0; padding: 0; font-family: Verdana; font-size: 12px; text-align: center; } #wrapper { margin: 40px auto 0; } #myFieldset { width: 213px; } #area { margin: 20px; } #area select { width: 75px; float: left; } #area label { display: block; font-size: 1.1em; font-weight: bold; color: #000; } #area #selection { display: block; } #makes { margin: 5px; } #models { margin: 5px; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.0/jquery.min.js"></script> <div id="wrapper"> <fieldset id="myFieldset"> <legend>Cars</legend> <fieldset id="area"> <label>Select Make:</label> <div id="selection"> <div id="makes"> <select id="make"size="2"></select> </div> <div class="hide" id="models"> <select id="model" size="3"></select> </div> </div> </fieldset> </fieldset> </div> A: It's not entirely clear from your question what layout you're trying to achieve, but judging by that fact that you have applied "float:left" to the select elements, it looks like you want the select elements to appear side by side. If this is the case, you can achieve this by doing the following: To centrally align elements you need to add "text-align:center" to the containing block level element, in this case #selection. The position of elements that are floating is not affected by "text-align" declarations, so remove the "float:left" declaration from the select elements. In order for the #make and #model divs to sit side by side with out the use of floats they must be displayed as inline elements, so add "display:inline" to both #make and #model (note that this will lose the vertical margin on those elements, so you might need to make some other changes to get the exact layout you want). As select elements are displayed inline by default, an alternative to the last step is to remove the #make and #model divs and and apply the "show" and "hide" classes to the model select element directly. 
A: Floating the select boxes changes their display properties to "block". If you have no reason to float them, simply remove the "float: left" declaration, and add "text-align: center" to #makes and #models.
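Put together, the changes suggested in both answers might look roughly like this against the original stylesheet (an untested sketch; selector names are the ones from the question, and #models is left alone so its visibility still comes from the lower-specificity .hide/.show classes):

#area select {
    width: 75px;            /* float: left removed so the centering can take effect */
}
#area #selection {
    display: block;
    text-align: center;     /* centers the inline children */
}
#makes {
    display: inline;        /* sits inline so text-align on #selection centers it */
    margin: 5px;
}
#models {
    margin: 5px;            /* display still toggled by the .hide / .show classes */
}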
What are the CSS secrets to a flexible/fluid HTML form?
The below HTML/CSS/Javascript (jQuery) code displays the #makes select box. Selecting an option displays the #models select box with relevant options. The #makes select box sits off-center and the #models select box fills the empty space when it is displayed. How do you style the form so that the #makes select box is centered when it is the only form element displayed, but when both select boxes are displayed, they are both centered within the container? var cars = [ { "makes" : "Honda", "models" : ['Accord','CRV','Pilot'] }, { "makes" :"Toyota", "models" : ['Prius','Camry','Corolla'] } ]; $(function() { vehicles = [] ; for(var i = 0; i < cars.length; i++) { vehicles[cars[i].makes] = cars[i].models ; } var options = ''; for (var i = 0; i < cars.length; i++) { options += '<option value="' + cars[i].makes + '">' + cars[i].makes + '</option>'; } $("#make").html(options); // populate select box with array $("#make").bind("click", function() { $("#model").children().remove() ; // clear select box var options = ''; for (var i = 0; i < vehicles[this.value].length; i++) { options += '<option value="' + vehicles[this.value][i] + '">' + vehicles[this.value][i] + '</option>'; } $("#model").html(options); // populate select box with array $("#models").addClass("show"); }); // bind end }); .hide { display: none; } .show { display: inline; } fieldset { border: #206ba4 1px solid; } fieldset legend { margin-top: -.4em; font-size: 20px; font-weight: bold; color: #206ba4; } fieldset fieldset { position: relative; margin-top: 25px; padding-top: .75em; background-color: #ebf4fa; } body { margin: 0; padding: 0; font-family: Verdana; font-size: 12px; text-align: center; } #wrapper { margin: 40px auto 0; } #myFieldset { width: 213px; } #area { margin: 20px; } #area select { width: 75px; float: left; } #area label { display: block; font-size: 1.1em; font-weight: bold; color: #000; } #area #selection { display: block; } #makes { margin: 5px; } #models { margin: 5px; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.0/jquery.min.js"></script> <div id="wrapper"> <fieldset id="myFieldset"> <legend>Cars</legend> <fieldset id="area"> <label>Select Make:</label> <div id="selection"> <div id="makes"> <select id="make"size="2"></select> </div> <div class="hide" id="models"> <select id="model" size="3"></select> </div> </div> </fieldset> </fieldset> </div>
[ "It's not entirely clear from your question what layout you're trying to achieve, but judging by that fact that you have applied \"float:left\" to the select elements, it looks like you want the select elements to appear side by side. If this is the case, you can achieve this by doing the following:\n\nTo centrally align elements you need to add \"text-align:center\" to the containing block level element, in this case #selection.\nThe position of elements that are floating is not affected by \"text-align\" declarations, so remove the \"float:left\" declaration from the select elements.\nIn order for the #make and #model divs to sit side by side with out the use of floats they must be displayed as inline elements, so add \"display:inline\" to both #make and #model (note that this will lose the vertical margin on those elements, so you might need to make some other changes to get the exact layout you want).\n\nAs select elements are displayed inline by default, an alternative to the last step is to remove the #make and #model divs and and apply the \"show\" and \"hide\" classes to the model select element directly.\n", "Floating the select boxes changes their display properties to \"block\". If you have no reason to float them, simply remove the \"float: left\" declaration, and add \"text-align: center\" to #makes and #models.\n" ]
[ 1, 0 ]
[]
[]
[ "css", "html", "javascript", "jquery" ]
stackoverflow_0000060944_css_html_javascript_jquery.txt
Q: How do I determine the permission settings for PHP scripts? What are the best file permission settings for PHP scripts? Any suggestions on ways to figure out the minimum required permissions? A: The minimum permissions necessary for the script to function. A: WalloWizard is correct that you should only use the minimum permissions necessary for the script to function. However, let me be more specific, assuming that you're running on a Unix-based system such as Linux or BSD or Mac OSX. Your web server usually runs as an unprivileged user such as "nobody" and your scripts need to be readable by that user, so the best permissions are usually 644, meaning that you can read and write the script, and everyone else can only read it. In the uncommon case that the script is owned by the same user running the web server, you can set the permissions to 600, so that you can read and write the script and no one else can even read it.
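If you want to check this from a script rather than by eye, a small PHP sketch along these lines would flag anything looser than the 644 suggested above (the /var/www/html path is only an example):

<?php
// Flag any script in the web root whose permissions are looser than rw-r--r--.
foreach (glob('/var/www/html/*.php') as $script) {
    $mode = fileperms($script) & 0777;      // keep only the permission bits
    if ($mode & ~0644) {                    // any bit outside 0644 is "too loose"
        printf("%s is mode %o - consider: chmod 644 %s\n", $script, $mode, $script);
    }
}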
How do I determine the permission settings for PHP scripts?
What are the best file permission settings for PHP scripts? Any suggestions on ways to figure out the minimum required permissions?
[ "The minimum permissions necessary for the script to function.\n", "WalloWizard is correct that you should only use the minimum permissions necessary for the script to function.\nHowever, let me be more specific, assuming that you're running on a Unix-based system such as Linux or BSD or Mac OSX. Your web server usually runs as an unprivileged user such as \"nobody\" and your scripts need to be readable by that user, so the best permissions are usually 644, meaning that you can read and write the script, and everyone else can only read it.\nIn the uncommon case that the script is owned by the same user running the web server, you can set the permissions to 600, so that you can read and write the script and no one else can even read it.\n" ]
[ 5, 5 ]
[]
[]
[ "php" ]
stackoverflow_0000061005_php.txt
Q: Scriptaculous Ajax.Autocompleter extra functionality in LI I'm using the Ajax.Autocompleter class from the Prototype/Scriptaculous library which calls an ASP.NET WebHandler which creates an unordered list with list items that contain suggestions. Now I'm working on a page where you can add suggestions to a 'stop words' table, which means that if they occur in the table, they won't be suggested anymore. I put a button inside the LI elements and when you click on it it should do an ajax request to a page which then adds the word to the table. That works. But then I want the suggestions to be refreshed instantly, so that the suggestions appear without the word just added to the table. Preferably the selected word is the word next or before the previously clicked word. How do I do this? What happens instead now is that the LI you clicked the button of gets to be the selected word and the suggestions disappear. The list items look like this: <li>{0} <img onclick=\"deleteWord('{0}');\" src=\"delete.gif\"/> </li> Where {0} represents the suggested word. The JavaScript function deleteWord(w) gets to call the webhandler which can add the word to the 'stop words' table. A: Scriptaculous Autocompleter monitors the onclick event on the 'li'. When you stick an image inside an 'li', both the 'img' and the 'li' will get the click event, as this demonstrates. That's why Autocompleter is just doing its normal thing when the button is clicked: <ul> <li onclick="alert('li');"> <img onclick="alert('image')" src="blah.gif"/> </li> </ul> What I'd try is to have your 'onclick' set some sort of state variable that tells you that Delete has been clicked. Then, override 'updateElement' in the Autocompleter to not do anything if the state is set. An alternative is to subclass the entire Autocompleter and override 'onClick'. That will make the list not disappear when your image is clicked. To have the list update in real-time, in your image's 'onclick', delete the 'li' from the list in the DOM too. Like this conceptually: img onclick="deleteWord('blah');setStateVariable();this.parentNode.parentNode.removeChild(this.parentNode);"
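A sketch of the state-variable idea from the answer above, assuming 'autocompleter' is your Ajax.Autocompleter instance and 'addStopWord.ashx' is a hypothetical handler that inserts the word into the stop-words table; whether the image's click handler runs before the autocompleter's own handler depends on event ordering, so treat this as a starting point rather than a drop-in fix:

var suppressSelection = false;

function deleteWord(word) {
    suppressSelection = true;
    new Ajax.Request('addStopWord.ashx', { parameters: { word: word } });
}

// Wrap the stock updateElement so a click on the delete image is not treated
// as picking that suggestion.
var originalUpdateElement = autocompleter.updateElement.bind(autocompleter);
autocompleter.updateElement = function(selectedElement) {
    if (suppressSelection) {
        suppressSelection = false;
        Element.remove(selectedElement);   // drop the <li> from the list instead
        return;
    }
    originalUpdateElement(selectedElement);
};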
Scriptaculous Ajax.Autocompleter extra functionality in LI
I'm using the Ajax.Autocompleter class from the Prototype/Scriptaculous library which calls an ASP.NET WebHandler which creates an unordered list with list items that contain suggestions. Now I'm working on a page where you can add suggestions to a 'stop words' table, which means that if they occur in the table, they won't be suggested anymore. I put a button inside the LI elements and when you click on it it should do an ajax request to a page which then adds the word to the table. That works. But then I want the suggestions to be refreshed instantly, so that the suggestions appear without the word just added to the table. Preferably the selected word is the word next or before the previously clicked word. How do I do this? What happens instead now is that the LI you clicked the button of gets to be the selected word and the suggestions disappear. The list items look like this: <li>{0} <img onclick=\"deleteWord('{0}');\" src=\"delete.gif\"/> </li> Where {0} represents the suggested word. The JavaScript function deleteWord(w) gets to call the webhandler which can add the word to the 'stop words' table.
[ "Scriptaculous Autocompleter monitors the onclick event on the 'li'. When you stick an image inside an 'li', both the 'img' and the 'li' will get the click event, as this demonstrates. That's why Autocompleter is just doing its normal thing when the button is clicked:\n<ul>\n <li onclick=\"alert('li');\">\n <img onclick=\"alert('image')\" src=\"blah.gif\"/>\n </li>\n</ul>\n\nWhat I'd try is to have your 'onclick' set some sort of state variable that tells you that Delete has been clicked. Then, override 'updateElement' in the Autocompleter to not do anything if the state is set. An alternative is to subclass the entire Autocompleter and override 'onClick'. That will make the list not disappear when your image is clicked.\nTo have the list update in real-time, in your image's 'onclick', delete the 'li' from the list in the DOM too. Like this conceptually:\nimg onclick=\"deleteWord('blah');setStateVariable();this.parentNode.parentNode.removeChild(this.parentNode);\"\n\n" ]
[ 0 ]
[]
[]
[ "autocomplete", "javascript", "prototypejs", "scriptaculous" ]
stackoverflow_0000060794_autocomplete_javascript_prototypejs_scriptaculous.txt
Q: Counting number of views for a page ignoring search engines? I notice that StackOverflow has a views count for each question and that these view numbers are fairly low and accurate. I have a similar thing on one of my sites. It basically logs a "hit" whenever the page is loaded in the backend code. Unfortunately it also does this for search engine hits giving bloated and inaccurate numbers. I guess one way to not count a robot would be to do the view counting with an AJAX call once the page has loaded, but I'm sure there's other, better ways to ignore search engines in your hit counters whilst still letting them in to crawl your site. Do you know any? A: An AJAX call will do it, but usually search engines will not load images, javascript or CSS files, so it may be easier to include one of those files in the page, and pass the URL of the page you want to log a request against as a parameter in the file request. For example, in the page... http://www.example.com/example.html You might include in the head section <link href="empty.css?log=example.html" rel="stylesheet" type="text/css" /> And have your server side log the request, then return an empty css file. The same approach would apply to JavaScript or and image file, though in all cases you'll want to look carefully at what caching might take place. Another option would be to eliminate the search engines based on their user agent. There's a big list of possible user agents at http://user-agents.org/ to get you started. Of course, you could go the other way, and only count requests from things you know are web browsers (covering IE, Firefox, Safari, Opera and this newfangled Chrome thing would get you 99% of the way there). Even easier would be to use a log analytics tool like awstats or a service like Google analytics, both of which have already solved this problem. A: To solve this problem I implemented a simple filter that would look at the User-Agent header in the HTTP request and compare it to a list of known robots. I got the robot list from www.robotstxt.org. It's downloadable in a simple text-format that can easily be parsed to auto-generate the "blacklist". A: You don't really need to use AJAX, just use JavaScript to add an iFrame off screen. KEEP IT SIMPLE <script type="javascript"> document.write('<iframe src="myLogScript.php" style="visibility:hidden" width="1" height="1" frameborder="0">'); </script> A: An extension to Matt Sheppard's answer might be something like the following: <script type="text/javascript"> var thePg=window.location.pathname; var theSite=window.location.hostname; var theImage=new Image; theImage.src="/test/hitcounter.php?pg=" + thePg + "?site=" + theSite; </script> which can be plugged into a page header or footer template without needing to substitute the page name server-side. Note that if you include the query string (window.location.search), a robust version of this should encode the string to prevent evildoers from crafting page requests that exploit vulnerabilities based on weird stuff in URLs. The nice thing about this vs. a regular <img> tag or <iframe> is that the user won't see a red x if there is a problem with the hitcounter script. In some cases, it's also important to know the URL that was seen by the browser, before rewrites, etc. that happen server-side, and this give you that. If you want it both ways, then add another parameter server-side that inserts that version of the page name into the query string as well. 
An example of the log files from a test of this page: 10.1.1.17 - - [13/Sep/2008:22:21:00 -0400] "GET /test/testpage.html HTTP/1.1" 200 306 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.16) Gecko/20080702 Firefox/2.0.0.16" 10.1.1.17 - - [13/Sep/2008:22:21:00 -0400] "GET /test/hitcounter.php?pg=/test/testpage.html?site=www.home.***.com HTTP/1.1" 301 - "http://www.home.***.com/test/testpage.html" "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.16) Gecko/20080702 Firefox/2.0.0.16" A: The reason Stack Overflow has accurate view counts is that it only counts each view/user once. Third-party hit counters (and web statistics applications) often filter out search engines and display them in a separate window/tab/section. A: You are either going to have to do what you said in your question with AJAX. Or exclude User-Agent strings that are known search engines. The only sure way to stop bots is with AJAX.
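For the user-agent filtering route, the server-side check in something like the hitcounter.php mentioned above could start out as simply as the sketch below; the signature list is only illustrative, and a fuller one can be built from robotstxt.org or user-agents.org:

<?php
// hitcounter.php - record a view unless the user agent looks like a crawler.
$botSignatures = array('googlebot', 'slurp', 'msnbot', 'yahoo', 'crawler', 'spider');

$userAgent = isset($_SERVER['HTTP_USER_AGENT']) ? strtolower($_SERVER['HTTP_USER_AGENT']) : '';
$isBot = ($userAgent === '');   // treat a missing user agent as suspicious

foreach ($botSignatures as $signature) {
    if (strpos($userAgent, $signature) !== false) {
        $isBot = true;
        break;
    }
}

if (!$isBot) {
    $page = isset($_GET['pg']) ? $_GET['pg'] : 'unknown';
    // A flat file keeps the example short; a real counter would use a database.
    file_put_contents('hits.log', date('c') . "\t" . $page . "\n", FILE_APPEND);
}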
Counting number of views for a page ignoring search engines?
I notice that StackOverflow has a views count for each question and that these view numbers are fairly low and accurate. I have a similar thing on one of my sites. It basically logs a "hit" whenever the page is loaded in the backend code. Unfortunately it also does this for search engine hits giving bloated and inaccurate numbers. I guess one way to not count a robot would be to do the view counting with an AJAX call once the page has loaded, but I'm sure there's other, better ways to ignore search engines in your hit counters whilst still letting them in to crawl your site. Do you know any?
[ "An AJAX call will do it, but usually search engines will not load images, javascript or CSS files, so it may be easier to include one of those files in the page, and pass the URL of the page you want to log a request against as a parameter in the file request.\nFor example, in the page...\nhttp://www.example.com/example.html\nYou might include in the head section\n<link href=\"empty.css?log=example.html\" rel=\"stylesheet\" type=\"text/css\" />\n\nAnd have your server side log the request, then return an empty css file. The same approach would apply to JavaScript or and image file, though in all cases you'll want to look carefully at what caching might take place.\nAnother option would be to eliminate the search engines based on their user agent. There's a big list of possible user agents at http://user-agents.org/ to get you started. Of course, you could go the other way, and only count requests from things you know are web browsers (covering IE, Firefox, Safari, Opera and this newfangled Chrome thing would get you 99% of the way there).\nEven easier would be to use a log analytics tool like awstats or a service like Google analytics, both of which have already solved this problem.\n", "To solve this problem I implemented a simple filter that would look at the User-Agent header in the HTTP request and compare it to a list of known robots. \nI got the robot list from www.robotstxt.org. It's downloadable in a simple text-format that can easily be parsed to auto-generate the \"blacklist\".\n", "You don't really need to use AJAX, just use JavaScript to add an iFrame off screen. KEEP IT SIMPLE\n<script type=\"javascript\">\ndocument.write('<iframe src=\"myLogScript.php\" style=\"visibility:hidden\" width=\"1\" height=\"1\" frameborder=\"0\">');\n</script>\n\n", "An extension to Matt Sheppard's answer might be something like the following:\n <script type=\"text/javascript\">\n var thePg=window.location.pathname;\n var theSite=window.location.hostname;\n var theImage=new Image;\n theImage.src=\"/test/hitcounter.php?pg=\" + thePg + \"?site=\" + theSite;\n </script>\n\nwhich can be plugged into a page header or footer template without needing to substitute the page name server-side. Note that if you include the query string (window.location.search), a robust version of this should encode the string to prevent evildoers from crafting page requests that exploit vulnerabilities based on weird stuff in URLs. The nice thing about this vs. a regular <img> tag or <iframe> is that the user won't see a red x if there is a problem with the hitcounter script.\nIn some cases, it's also important to know the URL that was seen by the browser, before rewrites, etc. that happen server-side, and this give you that. 
If you want it both ways, then add another parameter server-side that inserts that version of the page name into the query string as well.\nAn example of the log files from a test of this page:\n10.1.1.17 - - [13/Sep/2008:22:21:00 -0400] \"GET /test/testpage.html HTTP/1.1\" 200 306 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.16) Gecko/20080702 Firefox/2.0.0.16\"\n10.1.1.17 - - [13/Sep/2008:22:21:00 -0400] \"GET /test/hitcounter.php?pg=/test/testpage.html?site=www.home.***.com HTTP/1.1\" 301 - \"http://www.home.***.com/test/testpage.html\" \"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.16) Gecko/20080702 Firefox/2.0.0.16\"\n\n", "The reason Stack Overflow has accurate view counts is that it only counts each view/user once.\nThird-party hit counters (and web statistics applications) often filter out search engines and display them in a separate window/tab/section. \n", "You are either going to have to do what you said in your question with AJAX. Or exclude User-Agent strings that are known search engines. The only sure way to stop bots is with AJAX.\n" ]
[ 5, 2, 1, 1, 0, 0 ]
[]
[]
[ "search_engine", "website_metrics" ]
stackoverflow_0000045824_search_engine_website_metrics.txt
Q: How do I keep track of related windows in X11? Unfortunately, my question is not as simple as keeping track of two windows created by the same process. Here is what I have: Two users, Jack and Jim are remotely logged in to the same Unix system and run X servers Jack runs an application, 'AwesomeApp', that opens a GUI in a X window Jim runs another instance of this application, opening his own GUI window Now, Jack runs a supervisor application that will communicate with the process owning the first window (eg 'AwesomeApp') because it's HIS instance of 'AwesomeApp' How can his instance of the supervisor find which instance of 'AwesomeApp' window is his own? Aaaahhhh...looking it up on a per-user basis yes that could work. As long as I tell the users that they cannot log in with the same user account from two different places. A: You can use pgrep to get the process ID of Jack's instance of AwesomeApp: pgrep -u Jack AwesomeApp So if you launch the supervisor application from a shell script, you could do something like the following: AWESOME_ID=`pgrep -u $USER AwesomeApp 2>/dev/null` # run the supervisor application and pass the process id as the argument supervisor $AWESOME_ID Alternatively, if you don't want to use external programs like pgrep or ps, you could always try looking for the process in /proc directly.
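For the "look in /proc directly" suggestion, a shell sketch might look like the following; it assumes a Linux-style /proc, and note that the Name: field there is truncated to 15 characters, so very long program names need extra care:

#!/bin/sh
# Find the current user's AwesomeApp by reading /proc/<pid>/status directly.
target="AwesomeApp"
my_uid=$(id -u)

for status in /proc/[0-9]*/status; do
    name=$(awk '/^Name:/ {print $2}' "$status")
    uid=$(awk '/^Uid:/ {print $2}' "$status")     # first value is the real uid
    if [ "$name" = "$target" ] && [ "$uid" = "$my_uid" ]; then
        pid=${status#/proc/}
        pid=${pid%/status}
        echo "AwesomeApp for this user has pid $pid"
    fi
done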
How do I keep track of related windows in X11?
Unfortunately, my question is not as simple as keeping track of two windows created by the same process. Here is what I have: two users, Jack and Jim, are remotely logged in to the same Unix system and run X servers. Jack runs an application, 'AwesomeApp', that opens a GUI in an X window. Jim runs another instance of this application, opening his own GUI window. Now, Jack runs a supervisor application that will communicate with the process owning the first window (e.g. 'AwesomeApp') because it's HIS instance of 'AwesomeApp'. How can his instance of the supervisor find which instance of the 'AwesomeApp' window is his own? Aaaahhhh... looking it up on a per-user basis, yes, that could work, as long as I tell the users that they cannot log in with the same user account from two different places.
[ "You can use pgrep to get the process ID of Jack's instance of AwesomeApp:\npgrep -u Jack AwesomeApp\n\n\nSo if you launch the supervisor application from a shell script, you could do something like the following:\nAWESOME_ID=`pgrep -u $USER AwesomeApp 2>/dev/null`\n\n# run the supervisor application and pass the process id as the argument\nsupervisor $AWESOME_ID\n\n\nAlternatively, if you don't want to use external programs like pgrep or ps, you could always try looking for the process in /proc directly.\n" ]
[ 1 ]
[]
[]
[ "x11" ]
stackoverflow_0000060967_x11.txt
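The last answer above suggests reading /proc directly instead of shelling out to pgrep or ps. A rough Python sketch of that approach, assuming a Linux-style /proc layout; the process name 'AwesomeApp' is the hypothetical one from the question.

    import os

    def find_user_processes(name, uid=None):
        """Return PIDs of processes called `name` owned by `uid` (default: current user)."""
        if uid is None:
            uid = os.getuid()
        pids = []
        for entry in os.listdir("/proc"):
            if not entry.isdigit():
                continue  # skip non-process entries such as /proc/meminfo
            try:
                if os.stat("/proc/" + entry).st_uid != uid:
                    continue
                with open("/proc/" + entry + "/comm") as f:
                    if f.read().strip() == name:
                        pids.append(int(entry))
            except OSError:
                continue  # the process exited or is not readable
        return pids

    print(find_user_processes("AwesomeApp"))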
Q: How do I get a particular labeled version of a folder in Borland StarTeam? I'm about to perform a bunch of folder moving operations in StarTeam (including some new nesting levels) and I would like to set a label so that I can roll back in case of issues. I figured out how to set a label on a folder and all its children, but I couldn't figure out how to get the version of that folder corresponding to that particular label. It seems like labels are tied to the files themselves and not the folders/folder structure. A: I've switched to Subversion and FogBugz so I am rusty on StarTeam. I think you need a View Label. From View menu, select Labels... to open the Labels dialog. On the View tab, click New... button to open View Label dialog. Type in label name as "Release 1.2.3.4", check Frozen, and hit OK. To get back to the state, From View menu, select Select Configuration... to open the Select a View Configuration dialog. Select Labeled configuration, and pick "Release 1.2.3.4" You can then create a new view from the view label to branch off you want to. See the Help file > Working with StarTeam > Managing Views. Here's a quote from Configuring a View: By default, a view has a current configuration – that is, it displays the latest revisions of the items in the project. However, you can roll back a view to a past state based on a label, promotion state, or a point in time.
How do I get a particular labeled version of a folder in Borland StarTeam?
I'm about to perform a bunch of folder moving operations in StarTeam (including some new nesting levels) and I would like to set a label so that I can roll back in case of issues. I figured out how to set a label on a folder and all its children, but I couldn't figure out how to get the version of that folder corresponding to that particular label. It seems like labels are tied to the files themselves and not the folders/folder structure.
[ "I've switched to Subversion and FogBugz so I am rusty on StarTeam. I think you need a View Label.\n\nFrom View menu, select Labels... to open the Labels dialog.\nOn the View tab, click New... button to open View Label dialog.\nType in label name as \"Release 1.2.3.4\", check Frozen, and hit OK.\n\nTo get back to the state, \n\nFrom View menu, select Select Configuration... to open the Select a View Configuration dialog.\nSelect Labeled configuration, and pick \"Release 1.2.3.4\"\n\nYou can then create a new view from the view label to branch off you want to. See the Help file > Working with StarTeam > Managing Views. Here's a quote from Configuring a View:\n\nBy default, a view has a current\n configuration – that is, it displays\n the latest revisions of the items in\n the project. However, you can roll\n back a view to a past state based on a\n label, promotion state, or a point in\n time.\n\n" ]
[ 4 ]
[]
[]
[ "starteam", "version_control" ]
stackoverflow_0000060099_starteam_version_control.txt
Q: Loading Java classes from a signed applet If I'm running a signed Java applet. Can I load additional classes from remote sources, in the same domain or maybe even the same host, and run them? I'd like to do this without changing pages or even stopping the current applet. Of course, the total size of all classes is too large to load them all at once. Is there a way to do this? And is there a way to do this with signed applets and preserve their "confidence" status? A: I think classes are lazy loaded in applets. being loaded on demand. Anyway, if the classes are outside of a jar you can simply use the applet classloader and load them by name. Ex: ClassLoader loader = this.getClass().getClassLoader(); Class clazz = loader.loadClass("acme.AppletAddon"); If you want to load classes from a jar I think you will need to create a new instance of URLClassLoader with the url(s) of the jar(s). URL[] urls = new URL[]{new URL("http://localhost:8080/addon.jar")}; URLClassLoader loader = URLClassLoader.newInstance(urls,this.getClass().getClassLoader()); Class clazz = loader.loadClass("acme.AppletAddon"); By default, applets are forbidden to create new classloaders. But if you sign your applet and include permission to create new classloaders you can do it. A: Yes, you can open URL connections to the host you ran your applet from. You can either create a classloader with HTTP urls, or download the classes (as jars) to the user's machine and create a classloader with those jars in the classpath. The applet won't stop and you don't need to load another page. Regarding the second part of your question about confidence, once the user has granted access to your applet it can download anything, yes anything, it wants to the local machine. You can probably inform the user as to what it's doing, if your UI design permits this. Hope this helps. A: Sounds like it should be possible (but I've never done it). Have you already had a look at Remote Method Invocation (RMI)?
Loading Java classes from a signed applet
If I'm running a signed Java applet, can I load additional classes from remote sources, in the same domain or maybe even the same host, and run them? I'd like to do this without changing pages or even stopping the current applet. Of course, the total size of all classes is too large to load them all at once. Is there a way to do this? And is there a way to do this with signed applets and preserve their "confidence" status?
[ "I think classes are lazy loaded in applets. being loaded on demand.\nAnyway, if the classes are outside of a jar you can simply use the applet classloader and load them by name. Ex:\nClassLoader loader = this.getClass().getClassLoader();\nClass clazz = loader.loadClass(\"acme.AppletAddon\");\n\nIf you want to load classes from a jar I think you will need to create a new instance of URLClassLoader with the url(s) of the jar(s).\nURL[] urls = new URL[]{new URL(\"http://localhost:8080/addon.jar\")};\nURLClassLoader loader = URLClassLoader.newInstance(urls,this.getClass().getClassLoader());\nClass clazz = loader.loadClass(\"acme.AppletAddon\");\n\nBy default, applets are forbidden to create new classloaders. But if you sign your applet and include permission to create new classloaders you can do it.\n", "Yes, you can open URL connections to the host you ran your applet from. You can either create a classloader with HTTP urls, or download the classes (as jars) to the user's machine and create a classloader with those jars in the classpath. The applet won't stop and you don't need to load another page.\nRegarding the second part of your question about confidence, once the user has granted access to your applet it can download anything, yes anything, it wants to the local machine. You can probably inform the user as to what it's doing, if your UI design permits this.\nHope this helps.\n", "Sounds like it should be possible (but I've never done it). Have you already had a look at Remote Method Invocation (RMI)?\n" ]
[ 6, 2, 0 ]
[]
[]
[ "applet", "download", "java", "signed" ]
stackoverflow_0000060470_applet_download_java_signed.txt
Q: IIS Integrated Request Processing Pipeline -- Modify Request I want to implement an ISAPI filter like feature using HttpModule in IIS7 running under IIS Integrated Request Processing Pipeline mode. The goal is to look at the incoming request at the Web Server level, and inject some custom HttpHeaders into the request. (for ex: HTTP\_EAUTH\_ID) And later in the page lifecycle of an ASPX page, i should be able to use that variable as string eauthId = Request.ServerVariables["HTTP\_EAUTH\_ID"].ToString(); So implementing this module at the Web Server level, is it possible to alter the ServerVariables collection ?? A: HttpRequest.ServerVariables Property is a read-only collection. So, you cannot directly modify that. I would suggest storing your custom data in httpcontext (or global application object or your database) from your httpmodule and then reading that shared value in the aspx page. If you still want to modify server variables, there is a hack technique mentioned in this thread using Reflection. A: I believe the server variables list only contains the headers sent from the browser to the server. A: You won't be able to modify either the HttpRequest.Headers or the HttpRequest.ServerVariables collection. You will however be able to tack on your information to any of: HttpContext.Current.Items HttpContext.Current.Response.Headers Unfortunately, Request.Params, Request.QueryString, Request.Cookies, Request.Form (and almost any other place you'd think of stuffing it is read only. I'd strongly advise against using reflection if this is a HttpModule you're planning on installing into IIS 7. Given that this code will be called for (potentially) every request that goes through the web server it'll need to be really fast and reflection just isn't going to cut it (unless you have very few users). Good luck!
IIS Integrated Request Processing Pipeline -- Modify Request
I want to implement an ISAPI-filter-like feature using an HttpModule in IIS7 running under the IIS Integrated Request Processing Pipeline mode. The goal is to look at the incoming request at the web server level and inject some custom HTTP headers into the request (for example: HTTP_EAUTH_ID). Later in the page lifecycle of an ASPX page, I should be able to use that variable as string eauthId = Request.ServerVariables["HTTP_EAUTH_ID"].ToString(); So, implementing this module at the web server level, is it possible to alter the ServerVariables collection?
[ "HttpRequest.ServerVariables Property is a read-only collection. So, you cannot directly modify that. I would suggest storing your custom data in httpcontext (or global application object or your database) from your httpmodule and then reading that shared value in the aspx page.\nIf you still want to modify server variables, there is a hack technique mentioned in this thread using Reflection.\n", "I believe the server variables list only contains the headers sent from the browser to the server.\n", "You won't be able to modify either the HttpRequest.Headers or the HttpRequest.ServerVariables collection. You will however be able to tack on your information to any of: \nHttpContext.Current.Items\nHttpContext.Current.Response.Headers\n\nUnfortunately, Request.Params, Request.QueryString, Request.Cookies, Request.Form (and almost any other place you'd think of stuffing it is read only.\nI'd strongly advise against using reflection if this is a HttpModule you're planning on installing into IIS 7. Given that this code will be called for (potentially) every request that goes through the web server it'll need to be really fast and reflection just isn't going to cut it (unless you have very few users).\nGood luck!\n" ]
[ 3, 0, 0 ]
[]
[]
[ "asp.net", "httpmodule", "iis_7" ]
stackoverflow_0000060672_asp.net_httpmodule_iis_7.txt
Q: C variable and constant value comparison not matching If I have: signed char * p; and I do a comparison: if ( *p == 0xFF ) break; it will never catch 0XFF, but if I replace it with -1 it will: if ( *p == (signed char)0xFF ) break; How can this happen? Is it something with the sign flag? I though that 0xFF == -1 == 255. A: The value 0xFF is a signed int value. C will promote the *p to an int when doing the comparison, so the first if statement is equivalent to: if( -1 == 255 ) break; which is of course false. By using (signed char)0xFF the statement is equivalent to: if( -1 == -1 ) break; which works as you expect. The key point here is that the comparison is done with int types instead of signed char types. A: Integer literals have signed int type. Since 0xFF is a signed int, the compiler converts *p to a signed int and then does the comparison. When *p is -1, which is then converted from a signed char to a signed int, it is still -1 which has a representation of 0xFFFFFFFF, which is not equal to 0xFF. A: It casts to an int for the first comparison since 0xFF is still considered an int, meaning your char is -128 to 127, but the 0xFF is still 255. In the second case your telling it that 0xFF is really an signed char, not an int A: 0xff will be seen as an integer constant, with the value of 255. You should always pay attention to these kind of comparison between different types. If you want to be sure that the compiler will generate the right code, you should use the typecast: if( *p == (signed char)0xFF ) break; Anyway, beware that the next statement will not work the same way: if( (int)*p == 0xFF ) break; Also, maybe it would be a better idea to avoid signed chars, or, it you must use signed chars, to compare them with signed values such as -1 in this case: if( *p == -1 ) break; 0xff==-1 only if those values would be assigned to some char (or unsigned char) variables: char a=0xff; char b=-1; if(a==b) break;
C variable and constant value comparison not matching
If I have: signed char * p; and I do a comparison: if ( *p == 0xFF ) break; it will never catch 0xFF, but if I replace it with -1 it will: if ( *p == (signed char)0xFF ) break; How can this happen? Is it something with the sign flag? I thought that 0xFF == -1 == 255.
[ "The value 0xFF is a signed int value. C will promote the *p to an int when doing the comparison, so the first if statement is equivalent to:\nif( -1 == 255 ) break;\n\nwhich is of course false. By using (signed char)0xFF the statement is equivalent to:\nif( -1 == -1 ) break;\n\nwhich works as you expect. The key point here is that the comparison is done with int types instead of signed char types.\n", "Integer literals have signed int type. Since 0xFF is a signed int, the compiler converts *p to a signed int and then does the comparison.\nWhen *p is -1, which is then converted from a signed char to a signed int, it is still -1 which has a representation of 0xFFFFFFFF, which is not equal to 0xFF.\n", "It casts to an int for the first comparison since 0xFF is still considered an int, meaning your char is -128 to 127, but the 0xFF is still 255.\nIn the second case your telling it that 0xFF is really an signed char, not an int\n", "0xff will be seen as an integer constant, with the value of 255. You should always pay attention to these kind of comparison between different types. If you want to be sure that the compiler will generate the right code, you should use the typecast:\n\nif( *p == (signed char)0xFF ) break;\n\nAnyway, beware that the next statement will not work the same way:\n\nif( (int)*p == 0xFF ) break;\n\nAlso, maybe it would be a better idea to avoid signed chars, or, it you must use signed chars, to compare them with signed values such as -1 in this case:\n\nif( *p == -1 ) break;\n\n0xff==-1 only if those values would be assigned to some char (or unsigned char) variables:\n\nchar a=0xff;\nchar b=-1;\nif(a==b) break;\n\n" ]
[ 29, 4, 2, 1 ]
[]
[]
[ "c" ]
stackoverflow_0000061227_c.txt
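A small Python illustration (not from the thread) of the integer promotion the answers describe: the byte pattern 0xFF read through a signed char is -1 after promotion to int, and it is that -1, not 255, which gets compared against the literal 0xFF. The helper name as_signed_char is made up for the example.

    def as_signed_char(byte_value):
        """Reinterpret an 8-bit pattern (0..255) as a signed char value (-128..127)."""
        return byte_value - 256 if byte_value > 127 else byte_value

    stored_byte = 0xFF                      # what sits in memory at *p
    promoted = as_signed_char(stored_byte)  # what the C expression *p yields: -1

    print(promoted == 0xFF)                  # False, mirrors `*p == 0xFF`
    print(promoted == as_signed_char(0xFF))  # True, mirrors `*p == (signed char)0xFF`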
Q: Recursive lambda expression to traverse a tree in C# Can someone show me how to implement a recursive lambda expression to traverse a tree structure in C#. A: Ok, I found some free time finally. Here we go: class TreeNode { public string Value { get; set;} public List<TreeNode> Nodes { get; set;} public TreeNode() { Nodes = new List<TreeNode>(); } } Action<TreeNode> traverse = null; traverse = (n) => { Console.WriteLine(n.Value); n.Nodes.ForEach(traverse);}; var root = new TreeNode { Value = "Root" }; root.Nodes.Add(new TreeNode { Value = "ChildA"} ); root.Nodes[0].Nodes.Add(new TreeNode { Value = "ChildA1" }); root.Nodes[0].Nodes.Add(new TreeNode { Value = "ChildA2" }); root.Nodes.Add(new TreeNode { Value = "ChildB"} ); root.Nodes[1].Nodes.Add(new TreeNode { Value = "ChildB1" }); root.Nodes[1].Nodes.Add(new TreeNode { Value = "ChildB2" }); traverse(root); A: A proper solution, and indeed the idiomatic solution in many functional programming languages, would be the use of a fixed-point combinator. In a nutshell: a fixed-point combinator answers the question “how do I define an anonymous function to be recursive?”. But the solution is so nontrivial that whole articles are written to explain them. A simple, pragmatic alternative is to “go back in time” to the antics of C: declaration before definition. Try the following (the “factorial” function): Func<int, int> fact = null; fact = x => (x == 0) ? 1 : x * fact(x - 1); Works like a charm. Or, for a pre-order tree traversal on an object of class TreeNode which implements IEnumerable<TreeNode> appropriately to go over its children: Action<TreeNode, Action<TreeNode>> preorderTraverse = null; preorderTraverse = (node, action) => { action(node); foreach (var child in node) preorderTraverse(child, action); }; A: A simple alternative is to “go back in time” to the antics of C and C++: declaration before definition. Try the following: Func<int, int> fact = null; fact = x => (x == 0) ? 1 : x * fact(x - 1); Works like a charm. Yes, that does work, with one little caveat. C# has mutable references. So make sure you don't accidentally do something like this: Func<int, int> fact = null; fact = x => (x == 0) ? 1 : x * fact(x - 1); // Make a new reference to the factorial function Func<int, int> myFact = fact; // Use the new reference to calculate the factorial of 4 myFact(4); // returns 24 // Modify the old reference fact = x => x; // Again, use the new reference to calculate myFact(4); // returns 12 Of course, this example is a bit contrived, but this could happen when using mutable references. If you use the combinators from aku's links, this won't be possible. A: Assuming a mythical object TreeItem, that conatins a Children collection to represent your hierarchy. public void HandleTreeItems(Action<TreeItem> item, TreeItem parent) { if (parent.Children.Count > 0) { foreach (TreeItem ti in parent.Children) { HandleTreeItems(item, ti); } } item(parent); } Now to call it, passing in the lambda that handles one item, by printing its name to the console. HandleTreeItems(item => { Console.WriteLine(item.Name); }, TreeItemRoot);
Recursive lambda expression to traverse a tree in C#
Can someone show me how to implement a recursive lambda expression to traverse a tree structure in C#?
[ "Ok, I found some free time finally.\nHere we go: \nclass TreeNode\n{\n public string Value { get; set;}\n public List<TreeNode> Nodes { get; set;}\n\n\n public TreeNode()\n {\n Nodes = new List<TreeNode>();\n }\n}\n\nAction<TreeNode> traverse = null;\n\ntraverse = (n) => { Console.WriteLine(n.Value); n.Nodes.ForEach(traverse);};\n\nvar root = new TreeNode { Value = \"Root\" };\nroot.Nodes.Add(new TreeNode { Value = \"ChildA\"} );\nroot.Nodes[0].Nodes.Add(new TreeNode { Value = \"ChildA1\" });\nroot.Nodes[0].Nodes.Add(new TreeNode { Value = \"ChildA2\" });\nroot.Nodes.Add(new TreeNode { Value = \"ChildB\"} );\nroot.Nodes[1].Nodes.Add(new TreeNode { Value = \"ChildB1\" });\nroot.Nodes[1].Nodes.Add(new TreeNode { Value = \"ChildB2\" });\n\ntraverse(root);\n\n", "A proper solution, and indeed the idiomatic solution in many functional programming languages, would be the use of a fixed-point combinator. In a nutshell: a fixed-point combinator answers the question “how do I define an anonymous function to be recursive?”. But the solution is so nontrivial that whole articles are written to explain them.\nA simple, pragmatic alternative is to “go back in time” to the antics of C: declaration before definition. Try the following (the “factorial” function):\nFunc<int, int> fact = null;\nfact = x => (x == 0) ? 1 : x * fact(x - 1);\n\nWorks like a charm.\nOr, for a pre-order tree traversal on an object of class TreeNode which implements IEnumerable<TreeNode> appropriately to go over its children:\nAction<TreeNode, Action<TreeNode>> preorderTraverse = null;\npreorderTraverse = (node, action) => {\n action(node);\n foreach (var child in node) preorderTraverse(child, action);\n};\n\n", "\nA simple alternative is to “go back in time” to the antics of C and C++: declaration before definition. Try the following:\nFunc<int, int> fact = null;\nfact = x => (x == 0) ? 1 : x * fact(x - 1);\n\nWorks like a charm.\n\nYes, that does work, with one little caveat. C# has mutable references. So make sure you don't accidentally do something like this:\nFunc<int, int> fact = null;\nfact = x => (x == 0) ? 1 : x * fact(x - 1);\n\n// Make a new reference to the factorial function\nFunc<int, int> myFact = fact;\n\n// Use the new reference to calculate the factorial of 4\nmyFact(4); // returns 24\n\n// Modify the old reference\nfact = x => x;\n\n// Again, use the new reference to calculate\nmyFact(4); // returns 12\n\nOf course, this example is a bit contrived, but this could happen when using mutable references. If you use the combinators from aku's links, this won't be possible.\n", "Assuming a mythical object TreeItem, that conatins a Children collection to represent your hierarchy.\n public void HandleTreeItems(Action<TreeItem> item, TreeItem parent)\n {\n if (parent.Children.Count > 0)\n {\n foreach (TreeItem ti in parent.Children)\n {\n HandleTreeItems(item, ti);\n }\n }\n\n item(parent);\n }\n\nNow to call it, passing in the lambda that handles one item, by printing its name to the console.\nHandleTreeItems(item => { Console.WriteLine(item.Name); }, TreeItemRoot);\n\n" ]
[ 80, 30, 18, 1 ]
[]
[]
[ "c#", "lambda", "recursion" ]
stackoverflow_0000061143_c#_lambda_recursion.txt
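The declare-then-assign trick in the answers translates to other languages too. A loose Python analogue (not C#), with a stand-in TreeNode class, showing an anonymous function that recurses through the name it was assigned to:

    class TreeNode:
        def __init__(self, value, children=None):
            self.value = value
            self.children = children or []

    # The None pre-declaration isn't required in Python (name lookup happens at
    # call time); it is kept only to mirror the C# declare-then-assign pattern.
    traverse = None
    traverse = lambda n: (print(n.value), [traverse(c) for c in n.children])

    root = TreeNode("Root", [TreeNode("ChildA", [TreeNode("ChildA1")]),
                             TreeNode("ChildB")])
    traverse(root)  # prints Root, ChildA, ChildA1, ChildB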
Q: How do you resolve .Net namespace conflicts with the 'using' keyword? Here's the problem, you include multiple assemblies and add 'using namespaceX' at the top of your code file. Now you want to create a class or use a symbol which is defined in multiple namespaces, e.g. System.Windows.Controls.Image & System.Drawing.Image Now unless you use the fully qualified name, there will be a crib/build error due to ambiguity inspite of the right 'using' declarations at the top. What is the way out here? (Another knowledge base post.. I found the answer after about 10 minutes of searching because I didn't know the right keyword to search for) A: Use alias using System.Windows.Controls; using Drawing = System.Drawing; ... Image img = ... //System.Windows.Controls.Image Drawing.Image img2 = ... //System.Drawing.Image C# using directive A: This page has a very good writeup on namespaces and the using-statement: http://www.blackwasp.co.uk/Namespaces.aspx You want to read the part about "Creating Aliases" that will allow you to make an alias for one or both of the name spaces and reference them with that like this: using ControlImage = System.Windows.Controls.Image; using System.Drawing.Image; ControlImage.Image myImage = new ControlImage.Image(); myImage.Width = 200;
How do you resolve .Net namespace conflicts with the 'using' keyword?
Here's the problem: you include multiple assemblies and add 'using namespaceX' at the top of your code file. Now you want to create a class or use a symbol which is defined in multiple namespaces, e.g. System.Windows.Controls.Image & System.Drawing.Image. Now, unless you use the fully qualified name, there will be a crib/build error due to ambiguity in spite of the right 'using' declarations at the top. What is the way out here? (Another knowledge base post... I found the answer after about 10 minutes of searching because I didn't know the right keyword to search for.)
[ "Use alias\nusing System.Windows.Controls;\nusing Drawing = System.Drawing;\n\n...\n\nImage img = ... //System.Windows.Controls.Image\nDrawing.Image img2 = ... //System.Drawing.Image\n\nC# using directive\n", "This page has a very good writeup on namespaces and the using-statement:\nhttp://www.blackwasp.co.uk/Namespaces.aspx\nYou want to read the part about \"Creating Aliases\" that will allow you to make an alias for one or both of the name spaces and reference them with that like this:\nusing ControlImage = System.Windows.Controls.Image;\nusing System.Drawing.Image;\n\nControlImage.Image myImage = new ControlImage.Image();\nmyImage.Width = 200;\n\n" ]
[ 34, 6 ]
[]
[]
[ ".net", "namespaces", "using" ]
stackoverflow_0000061262_.net_namespaces_using.txt
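Python has a close analogue to the C# alias directive: aliasing at import time so two same-named symbols from different modules can coexist. A small sketch using two standard-library functions that share a name:

    from os.path import join as path_join   # alias one of the clashing names
    from shlex import join as shlex_join    # ...and the other (shlex.join needs Python 3.8+)

    print(path_join("a", "b"))       # a/b (or a\b on Windows)
    print(shlex_join(["ls", "-l"]))  # ls -l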
Q: In PHP is it possible to use a function inside a variable I know in php you can embed variables inside variables, like: <? $var1 = "I\'m including {$var2} in this variable.."; ?> But I was wondering how, and if it was possible to include a function inside a variable. I know I could just write: <?php $var1 = "I\'m including "; $var1 .= somefunc(); $var1 = " in this variable.."; ?> But what if I have a long variable for output, and I don't want to do this every time, or I want to use multiple functions: <?php $var1 = <<<EOF <html lang="en"> <head> <title>AAAHHHHH</title> <meta http-equiv="Content-Type" content="text/html;charset=utf-8"> </head> <body> There is <b>alot</b> of text and html here... but I want some <i>functions</i>! -somefunc() doesn't work -{somefunc()} doesn't work -$somefunc() and {$somefunc()} doesn't work of course because a function needs to be a string -more non-working: ${somefunc()} </body> </html> EOF; ?> Or I want dynamic changes in that load of code: <? function somefunc($stuff) { $output = "my bold text <b>{$stuff}</b>."; return $output; } $var1 = <<<EOF <html lang="en"> <head> <title>AAAHHHHH</title> <meta http-equiv="Content-Type" content="text/html;charset=utf-8"> </head> <body> somefunc("is awesome!") somefunc("is actually not so awesome..") because somefunc("won\'t work due to my problem.") </body> </html> EOF; ?> Well? A: Function calls within strings are supported since PHP5 by having a variable containing the name of the function to call: <? function somefunc($stuff) { $output = "<b>{$stuff}</b>"; return $output; } $somefunc='somefunc'; echo "foo {$somefunc("bar")} baz"; ?> will output "foo <b>bar</b> baz". I find it easier however (and this works in PHP4) to either just call the function outside of the string: <? echo "foo " . somefunc("bar") . " baz"; ?> or assign to a temporary variable: <? $bar = somefunc("bar"); echo "foo {$bar} baz"; ?> A: "bla bla bla".function("blub")." and on it goes"
In PHP is it possible to use a function inside a variable
I know in php you can embed variables inside variables, like: <? $var1 = "I\'m including {$var2} in this variable.."; ?> But I was wondering how, and if it was possible to include a function inside a variable. I know I could just write: <?php $var1 = "I\'m including "; $var1 .= somefunc(); $var1 = " in this variable.."; ?> But what if I have a long variable for output, and I don't want to do this every time, or I want to use multiple functions: <?php $var1 = <<<EOF <html lang="en"> <head> <title>AAAHHHHH</title> <meta http-equiv="Content-Type" content="text/html;charset=utf-8"> </head> <body> There is <b>alot</b> of text and html here... but I want some <i>functions</i>! -somefunc() doesn't work -{somefunc()} doesn't work -$somefunc() and {$somefunc()} doesn't work of course because a function needs to be a string -more non-working: ${somefunc()} </body> </html> EOF; ?> Or I want dynamic changes in that load of code: <? function somefunc($stuff) { $output = "my bold text <b>{$stuff}</b>."; return $output; } $var1 = <<<EOF <html lang="en"> <head> <title>AAAHHHHH</title> <meta http-equiv="Content-Type" content="text/html;charset=utf-8"> </head> <body> somefunc("is awesome!") somefunc("is actually not so awesome..") because somefunc("won\'t work due to my problem.") </body> </html> EOF; ?> Well?
[ "Function calls within strings are supported since PHP5 by having a variable containing the name of the function to call:\n<?\nfunction somefunc($stuff)\n{\n $output = \"<b>{$stuff}</b>\";\n return $output;\n}\n$somefunc='somefunc';\necho \"foo {$somefunc(\"bar\")} baz\";\n?>\n\nwill output \"foo <b>bar</b> baz\".\nI find it easier however (and this works in PHP4) to either just call the function outside of the string:\n<?\necho \"foo \" . somefunc(\"bar\") . \" baz\";\n?>\n\nor assign to a temporary variable:\n<?\n$bar = somefunc(\"bar\");\necho \"foo {$bar} baz\";\n?>\n\n", "\"bla bla bla\".function(\"blub\").\" and on it goes\"\n" ]
[ 25, 2 ]
[ "Expanding a bit on what Jason W said:\n\nI find it easier however (and this works in PHP4) to either just call the \nfunction outside of the string:\n\n<?\necho \"foo \" . somefunc(\"bar\") . \" baz\";\n?>\n\nYou can also just embed this function call directly in your html, like:\n<?\n\nfunction get_date() {\n $date = `date`;\n return $date;\n}\n\nfunction page_title() {\n $title = \"Today's date is: \". get_date() .\"!\";\n echo \"$title\";\n}\n\nfunction page_body() {\n $body = \"Hello\";\n $body = \", World!\";\n $body = \"\\n\\n\";\n $body = \"Today is: \" . get_date() . \"\\n\";\n}\n\n?>\n<html>\n <head>\n <title><? page_title(); ?></title>\n </head>\n <body>\n <? page_body(); ?>\n </body>\n</html>\n\n\n" ]
[ -1 ]
[ "function", "html", "php", "variables" ]
stackoverflow_0000060409_function_html_php_variables.txt
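For comparison only (Python rather than PHP): newer interpolation syntaxes allow a function call directly inside the string, which is what the variable-holding-the-function-name trick above works around. somefunc here mirrors the function from the question.

    def somefunc(stuff):
        return "<b>%s</b>" % stuff

    # f-strings (Python 3.6+) evaluate arbitrary expressions, including calls.
    print(f"foo {somefunc('bar')} baz")  # foo <b>bar</b> baz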
Q: Unable to load System.Data.Linq.dll for CodeDom I am trying to dynamicaly compile code using CodeDom. I can load other assemblies, but I cannot load System.Data.Linq.dll. I get an error: Metadata file 'System.Data.Linq.dll' could not be found My code looks like: CompilerParameters compilerParams = new CompilerParameters(); compilerParams.CompilerOptions = "/target:library /optimize"; compilerParams.GenerateExecutable = false; compilerParams.GenerateInMemory = true; compilerParams.IncludeDebugInformation = false; compilerParams.ReferencedAssemblies.Add("mscorlib.dll"); compilerParams.ReferencedAssemblies.Add("System.dll"); compilerParams.ReferencedAssemblies.Add("System.Data.Linq.dll"); Any ideas? A: That may be because this assembly is stored in a different location than mscorlib is. It should work if you provide a full path to the assembly. The most convenient way to get the full path is to let the .NET loader do the work for you. I would try something like this: compilerParams.ReferencedAssemblies.Add(typeof(DataContext).Assembly.Location); A: This may be a silly answer, but are you sure the code is running on a machine with .NET Framework 3.5?
Unable to load System.Data.Linq.dll for CodeDom
I am trying to dynamically compile code using CodeDom. I can load other assemblies, but I cannot load System.Data.Linq.dll. I get an error: Metadata file 'System.Data.Linq.dll' could not be found. My code looks like: CompilerParameters compilerParams = new CompilerParameters(); compilerParams.CompilerOptions = "/target:library /optimize"; compilerParams.GenerateExecutable = false; compilerParams.GenerateInMemory = true; compilerParams.IncludeDebugInformation = false; compilerParams.ReferencedAssemblies.Add("mscorlib.dll"); compilerParams.ReferencedAssemblies.Add("System.dll"); compilerParams.ReferencedAssemblies.Add("System.Data.Linq.dll"); Any ideas?
[ "That may be because this assembly is stored in a different location than mscorlib is. It should work if you provide a full path to the assembly. The most convenient way to get the full path is to let the .NET loader do the work for you. I would try something like this:\ncompilerParams.ReferencedAssemblies.Add(typeof(DataContext).Assembly.Location);\n\n", "This may be a silly answer, but are you sure the code is running on a machine with .NET Framework 3.5?\n" ]
[ 3, 0 ]
[]
[]
[ "c#", "codedom" ]
stackoverflow_0000060768_c#_codedom.txt
Q: How do I write a python HTTP server to listen on multiple ports? I'm writing a small web server in Python, using BaseHTTPServer and a custom subclass of BaseHTTPServer.BaseHTTPRequestHandler. Is it possible to make this listen on more than one port? What I'm doing now: class MyRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler): def doGET [...] class ThreadingHTTPServer(ThreadingMixIn, HTTPServer): pass server = ThreadingHTTPServer(('localhost', 80), MyRequestHandler) server.serve_forever() A: Sure; just start two different servers on two different ports in two different threads that each use the same handler. Here's a complete, working example that I just wrote and tested. If you run this code then you'll be able to get a Hello World webpage at both http://localhost:1111/ and http://localhost:2222/ from threading import Thread from SocketServer import ThreadingMixIn from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler class Handler(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.send_header("Content-type", "text/plain") self.end_headers() self.wfile.write("Hello World!") class ThreadingHTTPServer(ThreadingMixIn, HTTPServer): daemon_threads = True def serve_on_port(port): server = ThreadingHTTPServer(("localhost",port), Handler) server.serve_forever() Thread(target=serve_on_port, args=[1111]).start() serve_on_port(2222) update: This also works with Python 3 but three lines need to be slightly changed: from socketserver import ThreadingMixIn from http.server import HTTPServer, BaseHTTPRequestHandler and self.wfile.write(bytes("Hello World!", "utf-8")) A: Not easily. You could have two ThreadingHTTPServer instances, write your own serve_forever() function (don't worry it's not a complicated function). The existing function: def serve_forever(self, poll_interval=0.5): """Handle one request at a time until shutdown. Polls for shutdown every poll_interval seconds. Ignores self.timeout. If you need to do periodic tasks, do them in another thread. """ self.__serving = True self.__is_shut_down.clear() while self.__serving: # XXX: Consider using another file descriptor or # connecting to the socket to wake this up instead of # polling. Polling reduces our responsiveness to a # shutdown request and wastes cpu at all other times. r, w, e = select.select([self], [], [], poll_interval) if r: self._handle_request_noblock() self.__is_shut_down.set() So our replacement would be something like: def serve_forever(server1,server2): while True: r,w,e = select.select([server1,server2],[],[],0) if server1 in r: server1.handle_request() if server2 in r: server2.handle_request() A: I would say that threading for something this simple is overkill. You're better off using some form of asynchronous programming. Here is an example using Twisted: from twisted.internet import reactor from twisted.web import resource, server class MyResource(resource.Resource): isLeaf = True def render_GET(self, request): return 'gotten' site = server.Site(MyResource()) reactor.listenTCP(8000, site) reactor.listenTCP(8001, site) reactor.run() I also thinks it looks a lot cleaner to have each port be handled in the same way, instead of having the main thread handle one port and an additional thread handle the other. Arguably that can be fixed in the thread example, but then you're using three threads.
How do I write a python HTTP server to listen on multiple ports?
I'm writing a small web server in Python, using BaseHTTPServer and a custom subclass of BaseHTTPServer.BaseHTTPRequestHandler. Is it possible to make this listen on more than one port? What I'm doing now: class MyRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler): def doGET [...] class ThreadingHTTPServer(ThreadingMixIn, HTTPServer): pass server = ThreadingHTTPServer(('localhost', 80), MyRequestHandler) server.serve_forever()
[ "Sure; just start two different servers on two different ports in two different threads that each use the same handler. Here's a complete, working example that I just wrote and tested. If you run this code then you'll be able to get a Hello World webpage at both http://localhost:1111/ and http://localhost:2222/\nfrom threading import Thread\nfrom SocketServer import ThreadingMixIn\nfrom BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler\n\nclass Handler(BaseHTTPRequestHandler):\n def do_GET(self):\n self.send_response(200)\n self.send_header(\"Content-type\", \"text/plain\")\n self.end_headers()\n self.wfile.write(\"Hello World!\")\n\nclass ThreadingHTTPServer(ThreadingMixIn, HTTPServer):\n daemon_threads = True\n\ndef serve_on_port(port):\n server = ThreadingHTTPServer((\"localhost\",port), Handler)\n server.serve_forever()\n\nThread(target=serve_on_port, args=[1111]).start()\nserve_on_port(2222)\n\nupdate:\nThis also works with Python 3 but three lines need to be slightly changed:\nfrom socketserver import ThreadingMixIn\nfrom http.server import HTTPServer, BaseHTTPRequestHandler\n\nand\nself.wfile.write(bytes(\"Hello World!\", \"utf-8\"))\n\n", "Not easily. You could have two ThreadingHTTPServer instances, write your own serve_forever() function (don't worry it's not a complicated function).\nThe existing function:\ndef serve_forever(self, poll_interval=0.5):\n \"\"\"Handle one request at a time until shutdown.\n\n Polls for shutdown every poll_interval seconds. Ignores\n self.timeout. If you need to do periodic tasks, do them in\n another thread.\n \"\"\"\n self.__serving = True\n self.__is_shut_down.clear()\n while self.__serving:\n # XXX: Consider using another file descriptor or\n # connecting to the socket to wake this up instead of\n # polling. Polling reduces our responsiveness to a\n # shutdown request and wastes cpu at all other times.\n r, w, e = select.select([self], [], [], poll_interval)\n if r:\n self._handle_request_noblock()\n self.__is_shut_down.set()\n\nSo our replacement would be something like:\ndef serve_forever(server1,server2):\n while True:\n r,w,e = select.select([server1,server2],[],[],0)\n if server1 in r:\n server1.handle_request()\n if server2 in r:\n server2.handle_request()\n\n", "I would say that threading for something this simple is overkill. You're better off using some form of asynchronous programming.\nHere is an example using Twisted:\nfrom twisted.internet import reactor\nfrom twisted.web import resource, server\n\nclass MyResource(resource.Resource):\n isLeaf = True\n def render_GET(self, request):\n return 'gotten'\n\nsite = server.Site(MyResource())\n\nreactor.listenTCP(8000, site)\nreactor.listenTCP(8001, site)\nreactor.run()\n\nI also thinks it looks a lot cleaner to have each port be handled in the same way, instead of having the main thread handle one port and an additional thread handle the other. Arguably that can be fixed in the thread example, but then you're using three threads.\n" ]
[ 40, 6, 6 ]
[]
[]
[ "python", "webserver" ]
stackoverflow_0000060680_python_webserver.txt
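The select-based answer above can be written out runnably for Python 3, where the module names differ from the Python 2 code shown (http.server and socketserver instead of BaseHTTPServer and SocketServer). A single-threaded sketch using the same ports as the threaded example:

    import select
    from http.server import HTTPServer, BaseHTTPRequestHandler

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Hello World!")

    servers = [HTTPServer(("localhost", port), Handler) for port in (1111, 2222)]

    while True:
        # Block until at least one listening socket is readable, then serve it.
        ready, _, _ = select.select(servers, [], [], 0.5)
        for srv in ready:
            srv.handle_request()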
Q: Tools to create maximum velocity in a .NET dev team If you were to self-fund a software project which tools, frameworks, components would you employ to ensure maximum productivity for the dev team and that the "real" problem is being worked on. What I'm looking for are low friction tools which get the job done with a minimum of fuss. Tools I'd characterize as such are SVN/TortioseSVN, ReSharper, VS itself. I'm looking for frameworks which solve the problems inherient in all software projects like ORM, logging, UI frameworks/components. An example on the UI side would be ASP.NET MVC vs WebForms vs MonoRail. A: Versioning. Subversion is the popular choice. If you can afford it, Team Foundation Server offers some benefits. If you want to be super-modern, consider a distributed versioning system, such as git, bazaar or Mercurial. Whatever you do, don't use SourceSafe or other lock-based tools, but rather merge-baseed ones. Consider installing both a Windows Explorer client (such as TortoiseSVN) as well as a Visual Studio add-in (such as AnkhSVN or VisualSVN). Issue tracking. Given that Joel Spolsky is on this site's staff, FogBugz deserves a mention. Trac, Mantis and BugZilla are widespread open-source choices. Continuous integration. CruiseControl.NET is a popular and open-source choice. There's also Draco.NET. Unit testing. NUnit is the popular open-source choice. Does the job. Consider installing the TestDriven.NET Visual Studio add-in. That said, you want to look at the answers to Essential Programming Tools and What is your best list of ‘must have’ development tools?; while not .NET-specific, they should apply anyway. A: Great tools and frameworks are essential, but the other essential is great project leadership. A: I would add Resharper to the list and Ndepend. Most likely Rhino mocks too. A: I would add one more to what edg says up there. You need people with at last some talent as well. As David Wheeler, author of the Flawfinder source code checker says: A fool with a tool is still a fool A: I'd recommend a Safari Books Online subscription as well. Oh, and gallons of coffee. A: I'll add Moq to the list for mocking to the list. Much less syntax than most other mocking frameworks. A: I'd definitely recommend Coderush+Refactor or Resharper (Coderush being my personal favourite), these tools dramatically reduce the time to go from code in your head to code on the page. For quick development the UI component sets from the likes of Telerik/DevExpress/Infragistics can be good, but in my experience can cause pain further out in the project when you want to refine things more precisely. Regarding frameworks etc I think you'd need to be a bit more specific about the project itself to get any meaningful suggestions. A: Good source control should probably be your number 1 priority. I've mentioed them before, but CVSDude are an excellent managed source control provider. I'm using a SVN package and it's brilliant. Saves a lot of hassle setting up your own server etc. A: Microsoft's Enterprise Library can be also helpful. This release of Enterprise Library includes application blocks for Caching, Cryptography, Data Access, Exception Handling, Logging, Policy Injection, Security and Validation. A: This is what we use for our team: Issue Tracking: Redmine - This is an awesome, free, Issue/Project management tool. It is a ruby on rails app however, so you'll need a proper environment to get it up and running. 
Source Control: Subversion with tortoiseSVN - subversion is an awesome source control solution and tortoise integrates with the explorer very nicely, no need for command line stuff. It also supports user side hook scripts, which has come in handy a number of times for my team. And that's about it really. We don't use a main framework, instead we just roll our own libraries that fit what we need to do with a given project. We do use jquery for a JavaScript library however. Some other random things would be free coffee, and the best equipment money can buy.
Tools to create maximum velocity in a .NET dev team
If you were to self-fund a software project, which tools, frameworks, and components would you employ to ensure maximum productivity for the dev team and that the "real" problem is being worked on? What I'm looking for are low-friction tools which get the job done with a minimum of fuss. Tools I'd characterize as such are SVN/TortoiseSVN, ReSharper, and VS itself. I'm looking for frameworks which solve the problems inherent in all software projects, like ORM, logging, and UI frameworks/components. An example on the UI side would be ASP.NET MVC vs WebForms vs MonoRail.
[ "\nVersioning. Subversion is the popular choice. If you can afford it, Team Foundation Server offers some benefits. If you want to be super-modern, consider a distributed versioning system, such as git, bazaar or Mercurial. Whatever you do, don't use SourceSafe or other lock-based tools, but rather merge-baseed ones. Consider installing both a Windows Explorer client (such as TortoiseSVN) as well as a Visual Studio add-in (such as AnkhSVN or VisualSVN).\nIssue tracking. Given that Joel Spolsky is on this site's staff, FogBugz deserves a mention. Trac, Mantis and BugZilla are widespread open-source choices.\nContinuous integration. CruiseControl.NET is a popular and open-source choice. There's also Draco.NET.\nUnit testing. NUnit is the popular open-source choice. Does the job. Consider installing the TestDriven.NET Visual Studio add-in.\n\nThat said, you want to look at the answers to Essential Programming Tools and What is your best list of ‘must have’ development tools?; while not .NET-specific, they should apply anyway.\n", "Great tools and frameworks are essential, but the other essential is great project leadership. \n", "I would add Resharper to the list and Ndepend. Most likely Rhino mocks too.\n", "I would add one more to what edg says up there. You need people with at last some talent as well.\nAs David Wheeler, author of the Flawfinder source code checker says:\n\nA fool with a tool is still a fool\n\n", "I'd recommend a Safari Books Online subscription as well.\nOh, and gallons of coffee.\n", "I'll add Moq to the list for mocking to the list. Much less syntax than most other mocking frameworks.\n", "I'd definitely recommend Coderush+Refactor or Resharper (Coderush being my personal favourite), these tools dramatically reduce the time to go from code in your head to code on the page.\nFor quick development the UI component sets from the likes of Telerik/DevExpress/Infragistics can be good, but in my experience can cause pain further out in the project when you want to refine things more precisely.\nRegarding frameworks etc I think you'd need to be a bit more specific about the project itself to get any meaningful suggestions.\n", "Good source control should probably be your number 1 priority. I've mentioed them before, but CVSDude are an excellent managed source control provider. I'm using a SVN package and it's brilliant. Saves a lot of hassle setting up your own server etc.\n", "Microsoft's Enterprise Library can be also helpful.\n\nThis release of Enterprise Library includes application blocks for Caching, Cryptography, Data Access, Exception Handling, Logging, Policy Injection, Security and Validation.\n\n", "This is what we use for our team:\nIssue Tracking: Redmine - This is an awesome, free, Issue/Project management tool. It is a ruby on rails app however, so you'll need a proper environment to get it up and running.\nSource Control: Subversion with tortoiseSVN - subversion is an awesome source control solution and tortoise integrates with the explorer very nicely, no need for command line stuff. It also supports user side hook scripts, which has come in handy a number of times for my team.\nAnd that's about it really. We don't use a main framework, instead we just roll our own libraries that fit what we need to do with a given project. We do use jquery for a JavaScript library however.\nSome other random things would be free coffee, and the best equipment money can buy.\n" ]
[ 22, 5, 3, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ ".net" ]
stackoverflow_0000061211_.net.txt
Q: What steps should be necessary to optimize a poorly performing query? I know this is a broad question, but I've inherited several poor performers and need to optimize them badly. I was wondering what are the most common steps involved to optimize. So, what steps do some of you guys take when faced with the same situation? Related Question: What generic techniques can be applied to optimize SQL queries? A: Look at the execution plan in query analyzer See what step costs the most Optimize the step! Return to step 1 [thx to Vinko] A: In SQL Server you can look at the Query Plan in Query Analyzer or Management Studio. This will tell you the rough percentage of time spent in each batch of statements. You'll want to look for the following: Table scans; this means you are completely missing indexes Index scans; your query may not be using the correct indexes The thickness of the arrows between each step in a query tells you how many rows are being produced by that step, very thick arrows means you are processing a lot of rows, and can indicate that some joins need to be optimized. Some other general tips: A large amount of conditional statements, such as multiple if-else statements, can cause SQL Server to constantly rebuild the query plan. You can check for this using Profiler. Make sure that different queries aren't blocking each other, such as an update statement blocking a select statement. This can be avoided by specifying the (nolock) hint in SQL Server select statements. As others have mentioned, try out the Performance Tuning wizard in Management Studio. Finally, I would highly recommend creating a set of load tests (using Visual Studio 2008 Test Edition), which you can use to simulate your application's behavior when dealing with a large amount of requests. Some SQL performance bottlenecks only manifest themselves under these circumstances, and being able to reproduce them makes it a lot easier to fix. A: Indexes may be a good place to start... The low hanging fruit can be knocked down with the SQL Server Index Tuning Wizard. A: I'm not sure about other databases, but for SQL Server I recommend the Execution Plan. It very clearly (albeit with lots of vertical and horizontal scrolling, unless you've got a 400" monitor!) shows what steps of your query are sucking up the time. If you've got one step that takes a crazy 80%, then maybe an index could be added, then after tweaking the index, re-run the Execution Plan to find your next biggest step. After a couple tweaks you may find that there really are no steps that stand out from the others i.e. they're all 1-2% each. If that is the case, then you might then need to see if there is a way you can cut down the amount of data included in your query, do those four million closed sales orders need to be included in the "Active Sales Orders" query? No, so exclude all those with STATUS='C' ... or something like that. Another improvement you'll see from the Execution Plan is bookmark lookups, basically it finds a match in the index, but then SQL Server has to quickly trawl through the table to find the record you want. This operation might at times take longer than just scanning the table in the first place would have, if that is the case, do you really need that index? 
With indexes, and especially with SQL Server 2005 you should look to the INCLUDE clause, this basically allows you to have a column in an index without really being in the index, so if all the data you need for your query is in your index or is an included columnn then SQL Server doesn't have to even look at the table, a big performance pickup. A: There are a couple of things you can look at to optimize your query performance. Ensure that you just have the minimum of data. Make sure you select only the columns you need. Reduce field sizes to a minimum. Consider de-normalising your database to reduce joins Avoid loops (i.e. fetch cursors), stick to set operations. Implement the query as a stored procedure as this is pre-compiled and will execute faster. Make sure that you have the correct indexes set up. If your database is used mostly for searching then consider more indexes. Use the execution plan to see how the processing is done. What you want to avoid is a table scan as this is costly. Make sure that the Auto Statistics is set to on. SQL needs this to help decide the optimal execution. See Mike Gunderloy's great post for more info. Basics of Statistics in SQL Server 2005 Make sure your indexes are not fragmented Reducing SQL Server Index Fragmentation Make sure your tables are not fragmented. How to Detect Table Fragmentation in SQL Server 2000 and 2005 A: Look at the indexes on the tables that make the query. An indexes may be needed on particular fields that participate in the where clause. Also look at the fields used in the joins in the query (if joins exist). If indexes already exist, look at the type of index. Failing that (because there are negatives to using locking hints) Look at locking hints and explicitly naming the index to use in the join. Using NOLOCKS is more obvious if you're getting a lot of deadlocked transactions. Do what roman and Andy S mentioned first though. A: The execution plan is a great start and will help you figure out what part of your query you need to tackle. Once you figure out the where, it is time to tackle the how and why. Take a look at the type of queries you are trying to preform. Avoid loops at all cost as they are slow. Avoid cursors at all costs because they are slow. Stick to set based queries when ever possible. There are ways to give sql hints on the type of joins to use if you are using joins. Be careful here though, while one hint may speed up your query once, it may slow down your query 10 fold the next time through depending on the data and parameters. Finally, make sure your database is well indexed. A good place to start is any field that is contained in a where clause probably should have a index on it.
What steps should be necessary to optimize a poorly performing query?
I know this is a broad question, but I've inherited several poorly performing queries and badly need to optimize them. I was wondering what the most common optimization steps are. So, what steps do some of you take when faced with the same situation? Related Question: What generic techniques can be applied to optimize SQL queries?
[ "\nLook at the execution plan in query analyzer\nSee what step costs the most\nOptimize the step!\nReturn to step 1 [thx to Vinko]\n\n", "In SQL Server you can look at the Query Plan in Query Analyzer or Management Studio. This will tell you the rough percentage of time spent in each batch of statements. You'll want to look for the following:\n\nTable scans; this means you are completely missing indexes\nIndex scans; your query may not be using the correct indexes\nThe thickness of the arrows between each step in a query tells you how many rows are being produced by that step, very thick arrows means you are processing a lot of rows, and can indicate that some joins need to be optimized.\n\nSome other general tips:\n\nA large amount of conditional statements, such as multiple if-else statements, can cause SQL Server to constantly rebuild the query plan. You can check for this using Profiler.\nMake sure that different queries aren't blocking each other, such as an update statement blocking a select statement. This can be avoided by specifying the (nolock) hint in SQL Server select statements.\nAs others have mentioned, try out the Performance Tuning wizard in Management Studio.\n\nFinally, I would highly recommend creating a set of load tests (using Visual Studio 2008 Test Edition), which you can use to simulate your application's behavior when dealing with a large amount of requests. Some SQL performance bottlenecks only manifest themselves under these circumstances, and being able to reproduce them makes it a lot easier to fix.\n", "Indexes may be a good place to start... \nThe low hanging fruit can be knocked down with the SQL Server Index Tuning Wizard.\n", "I'm not sure about other databases, but for SQL Server I recommend the Execution Plan. It very clearly (albeit with lots of vertical and horizontal scrolling, unless you've got a 400\" monitor!) shows what steps of your query are sucking up the time.\nIf you've got one step that takes a crazy 80%, then maybe an index could be added, then after tweaking the index, re-run the Execution Plan to find your next biggest step.\nAfter a couple tweaks you may find that there really are no steps that stand out from the others i.e. they're all 1-2% each. If that is the case, then you might then need to see if there is a way you can cut down the amount of data included in your query, do those four million closed sales orders need to be included in the \"Active Sales Orders\" query? No, so exclude all those with STATUS='C' ... or something like that.\nAnother improvement you'll see from the Execution Plan is bookmark lookups, basically it finds a match in the index, but then SQL Server has to quickly trawl through the table to find the record you want. This operation might at times take longer than just scanning the table in the first place would have, if that is the case, do you really need that index? \nWith indexes, and especially with SQL Server 2005 you should look to the INCLUDE clause, this basically allows you to have a column in an index without really being in the index, so if all the data you need for your query is in your index or is an included columnn then SQL Server doesn't have to even look at the table, a big performance pickup.\n", "There are a couple of things you can look at to optimize your query performance.\n\nEnsure that you just have the minimum of data. Make sure you select only the columns you need. Reduce field sizes to a minimum.\nConsider de-normalising your database to reduce joins\nAvoid loops (i.e. 
fetch cursors), stick to set operations.\nImplement the query as a stored procedure as this is pre-compiled and will execute faster.\nMake sure that you have the correct indexes set up. If your database is used mostly for searching then consider more indexes.\nUse the execution plan to see how the processing is done. What you want to avoid is a table scan as this is costly.\nMake sure that the Auto Statistics is set to on. SQL needs this to help decide the optimal execution. See Mike Gunderloy's great post for more info. Basics of Statistics in SQL Server 2005\nMake sure your indexes are not fragmented Reducing SQL Server Index Fragmentation\nMake sure your tables are not fragmented. How to Detect Table Fragmentation in SQL Server 2000 and 2005\n\n", "Look at the indexes on the tables that make the query. An indexes may be needed on particular fields that participate in the where clause. Also look at the fields used in the joins in the query (if joins exist). If indexes already exist, look at the type of index. \nFailing that (because there are negatives to using locking hints) Look at locking hints and explicitly naming the index to use in the join. Using NOLOCKS is more obvious if you're getting a lot of deadlocked transactions.\nDo what roman and Andy S mentioned first though.\n", "The execution plan is a great start and will help you figure out what part of your query you need to tackle.\nOnce you figure out the where, it is time to tackle the how and why. Take a look at the type of queries you are trying to preform. Avoid loops at all cost as they are slow. Avoid cursors at all costs because they are slow. Stick to set based queries when ever possible. \nThere are ways to give sql hints on the type of joins to use if you are using joins. Be careful here though, while one hint may speed up your query once, it may slow down your query 10 fold the next time through depending on the data and parameters.\nFinally, make sure your database is well indexed. A good place to start is any field that is contained in a where clause probably should have a index on it.\n" ]
[ 15, 7, 3, 2, 2, 1, 1 ]
[]
[]
[ "optimization", "sql_server" ]
stackoverflow_0000061008_optimization_sql_server.txt
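Most of the answers above converge on the same first move: pull up the execution plan and attack the costliest step. If you would rather fetch the plan programmatically than through Query Analyzer or Management Studio, a minimal C# sketch follows; the connection string and query text are placeholders, and SET SHOWPLAN_XML assumes SQL Server 2005 or later.

using System;
using System.Data.SqlClient;

// Sketch only: asks SQL Server for the estimated XML execution plan of a query,
// so table scans, thick arrows and bookmark lookups can be inspected in code.
static class PlanInspector
{
    public static string GetEstimatedPlanXml(string connectionString, string query)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                // Switch the session to plan-only mode; statements are no longer executed.
                command.CommandText = "SET SHOWPLAN_XML ON";
                command.ExecuteNonQuery();

                // "Running" the query now returns a single row holding the plan XML.
                command.CommandText = query;
                string plan = Convert.ToString(command.ExecuteScalar());

                // Restore normal execution for the rest of the session.
                command.CommandText = "SET SHOWPLAN_XML OFF";
                command.ExecuteNonQuery();
                return plan;
            }
        }
    }
}
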
Q: When did browsers start supporting multiple classes per tag? You can use more than one css class in an HTML tag in current web browsers, e.g.: <div class="style1 style2 style3">foo bar</div> This hasn't always worked; with which versions did the major browsers begin correctly supporting this feature? A: @Wayne Kao - IE6 has no problem reading more than one class name on an element, and applying styles that belong to each class. What the article is referring to is creating new styles based on the combination of class names. <div class="bold italic">content</div> .bold { font-weight: 800; } .italic { font-style: italic; { IE6 would apply both bold and italic styles to the div. However, say we wanted all elements that have bold and italic classes to also be purple. In Firefox (or possibly IE7, not sure), we could write something like this: .bold.italic { color: purple; } That would not work in IE6. A: I believe Firefox has always supported this, at least since v1.5 anyway. IE only added full support in v7. IE6 does partially support it, but its pretty buggy, so don't count on it working properly. A: According to blooberry, IE4 and Netscape 4.x do not support this. HTML 4.0 spec says class = cdata-list [CS] This attribute assigns a class name or set of class names to an element. Any number of elements may be assigned the same class name or names. Multiple class names must be separated by white space characters. A: Apparently IE 6 doesn't handle these correctly if you have CSS selectors that contain multiple class names: http://www.ryanbrill.com/archives/multiple-classes-in-ie/
When did browsers start supporting multiple classes per tag?
You can use more than one css class in an HTML tag in current web browsers, e.g.: <div class="style1 style2 style3">foo bar</div> This hasn't always worked; with which versions did the major browsers begin correctly supporting this feature?
[ "@Wayne Kao - IE6 has no problem reading more than one class name on an element, and applying styles that belong to each class. What the article is referring to is creating new styles based on the combination of class names.\n<div class=\"bold italic\">content</div>\n\n.bold {\n font-weight: 800;\n}\n\n.italic {\n font-style: italic;\n{\n\nIE6 would apply both bold and italic styles to the div. However, say we wanted all elements that have bold and italic classes to also be purple. In Firefox (or possibly IE7, not sure), we could write something like this:\n.bold.italic {\n color: purple;\n}\n\nThat would not work in IE6. \n", "I believe Firefox has always supported this, at least since v1.5 anyway. IE only added full support in v7. IE6 does partially support it, but its pretty buggy, so don't count on it working properly. \n", "According to blooberry, IE4 and Netscape 4.x do not support this. HTML 4.0 spec says\n\nclass = cdata-list [CS]\nThis attribute\nassigns a class name or set of class\nnames to an element. Any number of\nelements may be assigned the same\nclass name or names. Multiple class\nnames must be separated by white space\ncharacters.\n\n", "Apparently IE 6 doesn't handle these correctly if you have CSS selectors that contain multiple class names:\nhttp://www.ryanbrill.com/archives/multiple-classes-in-ie/\n" ]
[ 9, 2, 2, 1 ]
[]
[]
[ "browser", "css", "html" ]
stackoverflow_0000061051_browser_css_html.txt
Q: HTML.Button in ASP.NET MVC Starting from ASP.NET MVC Preview 3, HTML.Button ( and other related HTML controls) are no longer supported. The question is, what is the equivalent for them? I've an app that was built using Preview 2, now I have to make it compatible with the latest CTP releases. A: Just write <input type="button" ... /> into your html. There's nothing special at all with the html controls. A: I figured it out. It goes something like this: <form method="post" action="<%= Html.AttributeEncode(Url.Action("CastUpVote")) %>"> <input type="submit" value="<%=ViewData.Model.UpVotes%> up votes" /> </form> A: Several of the extension methods got moved to Microsoft.Web.Mvc, which is the MVC Futures DLL. You might want to look there for things that have gone missing.
HTML.Button in ASP.NET MVC
Starting from ASP.NET MVC Preview 3, HTML.Button (and other related HTML controls) are no longer supported. The question is, what is the equivalent for them? I have an app that was built using Preview 2; now I have to make it compatible with the latest CTP releases.
[ "Just write <input type=\"button\" ... /> into your html. There's nothing special at all with the html controls.\n", "I figured it out. It goes something like this:\n<form method=\"post\" action=\"<%= Html.AttributeEncode(Url.Action(\"CastUpVote\")) %>\">\n<input type=\"submit\" value=\"<%=ViewData.Model.UpVotes%> up votes\" />\n</form>\n\n", "Several of the extension methods got moved to Microsoft.Web.Mvc, which is the MVC Futures DLL. You might want to look there for things that have gone missing.\n" ]
[ 13, 11, 4 ]
[ "<asp:Button> is the ASP.NET equivalent to the HTML.Button. It will by default generate an <input type=\"button\">. (This is the System.Web.UI.WebControls.Button class)\n" ]
[ -12 ]
[ "asp.net_mvc", "html" ]
stackoverflow_0000059267_asp.net_mvc_html.txt
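The answers above amount to "write the <input> tag yourself" or "look in the MVC Futures assembly". If you want the convenience of a helper call back, a hand-rolled HtmlHelper extension is enough; the Button method below is invented for this sketch and is not part of the framework, so treat the names as assumptions.

using System.Web;
using System.Web.Mvc;

// Illustrative only: a do-it-yourself replacement for the removed Html.Button helper.
public static class ButtonExtensions
{
    public static string Button(this HtmlHelper helper, string name, string value)
    {
        // Emit a plain <input type="button" ... /> with encoded attribute values.
        return string.Format("<input type=\"button\" name=\"{0}\" value=\"{1}\" />",
            HttpUtility.HtmlAttributeEncode(name),
            HttpUtility.HtmlAttributeEncode(value));
    }
}

In a view it would then be used as <%= Html.Button("vote", "Cast up vote") %>, assuming the extension's namespace is imported on the page.
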
Q: How do I locate a Word application window? I have a VB.net test application that clicks a link that opens the Microsoft Word application window and displays the document. How do I locate the Word application window so that I can grab some text from it? A: I've done something similar with a SourceSafe dialog, which I posted on my blog. Basically, I used either Spy++ or Winspector to find out the window class name, and make Win32 calls to do stuff with the window. I've put the source on my blog: http://harriyott.com/2006/07/sourcesafe-cant-leave-well-alone.aspx A: Are you trying to activate the word app? If you want full control, you need to automate word from your vb.net app. Check here for some samples: 1, 2 A: You can use the Word COM object to open the work document and then you manipulate it. Make sure to add a reference for Microsoft Word first. Imports System.Runtime.InteropServices Imports Microsoft.Office.Interop.Word Public Class Form1 Inherits System.Windows.Forms.Form Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click Dim strFileName As String Dim wordapp As New Microsoft.Office.Interop.Word.Application Dim doc As Microsoft.Office.Interop.Word.Document Try doc = wordapp.Documents.Open("c:\testdoc.doc") doc.Activate() Catch ex As COMException MessageBox.Show("Error accessing Word document.") End Try End Sub End Class The doc object is a handle for the instance of Word you have created and you can use all the normal options (save, print etc). You can do likewise with the wordapp. A trick is to use the macro editor in Word to record what you want to do. You can then view this in the Macro Editor. This give you a great starting point for your VB code. Also, be sure to dispose of the Word COM objects at the end.
How do I locate a Word application window?
I have a VB.net test application that clicks a link that opens the Microsoft Word application window and displays the document. How do I locate the Word application window so that I can grab some text from it?
[ "I've done something similar with a SourceSafe dialog, which I posted on my blog. Basically, I used either Spy++ or Winspector to find out the window class name, and make Win32 calls to do stuff with the window. I've put the source on my blog: http://harriyott.com/2006/07/sourcesafe-cant-leave-well-alone.aspx\n", "Are you trying to activate the word app? If you want full control, you need to automate word from your vb.net app. Check here for some samples: 1, 2\n", "You can use the Word COM object to open the work document and then you manipulate it. Make sure to add a reference for Microsoft Word first.\nImports System.Runtime.InteropServices\nImports Microsoft.Office.Interop.Word\n\nPublic Class Form1\n\nInherits System.Windows.Forms.Form\n\nPrivate Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click\n\nDim strFileName As String\nDim wordapp As New Microsoft.Office.Interop.Word.Application\nDim doc As Microsoft.Office.Interop.Word.Document\n\nTry\n doc = wordapp.Documents.Open(\"c:\\testdoc.doc\")\n doc.Activate()\n\nCatch ex As COMException\n\n MessageBox.Show(\"Error accessing Word document.\")\n\nEnd Try\n\nEnd Sub\n\nEnd Class\n\nThe doc object is a handle for the instance of Word you have created and you can use all the normal options (save, print etc). You can do likewise with the wordapp. A trick is to use the macro editor in Word to record what you want to do. You can then view this in the Macro Editor. This give you a great starting point for your VB code.\nAlso, be sure to dispose of the Word COM objects at the end.\n" ]
[ 1, 1, 1 ]
[]
[]
[ "interop", "ms_office", "ms_word", "vb.net", "windows" ]
stackoverflow_0000061307_interop_ms_office_ms_word_vb.net_windows.txt
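The answers offer two routes: find the window with Win32 calls (after identifying its class name with Spy++ or Winspector) or automate Word directly. A C# sketch of the Win32 route is below; it assumes the top-level Word window uses the class name "OpusApp", which is what Spy++ usually reports for WinWord but should be verified on the target machine.

using System;
using System.Runtime.InteropServices;
using System.Text;

// Sketch: locate Word's main window by class name and read its caption.
static class WordWindowFinder
{
    [DllImport("user32.dll", SetLastError = true)]
    private static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    private static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxLength);

    public static string GetWordWindowTitle()
    {
        // "OpusApp" is the class name typically reported for WinWord's top-level window.
        IntPtr handle = FindWindow("OpusApp", null);
        if (handle == IntPtr.Zero)
            return null; // Word is not running, or the class name differs on this version.

        var title = new StringBuilder(256);
        GetWindowText(handle, title, title.Capacity);
        return title.ToString();
    }
}

Grabbing text out of the document itself is still easier through the Word object model, as in the VB.NET answer, than through raw window handles.
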
Q: NUnit - How to test all classes that implement a particular interface If I have interface IFoo, and have several classes that implement it, what is the best/most elegant/cleverest way to test all those classes against the interface? I'd like to reduce test code duplication, but still 'stay true' to the principles of Unit testing. What would you consider best practice? I'm using NUnit, but I suppose examples from any Unit testing framework would be valid A: If you have classes that implement any one interface then they all need to implement the methods in that interface. In order to test these classes you need to create a unit test class for each of the classes. Let's go with a smarter route instead; if your goal is to avoid code and test code duplication you might want to create an abstract class instead that handles the recurring code. E.g. you have the following interface: public interface IFoo { void CommonCode(); void SpecificCode(); } You might want to create an abstract class: public abstract class AbstractFoo : IFoo { public void CommonCode() { SpecificCode(); } public abstract void SpecificCode(); } Testing that is easy; implement the abstract class in the test class either as an inner class: [TestFixture] public class TestClass { private class TestFoo : AbstractFoo { public bool hasCalledSpecificCode = false; public override void SpecificCode() { hasCalledSpecificCode = true; } } [Test] public void testCommonCallsSpecificCode() { TestFoo fooFighter = new TestFoo(); fooFighter.CommonCode(); Assert.That(fooFighter.hasCalledSpecificCode, Is.True); } } ...or let the test class extend the abstract class itself if that fits your fancy. [TestFixture] public class TestClass : AbstractFoo { bool hasCalledSpecificCode; public override void SpecificCode() { hasCalledSpecificCode = true; } [Test] public void testCommonCallsSpecificCode() { AbstractFoo fooFighter = this; hasCalledSpecificCode = false; fooFighter.CommonCode(); Assert.That(hasCalledSpecificCode, Is.True); } } Having an abstract class take care of common code that an interface implies gives a much cleaner code design. I hope this makes sense to you. As a side note, this is a common design pattern called the Template Method pattern. In the above example, the template method is the CommonCode method and SpecificCode is called a stub or a hook. The idea is that anyone can extend behavior without the need to know the behind the scenes stuff. A lot of frameworks rely on this behavioral pattern, e.g. ASP.NET where you have to implement the hooks in a page or a user control such as the generated Page_Load method which is called by the Load event, the template method calls the hooks behind the scenes. There are a lot more examples of this. Basically anything that you have to implement that is using the words "load", "init", or "render" is called by a template method. A: I disagree with Jon Limjap when he says, It is not a contract on either a.) how the method should be implemented and b.) what that method should be doing exactly (it only guarantees the return type), the two reasons that I glean would be your motive in wanting this kind of test. There could be many parts of the contract not specified in the return type. A language-agnostic example: public interface List { // adds o and returns the list public List add(Object o); // removes the first occurrence of o and returns the list public List remove(Object o); } Your unit tests on LinkedList, ArrayList, CircularlyLinkedList, and all the others should test not only that the lists themselves are returned, but also that they have been properly modified. There was an earlier question on design-by-contract, which can help point you in the right direction on one way of DRYing up these tests. If you don't want the overhead of contracts, I recommend test rigs, along the lines of what Spoike recommended: abstract class BaseListTest { abstract public List newListInstance(); public void testAddToList() { // do some adding tests } public void testRemoveFromList() { // do some removing tests } } class ArrayListTest < BaseListTest { List newListInstance() { new ArrayList(); } public void arrayListSpecificTest1() { // test something about ArrayLists beyond the List requirements } } A: I don't think this is best practice. The simple truth is that an interface is nothing more than a contract that a method is implemented. It is not a contract on either a.) how the method should be implemented and b.) what that method should be doing exactly (it only guarantees the return type), the two reasons that I glean would be your motive in wanting this kind of test. If you really want to be in control of your method implementation, you have the option of: Implementing it as a method in an abstract class, and inherit from that. You will still need to inherit it into a concrete class, but you are sure that unless it is explicitly overridden that method will do the correct thing. In .NET 3.5/C# 3.0, implementing the method as an extension method referencing to the Interface Example: public static ReturnType MethodName (this IMyinterface myImplementation, SomeObject someParameter) { //method body goes here } Any implementation properly referencing to that extension method will emit precisely that extension method so you only need to test it once. A: When testing an interface or base class contract, I prefer to let the test framework automatically take care of finding all of the implementers. This lets you concentrate on the interface under test and be reasonably sure that all implementations will be tested, without having to do a lot of manual implementation. For xUnit.net, I created a Type Resolver library to search for all implementations of a particular type (the xUnit.net extensions are just a thin wrapper over the Type Resolver functionality, so it can be adapted for use in other frameworks). In MbUnit, you can use a CombinatorialTest with UsingImplementations attributes on the parameters. For other frameworks, the base class pattern Spoike mentioned can be useful. Beyond testing the basics of the interface, you should also test that each individual implementation follows its particular requirements. A: How about a hierarchy of [TestFixture]s classes? Put the common test code in the base test class and inherit it into child test classes.. A: I don't use NUnit but I have tested C++ interfaces. I would first test a TestFoo class which is a basic implementation of it to make sure the generic stuff works. Then you just need to test the stuff that is unique to each interface.
NUnit - How to test all classes that implement a particular interface
If I have interface IFoo, and have several classes that implement it, what is the best/most elegant/cleverest way to test all those classes against the interface? I'd like to reduce test code duplication, but still 'stay true' to the principles of Unit testing. What would you consider best practice? I'm using NUnit, but I suppose examples from any Unit testing framework would be valid
[ "If you have classes implement any one interface then they all need to implement the methods in that interface. In order to test these classes you need to create a unit test class for each of the classes.\nLets go with a smarter route instead; if your goal is to avoid code and test code duplication you might want to create an abstract class instead that handles the recurring code. \nE.g. you have the following interface:\npublic interface IFoo {\n\n public void CommonCode();\n\n public void SpecificCode();\n\n}\n\nYou might want to create an abstract class:\npublic abstract class AbstractFoo : IFoo {\n\n public void CommonCode() {\n SpecificCode();\n }\n\n public abstract void SpecificCode();\n\n}\n\nTesting that is easy; implement the abstract class in the test class either as an inner class:\n[TestFixture]\npublic void TestClass {\n\n private class TestFoo : AbstractFoo {\n boolean hasCalledSpecificCode = false;\n public void SpecificCode() {\n hasCalledSpecificCode = true;\n }\n }\n\n [Test]\n public void testCommonCallsSpecificCode() {\n TestFoo fooFighter = new TestFoo();\n fooFighter.CommonCode();\n Assert.That(fooFighter.hasCalledSpecificCode, Is.True());\n }\n}\n\n...or let the test class extend the abstract class itself if that fits your fancy.\n[TestFixture]\npublic void TestClass : AbstractFoo {\n\n boolean hasCalledSpecificCode;\n public void specificCode() {\n hasCalledSpecificCode = true;\n }\n\n [Test]\n public void testCommonCallsSpecificCode() {\n AbstractFoo fooFighter = this;\n hasCalledSpecificCode = false;\n fooFighter.CommonCode();\n Assert.That(fooFighter.hasCalledSpecificCode, Is.True());\n } \n\n}\n\nHaving an abstract class take care of common code that an interface implies gives a much cleaner code design. \nI hope this makes sense to you.\n\nAs a side note, this is a common design pattern called the Template Method pattern. In the above example, the template method is the CommonCode method and SpecificCode is called a stub or a hook. The idea is that anyone can extend behavior without the need to know the behind the scenes stuff.\nA lot of frameworks rely on this behavioral pattern, e.g. ASP.NET where you have to implement the hooks in a page or a user controls such as the generated Page_Load method which is called by the Load event, the template method calls the hooks behind the scenes. There are a lot more examples of this. Basically anything that you have to implement that is using the words \"load\", \"init\", or \"render\" is called by a template method.\n", "I disagree with Jon Limjap when he says,\n\nIt is not a contract on either a.) how the method should be implemented and b.) what that method should be doing exactly (it only guarantees the return type), the two reasons that I glean would be your motive in wanting this kind of test.\n\nThere could be many parts of the contract not specified in the return type. 
A language-agnostic example:\npublic interface List {\n\n // adds o and returns the list\n public List add(Object o);\n\n // removed the first occurrence of o and returns the list\n public List remove(Object o);\n\n}\n\nYour unit tests on LinkedList, ArrayList, CircularlyLinkedList, and all the others should test not only that the lists themselves are returned, but also that they have been properly modified.\nThere was an earlier question on design-by-contract, which can help point you in the right direction on one way of DRYing up these tests.\nIf you don't want the overhead of contracts, I recommend test rigs, along the lines of what Spoike recommended:\nabstract class BaseListTest {\n\n abstract public List newListInstance();\n\n public void testAddToList() {\n // do some adding tests\n }\n\n public void testRemoveFromList() {\n // do some removing tests\n }\n\n}\n\nclass ArrayListTest < BaseListTest {\n List newListInstance() { new ArrayList(); }\n\n public void arrayListSpecificTest1() {\n // test something about ArrayLists beyond the List requirements\n }\n}\n\n", "I don't think this is best practice. \nThe simple truth is that an interface is nothing more than a contract that a method is implemented. It is not a contract on either a.) how the method should be implemented and b.) what that method should be doing exactly (it only guarantees the return type), the two reasons that I glean would be your motive in wanting this kind of test.\nIf you really want to be in control of your method implementation, you have the option of:\n\nImplementing it as a method in an abstract class, and inherit from that. You will still need to inherit it into a concrete class, but you are sure that unless it is explicitly overriden that method will do that correct thing.\nIn .NET 3.5/C# 3.0, implementing the method as an extension method referencing to the Interface\n\nExample:\npublic static ReturnType MethodName (this IMyinterface myImplementation, SomeObject someParameter)\n{\n //method body goes here\n}\n\nAny implementation properly referencing to that extension method will emit precisely that extension method so you only need to test it once.\n", "When testing an interface or base class contract, I prefer to let the test framework automatically take care of finding all of the implementers. This lets you concentrate on the interface under test and be reasonably sure that all implementations will be tested, without having to do a lot of manual implementation.\n\nFor xUnit.net, I created a Type Resolver library to search for all implementations of a particular type (the xUnit.net extensions are just a thin wrapper over the Type Resolver functionality, so it can be adapted for use in other frameworks).\nIn MbUnit, you can use a CombinatorialTest with UsingImplementations attributes on the parameters.\nFor other frameworks, the base class pattern Spoike mentioned can be useful.\n\nBeyond testing the basics of the interface, you should also test that each individual implementation follows its particular requirements.\n", "How about a hierarchy of [TestFixture]s classes? Put the common test code in the base test class and inherit it into child test classes..\n", "I don't use NUnit but I have tested C++ interfaces. I would first test a TestFoo class which is a basic implementation of it to make sure the generic stuff works. Then you just need to test the stuff that is unique to each interface.\n" ]
[ 15, 11, 3, 1, 1, 0 ]
[]
[]
[ ".net", "c#", "nunit", "unit_testing" ]
stackoverflow_0000039003_.net_c#_nunit_unit_testing.txt
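The base-fixture idea in the answers can be made concrete with a generic NUnit fixture, so each implementation pays only a one-line subclass. The sketch below is an outline rather than a drop-in solution: IFoo, FastFoo and SlowFoo stand in for the real interface and its implementations, and the single test is a placeholder for the actual contract checks.

using NUnit.Framework;

// Stand-ins for the real interface and its implementations.
public interface IFoo
{
    int Age { get; set; }
}

public class FastFoo : IFoo { public int Age { get; set; } }
public class SlowFoo : IFoo { public int Age { get; set; } }

// Contract tests written once against the interface; NUnit runs them for every
// concrete fixture that inherits from this abstract base.
public abstract class FooContractTests<T> where T : IFoo, new()
{
    [Test]
    public void Age_RoundTrips()
    {
        var foo = new T();
        foo.Age = 42;
        Assert.AreEqual(42, foo.Age);
    }
}

// One tiny subclass per implementation is the only duplication left.
[TestFixture] public class FastFooTests : FooContractTests<FastFoo> { }
[TestFixture] public class SlowFooTests : FooContractTests<SlowFoo> { }
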
Q: Spell Checking Service with python using mod_python What is the best available method for developing a spell check engine (for example, with aspell_python), that works with apache mod_python? apache 2.0.59+RHEL4+mod_python+aspell_python seems to crash. Is there any alternative to using aspell_python? A: Looks like RHEL4 is the culprit. Works well on Fedore 7 (the version of apache is newer and there is no crash)
Spell Checking Service with python using mod_python
What is the best available method for developing a spell check engine (for example, with aspell_python), that works with apache mod_python? apache 2.0.59+RHEL4+mod_python+aspell_python seems to crash. Is there any alternative to using aspell_python?
[ "Looks like RHEL4 is the culprit. Works well on Fedore 7 (the version of apache is newer and there is no crash)\n" ]
[ 1 ]
[]
[]
[ "aspell", "mod_python", "spell_checking" ]
stackoverflow_0000061556_aspell_mod_python_spell_checking.txt
Q: Can an iPhone App Be Run as Root? I am thinking about the design of an iPhone app I'd like to create. One possible problem is that this application will have to run as root (to access certain network ports). In a typical UNIX app, I'd just get the app to run with setuid, but I'm wondering if that is possible with an iPhone app. I've read this question in Apple's forum, which is discouraging: http://discussions.apple.com/thread.jspa?threadID=1664575 I understand that Apple wants to limit what a program can do, but there are plenty of good, legitimate reasons for a user to run a program with elevated privileges. I'm not trying to create a hacker tool here. I'm sure I could get around this on a jail-broken iPhone, but that's not what I'm after. Is there any way to run an app with elevated privileges on an unbroken iPhone? (BTW, there is no need to warn me about the NDA.) A: Section 3.3.4 of the iPhone SDK Agreement suggests that you mustn't work outside your sandbox. Given that Apple has been somewhat arbitrary on which applications they permit, you should definitely double-check with them before you start developing. Compared to 2.0.x, the sandbox restrictions have actually increased in 2.1; you can no longer even read from another application's sandbox. So, even if it currently is possible to elevate your app's privileges, it very likely won't be in a future release. A: The only options you have is Run the application as root on the iphone Set the applications setuid bit and owner root. I can't see any of them being blessed by Apple. I guess it depends on what you want to do with the privileges, if you're lucky there might be more fine grained privileges available, but afaik you have to choose a port above 1024. A: Doesn't matter one bit if you can do this on your normal desktop computer. The iPhone is not a normal desktop computer. Unlike a desktop computer, the only way to get an application on the iPhone without a jailbreak is to get it from the App Store. The only way to get on the App Store is to follow Apple's rules, and Apple's rules clearly include "no privilege escalation", "no escaping the sandbox", and "no accessing network ports outside the existing, provided APIs". What you want to do is not possible.
Can an iPhone App Be Run as Root?
I am thinking about the design of an iPhone app I'd like to create. One possible problem is that this application will have to run as root (to access certain network ports). In a typical UNIX app, I'd just get the app to run with setuid, but I'm wondering if that is possible with an iPhone app. I've read this question in Apple's forum, which is discouraging: http://discussions.apple.com/thread.jspa?threadID=1664575 I understand that Apple wants to limit what a program can do, but there are plenty of good, legitimate reasons for a user to run a program with elevated privileges. I'm not trying to create a hacker tool here. I'm sure I could get around this on a jail-broken iPhone, but that's not what I'm after. Is there any way to run an app with elevated privileges on an unbroken iPhone? (BTW, there is no need to warn me about the NDA.)
[ "Section 3.3.4 of the iPhone SDK Agreement suggests that you mustn't work outside your sandbox.\nGiven that Apple has been somewhat arbitrary on which applications they permit, you should definitely double-check with them before you start developing.\nCompared to 2.0.x, the sandbox restrictions have actually increased in 2.1; you can no longer even read from another application's sandbox. So, even if it currently is possible to elevate your app's privileges, it very likely won't be in a future release.\n", "The only options you have is\n\nRun the application as root on the iphone\nSet the applications setuid bit and owner root.\n\nI can't see any of them being blessed by Apple.\nI guess it depends on what you want to do with the privileges, if you're lucky there might be more fine grained privileges available, but afaik you have to choose a port above 1024.\n", "Doesn't matter one bit if you can do this on your normal desktop computer. The iPhone is not a normal desktop computer.\nUnlike a desktop computer, the only way to get an application on the iPhone without a jailbreak is to get it from the App Store. The only way to get on the App Store is to follow Apple's rules, and Apple's rules clearly include \"no privilege escalation\", \"no escaping the sandbox\", and \"no accessing network ports outside the existing, provided APIs\".\nWhat you want to do is not possible.\n" ]
[ 5, 2, 0 ]
[]
[]
[ "iphone", "permissions", "setuid" ]
stackoverflow_0000061346_iphone_permissions_setuid.txt
Q: Accessing Firefox cache from an XPCOM component Does anybody know how to get local path of file cached by Firefox based on its URL from an XPCOM component? A: To access cached items, new cache session must be created using createSession method provided in nsICacheService. This method creates nsICacheSession object. Information about cache item can be obtained using openCacheEntry method of the session object (method return nsICacheEntryDescriptor). To read data user must open input stream using openInputStream method of the cache entry object.
Accessing Firefox cache from an XPCOM component
Does anybody know how to get local path of file cached by Firefox based on its URL from an XPCOM component?
[ "To access cached items, new cache session must be created using createSession method provided in nsICacheService. This method creates nsICacheSession\nobject. Information about cache item can be obtained using openCacheEntry method of the session object (method return nsICacheEntryDescriptor). To read data user must open input stream using openInputStream method of the cache entry object.\n" ]
[ 4 ]
[]
[]
[ "c++", "firefox", "gecko", "xpcom" ]
stackoverflow_0000061453_c++_firefox_gecko_xpcom.txt
Q: How can I access the backing variable of an auto-implemented property? In the past we declared properties like this: public class MyClass { private int _age; public int Age { get{ return _age; } set{ _age = value; } } } Now we can do: public class MyClass { public int Age {get; set;} } My question is, how can I access the private variable that is created automatically using this notation? I would rather access the private variable and not the public accessor 'Age'. Is there a default notation to access the private variable, or is it just not possible? A: The aim of the new automatic properties is to reduce the amount of boilerplate code you need to write when you just have a simple property that doesn't need any special logic in the get or the set. If you want to access the private member that these properties use, that's usually for a few reasons: You need more than just a simple get/set - in this case, you should just avoid using automatic properties for this member. You want to avoid the performance hit of going through the get or set and just use the member directly - in this case, I'd be surprised if there really was a performance hit. The simple get/set members are very very easy to inline, and in my (admittedly limited) testing I haven't found a difference between using the automatic properties and accessing the member directly. You only want to have public read access (i.e. just a 'get') and the class write to the member directly - in this case, you can use a private set in your automatic property. i.e. public class MyClass { public int Age {get; private set;} } This usually covers most of the reasons for wanting to directly get to the backing field used by the automatic properties. A: Your usage of automatic properties implies that you do not need any getting/setting logic for the property thus a private backing variable is unnecessary. Don't use automatic properties if you have any complex logic in your class. Just go private int _age and normal getters/setters as you normally would. IMO, automatic properties are more suited for quickly implementing throwaway objects or temporary data capsules like: public class TempMessage { public int FromID { get; set; } public int ToID { get; set; } public string Message { get; set; } } Where you don't need much logic. A: This syntax is commonly called "syntax sugar", which means that the compiler takes that syntax and translates it into something else. In your example, the compiler would generate code that looks something like this: [CompilerGenerated] private int <Age>k_BackingField; public int Age { [CompilerGenerated] get { return this.<Age>k_BackingField; } [CompilerGenerated] set { this.<Age>k_BackingField = value; } } Even knowing all of that, you could probably access the backing field directly but that sort of defeats the purpose of using automatic properties. I say probably here because you then depend on an implementation detail that could change at any point in a future release of the C# compiler. A: Behind the scenes what happens is the injection of a private member variable, prefixed with <>k__AutomaticallyGeneratedPropertyField# From C# 3.0 Automatic Properties explained Although it may be possible to use that private member directly, it's very hacky and unnecessary. A: You shouldn't, and it's very unlikely you need to. If you need to access the property, just use the public property (e.g. this.Age). There's nothing special about the private field backing the public property, using it in preference to the property is just superstition. A: You can't, it's a language feature as opposed to an IDE feature. To be honest I'd prefer the IDE to add the private variable in for you. I agree that it is slightly weird for the class to internally have to use the public entry point to access its own variables. Hence I don't use this new feature that much myself.
How can I access the backing variable of an auto-implemented property?
In the past we declared properties like this: public class MyClass { private int _age; public int Age { get{ return _age; } set{ _age = value; } } } Now we can do: public class MyClass { public int Age {get; set;} } My question is, how can I access the private variable that is created automatically using this notation? I would rather access the private variable and not the public accessor 'Age'. Is there a default notation to access the private variable, or is it just not possible?
[ "The aim of the new automatic properties is to reduce the amount of boilerplate code you need to write when you just have a simple property that doesn't need any special logic in the get or the set. \nIf you want to access the private member that these properties use, that's usually for a few reasons:\n\nYou need to more than just a simple get/set - in this case, you should just avoid using automatic properties for this member.\nYou want to avoid the performance hit of going through the get or set and just use the member directly - in this case, I'd be surprised if there really was a performance hit. The simple get/set members are very very easy to inline, and in my (admittedly limited) testing I haven't found a difference between using the automatic properties and accessing the member directly. \nYou only want to have public read access (i.e. just a 'get') and the class write to the member directly - in this case, you can use a private set in your automatic property. i.e.\npublic class MyClass\n{\n public int Age {get; private set;} \n}\n\nThis usually covers most the reasons for wanting to directly get to the backing field used by the automatic properties.\n", "Your usage of automatic properties implies that you do not need any getting/setting logic for the property thus a private backing variable is unneccessary.\nDon't use automatic properties if you have any complex logic in your class. Just go private int _age and normal getters/setters as you normally would.\nIMO, automatic properties are more suited for quickly implementing throwaway objects or temporary data capsules like:\npublic class TempMessage {\n public int FromID { get; set; }\n public int ToID { get; set; }\n public string Message { get; set; }\n}\n\nWhere you don't need much logic.\n", "This syntax is commonly called \"syntax sugar\", which means that the compiler takes that syntax and translates it into something else. In your example, the compiler would generate code that looks something like this:\n[CompilerGenerated]\nprivate int <Age>k_BackingField;\n\npublic int Age\n{\n [CompilerGenerated]\n get\n {\n return this.<Age>k_BackingField;\n }\n [CompilerGenerated]\n set\n {\n this.<Age>k_BackingField = value;\n }\n\nEven knowing all of that, you could probably access the backing field directly but that sort of defeats the purpose of using automatic properties. I say probably here because you then depend on an implementation detail that could change at any point in a future release of the C# compiler.\n", "\nBehind the scenes what happens is the injection of a private member variable, prefixed with <>k__AutomaticallyGeneratedPropertyField#\n\nFrom C# 3.0 Automatic Properties explained\nAlthough it may be possible to use that private member directly, it's very hacky and unnecessary.\n", "You shouldn't, and it's very unlikely you need to. If you need to access the property, just use the public property (e.g. this.Age). There's nothing special about the private field backing the public property, using it in preference to the property is just superstition.\n", "You can't, it's a language feature as opposed to a IDE feature. To be honest i'd prefer then IDE to add the private variable in for you. I agree that it is slightly weird for the class to internally have to use the public entry point to access its own variables. Hence I don't use this new feature that much myself.\n" ]
[ 92, 23, 12, 10, 7, 2 ]
[]
[]
[ "c#", "properties" ]
stackoverflow_0000061480_c#_properties.txt
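If the goal is only to confirm that a backing field exists (rather than to write code against it), reflection will list the compiler-generated fields. The sketch below assumes a current C# compiler, which names the field along the lines of <Age>k__BackingField; the exact name is an implementation detail and should not be relied on.

using System;
using System.Reflection;

public class MyClass
{
    public int Age { get; set; }
}

static class BackingFieldDemo
{
    static void Main()
    {
        var instance = new MyClass { Age = 30 };

        // Enumerate the private instance fields the compiler generated for the auto-properties.
        foreach (FieldInfo field in typeof(MyClass).GetFields(
            BindingFlags.Instance | BindingFlags.NonPublic))
        {
            // Typically prints something like "<Age>k__BackingField = 30".
            Console.WriteLine("{0} = {1}", field.Name, field.GetValue(instance));
        }
    }
}
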
Q: Breakpoints in core .NET runtime? I have a third party library that internally constructs and uses the SqlConnection class. I can inherit from the class, but it has a ton of overloads, and so far I have been unable to find the right one. What I'd like is to tack on a parameter to the connection string being used. Is there a way for me to put a breakpoint in the .NET library core itself? Specifically in the constructors of the SqlConnection class, so that I can look at the stack trace and see where it is actually being constructed? Barring that, is there some other way I can do this? Specifically, what I want to do is to tack on the Application Name parameter, so that our application is more easily identified on the server when looking at connections. Edit: Well, it appears I need more help. I think I've enabled everything related to symbol server support, and I've noticed that the directory I configured has filled up with directories that contain .pdb files. Still, I can't get the actual source to the SqlConnection class to become available. Is there some definite guide to how to do this successfully? A: You can download .NET source code and set break point right in .NET FW source code. You can use NetMassDownloader to grab .NET sources quickly. A: According to this article you can download the source code for the .NET framework and then debug it using visual studio: http://weblogs.asp.net/scottgu/archive/2007/10/03/releasing-the-source-code A: I almost forgot to mention Deblector - it's a Reflector plugin, that allows you to debug almost any .net app without source codes :) A: While source debugging is defintely better, you don't need pdbs or source for the VS debugger to set a bp on the function you want. Make sure you go to Tools/Options/Debugger and turn off the option called "Just My Code". Since the framework is not 'your code' the debugger unhelpfully prevents you from setting breakpoints there. Next you need the full name of the method as it appears in the metadata. This includes any namespaces it is nested in. I'd recommend ILDasm or Reflector if you need to find the name. On the breakpoints window in the upper left corner is a "new bp" menu button. One of the choices is to set a bp on function name. When the dialog comes up uncheck having intellisense check the name since you don't have a project. I hope that helps. A: And if you can't use source level debugging with the .Net framework source code Microsoft supplied, you could try a different debugger. Like mdbg or even windbg. edit This explains getting the released parts of .Net framework and how to set breakpoints in great detail. The NetMassDownloader will give you everything (pdb and source) in one download. But not all source code of the .Net framework is available. If your SqlConnection is not you can always use IL debuggers like the ones I mentioned. And don't forget Lutz's Reflector to give you a look at the source code anyway. A: OK, if you want definitive guide, here it is: Configuring Visual Studio to Debug .NET Framework Source Code If you want some help, go ahead and tell use which steps did you perform?
Breakpoints in core .NET runtime?
I have a third party library that internally constructs and uses the SqlConnection class. I can inherit from the class, but it has a ton of overloads, and so far I have been unable to find the right one. What I'd like is to tack on a parameter to the connection string being used. Is there a way for me to put a breakpoint in the .NET library core itself? Specifically in the constructors of the SqlConnection class, so that I can look at the stack trace and see where it is actually being constructed? Barring that, is there some other way I can do this? Specifically, what I want to do is to tack on the Application Name parameter, so that our application is more easily identified on the server when looking at connections. Edit: Well, it appears I need more help. I think I've enabled everything related to symbol server support, and I've noticed that the directory I configured has filled up with directories that contain .pdb files. Still, I can't get the actual source to the SqlConnection class to become available. Is there a definitive guide to how to do this successfully?
[ "You can download .NET source code and set break point right in .NET FW source code.\nYou can use NetMassDownloader to grab .NET sources quickly.\n", "According to this article you can download the source code for the .NET framework and then debug it using visual studio:\nhttp://weblogs.asp.net/scottgu/archive/2007/10/03/releasing-the-source-code\n", "I almost forgot to mention Deblector - it's a Reflector plugin, that allows you to debug almost any .net app without source codes :)\n", "While source debugging is defintely better, you don't need pdbs or source for the VS debugger to set a bp on the function you want. \nMake sure you go to Tools/Options/Debugger and turn off the option called \"Just My Code\". Since the framework is not 'your code' the debugger unhelpfully prevents you from setting breakpoints there.\nNext you need the full name of the method as it appears in the metadata. This includes any namespaces it is nested in. I'd recommend ILDasm or Reflector if you need to find the name. \nOn the breakpoints window in the upper left corner is a \"new bp\" menu button. One of the choices is to set a bp on function name. When the dialog comes up uncheck having intellisense check the name since you don't have a project. I hope that helps.\n", "And if you can't use source level debugging with the .Net framework source code Microsoft supplied, you could try a different debugger. Like mdbg or even windbg.\nedit\nThis explains getting the released parts of .Net framework and how to set breakpoints in great detail. The NetMassDownloader will give you everything (pdb and source) in one download. But not all source code of the .Net framework is available. If your SqlConnection is not you can always use IL debuggers like the ones I mentioned. And don't forget Lutz's Reflector to give you a look at the source code anyway.\n", "OK, if you want definitive guide, here it is:\nConfiguring Visual Studio to Debug .NET Framework Source Code\nIf you want some help, go ahead and tell use which steps did you perform?\n" ]
[ 7, 3, 3, 3, 2, 0 ]
[]
[]
[ ".net", "breakpoints", "runtime", "sqlconnection" ]
stackoverflow_0000061272_.net_breakpoints_runtime_sqlconnection.txt
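Whichever way the debugging goes, the original goal (tagging connections with an Application Name so they show up identifiably in sp_who2 or sys.dm_exec_sessions) only needs the connection string. A hedged sketch of that piece is below; how the modified string is handed back to the third-party library depends entirely on that library's API, so only the string manipulation is shown.

using System.Data.SqlClient;

// Sketch: add an Application Name to an existing connection string.
static class ConnectionStringTagger
{
    public static string WithApplicationName(string original, string appName)
    {
        var builder = new SqlConnectionStringBuilder(original);

        // Only override it if the incoming string didn't already name the application
        // (".Net SqlClient Data Provider" is the provider's default value).
        if (string.IsNullOrEmpty(builder.ApplicationName) ||
            builder.ApplicationName == ".Net SqlClient Data Provider")
        {
            builder.ApplicationName = appName;
        }
        return builder.ConnectionString;
    }
}
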
Q: Switching editors in Eclipse with keyboard, rather than switching Design/Source In Eclipse, I can switch through open editors using control-page up/down. This works great, except for editors like XML or JavaScript, where there are Design and Source tabs. For those editors, it just toggles between the different tabs. Is there any way to get Eclipse to ignore them? I know about alt-F6 for "Next Editor", but that doesn't use the same order that the editor tabs are displayed in, so it's confusing. A: With Ctrl-E you can jump directly to any editor by typing the beginning of it's name. Quite handy when you've got a lot of editors open. A: You're right -- looks like Eclipse has acknowledged it as a bug. It's fixed in 3.5. A: I was initially thinking Alt-← and Alt-→ might do what you want, but that's more for going forward and backwards in history of tabs you've viewed. Which might sort of get you what you want, but is probably just as confusing as Alt-F6. I think it sounds more like a bug in Eclipse, might be worth going over to eclipse.org to see if there's a pre-existing bug for this.
Switching editors in Eclipse with keyboard, rather than switching Design/Source
In Eclipse, I can switch through open editors using control-page up/down. This works great, except for editors like XML or JavaScript, where there are Design and Source tabs. For those editors, it just toggles between the different tabs. Is there any way to get Eclipse to ignore them? I know about alt-F6 for "Next Editor", but that doesn't use the same order that the editor tabs are displayed in, so it's confusing.
[ "With Ctrl-E you can jump directly to any editor by typing the beginning of it's name. Quite handy when you've got a lot of editors open.\n", "You're right -- looks like Eclipse has acknowledged it as a bug. It's fixed in 3.5.\n", "I was initially thinking Alt-← and Alt-→ might do what you want, but that's more for going forward and backwards in history of tabs you've viewed. Which might sort of get you what you want, but is probably just as confusing as Alt-F6.\nI think it sounds more like a bug in Eclipse, might be worth going over to eclipse.org to see if there's a pre-existing bug for this.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "eclipse" ]
stackoverflow_0000061704_eclipse.txt