Q: Changing the Directory Structure in Subversion How do I create a branch in subversion that is 'deeper' than just the 'branches' directory? I have the standard trunk, tags and branches structure and I want to create a branch that is several directories deeper than the 'branches' directory. Using the standard svn move method, it gives me a folder not found error. I also tried copying it into the branches folder, checking it out, and then using 'svn move' to put it into the tree structure I wanted, but got a 'working copy admin area is missing' error. What do I need to do to create this? For the sake of illustration, let us suppose I want to create a branch to go directly into 'branches/version_1/project/subproject' (which does not exist yet). A: svn copy --parents http://url/to/subproject http://url/to/repository/branches/version_1/project/subproject That should create the directory you want to put the subproject in (--parents means "create the intermediate directories for me"). A: Since subversion doesn't actually think of branches as anything special other than more directories, you can always just create the directory tree you want (with svn mkdir) then copy the code you want into the tree location. Or just use the --parents flag @BlairC mentioned. A: I second the use of TortoiseSVN: simply right-click on the directory and go to TortoiseSVN->Branch/tag... to quickly create a branch at a specified directory. Be sure to fill out the URL to be what you want it to be on the resulting "Copy (Branch / Tag)" dialog window. A: If you're using TortoiseSVN, you can use its Repository Explorer to do such things. Makes it all pretty WYSIWYG simple. A: SVN doesn't really manage your branches. It simply does a wholesale copy. It's up to you how you want to manage it.
Changing the Directory Structure in Subversion
How do I create a branch in subversion that is 'deeper' than just the 'branches' directory? I have the standard trunk, tags and branches structure and I want to create a branch that is several directories deeper than the 'branches' directory. Using the standard svn move method, it gives me a folder not found error. I also tried copying it into the branches folder, checking it out, and then using 'svn move' to put it into the tree structure I wanted, but got a 'working copy admin area is missing' error. What do I need to do to create this? For the sake of illustration, let us suppose I want to create a branch to go directly into 'branches/version_1/project/subproject' (which does not exist yet).
[ "svn copy --parents http://url/to/subproject http://url/to/repository/branches/version_1/project/subproject\n\nThat should create the directory you want to put the subproject in (--parents means \"create the intermediate directories for me\").\n", "Since subversion doesn't actually think of branches as anything special other than more directories, you can always just create the directory tree you want (with svn mkdir) then copy the code you want into the tree location.\nOr just use the --parents flag @BlairC mentioned.\n", "I second the use of TortoiseSVN: simply right-click on the directory and go to TortoiseSVN->Branch/tag... to quickly create a branch at a specified directory. Be sure to fill out the URL to be what you want it to be on the resulting \"Copy (Branch / Tag)\" dialog window.\n", "If you're using TortoiseSVN, you can use its Repository Explorer to do such things. Makes it all pretty WYSIWYG simple.\n", "SVN doesn't really manage your branches. It simply does a wholesale copy. It's up to you how you want to manage it.\n" ]
[ 14, 3, 2, 1, 1 ]
[]
[]
[ "branch", "svn", "trunk" ]
stackoverflow_0000052794_branch_svn_trunk.txt
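To make the accepted answer concrete, here is a minimal Python sketch that drives the same svn copy --parents operation from a script; the repository URLs are the placeholder URLs from the thread, not verified endpoints, and this is only one possible way to automate the command:

import subprocess

# Placeholder URLs from the thread; substitute your own repository paths.
source = "http://url/to/subproject"
dest = "http://url/to/repository/branches/version_1/project/subproject"

# --parents creates branches/version_1/project on the server as part of the
# copy, so the nested branch path need not exist beforehand. A URL-to-URL
# copy commits immediately, hence the -m log message.
subprocess.run(
    ["svn", "copy", "--parents", source, dest,
     "-m", "Branch subproject under branches/version_1"],
    check=True,  # raise CalledProcessError if svn exits non-zero
)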
Q: CSS overflow table row positioning I have a table inside a div tag. The table has 40 rows in it and the div's height is set to show 10 rows of that table. CSS's overflow:auto lets me scroll through the 40 rows. All is well there. How can I, with JavaScript, cause the table to programmatically position to a given row (i.e., programmatically scroll the table up or down by row)? A: Where superHappyFunDiv is the ID of the container DIV and row is a 0-based row index: function scrollTo(row) { var container = document.getElementById("superHappyFunDiv"); var rows = container.getElementsByTagName("tr"); row = Math.min(Math.max(row, 0), rows.length-1); container.scrollTop = rows[row].offsetTop; } Will attempt to scroll the requested row to the top of the container. Tested in IE6 and FF3.
CSS overflow table row positioning
I have a table inside a div tag. The table has 40 rows in it and the div's height is set to show 10 rows of that table. CSS's overflow:auto lets me scroll through the 40 rows. All is well there. How can I, with JavaScript, cause the table to programmatically position to a given row (i.e., programmatically scroll the table up or down by row)?
[ "Where superHappyFunDiv is the ID of the container DIV and row is a 0-based row index:\nfunction scrollTo(row)\n{\n var container = document.getElementById(\"superHappyFunDiv\");\n var rows = container.getElementsByTagName(\"tr\");\n\n row = Math.min(Math.max(row, 0), rows.length-1);\n container.scrollTop = rows[row].offsetTop;\n}\n\nWill attempt to scroll the requested row to the top of the container. \nTested in IE6 and FF3.\n" ]
[ 16 ]
[]
[]
[ "css", "css_tables", "html", "javascript" ]
stackoverflow_0000052873_css_css_tables_html_javascript.txt
Q: Redirecting non-www URL to www using .htaccess I'm using Helicon's ISAPI Rewrite 3, which basically enables .htaccess in IIS. I need to redirect a non-www URL to the www version, i.e. example.com should redirect to www.example.com. I used the following rule from the examples but it affects subdomains: RewriteCond %{HTTPS} (on)? RewriteCond %{HTTP:Host} ^(?!www\.)(.+)$ [NC] RewriteCond %{REQUEST_URI} (.+) RewriteRule .? http(?%1s)://www.%2%3 [R=301,L] This works for the most part, but it also redirects sub.example.com to www.sub.example.com. How can I rewrite the above rule so that subdomains do not get redirected? A: Append the following RewriteCond: RewriteCond %{HTTP:Host} ^[^.]+\.[a-z]{2,5}$ [NC] That way it'll only apply the rule to nondottedsomething.uptofiveletters; as you can see, subdomain.domain.com will not match the condition and thus will not be rewritten. You can change [a-z]{2,5} for a stricter tld matching regex, as well as placing all the constraints for allowed chars in domain names (as [^.]+ is more permissive than strictly necessary). All in all I think in this case that wouldn't be necessary. EDIT: sadie spotted a flaw on the regex, changed the first part of it from [^.] to [^.]+ A: I've gotten more control using urlrewriter.net, something like: <unless header="Host" match="^www\."> <if url="^(https?://)[^/]*(.*)$"> <redirect to="$1www.domain.tld$2"/> </if> <redirect url="^(.*)$" to="http://www.domain.tld$1"/> </unless> A: Zigdon has the right idea except his regex isn't quite right. Use ^example\.com$ instead of his suggestion of: ^example\.com(.*) Otherwise you won't just be matching example.com, you'll be matching things like example.comcast.net, example.com.au, etc. A: @Vinko For your generic approach, I'm not sure why you chose to limit the length of the TLD in your regex. It's not very future-proof, and I'm unsure what benefit it's providing. It's actually not even "now-proof" because there's at least one 6-character TLD out there (.museum) which won't be matched. It seems unnecessary to me to do this. Couldn't you just do ^[^.]+\.[^.]+$? (note: the question-mark is part of the sentence, not the regex!) All that aside, there is a bigger problem with this approach that is: it will fail for domains that aren't directly beneath the TLD. This includes domains in Australia, the UK, Japan, and many other countries, which have hierarchies: .co.jp, .co.uk, .com.au, and so on. Whether or not that is of any concern to the OP, I don't know but it's something to be aware of if you're after a "fix all" answer. The OP hasn't yet made it clear whether he wants a generic solution or a solution for a single (or small group) of known domains. If it's the latter, see my other note about using Zigdon's approach. If it's the former, then proceed with Vinko's approach taking into account the information in this post. Edit: One thing I've left out until now, which may or may not be an option for you business-wise, is to go the other way. All our sites redirect http://www.domain.com to http://domain.com. The folks at http://no-www.org make a pretty good case (IMHO) for this being the "right" way to do it, but it's still certainly just a matter of preference. One thing is for sure though, it's far easier to write a generic rule for that kind of redirection than this one. A: @org 0100h Yes, there are many variables left out of the description of the problem, and all your points are valid ones and should be addressed in the event of an actual implementation. 
There are both pros and cons to your proposed regex. On the one hand it's easier and future-proof; on the other, do you really want to match example.foobar if sent in the Host header? There might be some edge cases when you'll end up redirecting to the wrong domain. A third alternative is modifying the regex to use a list of the actual domains, if more than one, like RewriteCond %{HTTP:Host} (example.com|example.net|example.org) [NC] (Note to chris, that one will change %1) @chrisofspades It's not meant to replace it; your condition number two ensures that it doesn't have www, whereas mine doesn't. It won't change the values of %1, %2, %3 because it doesn't store the matches (iow, it doesn't use parentheses). A: Can't you adjust the RewriteCond to only operate on example.com? RewriteCond %{HTTP:Host} ^example\.com(.*) [NC] A: Why don't you just have something like this in your vhost (of httpd) file? ServerName www.example.com ServerAlias example.com Of course that won't redirect; it will just carry on as normal
Redirecting non-www URL to www using .htaccess
I'm using Helicon's ISAPI Rewrite 3, which basically enables .htaccess in IIS. I need to redirect a non-www URL to the www version, i.e. example.com should redirect to www.example.com. I used the following rule from the examples but it affects subdomains: RewriteCond %{HTTPS} (on)? RewriteCond %{HTTP:Host} ^(?!www\.)(.+)$ [NC] RewriteCond %{REQUEST_URI} (.+) RewriteRule .? http(?%1s)://www.%2%3 [R=301,L] This works for the most part, but it also redirects sub.example.com to www.sub.example.com. How can I rewrite the above rule so that subdomains do not get redirected?
[ "Append the following RewriteCond:\nRewriteCond %{HTTP:Host} ^[^.]+\\.[a-z]{2,5}$ [NC]\n\nThat way it'll only apply the rule to nondottedsomething.uptofiveletters; as you can see, subdomain.domain.com will not match the condition and thus will not be rewritten.\nYou can change [a-z]{2,5} for a stricter tld matching regex, as well as placing all the constraints for allowed chars in domain names (as [^.]+ is more permissive than strictly necessary).\nAll in all I think in this case that wouldn't be necessary.\nEDIT: sadie spotted a flaw on the regex, changed the first part of it from [^.] to [^.]+\n", "I've gotten more control using urlrewriter.net, something like:\n<unless header=\"Host\" match=\"^www\\.\">\n <if url=\"^(https?://)[^/]*(.*)$\">\n <redirect to=\"$1www.domain.tld$2\"/>\n </if>\n <redirect url=\"^(.*)$\" to=\"http://www.domain.tld$1\"/>\n</unless>\n\n", "Zigdon has the right idea except his regex isn't quite right. Use\n^example\\.com$\ninstead of his suggestion of:\n^example\\.com(.*)\nOtherwise you won't just be matching example.com, you'll be matching things like example.comcast.net, example.com.au, etc.\n", "@Vinko\nFor your generic approach, I'm not sure why you chose to limit the length of the TLD in your regex. It's not very future-proof, and I'm unsure what benefit it's providing. It's actually not even \"now-proof\" because there's at least one 6-character TLD out there (.museum) which won't be matched.\nIt seems unnecessary to me to do this. Couldn't you just do ^[^.]+\\.[^.]+$? (note: the question-mark is part of the sentence, not the regex!)\nAll that aside, there is a bigger problem with this approach that is: it will fail for domains that aren't directly beneath the TLD. This includes domains in Australia, the UK, Japan, and many other countries, which have hierarchies: .co.jp, .co.uk, .com.au, and so on.\nWhether or not that is of any concern to the OP, I don't know but it's something to be aware of if you're after a \"fix all\" answer.\nThe OP hasn't yet made it clear whether he wants a generic solution or a solution for a single (or small group) of known domains. If it's the latter, see my other note about using Zigdon's approach. If it's the former, then proceed with Vinko's approach taking into account the information in this post.\nEdit: One thing I've left out until now, which may or may not be an option for you business-wise, is to go the other way. All our sites redirect http://www.domain.com to http://domain.com. The folks at http://no-www.org make a pretty good case (IMHO) for this being the \"right\" way to do it, but it's still certainly just a matter of preference. One thing is for sure though, it's far easier to write a generic rule for that kind of redirection than this one.\n", "@org 0100h Yes, there are many variables left out of the description of the problem, and all your points are valid ones and should be addressed in the event of an actual implementation. There are both pros and cons to your proposed regex. On the one hand it's easier and future-proof; on the other, do you really want to match example.foobar if sent in the Host header? There might be some edge cases when you'll end up redirecting to the wrong domain. 
A third alternative is modifying the regex to use a list of the actual domains, if more than one, like \nRewriteCond %{HTTP:Host} (example.com|example.net|example.org) [NC]\n\n(Note to chris, that one will change %1)\n@chrisofspades It's not meant to replace it; your condition number two ensures that it doesn't have www, whereas mine doesn't. It won't change the values of %1, %2, %3 because it doesn't store the matches (iow, it doesn't use parentheses).\n", "Can't you adjust the RewriteCond to only operate on example.com?\nRewriteCond %{HTTP:Host} ^example\\.com(.*) [NC]\n\n", "Why don't you just have something like this in your vhost (of httpd) file?\nServerName www.example.com\nServerAlias example.com\n\nOf course that won't redirect; it will just carry on as normal\n" ]
[ 3, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ ".htaccess", "isapi_rewrite" ]
stackoverflow_0000050931_.htaccess_isapi_rewrite.txt
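As a quick sanity check on the host-matching condition from the accepted answer, here is a small Python sketch (an illustration only, not part of the ISAPI Rewrite config; the hostnames are made up) showing which Host headers the pattern accepts:

import re

# The pattern from the RewriteCond; [NC] makes it case-insensitive,
# mirrored here with re.IGNORECASE.
cond = re.compile(r"^[^.]+\.[a-z]{2,5}$", re.IGNORECASE)

for host in ("example.com", "sub.example.com", "example.museum"):
    verdict = "redirect to www" if cond.match(host) else "leave untouched"
    print(f"{host}: {verdict}")

# example.com     -> redirect to www  (single dot, 3-letter TLD)
# sub.example.com -> leave untouched  (extra dot, so the rule is skipped)
# example.museum  -> leave untouched  (6-letter TLD, the gap noted in the thread)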
Q: Separating CSS deployment from rest of site Where I work, the design and development departments are totally separated; however, we (the design department) are responsible for managing the CSS for our sites. Typically, new CSS needs to be released to the production server far more often than new website code. Because of this, we are deploying the CSS separately, and it lives outside source control. However, lately, we've run into a few problems with new CSS not being synched up for site releases, and in general the process is a huge headache. I've been pushing to get the CSS under some kind of source control, but having trouble finding a good deployment method that makes everyone happy. Our biggest problem is managing changes that affect current portions of the site, where the CSS changes need to go live before the site changes, but not break anything on the existing site. I won't go into the finer details of the weird culture between designers and devs here, but I was wondering what experience others have had in managing large amounts of CSS (50+ files, thousands and thousands of lines) that needs to be constantly updated and released independent of site releases. A: I'll advocate the use of source control here. Especially if the development team uses branching to deal with structured releases. That way, whatever CSS is checked into the production branch is what should be deployed ... and if it is updated mid-stream, it's the responsibility of the person (designer?) that updates it to promote that code using whatever system your company uses to promote changes to production. A: The fancy name is "Content Delivery Network" (Wikipedia). We store our CSS files in a database, and then have a separate website that does nothing but serve CSS resources. We implemented this in May 2007 for 1000+ websites in 30+ countries. It has worked flawlessly for the last 15 months. Static images and even JavaScript files are handled the same way.
Separating CSS deployment from rest of site
Where I work, the design and development departments are totally separated; however, we (the design department) are responsible for managing the CSS for our sites. Typically, new CSS needs to be released to the production server far more often than new website code. Because of this, we are deploying the CSS separately, and it lives outside source control. However, lately, we've run into a few problems with new CSS not being synched up for site releases, and in general the process is a huge headache. I've been pushing to get the CSS under some kind of source control, but having trouble finding a good deployment method that makes everyone happy. Our biggest problem is managing changes that affect current portions of the site, where the CSS changes need to go live before the site changes, but not break anything on the existing site. I won't go into the finer details of the weird culture between designers and devs here, but I was wondering what experience others have had in managing large amounts of CSS (50+ files, thousands and thousands of lines) that needs to be constantly updated and released independent of site releases.
[ "I'll advocate the use of source control here. Especially if the development team uses branching to deal with structured releases. That way, whatever CSS is checked into the production branch is what should be deployed ... and if it is updated mid-stream, it's the responsibility of the person (designer?) that updates it to promote that code using whatever system your company uses to promote changes to production.\n", "The fancy name is \"Content Delivery Network\" (Wikipedia).\nWe store our CSS files in a database, and then have a separate website that does nothing but serve CSS resources. We implemented this in May 2007 for 1000+ websites in 30+ countries. It has worked flawlessly for the last 15 months. \nStatic images and even JavaScript files are handled the same way.\n" ]
[ 4, 2 ]
[]
[]
[ "css", "deployment", "project_management", "version_control" ]
stackoverflow_0000052924_css_deployment_project_management_version_control.txt
Q: Open Source Actionscript 3 or Javascript date utility classes? I was wondering if anyone could point to an Open Source date utility class that is fairly robust. I find myself rolling my own when I want to do a lot of things I take for granted in C# and Java. For instance I did find a decent example of a DateDiff() function that I tore apart and another DatePart() function. Other examples would be parsing different date/time formats. I'm trying to avoid reinventing something if it's already built. Another possibility may be a nice set of Javascript files that I can convert to ActionScript 3. So far I've found DateJS but I want to get a good idea of what is out there. A: as3corelib has the DateUtil class and it should be pretty reliable since it's written by some Adobe employees. I haven't encountered any problems with it. A: There is also DP_DateExtensions, though I believe DateJS is more robust.
Open Source Actionscript 3 or Javascript date utility classes?
I was wondering if anyone could point to an Open Source date utility class that is fairly robust. I find myself rolling my own when I want to do a lot of things I take for granted in C# and Java. For instance I did find a decent example of a DateDiff() function that I tore apart and another DatePart() function. Other examples would be parsing different date/time formats. I'm trying to avoid reinventing something if it's already built. Another possibility may be a nice set of Javascript files that I can convert to ActionScript 3. So far I've found DateJS but I want to get a good idea of what is out there.
[ "as3corelib has the DateUtil class and it should be pretty reliable since it's written by some Adobe employees. I haven't encountered any problems with it.\n", "There is also DP_DateExtensions, though I believe DateJS is more robust.\n" ]
[ 3, 1 ]
[]
[]
[ "actionscript_3", "apache_flex", "javascript" ]
stackoverflow_0000052931_actionscript_3_apache_flex_javascript.txt
Q: Programmatically building htpasswd Is there a programmatic way to build htpasswd files, without depending on OS specific functions (i.e. exec(), passthru())? A: .htpasswd files are just text files with a specific format depending on the hash function specified. If you are using MD5 they look like this: foo:$apr1$y1cXxW5l$3vapv2yyCXaYz8zGoXj241 That's the login, a colon, $apr1$, the salt, and 1000 iterations of MD5 encoded as base64. If you select SHA1 they look like this: foo:{SHA}BW6v589SIg3i3zaEW47RcMZ+I+M= That's the login, a colon, the string {SHA} and the SHA1 hash encoded with base64. If your language has an implementation of either MD5 or SHA1 and base64 you can just create the file like this: <?php $login = 'foo'; $pass = 'pass'; $hash = base64_encode(sha1($pass, true)); $contents = $login . ':{SHA}' . $hash; file_put_contents('.htpasswd', $contents); ?> Here's more information on the format: http://httpd.apache.org/docs/2.2/misc/password_encryptions.html
Programmatically building htpasswd
Is there a programmatic way to build htpasswd files, without depending on OS specific functions (i.e. exec(), passthru())?
[ ".htpasswd files are just text files with a specific format depending on the hash function specified. If you are using MD5 they look like this:\nfoo:$apr1$y1cXxW5l$3vapv2yyCXaYz8zGoXj241\n\nThat's the login, a colon, $apr1$, the salt, and 1000 iterations of MD5 encoded as base64. If you select SHA1 they look like this:\nfoo:{SHA}BW6v589SIg3i3zaEW47RcMZ+I+M=\n\nThat's the login, a colon, the string {SHA} and the SHA1 hash encoded with base64.\nIf your language has an implementation of either MD5 or SHA1 and base64 you can just create the file like this:\n<?php\n\n$login = 'foo';\n$pass = 'pass';\n$hash = base64_encode(sha1($pass, true));\n\n$contents = $login . ':{SHA}' . $hash;\n\nfile_put_contents('.htpasswd', $contents);\n\n?>\n\nHere's more information on the format:\nhttp://httpd.apache.org/docs/2.2/misc/password_encryptions.html\n" ]
[ 38 ]
[ "From what it says on the PHP website, you can use crypt() in the following method:\n<?php\n\n// Set the password & username\n$username = 'user';\n$password = 'mypassword';\n\n// Get the hash, letting the salt be automatically generated\n$hash = crypt($password);\n\n// write to a file\nfile_put_contents('.htpasswd', $username . ':' . $hash);\n\n?>\n\nPart of this example can be found: http://ca3.php.net/crypt\nThis will of course overwrite the entire existing file, so you'll want to do some kind of concatenation.\nI'm not 100% sure this will work, but I'm pretty sure.\n", "Trac ships with a Python replacement for htpasswd, which I'm sure you could port to your language of choice: htpasswd.py.\n" ]
[ -1, -2 ]
[ ".htpasswd", "automation", "php" ]
stackoverflow_0000039916_.htpasswd_automation_php.txt
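The accepted answer's PHP ports almost line for line to other languages; here is a hedged Python equivalent for the {SHA} format (the filename and credentials are the illustrative ones from the answer):

import base64
import hashlib

def htpasswd_sha_line(login, password):
    # {SHA} format as described above: login, a colon, the literal "{SHA}",
    # then the base64 encoding of the raw SHA1 digest of the password.
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return login + ":{SHA}" + base64.b64encode(digest).decode("ascii")

with open(".htpasswd", "w") as f:  # same filename as in the answer
    f.write(htpasswd_sha_line("foo", "pass") + "\n")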
Q: How do I create tri-state checkboxes with a TreeView control in .NET? I have a treeview control in a Windows Forms project that has checkboxes turned on. Because the treeview control has nested nodes, I need the checkboxes to be able to have some sort of tri-mode selection. I can't find a way to do this (I can only have the checkboxes fully checked or unchecked). A: If you are talking about Windows Forms, this article should help you build you tri-state TreeView: http://www.codeproject.com/KB/tree/treeviewex2003.aspx?display=Print If you need tri-state checkboxes on a treeview on asp.net i think you need to use a third-party component. Take a look a this one, and click "tri-state checkboxes" on the left side: http://www.aspnetexpert.com/demos/tree/default.aspx
How do I create tri-state checkboxes with a TreeView control in .NET?
I have a treeview control in a Windows Forms project that has checkboxes turned on. Because the treeview control has nested nodes, I need the checkboxes to be able to have some sort of tri-mode selection. I can't find a way to do this (I can only have the checkboxes fully checked or unchecked).
[ "If you are talking about Windows Forms, this article should help you build you tri-state TreeView:\nhttp://www.codeproject.com/KB/tree/treeviewex2003.aspx?display=Print\nIf you need tri-state checkboxes on a treeview on asp.net i think you need to use a third-party component. Take a look a this one, and click \"tri-state checkboxes\" on the left side:\nhttp://www.aspnetexpert.com/demos/tree/default.aspx\n" ]
[ 4 ]
[]
[]
[ ".net", "asp.net", "winforms" ]
stackoverflow_0000053002_.net_asp.net_winforms.txt
Q: programmatically merge .reg file into win32 registry What's the best way to programmatically merge a .reg file into the registry? This is for unit testing; the .reg file is a test artifact which will be added then removed at the start and end of testing. Or, if there's a better way to unit test against the registry... A: It is possible to remove registry keys using a .reg file, although I'm not sure how well it's documented. Here's how: REGEDIT4 [-HKEY_CURRENT_USER\Software\<otherpath>] The - in front of the key name tells Regedit that you want to remove the key. To run this silently, type: regedit /s "myfile.reg" A: If you're shelling out, I'd use the reg command (details below). If you can tell us what language you're working with, we could provide language specific code. C:>reg /? REG Operation [Parameter List] Operation [ QUERY | ADD | DELETE | COPY | SAVE | LOAD | UNLOAD | RESTORE | COMPARE | EXPORT | IMPORT | FLAGS ] Return Code: (Except for REG COMPARE) 0 - Successful 1 - Failed For help on a specific operation type: REG ADD /? REG DELETE /? [snipped] A: I looked into it by checking out my file associations. It seems that a .reg file is just passed as the first parameter to the regedit.exe executable on Windows. So you can just say regedit.exe "mytest.reg". What I'm not sure of is how to get rid of the dialog box that pops up that asks for your confirmation. A: Use the Win32 API function ShellExecute() or ShellExecuteEx(). If the verb is 'open', it should merge the .reg file. I haven't tested it, but it should work. A: One of the most frustrating things about writing unit tests is dealing with dependencies. One of the greatest things about Test-Driven Development is that it produces code that is decoupled from its dependencies. Cool, huh? When I find myself asking questions like this one, I look for ways to decouple the code I'm writing from the dependency. Separate out the reading of the registry from the complexity that you'd like to test.
programmatically merge .reg file into win32 registry
What's the best way to programmatically merge a .reg file into the registry? This is for unit testing; the .reg file is a test artifact which will be added then removed at the start and end of testing. Or, if there's a better way to unit test against the registry...
[ "It is possible to remove registry keys using a .reg file, although I'm not sure how well it's documented. Here's how:\nREGEDIT4\n\n[-HKEY_CURRENT_USER\\Software\\<otherpath>]\n\nThe - in front of the key name tells Regedit that you want to remove the key.\nTo run this silently, type:\nregedit /s \"myfile.reg\"\n\n", "If you're shelling out, I'd use the reg command (details below). If you can tell us what language you're working with, we could provide language specific code.\nC:>reg /?\nREG Operation [Parameter List]\nOperation [ QUERY | ADD | DELETE | COPY |\n SAVE | LOAD | UNLOAD | RESTORE |\n COMPARE | EXPORT | IMPORT | FLAGS ]\nReturn Code: (Except for REG COMPARE)\n0 - Successful\n 1 - Failed\nFor help on a specific operation type:\nREG ADD /?\n REG DELETE /?\n[snipped]\n", "I looked into it by checking out my file associations.\nIt seems that a .reg file is just passed as the first parameter to the regedit.exe executable on Windows. \nSo you can just say regedit.exe \"mytest.reg\". What I'm not sure of is how to get rid of the dialog box that pops up that asks for your confirmation.\n", "Use the Win32 API function ShellExecute() or ShellExecuteEx(). If the verb is 'open', it should merge the .reg file. I haven't tested it, but it should work.\n", "One of the most frustrating things about writing unit tests is dealing with dependencies. One of the greatest things about Test-Driven Development is that it produces code that is decoupled from its dependencies. Cool, huh?\nWhen I find myself asking questions like this one, I look for ways to decouple the code I'm writing from the dependency. Separate out the reading of the registry from the complexity that you'd like to test.\n" ]
[ 8, 5, 2, 1, 0 ]
[]
[]
[ "registry", "unit_testing" ]
stackoverflow_0000035070_registry_unit_testing.txt
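Tying the two highest-voted answers together for the unit-testing scenario, here is a hedged Python sketch; the .reg filenames and the test entry point are hypothetical, and it relies only on the regedit /s behavior and the "-" key-removal prefix described above:

import subprocess

def merge_reg(path):
    # regedit /s merges the file silently (no confirmation dialog),
    # per the accepted answer.
    subprocess.run(["regedit", "/s", path], check=True)

def run_tests():
    ...  # hypothetical: exercise the registry-reading code under test here

merge_reg("fixture.reg")           # hypothetical setup file with the test keys
try:
    run_tests()
finally:
    # A companion file using the "-" prefix from the accepted answer, e.g.
    #   REGEDIT4
    #   [-HKEY_CURRENT_USER\Software\MyTestKeys]
    merge_reg("fixture-undo.reg")  # hypothetical teardown file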
Q: Large Python Includes I have a file that I want to include in Python but the included file is fairly long and it'd be much neater to be able to split it into several files, but then I have to use several include statements. Is there some way to group together several files and include them all at once? A: Put files in one folder. Add __init__.py file to the folder. Do necessary imports in __init__.py Replace multiple imports by one: import folder_name See Python Package Management A: Yes, take a look at the "6.4 Packages" section in http://docs.python.org/tut/node8.html: Basically, you can place a bunch of files into a directory and add an __init__.py file to the directory. If the directory is in your PYTHONPATH or sys.path, you can do "import directoryname" to import everything in the directory or "import directoryname.some_file_in_directory" to import a specific file that is in the directory. The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as "string", from unintentionally hiding valid modules that occur later on the module search path. In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable, described later.
Large Python Includes
I have a file that I want to include in Python but the included file is fairly long and it'd be much neater to be able to split it into several files, but then I have to use several include statements. Is there some way to group together several files and include them all at once?
[ "\nPut files in one folder. \nAdd __init__.py file to the folder. Do necessary imports in __init__.py\nReplace multiple imports by one:\nimport folder_name \n\nSee Python Package Management\n", "Yes, take a look at the \"6.4 Packages\" section in http://docs.python.org/tut/node8.html:\nBasically, you can place a bunch of files into a directory and add an __init__.py file to the directory. If the directory is in your PYTHONPATH or sys.path, you can do \"import directoryname\" to import everything in the directory or \"import directoryname.some_file_in_directory\" to import a specific file that is in the directory.\n\nThe __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as \"string\", from unintentionally hiding valid modules that occur later on the module search path. In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable, described later. \n\n" ]
[ 8, 6 ]
[]
[]
[ "python" ]
stackoverflow_0000053027_python.txt
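A concrete layout for the package approach in the answers above, using hypothetical module and function names:

# Directory layout (hypothetical):
#
#   mypackage/
#       __init__.py
#       parsing.py       (one split-out piece of the original long file)
#       validation.py    (another split-out piece)
#
# mypackage/__init__.py re-exports the pieces so callers need one import:
from .parsing import parse_record
from .validation import validate_record

__all__ = ["parse_record", "validate_record"]

# Client code then needs a single statement:
#   import mypackage
#   mypackage.parse_record(...)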
Q: How do you delete wild card cookies in Rails? How do you delete a cookie in rails that was set with a wild card domain: cookies[:foo] = {:value => 'bar', :domain => '.acme.com'} When, following the docs, you do: cookies.delete :foo the logs say Cookie set: foo=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT Notice that the domain is missing (it seems to use the default params for everything). Respecting the RFC, of course the cookie's still there, Browser -> ctrl/cmd-L -> javascript:alert(document.cookie); Voilà! Q: What's the "correct" way to delete such a cookie? A: Pass the :domain on delete as well. Here's the source of that method: # Removes the cookie on the client machine by setting the value to an empty string # and setting its expiration date into the past. Like []=, you can pass in an options # hash to delete cookies with extra data such as a +path+. def delete(name, options = {}) options.stringify_keys! set_cookie(options.merge("name" => name.to_s, "value" => "", "expires" => Time.at(0))) end As you can see, it just sets an empty cookie with the name you gave, set to expire in 1969, and with no contents. But it does merge in any other options you give, so you can do: cookies.delete :foo, :domain => '.acme.com' And you're set.
How do you delete wild card cookies in Rails?
How do you delete a cookie in rails that was set with a wild card domain: cookies[:foo] = {:value => 'bar', :domain => '.acme.com'} When, following the docs, you do: cookies.delete :foo the logs say Cookie set: foo=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT Notice that the domain is missing (it seems to use the default params for everything). Respecting the RFC, of course the cookie's still there, Browser -> ctrl/cmd-L -> javascript:alert(document.cookie); Voilà! Q: What's the "correct" way to delete such a cookie?
[ "Pass the :domain on delete as well. Here's the source of that method:\n# Removes the cookie on the client machine by setting the value to an empty string\n# and setting its expiration date into the past. Like []=, you can pass in an options\n# hash to delete cookies with extra data such as a +path+.\ndef delete(name, options = {})\n options.stringify_keys!\n set_cookie(options.merge(\"name\" => name.to_s, \"value\" => \"\", \"expires\" => Time.at(0)))\nend\n\nAs you can see, it just sets an empty cookie with the name you gave, set to expire in 1969, and with no contents. But it does merge in any other options you give, so you can do:\ncookies.delete :foo, :domain => '.acme.com'\n\nAnd you're set.\n" ]
[ 20 ]
[]
[]
[ "ruby_on_rails" ]
stackoverflow_0000052917_ruby_on_rails.txt
Q: How to recover a deleted branch in TFS? I deleted a branch in TFS and just found out that I need the changes that were on it. How do I recover the branch or the changes done on it? A: Specifically, in Visual Studio go to "Tools - Options", then select "Source Control - Visual Studio Team Foundation Server" and check the "Show deleted items in the Source Control explorer" option. Having done that, you can then right-click a folder and say "Undelete". A: As described in the TFS FAQ: Are Deletes physical or logical? Can accidental deletes be recovered? Deletes are fully recoverable with the “undelete” operation. You wouldn’t want to do a SQL restore because that would roll back every change to the TFS in the time since the file was deleted.
How to recover a deleted branch in TFS?
I deleted a branch in TFS and just found out that I need the changes that were on it. How do I recover the branch or the changes done on it?
[ "Specifically, in Visual Studio go to \"Tools - Options\", then select \"Source Control - Visual Studio Team Foundation Server\" and check the \"Show deleted items in the Source Control explorer\" option.\nHaving done that, you can then right-click a folder and say \"Undelete\".\n", "As described in the TFS FAQ:\nAre Deletes physical or logical? Can accidental deletes be recovered?\nDeletes are fully recoverable with the “undelete” operation. You wouldn’t want to do a SQL restore because that would roll back every change to the TFS in the time since the file was deleted.\n" ]
[ 60, 6 ]
[]
[]
[ "tfs", "version_control" ]
stackoverflow_0000049456_tfs_version_control.txt
Q: GUIDs in a SLN file Visual Studio Solution files contain two GUIDs per project entry. I figure one of them is from the AssemblyInfo.cs. Does anyone know for sure where these come from, and what they are used for? A: Neither GUID is the same GUID as from AssemblyInfo.cs (that is the GUID for the assembly itself, not tied to Visual Studio but the end product of the build). So, for a typical line in the sln file (open the .sln in notepad or editor-of-choice if you wish to see this): Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleSandbox", "ConsoleSandbox\ConsoleSandbox.csproj", "{55A1FD06-FB00-4F8A-9153-C432357F5CAC}" The second GUID is a unique GUID for the project itself. The solution file uses this to map other settings to that project: GlobalSection(ProjectConfigurationPlatforms) = postSolution {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.ActiveCfg = Debug|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.Build.0 = Debug|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.ActiveCfg = Release|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.Build.0 = Release|Any CPU EndGlobalSection The first GUID is actually a GUID that is the unique GUID for the solution itself (I believe). If you have a solution with more than one project, you'll actually see something like the following: Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleSandbox", "ConsoleSandbox\ConsoleSandbox.csproj", "{55A1FD06-FB00-4F8A-9153-C432357F5CAC}" EndProject Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Composite", "..\CompositeWPF\Source\CAL\Composite\Composite.csproj", "{77138947-1D13-4E22-AEE0-5D0DD046CA34}" EndProject A: According to MSDN: [The Project] statement contains the unique project GUID and the project type GUID. This information is used by the environment to find the project file or files belonging to the solution, and the VSPackage required for each project. The project GUID is passed to IVsProjectFactory to load the specific VSPackage related to the project, then the project is loaded by the VSPackage.
GUIDs in a SLN file
Visual Studio Solution files contain two GUIDs per project entry. I figure one of them is from the AssemblyInfo.cs. Does anyone know for sure where these come from, and what they are used for?
[ "Neither GUID is the same GUID as from AssemblyInfo.cs (that is the GUID for the assembly itself, not tied to Visual Studio but the end product of the build).\nSo, for a typical line in the sln file (open the .sln in notepad or editor-of-choice if you wish to see this):\nProject(\"{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}\") = \"ConsoleSandbox\", \"ConsoleSandbox\\ConsoleSandbox.csproj\", \"{55A1FD06-FB00-4F8A-9153-C432357F5CAC}\"\n\nThe second GUID is a unique GUID for the project itself. The solution file uses this to map other settings to that project:\nGlobalSection(ProjectConfigurationPlatforms) = postSolution\n {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.ActiveCfg = Debug|Any CPU\n {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.Build.0 = Debug|Any CPU\n {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.ActiveCfg = Release|Any CPU\n {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.Build.0 = Release|Any CPU\nEndGlobalSection\n\nThe first GUID is actually a GUID that is the unique GUID for the solution itself (I believe). If you have a solution with more than one project, you'll actually see something like the following:\nProject(\"{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}\") = \"ConsoleSandbox\", \"ConsoleSandbox\\ConsoleSandbox.csproj\", \"{55A1FD06-FB00-4F8A-9153-C432357F5CAC}\"\nEndProject\nProject(\"{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}\") = \"Composite\", \"..\\CompositeWPF\\Source\\CAL\\Composite\\Composite.csproj\", \"{77138947-1D13-4E22-AEE0-5D0DD046CA34}\"\nEndProject\n\n", "According to MSDN: \n\n[The Project] statement contains the\n unique project GUID and the project\n type GUID. This information is used by\n the environment to find the project\n file or files belonging to the\n solution, and the VSPackage required\n for each project. The project GUID is\n passed to IVsProjectFactory to load\n the specific VSPackage related to the\n project, then the project is loaded by\n the VSPackage.\n\n" ]
[ 15, 7 ]
[]
[]
[ ".net", "solution", "visual_studio" ]
stackoverflow_0000053041_.net_solution_visual_studio.txt
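A small hedged sketch that pulls both GUIDs out of each Project line, based purely on the format quoted in the first answer (the .sln filename is a placeholder; per the MSDN quote, the first GUID is the project type GUID and the second is the project's own GUID):

import re

# Matches: Project("{type GUID}") = "Name", "relative\path", "{project GUID}"
project_re = re.compile(
    r'Project\("\{(?P<type_guid>[^}]+)\}"\)\s*=\s*'
    r'"(?P<name>[^"]+)",\s*"(?P<path>[^"]+)",\s*"\{(?P<project_guid>[^}]+)\}"'
)

with open("MySolution.sln") as sln:  # hypothetical solution file
    for line in sln:
        m = project_re.match(line.strip())
        if m:
            print(m.group("name"), m.group("project_guid"), m.group("type_guid"))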
Q: Can a STP template be hidden from subsite creation page? When a template is added using the add-template stsadm command, it becomes available to everyone when creating a subsite. Is there any way to make it only available when a site collection is being created? A: go to site actions -> Site Settings -> view all site settings -> site templates and page layouts and remove the site template from the list of allowed items. Gary Lapointe may also have made an stsadm extension for it; check stsadm.blogspot.com Mauro Masucci http://www.brantas.co.uk A: The url to the blog post mentioned above, for hiding the stp templates using the stsadm extension, is http://stsadm.blogspot.com/2007/08/set-available-site-templates.html Here’s an example of how to remove a template from the list of available templates for a site collection: stsadm -o gl-removeavailablesitetemplate -url "http://intranet/" -template "WIKI#0" -lcid 1033 -resetallsubsites A: Thanks Mauro. I was hoping for a solution which doesn't require going to every site collection, but it looks like there may not be one! A: Komrade, stsadm.blogspot.com may be the answer again: you can list all the site collections and then use the command that edward posted to remove the site templates. That might help make things a bit quicker! Although, you should only have to do it once per site collection; all subsites (as far as I remember) inherit their settings from the parent site.
Can a STP template be hidden from subsite creation page?
When a template is added using the add-template stsadm command, it becomes available to everyone when creating a subsite. Is there any way to make it only available when a site collection is being created?
[ "go to site actions -> Site Settings -> view all site settings -> site templates and page layouts and remove the site template from the list of allowed items.\nGary Lapointe may also have made an stsadm extension for it; check stsadm.blogspot.com\nMauro Masucci\nhttp://www.brantas.co.uk\n", "The url to the blog post mentioned above, for hiding the stp templates using the stsadm extension, is http://stsadm.blogspot.com/2007/08/set-available-site-templates.html\n\nHere’s an example of how to remove a template from the list of available templates for a site collection:\n\nstsadm -o gl-removeavailablesitetemplate -url \"http://intranet/\" -template \"WIKI#0\" -lcid 1033 -resetallsubsites\n\n\n", "Thanks Mauro. I was hoping for a solution which doesn't require going to every site collection, but it looks like there may not be one!\n", "Komrade,\nstsadm.blogspot.com may be the answer again: you can list all the site collections and then use the command that edward posted to remove the site templates. That might help make things a bit quicker!\nAlthough, you should only have to do it once per site collection; all subsites (as far as I remember) inherit their settings from the parent site.\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "moss", "sharepoint", "templates" ]
stackoverflow_0000051202_moss_sharepoint_templates.txt
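For the "don't want to visit every site collection" objection in the thread, one hedged way to script it is to loop the documented gl-removeavailablesitetemplate command over a list of site collection URLs; the URLs and template name below are placeholders, and this assumes the stsadm extensions from the linked blog are installed:

import subprocess

# Hypothetical site collection URLs; something like stsadm -o enumsites
# could be used to produce this list instead of hard-coding it.
site_collections = ["http://intranet/", "http://intranet/sites/hr"]

for url in site_collections:
    # Same flags as the quoted example, applied once per site collection.
    subprocess.run(
        ["stsadm", "-o", "gl-removeavailablesitetemplate",
         "-url", url, "-template", "WIKI#0",
         "-lcid", "1033", "-resetallsubsites"],
        check=True,
    )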
Q: Is business logic subjective? I have a team lead who seems to think that business logic is very subjective, to the point that if my stored procedure has a WHERE ID = @ID, he would call this “business logic”. What approach should I take to define “business logic” in a very objective way without offending my team lead? A: I really think you just need to agree on a clear definition of what you mean when you say "business logic". If you need to be "politically sensitive", you could even craft the definition around your team lead's understanding, then come up with another term ("domain rules"?) that defines what you want to talk about. Words and terms are relatively subjective -- of course, once you leave that company you will need to 're-learn' industry standards, so it's always better to stick with them if you can, but the main goal is to communicate clearly and get work done. A: One way to differentiate is that "business logic" is something the customer would care about and that could be explained to a customer without referring to computer-specific words. A: You could try to argue your point with a timed example, run a sql select against an indexed table and then run a loop to find exactly the same item in the same set but this time in code. The code will be much slower. Let the database do what it was designed to do, select sets and subsets of data :) I think realistically though, all you can do is get your team together to build a set of standards which you will all code to, democracy rules!
Is business logic subjective?
I have a team lead who seems to think that business logic is very subjective, to the point that if my stored procedure has a WHERE ID = @ID, he would call this “business logic”. What approach should I take to define “business logic” in a very objective way without offending my team lead?
[ "I really think you just need to agree on a clear definition of what you mean when you say \"business logic\". If you need to be \"politically sensitive\", you could even craft the definition around your team lead's understanding, then come up with another term (\"domain rules\"?) that defines what you want to talk about.\nWords and terms are relatively subjective -- of course, once you leave that company you will need to 're-learn' industry standards, so it's always better to stick with them if you can, but the main goal is to communicate clearly and get work done.\n", "One way to differentiate is that \"business logic\" is something the customer would care about and that could be explained to a customer without referring to computer-specific words.\n", "You could try to argue your point with a timed example, run a sql select against an indexed table and then run a loop to find exactly the same item in the same set but this time in code. The code will be much slower.\nLet the database do what it was designed to do, select sets and subsets of data :) I think realistically though, all you can do is get your team together to build a set of standards which you will all code to, democracy rules!\n" ]
[ 9, 6, 1 ]
[]
[]
[ "business_logic" ]
stackoverflow_0000052965_business_logic.txt
Q: What does the PDB get me while debugging and how do I know it's working? I have to use a third-party component without source code. I have the release DLL and release PDB file. Let's call it 'CorporateComponent.dll'. My own code creates objects from this DLL and calls methods on these objects. CorpObject o = new CorpObject(); Int32 result = o.DoSomethingLousy(); While debugging, the method 'DoSomethingLousy' throws an exception. What does the PDB file do for me? If it does something nice, how can I be sure I'm making use of it? A: To confirm if you're using the provided PDB, CorporateComponent.pdb, during debugging within the Visual Studio IDE review the output window and locate the line indicating that the CorporateComponent.dll is loaded and followed by the string Symbols loaded. To illustrate from a project of mine: The thread 0x6a0 has exited with code 0 (0x0). The thread 0x1f78 has exited with code 0 (0x0). 'AvayaConfigurationService.vshost.exe' (Managed): Loaded 'C:\Development\Src\trunk\ntity\AvayaConfigurationService\AvayaConfigurationService\bin\Debug \AvayaConfigurationService.exe', Symbols loaded. 'AvayaConfigurationService.vshost.exe' (Managed): Loaded 'C:\Development\Src\trunk\ntity\AvayaConfigurationService\AvayaConfigurationService\bin\Debug\IPOConfigService.dll', No symbols loaded. Loaded 'C:\Development\src...\bin\Debug\AvayaConfigurationService.exe', Symbols loaded. This indicates that the PDB was found and loaded by the IDE debugger. As indicated by others, when examining stack frames within your application you should be able to see the symbols from the CorporateComponent.pdb. If you don't, then perhaps the third-party did not include symbol information in the release PDB build. A: The pdb contains information the debugger needs in order to correctly read the stack. Your stack traces will contain line numbers and symbol names of the stack frames inside of the modules for which you have the pdb. I'll give two usage examples. The first is the obvious answer. The second explains source-indexed pdb's. 1st usage example... Depending on calling convention and which optimizations the compiler used, it might not be possible for the debugger to manually unwind the stack through a module for which you do not have a pdb. This can happen with certain third party libraries and even for some parts of the OS. Consider a scenario in which you encounter an access violation inside of the windows OS. The stack trace does not unwind into your own application because that OS component uses a special calling convention that confuses the debugger. If you configure your symbol path to download the public OS pdb's, then there is a good chance that the stack trace will unwind into your application. That enables you to see exactly what arguments your own code passed into the OS system call. (and similar example for AV inside of a 3rd party library or even inside of your own code) 2nd usage example... Pdb's have another very useful property - they can integrate with some source control systems using a feature that Microsoft calls "source indexing". A source-indexed pdb contains source control commands that specify how to fetch from source control the exact file versions that were used to build the component. Microsoft's debuggers understand how to execute the commands to automatically fetch the files during a debug session. This is a powerful feature that saves the debug engineer from having to manually sync a source tree to the correct label for a given build. 
It's especially useful for remote debugging sessions and for analyzing crash dumps post-mortem. The "debugging tools for windows" installation (windbg) contains a document named srcsrv.doc which provides an example demonstrating how to use srctool.exe to determine which source files are source-indexed in a given pdb. To answer your question "how do I know", the "modules" feature in the debugger can tell you which modules have a corresponding pdb. In windbg use the "lml" command. In Visual Studio, select Modules from somewhere in the debug menus. (sorry, I don't have a current version of Visual Studio handy) A: The PDB is a database file that maps the instructions to their line numbers in the original code so when you get a stack trace you'll get the line numbers for the code. If it's an unmanaged DLL then the PDB file will also give you the names of the functions in the stack trace, whereas that information is usually only available for managed DLLs without PDBs. A: The main thing I get from the pdb is line numbers and real method names for stack traces.
What does the PDB get me while debugging and how do I know it's working?
I have to use a third-party component without source code. I have the release DLL and release PDB file. Let's call it 'CorporateComponent.dll'. My own code creates objects from this DLL and calls methods on these objects. CorpObject o = new CorpObject(); Int32 result = o.DoSomethingLousy(); While debugging, the method 'DoSomethingLousy' throws an exception. What does the PDB file do for me? If it does something nice, how can I be sure I'm making use of it?
[ "To confirm if you're using the provided PDB, CorporateComponent.pdb, during debugging within the Visual Studio IDE review the output window and locate the line indicating that the CorporateComponent.dll is loaded and followed by the string Symbols loaded.\nTo illustrate from a project of mine:\nThe thread 0x6a0 has exited with code 0 (0x0).\nThe thread 0x1f78 has exited with code 0 (0x0).\n'AvayaConfigurationService.vshost.exe' (Managed): Loaded 'C:\\Development\\Src\\trunk\\ntity\\AvayaConfigurationService\\AvayaConfigurationService\\bin\\Debug \\AvayaConfigurationService.exe', Symbols loaded.\n'AvayaConfigurationService.vshost.exe' (Managed): Loaded 'C:\\Development\\Src\\trunk\\ntity\\AvayaConfigurationService\\AvayaConfigurationService\\bin\\Debug\\IPOConfigService.dll', No symbols loaded.\n\n\nLoaded 'C:\\Development\\src...\\bin\\Debug\\AvayaConfigurationService.exe', Symbols loaded.\n\nThis indicates that the PDB was found and loaded by the IDE debugger.\nAs indicated by others, when examining stack frames within your application you should be able to see the symbols from the CorporateComponent.pdb. If you don't, then perhaps the third-party did not include symbol information in the release PDB build.\n", "The pdb contains information the debugger needs in order to correctly read the stack. Your stack traces will contain line numbers and symbol names of the stack frames inside of the modules for which you have the pdb.\nI'll give two usage examples. The first is the obvious answer. The second explains source-indexed pdb's.\n1st usage example...\nDepending on calling convention and which optimizations the compiler used, it might not be possible for the debugger to manually unwind the stack through a module for which you do not have a pdb. This can happen with certain third party libraries and even for some parts of the OS.\nConsider a scenario in which you encounter an access violation inside of the windows OS. The stack trace does not unwind into your own application because that OS component uses a special calling convention that confuses the debugger. If you configure your symbol path to download the public OS pdb's, then there is a good chance that the stack trace will unwind into your application. That enables you to see exactly what arguments your own code passed into the OS system call. (and similar example for AV inside of a 3rd party library or even inside of your own code)\n2nd usage example...\nPdb's have another very useful property - they can integrate with some source control systems using a feature that Microsoft calls \"source indexing\". A source-indexed pdb contains source control commands that specify how to fetch from source control the exact file versions that were used to build the component. Microsoft's debuggers understand how to execute the commands to automatically fetch the files during a debug session. This is a powerful feature that saves the debug engineer from having to manually sync a source tree to the correct label for a given build. It's especially useful for remote debugging sessions and for analyzing crash dumps post-mortem.\nThe \"debugging tools for windows\" installation (windbg) contains a document named srcsrv.doc which provides an example demonstrating how to use srctool.exe to determine which source files are source-indexed in a given pdb.\nTo answer your question \"how do I know\", the \"modules\" feature in the debugger can tell you which modules have a corresponding pdb. In windbg use the \"lml\" command. 
In Visual Studio, select Modules from somewhere in the debug menus. (sorry, I don't have a current version of Visual Studio handy)\n", "The PDB is a database file that maps the instructions to their line numbers in the original code so when you get a stack trace you'll get the line numbers for the code. If it's an unmanaged DLL then the PDB file will also give you the names of the functions in the stack trace, whereas that information is usually only available for managed DLLs without PDBs.\n", "The main thing I get from the pdb is line numbers and real method names for stack traces.\n" ]
[ 6, 5, 4, 0 ]
[]
[]
[ ".net", "debugging", "pdb_files" ]
stackoverflow_0000052600_.net_debugging_pdb_files.txt
Q: Good Ways to Use Source Control and an IDE for Plugin Code? What are good ways of dealing with the issues surrounding plugin code that interacts with an outside system? To give a concrete and representative example, suppose I would like to use Subversion and Eclipse to develop plugins for WordPress. The main code body of WordPress is installed on the webserver, and the plugin code needs to be available in a subdirectory of that server. I could see how you could simply checkout a copy of your code directly under the web directory on a development machine, but how would you also then integrate this with the IDE? I am making the assumption here that all the code for the plugin is located under a single directory. Do most people just add the plugin as a project in an IDE and then place the working folder for the project wherever the 'main' software system wants it to be? Or do people use some kind of symlinks to their home directory? A: To me, adding a symlink pointing to your development folder seems like a tidy solution to the problem. If the main project is on a different machine/webserver, you could use something like sshfs to mount your development directory into the right place on the webserver. A: Short answer - I do have my development and production servers check out the appropriate directories directly from SVN. For your example: Develop on the IDE as you would normally, then, when you're ready to test, check in to your local repository. Your development webserver can then have that directory checked out and you can easily test. Once you're ready for production, merge the change into the production branch, and do an svn update on the production webserver. A: Where I work some folks like to use the FileSync Plugin for Eclipse for this purpose, though I have seen some oddities with that plugin where files in the target directory occasionally go missing. The whole structure is: Ant task to create target directory at desired location (via copy commands, mostly) FileSync Plugin configured to keep files in sync between development location and target location as you code (sync the Eclipse output folder to a location in the Web server's classpath, etc.) Of course, symlinks may work better on systems that have good support for symlinks :-)
Good Ways to Use Source Control and an IDE for Plugin Code?
What are good ways of dealing with the issues surrounding plugin code that interacts with an outside system? To give a concrete and representative example, suppose I would like to use Subversion and Eclipse to develop plugins for WordPress. The main code body of WordPress is installed on the webserver, and the plugin code needs to be available in a subdirectory of that server. I could see how you could simply check out a copy of your code directly under the web directory on a development machine, but how would you also then integrate this with the IDE? I am making the assumption here that all the code for the plugin is located under a single directory. Do most people just add the plugin as a project in an IDE and then place the working folder for the project wherever the 'main' software system wants it to be? Or do people use some kind of symlinks to their home directory?
[ "To me, adding a symlink pointing to your development folder seems like a tidy solution to the problem.\nIf the main project is on a different machine/webserver, you could use something like sshfs to mount your development directory into the right place on the webserver.\n", "Short answer - I do have my development and production servers check out the appropriate directories directly from SVN.\nFor your example:\nDevelop on the IDE as you would normally, then, when you're ready to test, check in to your local repository. Your development webserver can then have that directory checked out and you can easily test.\nOnce you're ready for production, merge the change into the production branch, and do an svn update on the production webserver.\n", "Where I work some folks like to use the FileSync Plugin for Eclipse for this purpose, though I have seen some oddities with that plugin where files in the target directory occasionally go missing. The whole structure is:\n\nAnt task to create target directory at desired location (via copy commands, mostly)\nFileSync Plugin configured to keep files in sync between development location and target location as you code (sync the Eclipse output folder to a location in the Web server's classpath, etc.)\n\nOf course, symlinks may work better on systems that have good support for symlinks :-)\n" ]
[ 1, 1, 1 ]
[]
[]
[ "ide", "plugins" ]
stackoverflow_0000053112_ide_plugins.txt
Q: linq to sql stack overflow with self-referencing entity I have an entity that contains many of itself, how do I map this in LINQ to SQL such that it does not do an infinite loop of getting e1 then all its children then their parents (e1) then its children and repeat? Now I know I can do this if I set the child property to false, however that is the most important property (don't really care about the parent). Is there a way to have it do lazy loading? Would that even fix it? A: In the designer, select a property on an entity in the diagram, and in the Code Generation section in the properties window, set Delay Loaded to True. See this blog post. A: As far as I know LINQ to SQL does lazy loading by default. And even if it doesn't, I'm pretty sure the MS guys used some kind of identity map to ensure every record is loaded only once per DataContext. Did you experience it getting stuck in a loop in practice?
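A small C# sketch of the lazy-loading setup being suggested; MyDataContext, Entities, Id, and Children are invented names standing in for the question's self-referencing entity:

using System;
using System.Linq;
using System.Data.Linq;

var db = new MyDataContext();       // hypothetical DataContext subclass
db.DeferredLoadingEnabled = true;   // the default: related rows load on first access

Entity e1 = db.Entities.First(e => e.Id == 1);
// Nothing below e1 has been fetched yet; each traversal issues its own
// query on demand, so there is no eager-loading cycle to overflow the stack.
foreach (Entity child in e1.Children)
    Console.WriteLine(child.Id);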
linq to sql stack overflow with self-referencing entity
I have an entity that contains many of itself, how do I map this in LINQ to SQL such that it does not do an infinite loop of getting e1 then all its children then their parents (e1) then its children and repeat? Now I know I can do this if I set the child property to false, however that is the most important property (don't really care about the parent). Is there a way to have it do lazy loading? Would that even fix it?
[ "In the designer, select a property on an entity in the diagram, and in the Code Generation section in the properties window, set Delay Loaded to True .\nSee this blog post.\n", "As far as I know LINQ to SQL does lazy loading by default. And even if it doesn't, I'm pretty sure the MS guys used some kind of identity map to ensure every record is loaded only once per DataContext.\nDid you experience it getting stuck in a loop in practice?\n" ]
[ 1, 0 ]
[ "This site is not good for my pre-existing biases, turns out this one was an ill-configured route not lazy/eager loading\n" ]
[ -1 ]
[ "lazy_loading", "linq_to_sql" ]
stackoverflow_0000053045_lazy_loading_linq_to_sql.txt
Q: What registry access can you get without Administrator privileges? I know that we shouldn't be using the registry to store Application Data anymore, but in updating a Legacy application (and wanting to do the fewest changes), what Registry Hives are non-administrators allowed to use? Can I access all of HKEY_CURRENT_USER (the application currently accesses HKEY_LOCAL_MACHINE) without Administrator privileges? A: In general, a non-administrator user has this access to the registry: Read/Write to: HKEY_CURRENT_USER Read Only: HKEY_LOCAL_MACHINE HKEY_CLASSES_ROOT (which is just a link to HKEY_LOCAL_MACHINE\Software\Classes) It is possible to change some of these permissions on a key-by-key basis, but it's extremely rare. You should not have to worry about that. For your purposes, your application should be writing settings and configuration to HKEY_CURRENT_USER. The canonical place is anywhere within HKEY_CURRENT_USER\Software\YourCompany\YourProduct\ You could potentially hold settings that are global (for all users) in HKEY_LOCAL_MACHINE. It is very rare to need to do this, and you should avoid it. The problem is that any user can "read" those, but only an administrator (or by extension, your setup/install program) can "set" them. Other common source of trouble: your application should not write to anything in the Program files or the Windows directories. If you need to write to files, there are several options at hand; describing all of them would be a longer discussion. All of the options end up writing to a subfolder or another under %USERPROFILE% for the user in question. Finally, your application should stay out of HKEY_CURRENT_CONFIG. This hive holds hardware configuration, services configurations and other items that 99.9999% of applications should not need to look at (for example, it holds the current plug-and-play device list). If you need anything from there, most of the information is available through supported APIs elsewhere. A: Yes, you should be able to write to any place under HKEY_CURRENT_USER without having Administrator privileges. But this is effectively a private store that no other user on this machine will be able to access, so you can't put any shared configuration there.
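A short C# sketch of the guidance in the first answer; the company/product key names are placeholders:

using Microsoft.Win32;

// Read/write is fine for a non-admin user under HKEY_CURRENT_USER
using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\YourCompany\YourProduct"))
{
    key.SetValue("LastRun", System.DateTime.Now.ToString());
    string lastRun = (string)key.GetValue("LastRun");
}

// Treat HKEY_LOCAL_MACHINE as read-only from non-admin code
using (RegistryKey hklm = Registry.LocalMachine.OpenSubKey(@"Software\YourCompany\YourProduct"))
{
    // hklm will be null if the key does not exist
}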
What registry access can you get without Administrator privileges?
I know that we shouldn't be using the registry to store Application Data anymore, but in updating a Legacy application (and wanting to do the fewest changes), what Registry Hives are non-administrators allowed to use? Can I access all of HKEY_CURRENT_USER (the application currently accesses HKEY_LOCAL_MACHINE) without Administrator privileges?
[ "In general, a non-administrator user has this access to the registry:\nRead/Write to:\n\nHKEY_CURRENT_USER\n\nRead Only:\n\nHKEY_LOCAL_MACHINE\nHKEY_CLASSES_ROOT (which is just a link to HKEY_LOCAL_MACHINE\\Software\\Classes)\n\nIt is possible to change some of these permissions on a key-by-key basis, but it's extremely rare. You should not have to worry about that.\nFor your purposes, your application should be writing settings and configuration to HKEY_CURRENT_USER. The canonical place is anywhere within HKEY_CURRENT_USER\\Software\\YourCompany\\YourProduct\\\nYou could potentially hold settings that are global (for all users) in HKEY_LOCAL_MACHINE. It is very rare to need to do this, and you should avoid it. The problem is that any user can \"read\" those, but only an administrator (or by extension, your setup/install program) can \"set\" them.\nOther common source of trouble: your application should not write to anything in the Program files or the Windows directories. If you need to write to files, there are several options at hand; describing all of them would be a longer discussion. All of the options end up writing to a subfolder or another under %USERPROFILE% for the user in question.\nFinally, your application should stay out of HKEY_CURRENT_CONFIG. This hive holds hardware configuration, services configurations and other items that 99.9999% of applications should not need to look at (for example, it holds the current plug-and-play device list). If you need anything from there, most of the information is available through supported APIs elsewhere.\n", "Yes, you should be able to write to any place under HKEY_CURRENT_USER without having Administrator privileges. But this is effectively a private store that no other user on this machine will be able to access, so you can't put any shared configuration there.\n" ]
[ 104, 3 ]
[]
[]
[ "administrator", "privileges", "registry" ]
stackoverflow_0000053135_administrator_privileges_registry.txt
Q: Enterprise Library Application Blocks OR Home Grown Framework? We are currently looking to adopt some type of "standard" developer framework and have looked into using the Enterprise Library. Would you recommend using these blocks as the foundation for software development, or should we do something home grown? A: Like all good answers to architecture and programming questions, the answer is "it depends". It depends on how unique your data access and object design needs are. It may also depend on how you plan on supporting your application in the long term. Finally, it greatly depends on the skill level of your developers. There isn't a one-size-fits-all answer to this question, but generally, if your main focus is on cranking out software that provides some business value, pick out an existing framework and run with it. Don't spend your cycles building something that won't immediately drive business profits (i.e. increases revenues and/or decreases costs). For example, one of my organization's projects is core to the operations of the company, needs to be developed and deployed as soon as possible, and will have a long life. For these reasons, we picked CSLA with some help from Enterprise Library. We could have picked other frameworks, but the important thing is that we picked a framework that seemed like it would fit well with our application and our developer skillset and we ran with it. It gave us a good headstart and a community from which we can get support. We immediately started with functionality that provided business value and were not banging our heads against the wall trying to build a framework. We are also in the position where we can hire people in the future who have most likely had exposure to our framework, giving them a really good headstart. This should reduce long-term support costs. Are there things we don't use and overhead that we may not need? Perhaps. But, I'll trade that all day long for delivering business value in code early and often. A: It really depends on what you need to do. Generally speaking, the bigger the niche is that your company is in, the better chance that you'll find a framework to properly support you. For smaller niches, you'll more than likely need to roll your own. The company I work for has several apps all geared towards estimating the building materials for given buildings. Since this is a pretty specific thing, and we have about 8 apps that are similar, we decided to roll our own and bring in 3rd party libraries when necessary (No sense re-inventing the wheel for some of the stuff) Your mileage may vary of course.
Enterprise Library Application Blocks OR Home Grown Framework?
We are currently looking to adopt some type of "standard" developer framework and have looked into using the Enterprise Library. Would you recommend using these blocks as the foundation for software development, or should we do something home grown?
[ "Like all good answers to architecture and programming questions, the answer is \"it depends\".\nIt depends on how unique your data access and object design needs are. It may also depend on how you plan on supporting your application in the long term. Finally, it greatly depends on the skill level of your developers.\nThere isn't a one-size-fits-all answer to this question, but generally, if your main focus is on cranking out software that provides some business value, pick out an existing framework and run with it. Don't spend your cycles building something that won't immediately drive business profits (i.e. increases revenues and/or decreases costs).\nFor example, one of my organization's projects is core to the operations of the company, needs to be developed and deployed as soon as possible, and will have a long life. For these reasons, we picked CSLA with some help from Enterprise Library. We could have picked other frameworks, but the important thing is that we picked a framework that seemed like it would fit well with our application and our developer skillset and we ran with it.\nIt gave us a good headstart and a community from which we can get support. We immediately started with functionality that provided business value and were not banging our heads against the wall trying to build a framework.\nWe are also in the position where we can hire people in the future who have most likely had exposure to our framework, giving them a really good headstart. This should reduce long-term support costs.\nAre there things we don't use and overhead that we may not need? Perhaps. But, I'll trade that all day long for delivering business value in code early and often.\n", "It really depends on what you need to do. Generally speaking, the bigger the niche is that your company is in, the better chance that you'll find a framework to properly support you. For smaller niches, you'll more than likely need to roll your own.\nThe company I work for has several apps all geared twoards estimating the building materials for given buildings. Since this is a pretty specific thing, and we have about 8 apps that are similar, we decided to roll our own and bring in 3rd party libraries when necessary (No sense re-inventing the wheel for some of the stuff)\nYour millage may vary of course.\n" ]
[ 3, 1 ]
[]
[]
[ ".net", "enterprise_library", "frameworks" ]
stackoverflow_0000053065_.net_enterprise_library_frameworks.txt
Q: Mediawiki custom tag Stops page parsing I created a few mediawiki custom tags, using the guide found here http://www.mediawiki.org/wiki/Manual:Tag_extensions I will post my code below, but the problem is after it hits the first custom tag in the page, it calls it, and prints the response, but does not get anything that comes after it in the wikitext. It seems it just stops parsing the page. Any Ideas? if ( defined( 'MW_SUPPORTS_PARSERFIRSTCALLINIT' ) ) { $wgHooks['ParserFirstCallInit'][] = 'tagregister'; } else { // Otherwise do things the old fashioned way $wgExtensionFunctions[] = 'tagregister'; } function tagregister(){ global $wgParser; $wgParser->setHook('tag1','tag1func'); $wgParser->setHook('tag2','tag2func'); return true; } function tag1func($input,$params) { return "It called me"; } function tag2func($input,$params) { return "It called me -- 2"; } Update: @George Mauer -- I have seen that as well, but this does not stop the page from rendering, just the Mediawiki engine from parsing the rest of the wikitext. It's as if hitting the custom function is signaling mediawiki that processing is done. I am in the process of diving into the rabbit hole but was hoping someone else has seen this behavior. A: Never used Mediawiki but that sort of problem in my experience is indicative of a PHP error that occurred but was suppressed either with the @ operator or because PHP error output to screen is turned off. I hate to resort to this debugging method but when absolutely and utterly frustrated in PHP I will just start putting echo statements every few lines (always with a marker so I remember to remove them later), to figure out exactly where the error is coming from. Eventually, you'll get to the bottom of the rabbit hole and figure out exactly what the problematic line of code is. A: Silly me. Had to close the tags. Instead of <tag1> I had to change it to <tag1 /> or <tag1></tag1> Now it all works!
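To illustrate the accepted fix, here is the wikitext difference (the tag names match the extension code above; the surrounding text is invented):

Stalls the parser after the first hit:
<tag1>
Everything after this never gets parsed...

Parses the whole page:
<tag1 /> or <tag1></tag1>
Everything after this renders normally, including <tag2 />.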
Mediawiki custom tag Stops page parsing
I created a few mediawiki custom tags, using the guide found here http://www.mediawiki.org/wiki/Manual:Tag_extensions I will post my code below, but the problem is after it hits the first custom tag in the page, it calls it, and prints the response, but does not get anything that comes after it in the wikitext. It seems it just stops parsing the page. Any Ideas? if ( defined( 'MW_SUPPORTS_PARSERFIRSTCALLINIT' ) ) { $wgHooks['ParserFirstCallInit'][] = 'tagregister'; } else { // Otherwise do things the old fashioned way $wgExtensionFunctions[] = 'tagregister'; } function tagregister(){ global $wgParser; $wgParser->setHook('tag1','tag1func'); $wgParser->setHook('tag2','tag2func'); return true; } function tag1func($input,$params) { return "It called me"; } function tag2func($input,$params) { return "It called me -- 2"; } Update: @George Mauer -- I have seen that as well, but this does not stop the page from rendering, just the Mediawiki engine from parsing the rest of the wikitext. It's as if hitting the custom function is signaling mediawiki that processing is done. I am in the process of diving into the rabbit hole but was hoping someone else has seen this behavior.
[ "Never used Mediawiki but that sort of problem in my experience is indicative of a PHP error that occurred but was suppressed either with the @ operator or because PHP error output to screen is turned off.\nI hate to resort to this debugging method but when absolutely and utterly frustrated in PHP I will just start putting echo statements every few lines (always with a marker so I remember to remove them later), to figure out exactly where the error is coming from. Eventually, you'll get to the bottom of the rabbit hole and figure out exactly what the problematic line of code is.\n", "Silly me. \nHad to close the tags.\nInstead of<tag1> I had to change it to <tag1 /> or <tag1></tag1>\nNow all works!\n" ]
[ 0, 0 ]
[]
[]
[ "mediawiki", "php" ]
stackoverflow_0000049890_mediawiki_php.txt
Q: Is there an ASP.NET pagination control (Not MVC)? I've got a search results page that basically consists of a repeater with content in it. What I need is a way to paginate the results. Getting paginated results isn't the problem, what I'm after is a web control that will display a list of the available paged data, preferably by providing the number of results and a page size A: Repeaters don't do this by default. However, GridViews do. Personally, I hate GridViews, so I wrote a Paging/Sorting Repeater control. Basic Steps: Subclass the Repeater Control Add a private PagedDataSource to it Add a public PageSize property Override Control.DataBind Store the Control.DataSource in the PagedDataSource. Bind the Control.DataSource to PagedDataSource Override Control.Render Call Base.Render() Render your paging links. For a walkthrough, you could try this link: https://web.archive.org/web/20210925054103/http://aspnet.4guysfromrolla.com/articles/081804-1.aspx
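A rough sketch of the paging Repeater recipe from the answer, built on the framework's PagedDataSource; the class and property names are illustrative:

using System.Collections;
using System.Web.UI.WebControls;

public class PagedRepeater : Repeater
{
    private readonly PagedDataSource paged = new PagedDataSource();
    private int pageSize = 10;
    private int pageIndex;

    public int PageSize { get { return pageSize; } set { pageSize = value; } }
    public int CurrentPageIndex { get { return pageIndex; } set { pageIndex = value; } }

    public override void DataBind()
    {
        paged.DataSource = (IEnumerable)DataSource;  // wrap the original source
        paged.AllowPaging = true;
        paged.PageSize = pageSize;
        paged.CurrentPageIndex = pageIndex;
        DataSource = paged;                          // bind the repeater to one page
        base.DataBind();
        // a real control would also override Render and use paged.PageCount
        // to emit the paging links after base.Render(writer)
    }
}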
Is there an ASP.NET pagination control (Not MVC)?
I've got a search results page that basically consists of a repeater with content in it. What I need is a way to paginate the results. Getting paginated results isn't the problem, what I'm after is a web control that will display a list of the available paged data, preferably by providing the number of results and a page size
[ "Repeaters don't do this by default.\nHowever, GridViews do.\nPersonally, I hate GridViews, so I wrote a Paging/Sorting Repeater control.\nBasic Steps:\n\nSubclass the Repeater Control\nAdd a private PagedDataSource to it\nAdd a public PageSize property\nOverride Control.DataBind\n\n\nStore the Control.DataSource in the PagedDataSource.\nBind the Control.DataSource to PagedDataSource\n\nOverride Control.Render\n\n\nCall Base.Render()\nRender your paging links.\n\n\nFor a walkthrough, you could try this link:\nhttps://web.archive.org/web/20210925054103/http://aspnet.4guysfromrolla.com/articles/081804-1.aspx\n" ]
[ 7 ]
[]
[]
[ "asp.net", "pagination" ]
stackoverflow_0000053220_asp.net_pagination.txt
Q: Getting international characters from a web page? I want to scrape some information off a football (soccer) web page using simple python regexp's. The problem is that players such as the first chap, ÄÄRITALO, come out as &#196;&#196;RITALO! That is, html uses escaped markup for the special characters, such as &#196; Is there a simple way of reading the html into the correct python string? If it was XML/XHTML it would be easy, the parser would do it. A: I would recommend BeautifulSoup for HTML scraping. You also need to tell it to convert HTML entities to the corresponding Unicode characters, like so: >>> from BeautifulSoup import BeautifulSoup >>> html = "<html>&#196;&#196;RITALO!</html>" >>> soup = BeautifulSoup(html, convertEntities=BeautifulSoup.HTML_ENTITIES) >>> print soup.contents[0].string ÄÄRITALO! (It would be nice if the standard codecs module included a codec for this, such that you could do "some_string".decode('html_entities') but unfortunately it doesn't!) EDIT: Another solution: Python developer Fredrik Lundh (author of elementtree, among other things) has a function to unescape HTML entities on his website, which works with decimal, hex and named entities (BeautifulSoup will not work with the hex ones). A: Try using BeautifulSoup. It should do the trick and give you a nicely formatted DOM to work with as well. This blog entry seems to have had some success with it. A: I haven't tried it myself, but have you tried http://zesty.ca/python/scrape.html ? It seems to have a method htmldecode(text) which would do what you want.
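If pulling in BeautifulSoup is overkill, a small Python 2 sketch that decodes decimal and named entities by hand (hex entities are left out for brevity):

import re, htmlentitydefs

def decode_entities(text):
    # text should be a unicode string, or plain ASCII apart from the entities
    def repl(m):
        if m.group(1):                                      # numeric, e.g. &#196;
            return unichr(int(m.group(1)))
        cp = htmlentitydefs.name2codepoint.get(m.group(2))  # named, e.g. &Auml;
        return unichr(cp) if cp else m.group(0)
    return re.sub(r'&#(\d+);|&(\w+);', repl, text)

print decode_entities(u"&#196;&#196;RITALO!")   # prints ÄÄRITALO!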
Getting international characters from a web page?
I want to scrape some information off a football (soccer) web page using simple python regexp's. The problem is that players such as the first chap, ÄÄRITALO, come out as &#196;&#196;RITALO! That is, html uses escaped markup for the special characters, such as &#196; Is there a simple way of reading the html into the correct python string? If it was XML/XHTML it would be easy, the parser would do it.
[ "I would recommend BeautifulSoup for HTML scraping. You also need to tell it to convert HTML entities to the corresponding Unicode characters, like so:\n>>> from BeautifulSoup import BeautifulSoup \n>>> html = \"<html>&#196;&#196;RITALO!</html>\"\n>>> soup = BeautifulSoup(html, convertEntities=BeautifulSoup.HTML_ENTITIES)\n>>> print soup.contents[0].string\nÄÄRITALO!\n\n(It would be nice if the standard codecs module included a codec for this, such that you could do \"some_string\".decode('html_entities') but unfortunately it doesn't!)\nEDIT:\nAnother solution:\nPython developer Fredrik Lundh (author of elementtree, among other things) has a function to unsecape HTML entities on his website, which works with decimal, hex and named entities (BeautifulSoup will not work with the hex ones).\n", "Try using BeautifulSoup. It should do the trick and give you a nicely formatted DOM to work with as well.\nThis blog entry seems to have had some success with it.\n", "I haven't tried it myself, but have you tried\nhttp://zesty.ca/python/scrape.html ?\nIt seems to have a method htmldecode(text) which would do what you want.\n" ]
[ 7, 2, 0 ]
[]
[]
[ "html", "parsing", "python", "unicode" ]
stackoverflow_0000053224_html_parsing_python_unicode.txt
Q: How to find all database references In trying to figure out this problem (which is still unsolved and I still have no clue what is going on), I am wondering if maybe an external reference to the table in question is causing the problem. For example, a trigger or view or some other such thing. Is there an easy way to find all references to a given database table? Including all views, triggers, constraints, or anything at all, preferably from the command line, and also preferably without a 3rd party tool (we are using db2). A: Wow, I wouldn't have thought it, but there seems to be.. Good ole DB2. I find the publib db2 docs view very very handy by the way: http://publib.boulder.ibm.com/infocenter/db2luw/v8//index.jsp I just found the "SYSCAT.TABDEP" catalog view in it, which seems to contain more or less what you asked for. I suspect for anything not covered there you'll have to trawl through the rest of the syscat tables which are vast. (Unfortunately I can't seem to link you to the exact page on SYSCAT.TABDEP itself, the search facility should lead you to it fairly easily though). Most databases these days have a set of tables which contain data about the layout of your actual schema tables, quite handy for this sort of thing. A: You can write a query to search the information schema views (definition column) to find the table in all views, triggers, procedures, etc. Not sure about FK & indexes though.
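A sketch of the SYSCAT.TABDEP lookup the first answer points at; the schema and table names are placeholders, and the second query shows one of the other catalog views you would trawl through:

SELECT tabschema, tabname, dtype
FROM syscat.tabdep
WHERE bschema = 'MYSCHEMA' AND bname = 'MYTABLE';

-- triggers live in their own catalog view
SELECT trigschema, trigname
FROM syscat.triggers
WHERE tabschema = 'MYSCHEMA' AND tabname = 'MYTABLE';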
How to find all database references
In trying to figure out this problem (which is still unsolved and I still have no clue what is going on), I am wondering if maybe an external reference to the table in question is causing the problem. For example, a trigger or view or some other such thing. Is there an easy way to find all references to a given database table? Including all views, triggers, constraints, or anything at all, preferably from the command line, and also preferably without a 3rd party tool (we are using db2).
[ "Wow, I wouldn't have thought it, but there seems to be.. Good ole DB2.\nI find the publib db2 docs view very very handy by the way:\nhttp://publib.boulder.ibm.com/infocenter/db2luw/v8//index.jsp\nI just found the \"SYSCAT.TABDEP\" catalog view in it, which seems to contain more or less what you asked for. I suspect for anything not covered there you'll have to trawl through the rest of the syscat tables which are vast. (Unfortunately I can't seem to link you to the exact page on SYSCAT.TABDEP itself, the search facility should lead you to it fairly easily though).\nMost databases these days have a set of tables which contain data about the layout of your actual schema tables, quite handy for this sort of thing.\n", "You can write a query search the information schema views (definition column) to find the table in all views, triggers, procedure, etc. Not sure about FK & indexes though. \n" ]
[ 3, 0 ]
[]
[]
[ "database", "db2" ]
stackoverflow_0000053136_database_db2.txt
Q: How could I get my SVN-only host to pull from a git repository? I'd really like to get our host to pull from our Git repository instead of uploading files manually, but it doesn't have Git installed. So is there a way to trick Subversion (which they do have) into checking out a Git repository? I think I already know the answer, namely bug my host to add Git and live with it until they do, but I thought I would ask anyway. A: This page should provide a workaround for your problem. http://code.google.com/p/support/wiki/ImportingFromGit Basically, you create a read-only clone of your Git repository in the SVN repository format, exporting updates as you go. An SVN hook could be written that fires after each update to copy the new files where you need them.
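A rough sketch of that read-only-clone idea from the command line; the URL and paths are invented, and note this naive sync does not propagate files deleted in git:

# one-time: check out the host's repository next to the git clone
svn checkout http://svn.example-host.com/site svn-wc

# each deploy: export the current git tree into the SVN working copy
git --git-dir=site.git archive master | tar -x -C svn-wc
cd svn-wc
svn add --force .                     # pick up any new files
svn commit -m "Sync from git master"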
How could I get my SVN-only host to pull from a git repository?
I'd really like to get our host to pull from our Git repository instead of uploading files manually, but it doesn't have Git installed. So is there a way to trick Subversion (which they do have) into checking out a Git repository? I think I already know the answer, namely bug my host to add Git and live with it until they do, but I thought I would ask anyway.
[ "This page should provide a workaround for your problem. \nhttp://code.google.com/p/support/wiki/ImportingFromGit\nBasically, you create a read-only clone of your Git repository in the SVN repository format, exporting updates as you go. An SVN hook could be written that fires after each update to copy the new files where you need them.\n" ]
[ 3 ]
[]
[]
[ "build_automation", "capistrano", "git", "svn" ]
stackoverflow_0000053290_build_automation_capistrano_git_svn.txt
Q: Retaining HTTP POST data when a request is interrupted by a login page Say a user is browsing a website, and then performs some action which changes the database (let's say they add a comment). When the request to actually add the comment comes in, however, we find we need to force them to login before they can continue. Assume the login page asks for a username and password, and redirects the user back to the URL they were going to when the login was required. That redirect works fine for a URL with only GET parameters, but if the request originally contained some HTTP POST data, that is now lost. Can anyone recommend a way to handle this scenario when HTTP POST data is involved? Obviously, if necessary, the login page could dynamically generate a form with all the POST parameters to pass them along (though that seems messy), but even then, I don't know of any way for the login page to redirect the user on to their intended page while keeping the POST data in the request. Edit: One extra constraint I should have made clear - Imagine we don't know if a login will be required until the user submits their comment. For example, their cookie might have expired between when they loaded the form and actually submitted the comment. A: This is one good place where Ajax techniques might be helpful. When the user clicks the submit button, show the login dialog on client side and validate with the server before you actually submit the page. Another way I can think of is showing or hiding the login controls in a DIV tag dynamically in the main page itself. A: You might want to investigate why Django removed this feature before implementing it yourself. It doesn't seem like a Django specific problem, but rather yet another cross site forgery attack. A: Just store all the necessary data from the POST in the session until after the login process is completed. Or have some sort of temp table in the db to store in and then retrieve it. Obviously this is pseudo-code but: if ( !loggedIn ) { StorePostInSession(); ShowLoginForm(); } if ( postIsStored ) { RetrievePostFromSession(); } Or something along those lines. A: 2 choices: Write out the messy form from the login page, and JavaScript form.submit() it to the page. Have the login page itself POST to the requesting page (with the previous values), and have that page's controller perform the login verification. Roll this into whatever logic you already have for detecting the not logged in user (frameworks vary on how they do this). In pseudo-MVC: CommentController { void AddComment() { if (!Request.User.IsAuthenticated && !AuthenticateUser()) { return; } // add comment to database } bool AuthenticateUser() { if (Request.Form["username"] == "") { // show login page foreach (Key key in Request.Form) { // copy form values ViewData.Form.Add("hidden", key, Request.Form[key]); } ViewData.Form.Action = Request.Url; ShowLoginView(); return false; } else { // validate login return TryLogin(Request.Form["username"], Request.Form["password"]); } } } A: Collect the data on the page they submitted it, and store it in your backend (database?) while they go off through the login sequence, hide a transaction id or similar on the page with the login form. When they're done, return them to the page they asked for by looking it up using the transaction id on the backend, and dump all the data they posted into the form for previewing again, or just run whatever code that page would run. Note that many systems, eg blogs, get around this by having login fields in the same form as the one for posting comments, if the user needs to be logged in to comment and isn't yet. A: I know it says language-agnostic, but why not take advantage of the conventions provided by the server-side language you are using? If it were Java, the data could persist by setting a Request attribute. You would use a controller to process the form, detect the login, and then forward through. If the attributes are set, then just prepopulate the form with that data? Edit: You could also use a Session as pointed out, but I'm pretty sure if you use a forward in Java back to the login page, that the Request attribute will persist.
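To make the store-in-session suggestion concrete, a short ASP.NET-flavored sketch; the session keys and login URL are invented:

// on the comment endpoint, before doing any work
if (!Request.IsAuthenticated)
{
    Session["PendingPost"] = new System.Collections.Specialized.NameValueCollection(Request.Form);
    Session["PendingPostUrl"] = Request.RawUrl;
    Response.Redirect("~/login.aspx");
}

// after a successful login, replay and clear the stash
System.Collections.Specialized.NameValueCollection pending =
    (System.Collections.Specialized.NameValueCollection)Session["PendingPost"];
if (pending != null)
{
    Session.Remove("PendingPost");
    // feed 'pending' back into the original handler, or repopulate the form with it
}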
Retaining HTTP POST data when a request is interrupted by a login page
Say a user is browsing a website, and then performs some action which changes the database (let's say they add a comment). When the request to actually add the comment comes in, however, we find we need to force them to login before they can continue. Assume the login page asks for a username and password, and redirects the user back to the URL they were going to when the login was required. That redirect works fine for a URL with only GET parameters, but if the request originally contained some HTTP POST data, that is now lost. Can anyone recommend a way to handle this scenario when HTTP POST data is involved? Obviously, if necessary, the login page could dynamically generate a form with all the POST parameters to pass them along (though that seems messy), but even then, I don't know of any way for the login page to redirect the user on to their intended page while keeping the POST data in the request. Edit: One extra constraint I should have made clear - Imagine we don't know if a login will be required until the user submits their comment. For example, their cookie might have expired between when they loaded the form and actually submitted the comment.
[ "This is one good place where Ajax techniques might be helpful. When the user clicks the submit button, show the login dialog on client side and validate with the server before you actually submit the page.\nAnother way I can think of is showing or hiding the login controls in a DIV tag dynamically in the main page itself.\n", "You might want to investigate why Django removed this feature before implementing it yourself. It doesn't seem like a Django specific problem, but rather yet another cross site forgery attack.\n", "Just store all the necessary data from the POST in the session until after the login process is completed. Or have some sort of temp table in the db to store in and then retrieve it. Obviously this is pseudo-code but:\nif ( !loggedIn ) {\n StorePostInSession();\n ShowLoginForm();\n}\n\nif ( postIsStored ) {\n RetrievePostFromSession();\n}\n\nOr something along those lines.\n", "2 choices:\n\nWrite out the messy form from the login page, and JavaScript form.submit() it to the page.\nHave the login page itself POST to the requesting page (with the previous values), and have that page's controller perform the login verification. Roll this into whatever logic you already have for detecting the not logged in user (frameworks vary on how they do this). In pseudo-MVC:\n\n\n CommentController {\n void AddComment() {\n if (!Request.User.IsAuthenticated && !AuthenticateUser()) {\n return;\n }\n // add comment to database\n }\n\n bool AuthenticateUser() {\n if (Request.Form[\"username\"] == \"\") {\n // show login page\n foreach (Key key in Request.Form) {\n // copy form values\n ViewData.Form.Add(\"hidden\", key, Request.Form[key]);\n }\n ViewData.Form.Action = Request.Url;\n\n ShowLoginView();\n return false;\n } else {\n // validate login\n return TryLogin(Request.Form[\"username\"], Request.Form[\"password\"]);\n } \n }\n }\n\n", "Collect the data on the page they submitted it, and store it in your backend (database?) while they go off through the login sequence, hide a transaction id or similar on the page with the login form. When they're done, return them to the page they asked for by looking it up using the transaction id on the backend, and dump all the data they posted into the form for previewing again, or just run whatever code that page would run.\nNote that many systems, eg blogs, get around this by having login fields in the same form as the one for posting comments, if the user needs to be logged in to comment and isn't yet.\n", "I know it says language-agnostic, but why not take advantage of the conventions provided by the server-side language you are using? If it were Java, the data could persist by setting a Request attribute. You would use a controller to process the form, detect the login, and then forward through. If the attributes are set, then just prepopulate the form with that data?\nEdit: You could also use a Session as pointed out, but I'm pretty sure if you use a forward in Java back to the login page, that the Request attribute will persist.\n" ]
[ 11, 3, 2, 2, 1, 1 ]
[]
[]
[ "language_agnostic" ]
stackoverflow_0000053260_language_agnostic.txt
Q: Should we stop using Zend WinEnabler? (This question is over 6 years old and probably no longer has any relevance.) Our system uses Zend WinEnabler. Do you use it? Is it obsolete? Should we stop using it? Is it known to cause handle/memory leaks? Here is an (old) introduction to it: PHP Creators Unveil New Product that Makes PHP Truly Viable for Windows Environments A: Since Zend appears to not be selling it anymore and all its functionality is available for free (through FastCGI), I would say so. Look at the Zend Core (installing Zend Core) if you really want to run PHP on Windows. But really, you should be asking yourself why you are running PHP on Windows at all (we used to do it, and the headaches were enormous, especially since nobody else was doing it).
Should we stop using Zend WinEnabler?
(This question is over 6 years old and probably no longer has any relevance.) Our system uses Zend WinEnabler. Do you use it? Is it obsolete? Should we stop using it? Is it known to cause handle/memory leaks? Here is an (old) introduction to it: PHP Creators Unveil New Product that Makes PHP Truly Viable for Windows Environments
[ "Since Zend appears to not be selling it anymore and all its functionality is available for free (through FastCGI), I would say so. Look at the Zend Core (installing Zend Core) if you really want to run PHP on Windows. But really, you should be asking yourself why you are running PHP on Windows at all (we used to do it, and the headaches where enormous, especially since nobody else was doing it).\n" ]
[ 0 ]
[]
[]
[ "php", "zend_server" ]
stackoverflow_0000053253_php_zend_server.txt
Q: Java Web Services API, however I can't run a JVM on my server I'm trying to use some data from a PlanPlusOnline account. They only provide a java web services API. The server for the site where the data will be used does not allow me to install Tomcat (edit: or a JVM for that matter). I'm not going to lie, I am a Java software engineer, and I do some web work on the side. I'm not familiar with web services or servlets, but I was willing to give it a shot. I'd much rather they have JSON access to the data, but as far as I know they don't. Any ideas? EDIT: to clarify. The web service provided by planplusonline is Java based. I am trying to access the data from this web service without using Java. I believe this is possible now, but I need to do more research. Anyone who can help point me in the right direction is appreciated. A: To follow up with jodonnell's comment, a Web service connection can be made in just about any server-side language. It is just that the API example they provided was in Java probably because PlanPlusOnline is written in Java. If you have a URL for the service, and an access key, then all you really need to do is figure out how to traverse the XML returned. If you can't do Java, then I suggest PHP because it could be already installed, and have the proper modules loaded. This link might be helpful: http://www.onlamp.com/pub/a/php/2007/07/26/php-web-services.html A: Are you trying to implement a client to a web service hosted somewhere else? If so, Java's not necessary. You can do web service clients in .NET, PHP, Ruby, or pretty much any modern web technology out there. All you need is a WSDL document to provide metadata about how to invoke the services. A: If I am understanding your question correctly, you only need to connect to an existing web service and not create your own web service. If that is a case, and maybe I am missing something, I do not believe you will need Tomcat at all. If you are using Netbeans you can create a new Desktop or Web application, and then right click the project name. Select New and then other, and select Web Client. Enter the information for where to find the WSDL (usually a URL) and the other required information. Once you added the WebClient create a new class that actually makes your calls to the webservice. If the web service name was PlanPlusOnline then you could have something like: public final class PlanPlusOnlineClient { //instance to this class so that we do not have to reinstantiate it every time private static PlanPlusOnlineClient _instance = new PlanPlusOnlineClient(); //generated class by netbeans with information about the web service private PlanPlusOnlineService service = null; //another generated class by netbeans but this is a property of the service //that contains information about the individual methods available. private PlanPlusOnline port = null; private PlanPlusOnlineClient() { try { service = new PlanPlusOnlineService(); port = service.getPlanPlusOnlinePort(); } catch (MalformedURLException ex) { MessageLog.error(this, ex.getClass().getName(), ex); } } public static PlanPlusOnlineClient getInstance() { return _instance; } public static String getSomethingInteresting(String param) { //this will call one of the actual methods the web //service provides. return port.getSomethingIntersting(param); } } I hope this helps you along your way with this. You should also check out http://www.netbeans.org/kb/60/websvc/ for some more information about Netbeans and web services. I am sure it is similar in other IDEs.
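Building on the PHP suggestion above, a minimal SoapClient sketch; the WSDL URL, method name, and parameter are placeholders that would come from PlanPlusOnline's documentation:

<?php
// requires PHP's soap extension
$client = new SoapClient('https://example.planplusonline.com/service?wsdl');

// list the operations the WSDL actually advertises
var_dump($client->__getFunctions());

// then call one of them by the advertised name
$result = $client->SomeAdvertisedMethod(array('apiKey' => 'YOUR-KEY'));
?>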
Java Web Services API, however I can't run a JVM on my server
I'm trying to use some data from a PlanPlusOnline account. They only provide a java web services API. The server for the site where the data will be used does not allow me to install Tomcat (edit: or a JVM for that matter). I'm not going to lie, I am a Java software engineer, and I do some web work on the side. I'm not familiar with web services or servlets, but I was willing to give it a shot. I'd much rather they have JSON access to the data, but as far as I know they don't. Any ideas? EDIT: to clarify. The web service provided by planplusonline is Java based. I am trying to access the data from this web service without using Java. I believe this is possible now, but I need to do more research. Anyone who can help point me in the right direction is appreciated.
[ "To follow up with jodonnell's comment, a Web service connection can be made in just about any server-side language. It is just that the API example they provided was in Java probably because PlanPlusOnline is written in Java. If you have a URL for the service, and an access key, then all you really need to do is figure out how to traverse the XML returned. If you can't do Java, then I suggest PHP because it could be already installed, and have the proper modules loaded. This link might be helpful:\nhttp://www.onlamp.com/pub/a/php/2007/07/26/php-web-services.html\n", "Are you trying to implement a client to a web service hosted somewhere else? If so, Java's not necessary. You can do web service clients in .NET, PHP, Ruby, or pretty much any modern web technology out there. All you need is a WSDL document to provide metadata about how to invoke the services.\n", "If I am understanding your question correctly, you only need to connect to an existing web service and not create your own web service. If that is a case, and maybe I am missing something, I do not believe you will need Tomcat at all. If you are using Netbeans you can create a new Desktop or Web application, and then right click the project name. Select New and then other, and select Web Client. Enter the information for where to find the WSDL (usually a URL) and the other required information.\nOnce you added the WebClient create a new class that actually makes your calls to the webservice. If the web service name was PlanPlusOnline then you could have something like:\npublic final class PlanPlusOnlineClient\n{\n //instance to this class so that we do not have to reinstantiate it every time\n private static PlanPlusOnlineClient _instance = new PlanPlusOnlineClient();\n\n //generated class by netbeans with information about the web service\n private PlanPlusOnlineService service = null;\n\n //another generated class by netbeans but this is a property of the service\n //that contains information about the individual methods available.\n private PlanPlusOnline port = null;\n\n private PlanPlusOnlineClient()\n {\n try\n {\n service = new PlanPlusOnlineService();\n port = service.getPlanPlusOnlinePort();\n }\n catch (MalformedURLException ex)\n {\n MessageLog.error(this, ex.getClass().getName(), ex);\n }\n }\n\n public static PlanPlusOnlineClient getInstance()\n {\n return _instance;\n }\n\n public static String getSomethingInteresting(String param)\n {\n //this will call one of the actual methods the web \n //service provides.\n return port.getSomethingIntersting(param);\n } \n\n}\n\nI hope this helps you along your way with this. You should also check out http://www.netbeans.org/kb/60/websvc/\nfor some more information about Netbeans and web services. I am sure it is similar in other IDEs.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "java", "json", "web_services" ]
stackoverflow_0000053295_java_json_web_services.txt
Q: How do you create a shortcut to a directory so that it opens in explorer Better yet, how can I make My Computer always open in Explorer as well? I usually make a shortcut to my programming directories on my quick launch bar, but I'd love for them to open in Explorer. A: explorer -d c:\path A: I use explorer /e,c:\path. @harpo explorer -d c:\path does not work for me (WinXP sp3). A: Have you considered the win+e hotkey? It isn't quite what you want, but might be close enough. A: This is a good reference: http://support.microsoft.com/kb/130510 i.e.: explorer /e,%HOMEDRIVE%%HOMEPATH%
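In shortcut form, the Target field would look something like this (the folder path is just an example):

%SystemRoot%\explorer.exe /e,C:\Dev\Projects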
How do you create a shortcut to a directory so that it opens in explorer
Better yet, how can I make My Computer always open in Explorer as well? I usually make a shortcut to my programming directories on my quick launch bar, but I'd love for them to open in Explorer.
[ "explorer -d c:\\path\n", "I use explorer /e,c:\\path. \n@harpo\nexplorer -d c:\\path does not work for me (WinXP sp3).\n", "Have you considered the win+e hotkey? It isn't quite what you want, but might be close enough.\n", "This is a good reference: http://support.microsoft.com/kb/130510\ni.e.:\nexplorer /e,%HOMEDRIVE%%HOMEPATH%\n\n" ]
[ 5, 4, 3, 3 ]
[]
[]
[ "explorer", "windows" ]
stackoverflow_0000053355_explorer_windows.txt
Q: Where are people getting that rotating loading image? I keep running across this loading image http://georgia.ubuntuforums.com/images/misc/lightbox_progress.gif which seems to have entered into existence in the last 18 months. All of a sudden it is in every application and is on every web site. Not wanting to be left out, is there somewhere I can get this logo, perhaps with a transparent background? Also where did it come from? A: You can get many different AJAX loading animations in any colour you want here: ajaxload.info A: I believe the animation came from the Mac OS X loading screen. Here's a similar one with a transparent background: http://homepage.mac.com/xraydoc/.Pictures/spinner.gif A: I think it's just a general extension to the normal clock-face style loading icon. The Firefox throbber is the first example of that style that I remember coming across; the only real difference between that and the current trend of straight lines is that the constituent symbols have been stretched to give a crisper look, moving back to more of a many-handed clock emblem.
Where are people getting that rotating loading image?
I keep running across this loading image http://georgia.ubuntuforums.com/images/misc/lightbox_progress.gif which seems to have entered into existence in the last 18 months. All of a sudden it is in every application and is on every web site. Not wanting to be left out, is there somewhere I can get this logo, perhaps with a transparent background? Also where did it come from?
[ "You can get many different AJAX loading animations in any colour you want here: ajaxload.info\n", "I believe the animation came from the Mac OS X loading screen. Here's a similar one with a transparent background:\nalt text http://homepage.mac.com/xraydoc/.Pictures/spinner.gif\n", "I think it's just a general extension to the normal clock-face style loading icon. The Firefox throbber is the first example of that style that I remember coming across; the only real difference between that and the current trend of straight lines is that the constituent symbols have been stretched to give a crisper look, moving back to more of a many-handed clock emblem.\n" ]
[ 14, 5, 0 ]
[]
[]
[ "ajax", "animation", "image" ]
stackoverflow_0000053370_ajax_animation_image.txt
Q: Querying collections of value type in the Criteria API in Hibernate In my database, I have an entity table (let's call it Entity). Each entity can have a number of entity types, and the set of entity types is static. Therefore, there is a connecting table that contains rows of the entity id and the name of the entity type. In my code, EntityType is an enum, and Entity is a Hibernate-mapped class. In the Entity code, the mapping looks like this: @CollectionOfElements @JoinTable( name = "ENTITY-ENTITY-TYPE", joinColumns = @JoinColumn(name = "ENTITY-ID") ) @Column(name="ENTITY-TYPE") public Set<EntityType> getEntityTypes() { return entityTypes; } Oh, did I mention I'm using annotations? Now, what I'd like to do is create an HQL query or search using a Criteria for all Entity objects of a specific entity type. This page in the Hibernate forum says this is impossible, but then this page is 18 months old. Can anyone tell me if this feature has been implemented in one of the latest releases of Hibernate, or planned for the coming release? A: HQL: select entity from Entity entity where :type = some elements(entity.types) I think that you can also write it like: select entity from Entity entity where :type in(entity.types) A: Is your relationship bidirectional, i.e., does EntityType have an Entity property? If so, you can probably do something like entity.Name from EntityType where name = ?
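For reference, the HQL from the first answer would be issued from Java roughly like this; EntityType.SOME_TYPE is a stand-in for one of the question's enum values, and whether the Criteria API can express the same thing is exactly the open question here:

List results = session
    .createQuery("select e from Entity e where :type in elements(e.entityTypes)")
    .setParameter("type", EntityType.SOME_TYPE)
    .list();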
Querying collections of value type in the Criteria API in Hibernate
In my database, I have an entity table (let's call it Entity). Each entity can have a number of entity types, and the set of entity types is static. Therefore, there is a connecting table that contains rows of the entity id and the name of the entity type. In my code, EntityType is an enum, and Entity is a Hibernate-mapped class. In the Entity code, the mapping looks like this: @CollectionOfElements @JoinTable( name = "ENTITY-ENTITY-TYPE", joinColumns = @JoinColumn(name = "ENTITY-ID") ) @Column(name="ENTITY-TYPE") public Set<EntityType> getEntityTypes() { return entityTypes; } Oh, did I mention I'm using annotations? Now, what I'd like to do is create an HQL query or search using a Criteria for all Entity objects of a specific entity type. This page in the Hibernate forum says this is impossible, but then this page is 18 months old. Can anyone tell me if this feature has been implemented in one of the latest releases of Hibernate, or planned for the coming release?
[ "HQL:\nselect entity from Entity entity where :type = some elements(entity.types)\n\nI think that you can also write it like:\nselect entity from Entity entity where :type in(entity.types)\n\n", "Is your relationship bidirectional, i.e., does EntityType have an Entity property? If so, you can probably do something like entity.Name from EntityType where name = ?\n" ]
[ 1, 0 ]
[]
[]
[ "enums", "hibernate", "hql" ]
stackoverflow_0000049334_enums_hibernate_hql.txt
Q: Examples for coding against the PayPal API in .NET 2.0+? Can anyone point me to a good introduction to coding against the paypal API? A: Found this article by Rick Strahl recently http://www.west-wind.com/presentations/PayPalIntegration/PayPalIntegration.asp. Have not implemented anything from it yet; Rick has quite a few articles around the web on ecommerce in aspnet, and he seems to show up every time I'm searching for it. A: I would suggest you start by downloading the SDK: https://www.paypal.com/IntegrationCenter/ic_sdk-resource.html The SDK includes the following: Client libraries that call PayPal APIs API documentation for SDK components Sample code for Website Payments Pro and various administrative APIs Testing console that can verify connectivity to PayPal and submit API calls You may also want to take a look at Encore Systems .NET* Class Library for PayPal SOAP API A: I don't know what your needs are, but you might want to consider Google Checkout. Joe Audette was having considerable difficulty integrating PayPal. I've used Google Checkout and have had great success. Note that you can go much, MUCH deeper with Google Checkout than the sample linked above. EDIT: I didn't see Joe's updates. Looks like he did eventually get it working.
Examples for coding against the PayPal API in .NET 2.0+?
Can anyone point me to a good introduction to coding against the paypal API?
[ "Found this article by Rick Strahl recently http://www.west-wind.com/presentations/PayPalIntegration/PayPalIntegration.asp. \nHave not implemeted anything from it yet, Rick has quite a few articles around the web on ecommerce in aspnet, and he seems to show up everytime I'm searching for it.\n", "I would suggest you start by downloading the SDK:\nhttps://www.paypal.com/IntegrationCenter/ic_sdk-resource.html\nThe SDK includes the following:\n\nClient libraries that call PayPal APIs\nAPI documentation for SDK components\nSample code for Website Payments Pro and various administrative APIs\nTesting console that can verify connectivity to PayPal and submit API calls\n\nYou may also want to take a look at Encore Systems .NET* Class Library for PayPal SOAP API\n", "I don't know what your needs are, but you might want to consider Google Checkout. Joe Audette was having considerable difficulty integrating PayPal.\nI've used Google Checkout and have had great success. Note that you can go much, MUCH deeper with Google Checkout than the sample linked above.\n\nEDIT: I didn't see Joe's updates. Look like he did eventually get it working.\n" ]
[ 5, 2, 0 ]
[]
[]
[ ".net", "asp.net", "paypal" ]
stackoverflow_0000053070_.net_asp.net_paypal.txt
Q: When building a Handler, should it be .ashx or .axd? Say I'm building an ASP.Net class that inherits from IHttpHandler, should I wire this up to a URL ending in .ashx, or should I use the .axd extension? Does it matter as long as there's no naming conflict? A: Ahh.. ScottGu says it doesn't matter, but .ashx is slightly better because there's less chance of a conflict with things like trace.axd and others. That's why the flag went up in my head that .ashx might be better. http://forums.asp.net/t/964074.aspx A: Out in "the wild", .ashx are definitely the most popular extension.
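For reference, a bare-bones handler and the web.config wiring for a custom path; the names here are invented:

public class HelloHandler : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello from hello.ashx");
    }

    public bool IsReusable { get { return true; } }
}

<!-- web.config (inside system.web); unnecessary for a plain .ashx file,
     which registers itself via its WebHandler directive -->
<httpHandlers>
  <add verb="*" path="hello.ashx" type="MyApp.HelloHandler, MyApp" />
</httpHandlers>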
When building a Handler, should it be .ashx or .axd?
Say I'm building an ASP.Net class that inherits from IHttpHandler, should I wire this up to a URL ending in .ashx, or should I use the .axd extension? Does it matter as long as there's no naming conflict?
[ "Ahh.. ScottGu says it doesn't matter, but .ashx is slightly better because there's less chance of a conflict with things like trace.axd and others. That's why the flag went up in my head that .ashx might be better.\nhttp://forums.asp.net/t/964074.aspx\n", "Out in \"the wild\", .ashx are definitely the most popular extension.\n" ]
[ 3, 0 ]
[]
[]
[ "asp.net" ]
stackoverflow_0000053450_asp.net.txt
Q: Is it possible to unit test a class that makes P/Invoke calls? I want to wrap a piece of code that uses the Windows Impersonation API into a neat little helper class, and as usual, I'm looking for a way to go test-first. However, while WindowsIdentity is a managed class, the LogonUser call that is required to actually perform the logging in as another user is an unmanaged function in advapi32.dll. I think I can work around this by introducing an interface for my helper class to use and hiding the P/Invoke calls in an implementation, but testing that implementation will still be a problem. And you can imagine actually performing the impersonation in the test can be a bit problematic, given that the user would actually need to exist on the system. A: Guideline: Don't test code that you haven't written. You shouldn't be concerned with WinAPI implementation not working (most probably it works as expected). Your concern should be testing the 'Wiring' i.e. if your code makes the right WinAPI call. In which case, all you need is to mock out the interface and let the mock framework tell you if the call was made with the right params. If yes, you're done. Create IWinAPIFacade (with relevant WinAPI methods) and implementation CWinAPIFacade. Write a test which plugs in a mock of IWinAPIFacade and verify that the appropriate call is made Write a test to ensure that CWinAPIFacade is created and plugged in as a default (in normal functioning) Implement CWinAPIFacade which simply blind-delegates to Platform Invoke calls - no need to auto-test this layer. Just do a manual verification. Hopefully this won't change that often and nothing breaks. If you find that it does in the future, barricade it with some tests. A: I am not sure if I follow you.. You don't want to test the PInvoke yourself (you didn't write it) so you want to test that the wrapper class is performing as expected right? So, just create your interface in the wrapper class and test against that? In terms of needing to set up users etc, I think that would be a bullet you need to bite. It would seem odd to mock a wrapper PInvoke call, since you would simply just confirm an interface exists :)
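A sketch of the interface-plus-blind-delegate layout both answers converge on; the interface name is invented, while the P/Invoke declaration follows the standard advapi32 LogonUser signature:

using System;
using System.Runtime.InteropServices;

public interface ILogonApi
{
    bool LogonUser(string user, string domain, string password, out IntPtr token);
}

// thin, untested blind delegate - verified manually, not unit tested
public class Win32LogonApi : ILogonApi
{
    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Auto)]
    private static extern bool LogonUser(string user, string domain, string password,
        int logonType, int logonProvider, out IntPtr token);

    public bool LogonUser(string user, string domain, string password, out IntPtr token)
    {
        return LogonUser(user, domain, password,
            2 /* LOGON32_LOGON_INTERACTIVE */, 0 /* LOGON32_PROVIDER_DEFAULT */,
            out token);
    }
}

// the helper class takes an ILogonApi, so a test hands it a mock and simply
// asserts that LogonUser was called with the expected arguments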
Is it possible to unit test a class that makes P/Invoke calls?
I want to wrap a piece of code that uses the Windows Impersonation API into a neat little helper class, and as usual, I'm looking for a way to go test-first. However, while WindowsIdentity is a managed class, the LogonUser call that is required to actually perform the logging in as another user is an unmanaged function in advapi32.dll. I think I can work around this by introducing an interface for my helper class to use and hiding the P/Invoke calls in an implementation, but testing that implementation will still be a problem. And you can imagine actually performing the impersonation in the test can be a bit problematic, given that the user would actually need to exist on the system.
[ "Guideline: Don't test code that you haven't written.\nYou shouldn't be concerned with WinAPI implementation not working (most probably it works as expected). \nYour concern should be testing the 'Wiring' i.e. if your code makes the right WinAPI call. In which case, all you need is to mock out the interface and let the mock framework tell if you the call was made with the right params. If yes, you're done.\n\nCreate IWinAPIFacade (with relevant WinAPI methods) and implementation CWinAPIFacade.\nWrite a test which plugs in a mock of IWinAPIFacade and verify that the appropriate call is made\nWrite a test to ensure that CWinAPIFacade is created and plugged in as a default (in normal functioning)\nImplement CWinAPIFacade which simply blind-delegates to Platform Invoke calls - no need to auto-test this layer. Just do a manual verification. Hopefully this won't change that often and nothing breaks. If you find that it does in the future, barricade it with some tests.\n\n", "I am not sure if I follow you.. You don't want to test the PInvoke yourself (you didn't write it) so you want to test that the wrapper class is performing as expected right?\nSo, just create your interface in the wrapper class and test against that?\nIn terms of needing to set up users etc, I think that would be a bullet you need to bite. It would seem odd to mock a wrapper PInvoke call, since you would simply just confirm and interface exists :)\n" ]
[ 12, 0 ]
[]
[]
[ "c#", "impersonation", "unit_testing", "unmanaged" ]
stackoverflow_0000053439_c#_impersonation_unit_testing_unmanaged.txt
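One way to realize the facade approach described above is sketched below. This is a hedged illustration with made-up names (ILogonApi, Win32LogonApi, ImpersonationHelper); only the advapi32.dll signature itself comes from the Windows API:

using System;
using System.Runtime.InteropServices;

// Thin interface over the Win32 call so the helper can be tested with a fake.
public interface ILogonApi
{
    bool LogonUser(string user, string domain, string password,
                   int logonType, int logonProvider, out IntPtr token);
}

// Production implementation: blind delegation to P/Invoke, verified manually.
public class Win32LogonApi : ILogonApi
{
    [DllImport("advapi32.dll", EntryPoint = "LogonUser",
               SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern bool NativeLogonUser(
        string lpszUsername, string lpszDomain, string lpszPassword,
        int dwLogonType, int dwLogonProvider, out IntPtr phToken);

    public bool LogonUser(string user, string domain, string password,
                          int logonType, int logonProvider, out IntPtr token)
    {
        return NativeLogonUser(user, domain, password, logonType, logonProvider, out token);
    }
}

// The class under test only ever sees the interface.
public class ImpersonationHelper
{
    private readonly ILogonApi api;
    public ImpersonationHelper(ILogonApi api) { this.api = api; }

    public bool TryLogon(string user, string domain, string password, out IntPtr token)
    {
        const int LOGON32_LOGON_INTERACTIVE = 2;
        const int LOGON32_PROVIDER_DEFAULT = 0;
        return api.LogonUser(user, domain, password,
                             LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token);
    }
}

A unit test then hands ImpersonationHelper a hand-rolled fake (or a mock-framework stub) that records its arguments, so the wiring is asserted without touching advapi32.dll or a real user account.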
Q: How do I enable external access to MySQL Server? How do I enable external access to MySQL Server? I can connect locally but I cannot connect from another box on the network. I just tried grant all privileges on *.* to root@'%' identified by '*****' with grant option; And restarted MySQL Server with no success. A: You probably have to edit the configuration file (usually my.cnf) to listen on the external interface instead of on localhost only. Change the bind-address parameter to your machine's IP address. If this is an old MySQL installation, you should comment out the skip-networking parameter. Afterwards, restart MySQL and you'll be set A: Command and syntax look fine. Have you checked the server is listening on an interface other than 127.0.0.1? By default I'm pretty sure it only listens on the localhost address (127.0.0.1)
How do I enable external access to MySQL Server?
How do I enable external access to MySQL Server? I can connect locally but I cannot connect from another box on the network. I just tried grant all privileges on *.* to root@'%' identified by '*****' with grant option; And restarted MySQL Server with no success.
[ "You probably have to edit the configuration file (usually my.cnf) to listen in the external interface instead of on localhost only.\nChange the bind-address parameter to your machine's IP address.\nIf this is an old MySQL installation, you should comment out the skip-networking parameter.\nAfterwards, restart MySQL and you'll be set\n", "Command and syntax looks fine. Have you checked the server is listening on an interface other than 127.0.0.1? By default Im pretty sure it only listens on the localhost address (127.0.0.1)\n" ]
[ 22, 2 ]
[]
[]
[ "mysql" ]
stackoverflow_0000053491_mysql.txt
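Once bind-address and the grant are in place, a quick check from the other box can be scripted; a sketch assuming the MySQL Connector/NET library (MySql.Data) is available and using placeholder host and credentials:

using System;
using MySql.Data.MySqlClient;

class ConnectivityCheck
{
    static void Main()
    {
        // Placeholder server address and credentials - substitute real values.
        string cs = "Server=192.168.1.10;Database=test;Uid=root;Pwd=*****;";
        using (MySqlConnection conn = new MySqlConnection(cs))
        {
            conn.Open(); // throws if bind-address or privileges still block remote access
            Console.WriteLine("Connected, server version: " + conn.ServerVersion);
        }
    }
}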
Q: Getting closest element by id I have two elements: <input a> <input b onclick="..."> When b is clicked, I want to access a and manipulate some of its data. A does not have a globally unique name, so document.getElementsByName is out. Looking into the event object, I thought event.target.parentNode would have some function like getElementsByName, but this does not seem to be the case with <td>s. Is there any simple way to do this? A: If a and b are next to each other and have the same parent, you can use the previousSibling property of b to find a. A: You should be able to find the element that was clicked from the event object. Depending on your browser you might want e.target or e.srcElement. The code below is from this w3schools example: function whichElement(e) { var targ; if (!e) var e = window.event; if (e.target) { targ=e.target; } else if (e.srcElement) { targ = e.srcElement; } if (targ.nodeType==3) { // defeat Safari bug targ = targ.parentNode; } var tname; tname = targ.tagName; alert("You clicked on a " + tname + " element."); } You may then use the nextSibling and previousSibling DOM traversal functions. Some more information here. And yet again a w3schools reference for XML DOM Nodes. A: Prototype also has nice functions to move around in the DOM. In your example something like the following would do the trick: b.up().down('a') And if there is more than one a element at that level you have the power of CSS selectors at your hand to specify exactly which element you want A: Leave your plain vanilla JavaScript behind. Get JQuery--it will save you a ton of time. http://docs.jquery.com/Selectors
Getting closest element by id
I have two elements: <input a> <input b onclick="..."> When b is clicked, I want to access a and manipulate some of its data. A does not have a globally unique name, so document.getElementsByName is out. Looking into the event object, I thought event.target.parentNode would have some function like getElementsByName, but this does not seem to be the case with <td>s. Is there any simple way to do this?
[ "If a and b are next to each other and have the same parent, you can use the prevSibling property of b to find a.\n", "\nYou should be able to find the element that was clicked from the event object. Depending on your browser you might want e.target or e.srcElement. The code below is from this w3schools example:\nfunction whichElement(e) {\n var targ;\n if (!e) var e = window.event;\n if (e.target) {\n targ=e.target;\n } else if (e.srcElement) {\n targ = e.srcElement;\n }\n\n if (targ.nodeType==3) { // defeat Safari bug \n targ = targ.parentNode;\n }\n\n var tname;\n tname = targ.tagName;\n alert(\"You clicked on a \" + tname + \" element.\");\n}\n\nYou may then use the nextSibling and prevSibling DOM traversal functions. Some more information here. And yet again a w3schools reference for XML DOM Nodes.\n\n", "Prototype also has nice functions to move around in the DOM. In your example something like the following would do the trick:\nb.up().down('a')\n\nAnd if there are is more than one a element at that level you have the power of CSS selectors at your hand to specify exactly which element you want\n", "Leave your plain vanilla JavaScript behind. Get JQuery--it will save you a ton of time.\nhttp://docs.jquery.com/Selectors\n" ]
[ 5, 2, 1, 0 ]
[]
[]
[ "dom", "html", "javascript" ]
stackoverflow_0000053256_dom_html_javascript.txt
Q: Can you bind a DataTrigger to an Attached Property? In WPF, is it possible for a DataTrigger to bind to an attached property? I essentially want to use a converter on an attached property to provide a style when a particular validation rule has been broken. I am using markup like the following: <DataTrigger Binding="{Binding Path=Validation.Errors, RelativeSource={RelativeSource Self}, Converter={StaticResource RequiredToBoolConverter}}" Value="True"> <Setter Property="Background" Value="LightGreen" /> </DataTrigger> However, when this runs, I get the following: System.Windows.Data Error: 39 : BindingExpression path error: 'Validation' property not found on 'object' ''TextBox' (Name='')'. BindingExpression:Path=Validation.Errors; DataItem='TextBox' (Name=''); target element is 'TextBox' (Name=''); target property is 'NoTarget' (type 'Object') If I change my DataTrigger binding path to "Text", I do not get the databinding error (but of course it does not provide the behaviour I am seeking). A: You need to wrap the property in parentheses: <DataTrigger Binding="{Binding Path=(Validation.Errors).YourAttachedProperty,...
Can you bind a DataTrigger to an Attached Property?
In WPF, is it possible for a DataTrigger to bind to an attached property? I essentially want to use a converter on an attached property to provide a style when a particular validation rule has been broken. I am using markup like the following: <DataTrigger Binding="{Binding Path=Validation.Errors, RelativeSource={RelativeSource Self}, Converter={StaticResource RequiredToBoolConverter}}" Value="True"> <Setter Property="Background" Value="LightGreen" /> </DataTrigger> However, when this runs, I get the following: System.Windows.Data Error: 39 : BindingExpression path error: 'Validation' property not found on 'object' ''TextBox' (Name='')'. BindingExpression:Path=Validation.Errors; DataItem='TextBox' (Name=''); target element is 'TextBox' (Name=''); target property is 'NoTarget' (type 'Object') If I change my DataTrigger binding path to "Text", I do not get the databinding error (but of course it does not provide the behaviour I am seeking).
[ "You need to wrap the property in parentheses:\n<DataTrigger Binding=\"{Binding Path=(Validation.Errors).YourAttachedProperty,...\n\n" ]
[ 31 ]
[]
[]
[ "binding", "datatrigger", "wpf" ]
stackoverflow_0000053301_binding_datatrigger_wpf.txt
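For reference, the parenthesized attached-property path has a code-behind counterpart via PropertyPath's path-parameter syntax; a hedged sketch (it binds Validation.HasError rather than the Errors/converter route, purely to keep the example short, and the factory wiring is illustrative):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Media;

public static class ValidationStyleFactory
{
    public static Style Create()
    {
        var binding = new Binding
        {
            RelativeSource = RelativeSource.Self,
            // "(0)" is substituted with pathParameters[0], which is how an
            // attached property is referenced in a path built in code.
            Path = new PropertyPath("(0)", Validation.HasErrorProperty)
        };

        var trigger = new DataTrigger { Binding = binding, Value = true };
        trigger.Setters.Add(new Setter(Control.BackgroundProperty, Brushes.LightGreen));

        var style = new Style(typeof(TextBox));
        style.Triggers.Add(trigger);
        return style;
    }
}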
Q: How would you go about switching a site from Prototype to jQuery I have written a site in Prototype but want to switch to jQuery. Any ideas on how best to make the switch? A: Personally, I like to take things in steps, so I would start by using both, like this: jQuery.noConflict(); // Put all your code in your document ready area jQuery(document).ready(function($){ // Do jQuery stuff using $ $("div").hide(); }); // Use Prototype with $(...), etc. $('someid').hide(); That way you don't have to convert all your old code at once, but can start using jquery on new stuff, and migrate your old Prototype code whenever it's convenient. I don't know the size of your project, so I can't say whether or not this applies to you, but Spolsky had a great article about "The big rewrite" and why it's such a bad idea in Things you should never do, Part 1. It's well worth a read! For more on using jquery with Prototype, see Using jQuery with other libraries in the jquery docs.
How would you go about switching a site from Prototype to jQuery
I have written a site in Prototype but want to switch to jQuery. Any ideas on how best to make the switch?
[ "Personally, I like to take things in steps, so I would start by using both, like this:\njQuery.noConflict();\n\n// Put all your code in your document ready area\njQuery(document).ready(function($){\n // Do jQuery stuff using $\n $(\"div\").hide();\n});\n\n// Use Prototype with $(...), etc.\n$('someid').hide();\n\nThat way you don't have to convert all your old code at once, but can start using jquery on new stuff, and migrate your old Prototype code whenever it's convenient. I don't know the size of your project, so I can't say whether or not this applies to you, but Spolsky had a great article about \"The big rewrite\" and why it's such a bad idea in Things you should never do, Part 1. It's well worth a read!\nFor more on using jquery with Prototype, see Using jQuery with other libraries in the jquery docs.\n" ]
[ 11 ]
[]
[]
[ "javascript", "jquery", "prototypejs" ]
stackoverflow_0000053555_javascript_jquery_prototypejs.txt
Q: Is it possible to determine which process starts my .Net application? I am developing a console application in .Net and I want to change its behavior a little based on whether the application was started from cmd.exe or from explorer.exe. Is it possible? A: Process this_process = Process.GetCurrentProcess(); int parent_pid = 0; using (ManagementObject MgmtObj = new ManagementObject("win32_process.handle='" + this_process.Id.ToString() + "'")) { MgmtObj.Get(); parent_pid = Convert.ToInt32(MgmtObj["ParentProcessId"]); } string parent_process_name = Process.GetProcessById(parent_pid).ProcessName; A: The CreateToolhelp32Snapshot Function has a Process32First method that will allow you to read a PROCESSENTRY32 Structure. The structure has a property that will get you the information you want: th32ParentProcessID - The identifier of the process that created this process (its parent process). This article will help you get started using the ToolHelpSnapshot function: http://www.codeproject.com/KB/cs/IsApplicationRunning.aspx A: One issue with the ToolHelp/ManagementObject approaches is that the parent process could already have exited. The GetStartupInfo Win32 function (use PInvoke if there's no .NET equivalent) fills in a structure that includes the window title. For a Win32 console application "app.exe", this title string is "app" when started from cmd and "c:\full\path\to\app.exe" when started from explorer (or the VS debugger). Of course this is a hack (subject to change in other versions, etc.). #define WIN32_LEAN_AND_MEAN #include <windows.h> int main() { STARTUPINFO si; GetStartupInfo(&si); MessageBox(NULL, si.lpTitle, NULL, MB_OK); return 0; }
Is it possible to determine which process starts my .Net application?
I am developing a console application in .Net and I want to change its behavior a little based on whether the application was started from cmd.exe or from explorer.exe. Is it possible?
[ "Process this_process = Process.GetCurrentProcess();\nint parent_pid = 0;\nusing (ManagementObject MgmtObj = new ManagementObject(\"win32_process.handle='\" + this_process.Id.ToString() + \"'\"))\n{\n MgmtObj.Get();\n parent_pid = Convert.ToInt32(MgmtObj[\"ParentProcessId\"]);\n}\nstring parent_process_name = Process.GetProcessById(parent_pid).ProcessName;\n\n", "The CreateToolhelp32Snapshot Function has a Process32First method that will allow you to read a PROCESSENTRY32 Structure. The structure has a property that will get you the information you want:\n\nth32ParentProcessID - The identifier\n of the process that created this\n process (its parent process).\n\nThis article will help you get started using the ToolHelpSnapshot function:\nhttp://www.codeproject.com/KB/cs/IsApplicationRunning.aspx\n", "One issue with the ToolHelp/ManagementObject approaches is that the parent process could already have exited.\nThe GetStartupInfo Win32 function (use PInvoke if there's no .NET equivalent) fills in a structure that includes the window title. For a Win32 console application \"app.exe\", this title string is \"app\" when started from cmd and \"c:\\full\\path\\to\\app.exe\" when started from explorer (or the VS debugger).\nOf course this is a hack (subject to change in other versions, etc.).\n#define WIN32_LEAN_AND_MEAN\n#include <windows.h>\nint main()\n{\n STARTUPINFO si;\n GetStartupInfo(&si);\n MessageBox(NULL, si.lpTitle, NULL, MB_OK);\n return 0;\n}\n\n" ]
[ 9, 3, 3 ]
[]
[]
[ ".net", "process_management", "windows" ]
stackoverflow_0000053501_.net_process_management_windows.txt
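The WMI answer above wraps naturally into a small helper; a sketch assuming a reference to System.Management and that the parent process is still alive when queried:

using System;
using System.Diagnostics;
using System.Management;

public static class ParentProcess
{
    // Returns the parent's image name, e.g. "cmd" or "explorer".
    public static string GetParentName()
    {
        int pid = Process.GetCurrentProcess().Id;
        using (ManagementObject mo = new ManagementObject("win32_process.handle='" + pid + "'"))
        {
            mo.Get();
            int parentPid = Convert.ToInt32(mo["ParentProcessId"]);
            return Process.GetProcessById(parentPid).ProcessName;
        }
    }

    public static bool StartedFromConsole()
    {
        // Comparison is by image name only, so a renamed shell is missed.
        return string.Equals(GetParentName(), "cmd", StringComparison.OrdinalIgnoreCase);
    }
}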
Q: NMBLookup OS X returning inconsistent results We're trying to get SMB volume listings in our OS X application, and have been using NMBLookup, as suggested by Apple, to get listings. However, more often than not, we're not able to get a full listing of available SMB volumes using the tool. We've got a good benchmark in that we can see the full listing the Apple Finder gets, and the majority of the time, our listing is not matching up, usually missing servers. We've tried a number of ways of executing the command, but haven't yet found anything that brings us back a complete listing. nmblookup -M -- - nmblookup '*' etc Does anyone know what we could be doing wrong, or know of a better way to query for SMB volumes available on local subnets? A: This works fairly well in our network. The point is to use smbclient -L on each of the entries returned by nmblookup: nmblookup -M -- - | grep -v querying | while read sw do echo $sw | awk -F' ' '{print $1}' | xargs smbclient -L done Edit: @paul - now I see what you mean - a vista has just joined our network and the Finder shows it but not nmblookup, but smbclient shows it in the "Server" section. smbclient has a "Server" section where it lists the machines found on the network. The command line I use is: smbclient -L 192.168.0.4 //the IP as returned by nmblookup of the master browser cristi:~ diciu$ smbclient -L 192.168.0.4 Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.0.24-7.fc5] Sharename Type Comment --------- ---- ------- internal Disk some share [..] Anonymous login successful Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.0.24-7.fc5] Server Comment --------- ------- MMM Vista box not showing up in nmblookup
NMBLookup OS X returning inconsistent results
We're trying to get SMB volume listings in our OS X application, and have been using NMBLookup, as suggested by Apple, to get listings. However, more often than not, we're not able to get a full listing of available SMB volumes using the tool. We've got a good benchmark in that we can see the full listing the Apple Finder gets, and the majority of the time, our listing is not matching up, usually missing servers. We've tried a number of ways of executing the command, but haven't yet found anything that brings us back a complete listing. nmblookup -M -- - nmblookup '*' etc Does anyone know what we could be doing wrong, or know of a better way to query for SMB volumes available on local subnets?
[ "This works fairly well in our network. The point is to use smbclient -L on each of the entries returned by nmblookup:\n\nnmblookup -M -- - | grep -v querying | while read sw\ndo\necho $sw | awk -F' ' '{print $1}' | xargs smbclient -L \ndone\n\nEdit:\n@paul - now I see what you mean - a vista has just joined our network and the Finder shows it but not nmblookup, but smbclient shows it in the \"Server\" section.\nsmbclient has a \"Server\" section where it lists the machines found on the network.\nThe command line I use is:\n\nsmbclient -L 192.168.0.4 //the IP as returned by nmblookup of the master browser\n\ncristi:~ diciu$ smbclient -L 192.168.0.4\nDomain=[DOMAIN] OS=[Unix] Server=[Samba 3.0.24-7.fc5]\n Sharename Type Comment\n --------- ---- -------\n internal Disk some share\n[..]\nAnonymous login successful\nDomain=[DOMAIN] OS=[Unix] Server=[Samba 3.0.24-7.fc5]\n\n Server Comment\n --------- -------\n MMM Vista box not showing up in nmblookup\n\n" ]
[ 2 ]
[]
[]
[ "finder", "macos", "smb" ]
stackoverflow_0000053583_finder_macos_smb.txt
Q: How do I compose existing Linq Expressions I want to compose the results of two Linq Expressions. They exist in the form Expression<Func<T, bool>> So the two that I want to compose are essentially delegates on a parameter (of type T) that both return a boolean. The result I would like composed would be the logical evaluation of the booleans. I would probably implement it as an extension method so my syntax would be something like: Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28; Expression<Func<User, bool>> composedExpression = expression1.And(expression2); And later on in my code I want to evaluate the composed expression var user = new User(); bool evaluated = composedExpression.Compile().Invoke(user); I have poked around with a few different ideas but I fear that it is more complex than I had hoped. How is this done? A: Here is an example: var user1 = new User {Name = "steve", Age = 28}; var user2 = new User {Name = "foobar", Age = 28}; Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28; var invokedExpression = Expression.Invoke(expression2, expression1.Parameters.Cast<Expression>()); var result = Expression.Lambda<Func<User, bool>>(Expression.And(expression1.Body, invokedExpression), expression1.Parameters); Console.WriteLine(result.Compile().Invoke(user1)); // true Console.WriteLine(result.Compile().Invoke(user2)); // false You can reuse this code via extension methods: class User { public string Name { get; set; } public int Age { get; set; } } public static class PredicateExtensions { public static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> expression1,Expression<Func<T, bool>> expression2) { InvocationExpression invokedExpression = Expression.Invoke(expression2, expression1.Parameters.Cast<Expression>()); return Expression.Lambda<Func<T, bool>>(Expression.And(expression1.Body, invokedExpression), expression1.Parameters); } } class Program { static void Main(string[] args) { var user1 = new User {Name = "steve", Age = 28}; var user2 = new User {Name = "foobar", Age = 28}; Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28; var result = expression1.And(expression2); Console.WriteLine(result.Compile().Invoke(user1)); Console.WriteLine(result.Compile().Invoke(user2)); } }
How do I compose existing Linq Expressions
I want to compose the results of two Linq Expressions. They exist in the form Expression<Func<T, bool>> So the two that I want to compose are essentially delegates on a parameter (of type T) that both return a boolean. The result I would like composed would be the logical evaluation of the booleans. I would probably implement it as an extension method so my syntax would be something like: Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28; Expression<Func<User, bool>> composedExpression = expression1.And(expression2); And later on in my code I want to evaluate the composed expression var user = new User(); bool evaluated = composedExpression.Compile().Invoke(user); I have poked around with a few different ideas but I fear that it is more complex than I had hoped. How is this done?
[ "Here is an example:\nvar user1 = new User {Name = \"steve\", Age = 28};\nvar user2 = new User {Name = \"foobar\", Age = 28};\n\nExpression<Func<User, bool>> expression1 = t => t.Name == \"steve\";\nExpression<Func<User, bool>> expression2 = t => t.Age == 28;\n\nvar invokedExpression = Expression.Invoke(expression2, expression1.Parameters.Cast<Expression>());\n\nvar result = Expression.Lambda<Func<User, bool>>(Expression.And(expression1.Body, invokedExpression), expression1.Parameters);\n\nConsole.WriteLine(result.Compile().Invoke(user1)); // true\nConsole.WriteLine(result.Compile().Invoke(user2)); // false\n\nYou can reuse this code via extension methods: \nclass User\n{\n public string Name { get; set; }\n public int Age { get; set; }\n}\n\npublic static class PredicateExtensions\n{\n public static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> expression1,Expression<Func<T, bool>> expression2)\n {\n InvocationExpression invokedExpression = Expression.Invoke(expression2, expression1.Parameters.Cast<Expression>());\n\n return Expression.Lambda<Func<T, bool>>(Expression.And(expression1.Body, invokedExpression), expression1.Parameters);\n }\n}\n\nclass Program\n{\n static void Main(string[] args)\n {\n var user1 = new User {Name = \"steve\", Age = 28};\n var user2 = new User {Name = \"foobar\", Age = 28};\n\n Expression<Func<User, bool>> expression1 = t => t.Name == \"steve\";\n Expression<Func<User, bool>> expression2 = t => t.Age == 28;\n\n var result = expression1.And(expression2);\n\n Console.WriteLine(result.Compile().Invoke(user1));\n Console.WriteLine(result.Compile().Invoke(user2));\n }\n}\n\n" ]
[ 20 ]
[]
[]
[ "c#", "linq" ]
stackoverflow_0000053597_c#_linq.txt
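A caveat worth noting: the Expression.Invoke trick above compiles and runs fine in memory, but some query providers (LINQ to SQL, notably) reject InvocationExpression nodes. On frameworks where System.Linq.Expressions.ExpressionVisitor is public (.NET 4 and later), a hedged alternative is to rewrite the second lambda's parameter so both bodies share one parameter; names below are made up:

using System;
using System.Linq.Expressions;

public static class PredicateCompose
{
    private sealed class SwapVisitor : ExpressionVisitor
    {
        private readonly ParameterExpression from;
        private readonly ParameterExpression to;

        public SwapVisitor(ParameterExpression from, ParameterExpression to)
        {
            this.from = from;
            this.to = to;
        }

        protected override Expression VisitParameter(ParameterExpression node)
        {
            // Swap occurrences of the old parameter for the shared one.
            return node == from ? to : base.VisitParameter(node);
        }
    }

    public static Expression<Func<T, bool>> AndAlso<T>(
        this Expression<Func<T, bool>> left, Expression<Func<T, bool>> right)
    {
        var body = new SwapVisitor(right.Parameters[0], left.Parameters[0]).Visit(right.Body);
        return Expression.Lambda<Func<T, bool>>(
            Expression.AndAlso(left.Body, body), left.Parameters[0]);
    }
}

Usage mirrors the original: expression1.AndAlso(expression2).Compile().Invoke(user1), but the resulting tree contains no InvocationExpression.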
Q: Creating an object without knowing the class name at design time Using reflection, I need to investigate a user DLL and create an object of a class in it. What is the simple way of doing it? A: Try Activator.CreateInstance. A: System.Reflection.Assembly is the class you will want to use. It contains many method for iterating over the types contained with a user DLL. You can iterate through each class, perhaps see if it inherits from a particular interface etc. http://msdn.microsoft.com/en-us/library/system.reflection.assembly_members.aspx Investigate Assembly.GetTypes() method for getting the list of types, or Assembly.GetExportedTypes() for the public ones only. A: You can create an instance of a class from a Type object using Activator.CreateInstance, to get all types in a dll you can use Assembly.GetTypes A: Take a look at these links: http://www.java2s.com/Code/CSharp/Development-Class/Createanobjectusingreflection.htm http://msdn.microsoft.com/en-us/library/k3a58006.aspx You basically use reflection to load an assembly, then find a type you're interested in. Once you have the type, you can ask to find it's constructors or other methods / properties. Once you have the constructor, you can invoke it. Easy! A: As it has already been said, you need to poke the System.Reflection namespace. If you know in advance the location/name of the DLL you want to load, you need to iterate through the Assembly.GetTypes(). In Pseudocode it would look something like this: Create and assembly object. Iterate through all the types contained in the assembly. Once you find the one you are looking for, invoke it (CreateInstance)… Use it wisely. ;) I have plenty of Reflection code if you want to take a look around, but the task is really simple and there are at least a dozen of articles with samples out there in the wild. (Aka Google). Despite that, the MSDN is your friend for Reflection Reference.
Creating an object without knowing the class name at design time
Using reflection, I need to investigate a user DLL and create an object of a class in it. What is the simple way of doing it?
[ "Try Activator.CreateInstance. \n", "System.Reflection.Assembly is the class you will want to use. It contains many method for iterating over the types contained with a user DLL. You can iterate through each class, perhaps see if it inherits from a particular interface etc.\nhttp://msdn.microsoft.com/en-us/library/system.reflection.assembly_members.aspx\nInvestigate Assembly.GetTypes() method for getting the list of types, or Assembly.GetExportedTypes() for the public ones only.\n", "You can create an instance of a class from a Type object using Activator.CreateInstance, to get all types in a dll you can use Assembly.GetTypes\n", "Take a look at these links:\nhttp://www.java2s.com/Code/CSharp/Development-Class/Createanobjectusingreflection.htm\nhttp://msdn.microsoft.com/en-us/library/k3a58006.aspx\nYou basically use reflection to load an assembly, then find a type you're interested in. Once you have the type, you can ask to find it's constructors or other methods / properties. Once you have the constructor, you can invoke it. Easy!\n", "As it has already been said, you need to poke the System.Reflection namespace. \nIf you know in advance the location/name of the DLL you want to load, you need to iterate through the Assembly.GetTypes().\nIn Pseudocode it would look something like this:\nCreate and assembly object. \nIterate through all the types contained in the assembly. \nOnce you find the one you are looking for, invoke it (CreateInstance)… \nUse it wisely.\n;)\nI have plenty of Reflection code if you want to take a look around, but the task is really simple and there are at least a dozen of articles with samples out there in the wild. (Aka Google). \nDespite that, the MSDN is your friend for Reflection Reference. \n" ]
[ 14, 3, 1, 1, 1 ]
[]
[]
[ "c#", "reflection" ]
stackoverflow_0000053649_c#_reflection.txt
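Putting the answers together, a sketch of the whole round trip; the DLL path and the IPlugin marker interface are hypothetical stand-ins for whatever criterion identifies the class you want:

using System;
using System.Reflection;

public interface IPlugin // hypothetical contract the user DLL is expected to implement
{
    void Run();
}

public static class PluginLoader
{
    public static IPlugin LoadFirst(string dllPath) // e.g. @"C:\plugins\user.dll"
    {
        Assembly asm = Assembly.LoadFrom(dllPath);
        foreach (Type t in asm.GetTypes())
        {
            // Only concrete classes implementing the marker interface qualify.
            if (typeof(IPlugin).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract)
            {
                return (IPlugin)Activator.CreateInstance(t);
            }
        }
        return null; // nothing suitable found in the assembly
    }
}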
Q: How to efficiently SQL select newest entries from a MySQL database? Possible Duplicate: SQL Query to get latest price I have a database containing stock price history. I want to select the most recent prices for every stock that is listed. I know PostgreSQL has a DISTINCT ON statement that would suit ideally here. Table columns are name, closingPrice and date; name and date together form a unique index. The easiest (and very ineffective) way is SELECT * FROM stockPrices s WHERE s.date = (SELECT MAX(date) FROM stockPrices si WHERE si.name = s.name); A much better approach I found is SELECT * FROM stockPrices s JOIN ( SELECT name, MAX(date) AS date FROM stockPrices si GROUP BY name ) lastEntry ON s.name = lastEntry.name AND s.date = lastEntry.date; What would be an efficient way to do this? What indexes should I create? duplicate of: SQL Query to get latest price
How to efficiently SQL select newest entries from a MySQL database?
Possible Duplicate: SQL Query to get latest price I have a database containing stock price history. I want to select the most recent prices for every stock that is listed. I know PostgreSQL has a DISTINCT ON statement that would suit ideally here. Table columns are name, closingPrice and date; name and date together form a unique index. The easiest (and very ineffective) way is SELECT * FROM stockPrices s WHERE s.date = (SELECT MAX(date) FROM stockPrices si WHERE si.name = s.name); A much better approach I found is SELECT * FROM stockPrices s JOIN ( SELECT name, MAX(date) AS date FROM stockPrices si GROUP BY name ) lastEntry ON s.name = lastEntry.name AND s.date = lastEntry.date; What would be an efficient way to do this? What indexes should I create? duplicate of: SQL Query to get latest price
[ "I think that your second approach is very efficient. What's its problem?\nYou have to add indexes to name and date.\n" ]
[ 0 ]
[]
[]
[ "mysql", "sql" ]
stackoverflow_0000053670_mysql_sql.txt
Q: Centralizing/controlling arbitrary builds of .NET projects and solutions Over the years I have created and tweaked a set of NAnt scripts to perform complete project builds. The main script takes a single application end point (a web application project for example) and does a complete, from source control, build of it. The scripts are preconfigured with the necessary information regarding build output locations, source control addresses, etc. The main point is that you can feed it very little information and build a given project from the ground up. This satisfies the "arbitrary" part of my question. In the past I have worked for companies that produce a few software products (mostly web applications). This environment lends itself very well to a typical continuous integration setup where there is an integrator for each product. I have set up integrators to serve as both CI builds as well as integrators to handle a complete release candidate build and QA deployment. These integrators use the master build scripts, so the integrators themselves are very little more than source control monitoring and a call to the master NAnt script. I now work for a development group that creates many applications. Often, developers are called on to support applications originally built by others. When I started there was no build management in place. I am in a particularly unique position within the group as the lead developer of a 4 person team for one business unit's product suite (around a half dozen complete systems). I have implemented CruiseControl.Net with the master build scripts for doing both CI builds as well as RC builds. This works fine for the fixed set of projects within the business' product suite. I have been using CCNet for many years now so I'm fully aware of what it can do. I have great respect for its contribution to the continuous integration arena as I use it for all the projects in my suite of products. I have stressed to my team the use of the official RC build integrator as the master builder for anything destined for any location other than development. This provides great control over the fixed set of projects that are under CCNet's control. However, there are other developers building other applications. Some of these are 1 developer projects that are often not even in source control until well into the project life cycle (something else I'm trying to change). Many of these projects are one-offs that won't have much of a life in development after they have been deployed. Despite that, they'll still need to be supported. Integral to supporting those is the fact that without centralized build management of these projects the release candidate builds that go to QA and eventually production are left to be done on individual developer machines. This, of course, provides zero guarantee that everything is in source control among the other factors of a developer machine build. The issue I've been trying to solve is: what kind of system can I use to provide centralized control over these sorts of arbitrary builds? This is definitely not a unique problem. However, in much of the reading I have done about centralized builds, build automation and continuous integration the focus is on fixed projects/products and the task of supporting continued development on them. What types of process are used by businesses that are doing development on new projects constantly? Are they not using these types of processes? While the master build scripts do live on the build server, they are clumsy to use. 
Also I'd prefer to limit the console access to the build server. So some management system is required to provide easier access to firing off arbitrary builds on a central system. I realize that what I'm looking for may lie under the covers of MS Team Build. Unfortunately, whenever I start reading about it, I get that quicksand feeling when I start getting into the MS marketing material and quickly lose my way, never really finding out if what I want to do can be done with it. Plus, the licensing costs have been addressed as a likely show stopper in some past general discussions on the topic of Team Foundation Server and Team System. I'm eager to hear from anyone who has solved this problem who might offer suggestions. I have done some work on a centralized build system based around my master "build-any-project" build scripts. However, what I have is in its infancy and has been constructed to support mainly just the types of projects that I work on. It lacks the kind of support required at this point to handle many application types or the plethora of project/solution configurations that are possible with Visual Studio.
Centralizing/controlling arbitrary builds of .NET projects and solutions
Over the years I have created and tweaked a set of NAnt scripts to perform complete project builds. The main script takes a single application end point (a web application project for example) and does a complete, from source control, build of it. The scripts are preconfigured with the necessary information regarding build output locations, source control addresses, etc. The main point is that you can feed it very little information and build a given project from the ground up. This satisfies the "arbitrary" part of my question. In the past I have worked for companies that produce a few software products (mostly web applications). This environment lends itself very well to a typical continuous integration setup where there is an integrator for each product. I have set up integrators to serve as both CI builds as well as integrators to handle a complete release candidate build and QA deployment. These integrators use the master build scripts, so the integrators themselves are very little more than source control monitoring and a call to the master NAnt script. I now work for a development group that creates many applications. Often, developers are called on to support applications originally built by others. When I started there was no build management in place. I am in a particularly unique position within the group as the lead developer of a 4 person team for one business unit's product suite (around a half dozen complete systems). I have implemented CruiseControl.Net with the master build scripts for doing both CI builds as well as RC builds. This works fine for the fixed set of projects within the business' product suite. I have been using CCNet for many years now so I'm fully aware of what it can do. I have great respect for its contribution to the continuous integration arena as I use it for all the projects in my suite of products. I have stressed to my team the use of the official RC build integrator as the master builder for anything destined for any location other than development. This provides great control over the fixed set of projects that are under CCNet's control. However, there are other developers building other applications. Some of these are 1 developer projects that are often not even in source control until well into the project life cycle (something else I'm trying to change). Many of these projects are one-offs that won't have much of a life in development after they have been deployed. Despite that, they'll still need to be supported. Integral to supporting those is the fact that without centralized build management of these projects the release candidate builds that go to QA and eventually production are left to be done on individual developer machines. This, of course, provides zero guarantee that everything is in source control among the other factors of a developer machine build. The issue I've been trying to solve is: what kind of system can I use to provide centralized control over these sorts of arbitrary builds? This is definitely not a unique problem. However, in much of the reading I have done about centralized builds, build automation and continuous integration the focus is on fixed projects/products and the task of supporting continued development on them. What types of process are used by businesses that are doing development on new projects constantly? Are they not using these types of processes? While the master build scripts do live on the build server, they are clumsy to use. Also I'd prefer to limit the console access to the build server. 
So some management system is required to provide easier access to firing off arbitrary builds on a central system. I realize that what I'm looking for may lie under the covers of MS Team Build. Unfortunately, whenever I start reading about it, I get that quicksand feeling when I start getting into the MS marketing material and quickly lose my way, never really finding out if what I want to do can be done with it. Plus, the licensing costs have been addressed as a likely show stopper in some past general discussions on the topic of Team Foundation Server and Team System. I'm eager to hear from anyone who has solved this problem who might offer suggestions. I have done some work on a centralized build system based around my master "build-any-project" build scripts. However, what I have is in its infancy and has been constructed to support mainly just the types of projects that I work on. It lacks the kind of support required at this point to handle many application types or the plethora of project/solution configurations that are possible with Visual Studio.
[ "The biggest problem I see with a central build system is that even with the best will in the world tools will diverge between teams or over time.\nI favour designing any build system for a particular project such that it requires checking out a single module eg. MyProjectBuildEnvironment and then running a single script in a very tool neutral manner e.g. on a windows system build.bat\nWhere possible all tools used by the build environment should be runnable simply by checking out the module MyProjectBuildEnvironmen rather than requiring machine level installers.\nThese two constraints will not impede the freedoms of teams to use the tools that they prefer at a given time.\nThe central build system can then be a simple system that checks out one module per project and simply executes the build.bat file. You could call it a meta build system.\nTo be honest it would probably be overkill as a simple wiki describing the build module name for each project would be enough to allow anyone to checkout that one module and kick it off by the common build.bat command.\nAs a final note the script for starting the build should always examine the environment and tell the user if any tools are missing or any machine configuration needs to be tweaked to successfully complete the build.\n" ]
[ 3 ]
[]
[]
[ ".net", "build_automation", "projects_and_solutions", "visual_studio" ]
stackoverflow_0000053391_.net_build_automation_projects_and_solutions_visual_studio.txt
Q: How to properly link a custom css file in sharepoint I've created a custom list, and made some changes to the way the CQWP renders it on a page by modifying ItemStyle.xsl. However, I'd like to use some custom css classes and therefore I'd like to link to my own custom .css file from the head tag of the pages containing this CQWP. So my question is, where do I put my .css file and how do I link it properly to a page containing the CQWPs. Please bear in mind that I'm making a solution that should be deployed on multiple SharePoint installations. Thanks. A: The microsoft official way is just to copy them into the relevant folders (as seen by downloading their template packs). However, you could also create your own site definition and add the items to the correct libraries and lists in the same way that the master pages are added. If you are going to deploy CSS and Master Pages through features remember you will have to activate the publishing infrastructure on the site collection and the publishing feature on the site. To deploy a master page/page layout as a feature you should follow the steps at the site below, you can use the "fileurl" element to specify your CSS and place it into the correct folder (style library, for example): http://www.sharepointnutsandbolts.com/2007/04/deploying-master-pages-and-page-layouts.html A: Consider uploading them to "Style Library" in the root of the site collection. If you don't have a "Style Library" at the root, consider making one -- it's just a document library. Make sure the permissions are set correctly so everyone who needs to read it can. You can reference them using "/Style%20Library/my.css" but this won't work on site collections that don't live at the root of the domain.
How to properly link a custom css file in sharepoint
I've created a custom list, and made some changes to the way the CQWP renders it on a page by modifying ItemStyle.xsl. However, I'd like to use some custom css classes and therefore I'd like to link to my own custom .css file from the head tag of the pages containing this CQWP. So my question is, where do I put my .css file and how do I link it properly to a page containing the CQWPs. Please bear in mind that I'm making a solution that should be deployed on multiple SharePoint installations. Thanks.
[ "The microsoft official way is just to copy them into the relevant folders (as seen by downloading their template packs). However, you could also create your own site definition and add the items to the correct libraries and lists in the same way that the master pages are added.\nIf you are going to deploy CSS and Master Pages through features remember you will have to activate the publishing infrastructure on the site collection and the publishing feature on the site.\nTo deploy a master page/page layout as a feature you should follow the steps at the site below, you can use the \"fileurl\" element to specify your CSS and place it into the correct folder (style library, for example):\nhttp://www.sharepointnutsandbolts.com/2007/04/deploying-master-pages-and-page-layouts.html\n", "Consider uploading them to \"Style Library\" in the root of the site collection.\nIf you don't have a \"Style Library\" at the root, consider making one -- it's just a document library.\nMake sure the permissions are set correctly so everyone who needs to read it can.\nYou can reference them using \"/Style%20Library/my.css\" but this won't work on site collections that don't live at the root of the domain.\n" ]
[ 3, 0 ]
[]
[]
[ "css", "sharepoint" ]
stackoverflow_0000053610_css_sharepoint.txt
Q: Why are SQL aggregate functions so much slower than Python and Java (or Poor Man's OLAP) I need a real DBA's opinion. Postgres 8.3 takes 200 ms to execute this query on my Macbook Pro while Java and Python perform the same calculation in under 20 ms (350,000 rows): SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples; Is this normal behaviour when using a SQL database? The schema (the table holds responses to a survey): CREATE TABLE tuples (id integer primary key, a integer, b integer, c integer, d integer); \copy tuples from '350,000 responses.csv' delimiter as ',' I wrote some tests in Java and Python for context and they crush SQL (except for pure python): java 1.5 threads ~ 7 ms java 1.5 ~ 10 ms python 2.5 numpy ~ 18 ms python 2.5 ~ 370 ms Even sqlite3 is competitive with Postgres despite it assumping all columns are strings (for contrast: even using just switching to numeric columns instead of integers in Postgres results in 10x slowdown) Tunings i've tried without success include (blindly following some web advice): increased the shared memory available to Postgres to 256MB increased the working memory to 2MB disabled connection and statement logging used a stored procedure via CREATE FUNCTION ... LANGUAGE SQL So my question is, is my experience here normal, and this is what I can expect when using a SQL database? I can understand that ACID must come with costs, but this is kind of crazy in my opinion. I'm not asking for realtime game speed, but since Java can process millions of doubles in under 20 ms, I feel a bit jealous. Is there a better way to do simple OLAP on the cheap (both in terms of money and server complexity)? I've looked into Mondrian and Pig + Hadoop but not super excited about maintaining yet another server application and not sure if they would even help. No the Python code and Java code do all the work in house so to speak. I just generate 4 arrays with 350,000 random values each, then take the average. I don't include the generation in the timings, only the averaging step. The java threads timing uses 4 threads (one per array average), overkill but it's definitely the fastest. The sqlite3 timing is driven by the Python program and is running from disk (not :memory:) I realize Postgres is doing much more behind the scenes, but most of that work doesn't matter to me since this is read only data. The Postgres query doesn't change timing on subsequent runs. I've rerun the Python tests to include spooling it off the disk. The timing slows down considerably to nearly 4 secs. But I'm guessing that Python's file handling code is pretty much in C (though maybe not the csv lib?) so this indicates to me that Postgres isn't streaming from the disk either (or that you are correct and I should bow down before whoever wrote their storage layer!) A: I would say your test scheme is not really useful. To fulfill the db query, the db server goes through several steps: parse the SQL work up a query plan, i. e. decide on which indices to use (if any), optimize etc. if an index is used, search it for the pointers to the actual data, then go to the appropriate location in the data or if no index is used, scan the whole table to determine which rows are needed load the data from disk into a temporary location (hopefully, but not necessarily, memory) perform the count() and avg() calculations So, creating an array in Python and getting the average basically skips all these steps save the last one. 
As disk I/O is among the most expensive operations a program has to perform, this is a major flaw in the test (see also the answers to this question I asked here before). Even if you read the data from disk in your other test, the process is completely different and it's hard to tell how relevant the results are. To obtain more information about where Postgres spends its time, I would suggest the following tests: Compare the execution time of your query to a SELECT without the aggregating functions (i. e. cut step 5) If you find that the aggregation leads to a significant slowdown, try if Python does it faster, obtaining the raw data through the plain SELECT from the comparison. To speed up your query, reduce disk access first. I doubt very much that it's the aggregation that takes the time. There's several ways to do that: Cache data (in memory!) for subsequent access, either via the db engine's own capabilities or with tools like memcached Reduce the size of your stored data Optimize the use of indices. Sometimes this can mean to skip index use altogether (after all, it's disk access, too). For MySQL, I seem to remember that it's recommended to skip indices if you assume that the query fetches more than 10% of all the data in the table. If your query makes good use of indices, I know that for MySQL databases it helps to put indices and data on separate physical disks. However, I don't know whether that's applicable for Postgres. There also might be more sophisticated problems such as swapping rows to disk if for some reason the result set can't be completely processed in memory. But I would leave that kind of research until I run into serious performance problems that I can't find another way to fix, as it requires knowledge about a lot of little under-the-hood details in your process. Update: I just realized that you seem to have no use for indices for the above query and most likely aren't using any, too, so my advice on indices probably wasn't helpful. Sorry. Still, I'd say that the aggregation is not the problem but disk access is. I'll leave the index stuff in, anyway, it might still have some use. A: Postgres is doing a lot more than it looks like (maintaining data consistency for a start!) If the values don't have to be 100% spot on, or if the table is updated rarely, but you are running this calculation often, you might want to look into Materialized Views to speed it up. (Note, I have not used materialized views in Postgres, they look at little hacky, but might suite your situation). Materialized Views Also consider the overhead of actually connecting to the server and the round trip required to send the request to the server and back. I'd consider 200ms for something like this to be pretty good, A quick test on my oracle server, the same table structure with about 500k rows and no indexes, takes about 1 - 1.5 seconds, which is almost all just oracle sucking the data off disk. The real question is, is 200ms fast enough? -------------- More -------------------- I was interested in solving this using materialized views, since I've never really played with them. This is in oracle. First I created a MV which refreshes every minute. 
create materialized view mv_so_x build immediate refresh complete START WITH SYSDATE NEXT SYSDATE + 1/24/60 as select count(*),avg(a),avg(b),avg(c),avg(d) from so_x; While its refreshing, there is no rows returned SQL> select * from mv_so_x; no rows selected Elapsed: 00:00:00.00 Once it refreshes, its MUCH faster than doing the raw query SQL> select count(*),avg(a),avg(b),avg(c),avg(d) from so_x; COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899459 7495.38839 22.2905454 5.00276131 2.13432836 Elapsed: 00:00:05.74 SQL> select * from mv_so_x; COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899459 7495.38839 22.2905454 5.00276131 2.13432836 Elapsed: 00:00:00.00 SQL> If we insert into the base table, the result is not immediately viewable view the MV. SQL> insert into so_x values (1,2,3,4,5); 1 row created. Elapsed: 00:00:00.00 SQL> commit; Commit complete. Elapsed: 00:00:00.00 SQL> select * from mv_so_x; COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899459 7495.38839 22.2905454 5.00276131 2.13432836 Elapsed: 00:00:00.00 SQL> But wait a minute or so, and the MV will update behind the scenes, and the result is returned fast as you could want. SQL> / COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899460 7495.35823 22.2905352 5.00276078 2.17647059 Elapsed: 00:00:00.00 SQL> This isn't ideal. for a start, its not realtime, inserts/updates will not be immediately visible. Also, you've got a query running to update the MV whether you need it or not (this can be tune to whatever time frame, or on demand). But, this does show how much faster an MV can make it seem to the end user, if you can live with values which aren't quite upto the second accurate. A: I retested with MySQL specifying ENGINE = MEMORY and it doesn't change a thing (still 200 ms). Sqlite3 using an in-memory db gives similar timings as well (250 ms). The math here looks correct (at least the size, as that's how big the sqlite db is :-) I'm just not buying the disk-causes-slowness argument as there is every indication the tables are in memory (the postgres guys all warn against trying too hard to pin tables to memory as they swear the OS will do it better than the programmer) To clarify the timings, the Java code is not reading from disk, making it a totally unfair comparison if Postgres is reading from the disk and calculating a complicated query, but that's really besides the point, the DB should be smart enough to bring a small table into memory and precompile a stored procedure IMHO. UPDATE (in response to the first comment below): I'm not sure how I'd test the query without using an aggregation function in a way that would be fair, since if i select all of the rows it'll spend tons of time serializing and formatting everything. I'm not saying that the slowness is due to the aggregation function, it could still be just overhead from concurrency, integrity, and friends. I just don't know how to isolate the aggregation as the sole independent variable. A: Those are very detailed answers, but they mostly beg the question, how do I get these benefits without leaving Postgres given that the data easily fits into memory, requires concurrent reads but no writes and is queried with the same query over and over again. Is it possible to precompile the query and optimization plan? 
I would have thought the stored procedure would do this, but it doesn't really help. To avoid disk access it's necessary to cache the whole table in memory, can I force Postgres to do that? I think it's already doing this though, since the query executes in just 200 ms after repeated runs. Can I tell Postgres that the table is read only, so it can optimize any locking code? I think it's possible to estimate the query construction costs with an empty table (timings range from 20-60 ms) I still can't see why the Java/Python tests are invalid. Postgres just isn't doing that much more work (though I still haven't addressed the concurrency aspect, just the caching and query construction) UPDATE: I don't think it's fair to compare the SELECTS as suggested by pulling 350,000 through the driver and serialization steps into Python to run the aggregation, nor even to omit the aggregation as the overhead in formatting and displaying is hard to separate from the timing. If both engines are operating on in memory data, it should be an apples to apples comparison, I'm not sure how to guarantee that's already happening though. I can't figure out how to add comments, maybe i don't have enough reputation? A: I'm a MS-SQL guy myself, and we'd use DBCC PINTABLE to keep a table cached, and SET STATISTICS IO to see that it's reading from cache, and not disk. I can't find anything on Postgres to mimic PINTABLE, but pg_buffercache seems to give details on what is in the cache - you may want to check that, and see if your table is actually being cached. A quick back of the envelope calculation makes me suspect that you're paging from disk. Assuming Postgres uses 4-byte integers, you have (6 * 4) bytes per row, so your table is a minimum of (24 * 350,000) bytes ~ 8.4MB. Assuming 40 MB/s sustained throughput on your HDD, you're looking at right around 200ms to read the data (which, as pointed out, should be where almost all of the time is being spent). Unless I screwed up my math somewhere, I don't see how it's possible that you are able to read 8MB into your Java app and process it in the times you're showing - unless that file is already cached by either the drive or your OS. A: I don't think that your results are all that surprising -- if anything it is that Postgres is so fast. Does the Postgres query run faster a second time once it has had a chance to cache the data? To be a little fairer your test for Java and Python should cover the cost of acquiring the data in the first place (ideally loading it off disk). If this performance level is a problem for your application in practice but you need a RDBMS for other reasons then you could look at memcached. You would then have faster cached access to raw data and could do the calculations in code. A: One other thing that an RDBMS generally does for you is to provide concurrency by protecting you from simultaneous access by another process. This is done by placing locks, and there's some overhead from that. If you're dealing with entirely static data that never changes, and especially if you're in a basically "single user" scenario, then using a relational database doesn't necessarily gain you much benefit. A: Are you using TCP to access the Postgres? In that case Nagle is messing with your timing. A: You need to increase postgres' caches to the point where the whole working set fits into memory before you can expect to see perfomance comparable to doing it in-memory with a program. 
A: Thanks for the Oracle timings, that's the kind of stuff I'm looking for (disappointing though :-) Materialized views are probably worth considering as I think I can precompute the most interesting forms of this query for most users. I don't think query round trip time should be very high as I'm running the queries on the same machine that runs Postgres, so it can't add much latency? I've also done some checking into the cache sizes, and it seems Postgres relies on the OS to handle caching; they specifically mention BSD as the ideal OS for this, so I'm thinking Mac OS ought to be pretty smart about bringing the table into memory. Unless someone has more specific params in mind I think more specific caching is out of my control. In the end I can probably put up with 200 ms response times, but knowing that 7 ms is a possible target makes me feel unsatisfied, as even 20-50 ms times would enable more users to have more up-to-date queries and get rid of a lot of caching and precomputed hacks. I just checked the timings using MySQL 5 and they are slightly worse than Postgres. So barring some major caching breakthroughs, I guess this is what I can expect going the relational db route. I wish I could upvote some of your answers, but I don't have enough points yet.
Why are SQL aggregate functions so much slower than Python and Java (or Poor Man's OLAP)
I need a real DBA's opinion. Postgres 8.3 takes 200 ms to execute this query on my Macbook Pro while Java and Python perform the same calculation in under 20 ms (350,000 rows): SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples; Is this normal behaviour when using a SQL database? The schema (the table holds responses to a survey): CREATE TABLE tuples (id integer primary key, a integer, b integer, c integer, d integer); \copy tuples from '350,000 responses.csv' delimiter as ',' I wrote some tests in Java and Python for context and they crush SQL (except for pure python): java 1.5 threads ~ 7 ms java 1.5 ~ 10 ms python 2.5 numpy ~ 18 ms python 2.5 ~ 370 ms Even sqlite3 is competitive with Postgres despite it assuming all columns are strings (for contrast: even just switching to numeric columns instead of integers in Postgres results in a 10x slowdown) Tunings I've tried without success include (blindly following some web advice): increased the shared memory available to Postgres to 256MB increased the working memory to 2MB disabled connection and statement logging used a stored procedure via CREATE FUNCTION ... LANGUAGE SQL So my question is, is my experience here normal, and is this what I can expect when using a SQL database? I can understand that ACID must come with costs, but this is kind of crazy in my opinion. I'm not asking for realtime game speed, but since Java can process millions of doubles in under 20 ms, I feel a bit jealous. Is there a better way to do simple OLAP on the cheap (both in terms of money and server complexity)? I've looked into Mondrian and Pig + Hadoop but I'm not super excited about maintaining yet another server application and I'm not sure if they would even help. No, the Python code and Java code do all the work in-house, so to speak. I just generate 4 arrays with 350,000 random values each, then take the average. I don't include the generation in the timings, only the averaging step. The Java threads timing uses 4 threads (one per array average), overkill but it's definitely the fastest. The sqlite3 timing is driven by the Python program and is running from disk (not :memory:) I realize Postgres is doing much more behind the scenes, but most of that work doesn't matter to me since this is read-only data. The Postgres query doesn't change timing on subsequent runs. I've rerun the Python tests to include spooling it off the disk. The timing slows down considerably to nearly 4 secs. But I'm guessing that Python's file handling code is pretty much in C (though maybe not the csv lib?) so this indicates to me that Postgres isn't streaming from the disk either (or that you are correct and I should bow down before whoever wrote their storage layer!)
[ "I would say your test scheme is not really useful. To fulfill the db query, the db server goes through several steps:\n\nparse the SQL\nwork up a query plan, i.e. decide on which indices to use (if any), optimize etc.\nif an index is used, search it for the pointers to the actual data, then go to the appropriate location in the data or\nif no index is used, scan the whole table to determine which rows are needed\nload the data from disk into a temporary location (hopefully, but not necessarily, memory)\nperform the count() and avg() calculations\n\nSo, creating an array in Python and getting the average basically skips all these steps save the last one. As disk I/O is among the most expensive operations a program has to perform, this is a major flaw in the test (see also the answers to this question I asked here before). Even if you read the data from disk in your other test, the process is completely different and it's hard to tell how relevant the results are.\nTo obtain more information about where Postgres spends its time, I would suggest the following tests:\n\nCompare the execution time of your query to a SELECT without the aggregating functions (i.e. cut step 5)\nIf you find that the aggregation leads to a significant slowdown, see if Python does it faster, obtaining the raw data through the plain SELECT from the comparison.\n\nTo speed up your query, reduce disk access first. I doubt very much that it's the aggregation that takes the time.\nThere are several ways to do that:\n\nCache data (in memory!) for subsequent access, either via the db engine's own capabilities or with tools like memcached\nReduce the size of your stored data\nOptimize the use of indices. Sometimes this can mean skipping index use altogether (after all, it's disk access, too). For MySQL, I seem to remember that it's recommended to skip indices if you assume that the query fetches more than 10% of all the data in the table.\nIf your query makes good use of indices, I know that for MySQL databases it helps to put indices and data on separate physical disks. However, I don't know whether that's applicable for Postgres.\nThere also might be more sophisticated problems such as swapping rows to disk if for some reason the result set can't be completely processed in memory. But I would leave that kind of research until I run into serious performance problems that I can't find another way to fix, as it requires knowledge about a lot of little under-the-hood details in your process.\n\nUpdate:\nI just realized that you seem to have no use for indices for the above query and most likely aren't using any, either, so my advice on indices probably wasn't helpful. Sorry. Still, I'd say that the aggregation is not the problem but disk access is. 
I'll leave the index stuff in, anyway, it might still have some use.\n", "Postgres is doing a lot more than it looks like (maintaining data consistency for a start!)\nIf the values don't have to be 100% spot on, or if the table is updated rarely, but you are running this calculation often, you might want to look into Materialized Views to speed it up.\n(Note, I have not used materialized views in Postgres, they look a little hacky, but might suit your situation).\nMaterialized Views\nAlso consider the overhead of actually connecting to the server and the round trip required to send the request to the server and back.\nI'd consider 200ms for something like this to be pretty good. A quick test on my oracle server, the same table structure with about 500k rows and no indexes, takes about 1 - 1.5 seconds, which is almost all just oracle sucking the data off disk.\nThe real question is, is 200ms fast enough?\n-------------- More --------------------\nI was interested in solving this using materialized views, since I've never really played with them. This is in oracle.\nFirst I created a MV which refreshes every minute.\ncreate materialized view mv_so_x \nbuild immediate \nrefresh complete \nSTART WITH SYSDATE NEXT SYSDATE + 1/24/60\n as select count(*),avg(a),avg(b),avg(c),avg(d) from so_x;\n\nWhile it's refreshing, no rows are returned\nSQL> select * from mv_so_x;\n\nno rows selected\n\nElapsed: 00:00:00.00\n\nOnce it refreshes, it's MUCH faster than doing the raw query\nSQL> select count(*),avg(a),avg(b),avg(c),avg(d) from so_x;\n\n COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)\n---------- ---------- ---------- ---------- ----------\n 1899459 7495.38839 22.2905454 5.00276131 2.13432836\n\nElapsed: 00:00:05.74\nSQL> select * from mv_so_x;\n\n COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)\n---------- ---------- ---------- ---------- ----------\n 1899459 7495.38839 22.2905454 5.00276131 2.13432836\n\nElapsed: 00:00:00.00\nSQL> \n\nIf we insert into the base table, the result is not immediately viewable via the MV.\nSQL> insert into so_x values (1,2,3,4,5);\n\n1 row created.\n\nElapsed: 00:00:00.00\nSQL> commit;\n\nCommit complete.\n\nElapsed: 00:00:00.00\nSQL> select * from mv_so_x;\n\n COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)\n---------- ---------- ---------- ---------- ----------\n 1899459 7495.38839 22.2905454 5.00276131 2.13432836\n\nElapsed: 00:00:00.00\nSQL> \n\nBut wait a minute or so, and the MV will update behind the scenes, and the result is returned as fast as you could want.\nSQL> /\n\n COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)\n---------- ---------- ---------- ---------- ----------\n 1899460 7495.35823 22.2905352 5.00276078 2.17647059\n\nElapsed: 00:00:00.00\nSQL> \n\nThis isn't ideal. For a start, it's not realtime; inserts/updates will not be immediately visible. Also, you've got a query running to update the MV whether you need it or not (this can be tuned to whatever time frame, or run on demand). But this does show how much faster an MV can make it seem to the end user, if you can live with values which aren't quite up-to-the-second accurate.\n", "I retested with MySQL specifying ENGINE = MEMORY and it doesn't change a thing (still 200 ms). 
Sqlite3 using an in-memory db gives similar timings as well (250 ms).\nThe math here looks correct (at least the size, as that's how big the sqlite db is :-)\nI'm just not buying the disk-causes-slowness argument, as there is every indication the tables are in memory (the postgres guys all warn against trying too hard to pin tables to memory as they swear the OS will do it better than the programmer).\nTo clarify the timings, the Java code is not reading from disk, making it a totally unfair comparison if Postgres is reading from the disk and calculating a complicated query, but that's really beside the point; the DB should be smart enough to bring a small table into memory and precompile a stored procedure, IMHO.\nUPDATE (in response to the first comment below):\nI'm not sure how I'd test the query without using an aggregation function in a way that would be fair, since if I select all of the rows it'll spend tons of time serializing and formatting everything. I'm not saying that the slowness is due to the aggregation function; it could still be just overhead from concurrency, integrity, and friends. I just don't know how to isolate the aggregation as the sole independent variable.\n", "Those are very detailed answers, but they mostly raise the question: how do I get these benefits without leaving Postgres, given that the data easily fits into memory, requires concurrent reads but no writes, and is queried with the same query over and over again?\nIs it possible to precompile the query and optimization plan? I would have thought the stored procedure would do this, but it doesn't really help.\nTo avoid disk access it's necessary to cache the whole table in memory; can I force Postgres to do that? I think it's already doing this though, since the query executes in just 200 ms after repeated runs.\nCan I tell Postgres that the table is read-only, so it can optimize any locking code?\nI think it's possible to estimate the query construction costs with an empty table (timings range from 20-60 ms).\nI still can't see why the Java/Python tests are invalid. Postgres just isn't doing that much more work (though I still haven't addressed the concurrency aspect, just the caching and query construction).\nUPDATE: \nI don't think it's fair to compare the SELECTs as suggested by pulling 350,000 rows through the driver and serialization steps into Python to run the aggregation, nor even to omit the aggregation, as the overhead in formatting and displaying is hard to separate from the timing. If both engines are operating on in-memory data, it should be an apples-to-apples comparison; I'm not sure how to guarantee that's already happening though.\nI can't figure out how to add comments, maybe I don't have enough reputation?\n", "I'm an MS-SQL guy myself, and we'd use DBCC PINTABLE to keep a table cached, and SET STATISTICS IO to see that it's reading from cache, and not disk. \nI can't find anything on Postgres to mimic PINTABLE, but pg_buffercache seems to give details on what is in the cache - you may want to check that, and see if your table is actually being cached.\nA quick back-of-the-envelope calculation makes me suspect that you're paging from disk. Assuming Postgres uses 4-byte integers, you have (6 * 4) bytes per row, so your table is a minimum of (24 * 350,000) bytes ~ 8.4MB. Assuming 40 MB/s sustained throughput on your HDD, you're looking at right around 200ms to read the data (which, as pointed out, should be where almost all of the time is being spent). 
\nUnless I screwed up my math somewhere, I don't see how it's possible that you are able to read 8MB into your Java app and process it in the times you're showing - unless that file is already cached by either the drive or your OS.\n", "I don't think that your results are all that surprising -- if anything it is that Postgres is so fast.\nDoes the Postgres query run faster a second time once it has had a chance to cache the data? To be a little fairer your test for Java and Python should cover the cost of acquiring the data in the first place (ideally loading it off disk).\nIf this performance level is a problem for your application in practice but you need an RDBMS for other reasons then you could look at memcached. You would then have faster cached access to raw data and could do the calculations in code.\n", "One other thing that an RDBMS generally does for you is to provide concurrency by protecting you from simultaneous access by another process. This is done by placing locks, and there's some overhead from that.\nIf you're dealing with entirely static data that never changes, and especially if you're in a basically \"single user\" scenario, then using a relational database doesn't necessarily gain you much benefit.\n", "Are you using TCP to access Postgres? In that case Nagle is messing with your timing.\n", "You need to increase postgres' caches to the point where the whole working set fits into memory before you can expect to see performance comparable to doing it in-memory with a program.\n", "Thanks for the Oracle timings, that's the kind of stuff I'm looking for (disappointing though :-)\nMaterialized views are probably worth considering as I think I can precompute the most interesting forms of this query for most users.\nI don't think query round trip time should be very high as I'm running the queries on the same machine that runs Postgres, so it can't add much latency?\nI've also done some checking into the cache sizes, and it seems Postgres relies on the OS to handle caching; they specifically mention BSD as the ideal OS for this, so I'm thinking Mac OS ought to be pretty smart about bringing the table into memory. Unless someone has more specific params in mind I think more specific caching is out of my control.\nIn the end I can probably put up with 200 ms response times, but knowing that 7 ms is a possible target makes me feel unsatisfied, as even 20-50 ms times would enable more users to have more up-to-date queries and get rid of a lot of caching and precomputed hacks.\nI just checked the timings using MySQL 5 and they are slightly worse than Postgres. So barring some major caching breakthroughs, I guess this is what I can expect going the relational db route.\nI wish I could upvote some of your answers, but I don't have enough points yet.\n" ]
[ 15, 12, 6, 3, 3, 1, 1, 1, 0, 0 ]
[]
[]
[ "aggregate", "olap", "optimization", "python", "sql" ]
stackoverflow_0000051553_aggregate_olap_optimization_python_sql.txt
Q: ASP.NET ObjectDataSource Binding Automatically to Repeater - Possible? I have a Question class: class Question { public int QuestionNumber { get; set; } public string Question { get; set; } public string Answer { get; set; } } Now I make an ICollection of these available through an ObjectDataSource, and display them using a Repeater bound to the DataSource. I use <%#Eval("Question")%> to display the Question, and I use a TextBox and <%#Bind("Answer")%> to accept an answer. If my ObjectDataSource returns three Question objects, then my Repeater displays the three questions with a TextBox following each question for the user to provide an answer. So far it works great. Now I want to take the user's response and put it back into the relevant Question classes, which I will then persist. Surely the framework should take care of all of this for me? I've used the Bind method, I've specified a DataSourceID, I've specified an Update method in my ObjectDataSource class, but there seems no way to actually kickstart the whole thing. I tried adding a Command button and in the code behind calling MyDataSource.Update(), but it attempts to call my Update method with no parameters, rather than the Question parameter it expects. Surely there's an easy way to achieve all of this with little or no codebehind? It seems like all the bits are there, but there's some glue missing to stick them all together. Help! Anthony A: You have to handle the postback event (button click or whatever) then enumerate the repeater items like this: foreach(RepeaterItem item in rptQuestions.Items) { //pull out question var question = (Question)item.DataItem; question.Answer = ((TextBox)item.FindControl("txtAnswer")).Text; question.Save() ? <--- not sure what you want to do with it } A: The bind method really isn't for the repeater, it's more for the formview or gridview, where you are editing just one item in the list, not every item in the list. On both, you click an edit button, which then gives you the bound controls (bound using bind), and then hit the save link, which auto-saves the item back into your datasource without any code behind. A: Then what's the point in the Bind method (as opposed to the Eval method) if I have to bind everything back up manually on postback? A: Ben: Having tried it, item.DataItem is always null, and according to the following post, it's not designed to be used that way: http://www.netnewsgroups.net/group/microsoft.public.dotnet.framework.aspnet/topic4049.aspx So how on earth do I manually bind it back?
ASP.NET ObjectDataSource Binding Automatically to Repeater - Possible?
I have a Question class: class Question { public int QuestionNumber { get; set; } public string Question { get; set; } public string Answer { get; set; } } Now I make an ICollection of these available through an ObjectDataSource, and display them using a Repeater bound to the DataSource. I use <%#Eval("Question")%> to display the Question, and I use a TextBox and <%#Bind("Answer")%> to accept an answer. If my ObjectDataSource returns three Question objects, then my Repeater displays the three questions with a TextBox following each question for the user to provide an answer. So far it works great. Now I want to take the user's response and put it back into the relevant Question classes, which I will then persist. Surely the framework should take care of all of this for me? I've used the Bind method, I've specified a DataSourceID, I've specified an Update method in my ObjectDataSource class, but there seems no way to actually kickstart the whole thing. I tried adding a Command button and in the code behind calling MyDataSource.Update(), but it attempts to call my Update method with no parameters, rather than the Question parameter it expects. Surely there's an easy way to achieve all of this with little or no codebehind? It seems like all the bits are there, but there's some glue missing to stick them all together. Help! Anthony
[ "You have to handle the postback event (button click or whatever) then enumerate the repeater items like this:\nforeach(RepeaterItem item in rptQuestions.Items)\n{\n //pull out question\n var question = (Question)item.DataItem;\n question.Answer = ((TextBox)item.FindControl(\"txtAnswer\")).Text;\n\n question.Save() ? <--- not sure what you want to do with it\n}\n\n", "The bind method really isn't for the repeater, it's more for the formview or gridview, where you are editing just one item in the list, not every item in the list. \nOn both, you click an edit button, which then gives you the bound controls (bound using bind), and then hit the save link, which auto-saves the item back into your datasource without any code behind.\n", "Then what's the point in the Bind method (as opposed to the Eval method) if I have to bind everything back up manually on postback?\n", "Ben: Having tried it, item.DataItem is always null, and according to the following post, it's not designed to be used that way:\nhttp://www.netnewsgroups.net/group/microsoft.public.dotnet.framework.aspnet/topic4049.aspx\nSo how on earth do I manually bind it back?\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "asp.net", "bind", "objectdatasource", "repeater" ]
stackoverflow_0000052485_asp.net_bind_objectdatasource_repeater.txt
Q: Get the App.Config of another Exe I have an exe with an App.Config file. Now I want to create a wrapper dll around the exe in order to consume some of the functionalities. The question is how can I access the app.config property in the exe from the wrapper dll? Maybe I should be a little bit more specific in my question. I have the following app.config content with the exe: <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="myKey" value="myValue"/> </appSettings> </configuration> The question is how to get "myValue" out from the wrapper dll? Thanks for your solution. Actually my initial concept was to avoid XML file reading methods or LINQ or whatever. My preferred solution was to use the configuration manager libraries and the like. I'd appreciate any help that uses the classes that are normally associated with accessing app.config properties. A: The ConfigurationManager.OpenMappedExeConfiguration Method will allow you to do this. Sample from the MSDN page: static void GetMappedExeConfigurationSections() { // Get the machine.config file. ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); // You may want to map to your own exe.config file here. fileMap.ExeConfigFilename = @"C:\test\ConfigurationManager.exe.config"; System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); // Loop to get the sections. Display basic information. Console.WriteLine("Name, Allow Definition"); int i = 0; foreach (ConfigurationSection section in config.Sections) { Console.WriteLine( section.SectionInformation.Name + "\t" + section.SectionInformation.AllowExeDefinition); i += 1; } Console.WriteLine("[Total number of sections: {0}]", i); // Display machine.config path. Console.WriteLine("[File path: {0}]", config.FilePath); } EDIT: This should output the "myKey" value: ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); fileMap.ExeConfigFilename = @"C:\test\ConfigurationManager.exe.config"; System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); Console.WriteLine(config.AppSettings.Settings["myKey"].Value); A: After some testing, I found a way to do this. Add the App.Config file to the test project. Use the "Add as a link" option. Use System.Configuration.ConfigurationManager.AppSettings["myKey"] to access the value. A: I think what you're looking for is: System.Configuration.ConfigurationManager.OpenExeConfiguration(string path) A: I'd second Gishu's point that there's another way. Wouldn't it be better to abstract the common/"public" part of the EXE out into a DLL and create a wrapper EXE to run it? This is certainly the more usual pattern of development. Only the stuff that you wish to consume would go into the DLL, and the EXE would do all the stuff it currently does, minus what's gone into the DLL.
Get the App.Config of another Exe
I have an exe with an App.Config file. Now I want to create a wrapper dll around the exe in order to consume some of the functionalities. The question is how can I access the app.config property in the exe from the wrapper dll? Maybe I should be a little bit more specific in my question. I have the following app.config content with the exe: <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="myKey" value="myValue"/> </appSettings> </configuration> The question is how to get "myValue" out from the wrapper dll? Thanks for your solution. Actually my initial concept was to avoid XML file reading methods or LINQ or whatever. My preferred solution was to use the configuration manager libraries and the like. I'd appreciate any help that uses the classes that are normally associated with accessing app.config properties.
[ "The ConfigurationManager.OpenMappedExeConfiguration Method will allow you to do this.\nSample from the MSDN page:\nstatic void GetMappedExeConfigurationSections()\n{\n // Get the machine.config file.\n ExeConfigurationFileMap fileMap =\n new ExeConfigurationFileMap();\n // You may want to map to your own exe.config file here.\n fileMap.ExeConfigFilename = \n @\"C:\\test\\ConfigurationManager.exe.config\";\n System.Configuration.Configuration config =\n ConfigurationManager.OpenMappedExeConfiguration(fileMap, \n ConfigurationUserLevel.None);\n\n // Loop to get the sections. Display basic information.\n Console.WriteLine(\"Name, Allow Definition\");\n int i = 0;\n foreach (ConfigurationSection section in config.Sections)\n {\n Console.WriteLine(\n section.SectionInformation.Name + \"\\t\" +\n section.SectionInformation.AllowExeDefinition);\n i += 1;\n\n }\n Console.WriteLine(\"[Total number of sections: {0}]\", i);\n\n // Display machine.config path.\n Console.WriteLine(\"[File path: {0}]\", config.FilePath);\n}\n\n\nEDIT: This should output the \"myKey\" value:\nExeConfigurationFileMap fileMap =\n new ExeConfigurationFileMap();\nfileMap.ExeConfigFilename = \n @\"C:\\test\\ConfigurationManager.exe.config\";\nSystem.Configuration.Configuration config =\n ConfigurationManager.OpenMappedExeConfiguration(fileMap, \n ConfigurationUserLevel.None);\nConsole.WriteLine(config.AppSettings.Settings[\"myKey\"].Value);\n\n", "After some testing, I found a way to do this.\n\nAdd the App.Config file to the test project. Use the \"Add as a link\" option.\nUse System.Configuration.ConfigurationManager.AppSettings[\"myKey\"] to access the value.\n\n", "I think what you're looking for is:\nSystem.Configuration.ConfigurationManager.OpenExeConfiguration(string path)\n\n", "I'd second Gishu's point that there's another way. Wouldn't it be better to abstract the common/\"public\" part of the EXE out into a DLL and create a wrapper EXE to run it? This is certainly the more usual pattern of development. Only the stuff that you wish to consume would go into the DLL, and the EXE would do all the stuff it currently does, minus what's gone into the DLL. \n" ]
[ 24, 6, 4, 0 ]
[ "It's an XML file; you can use Linq-XML or DOM-based approaches to parse out the relevant information.\n(that said I'd question if there isn't a better design for whatever it is.. you're trying to achieve.)\n", "Adding a link in the IDE would only help during development. I think lomaxx has the right idea: System.Configuration.ConfigurationManager.OpenExeConfiguration.\n" ]
[ -1, -1 ]
[ ".net", "appsettings", "c#", "configuration_files", "system.configuration" ]
stackoverflow_0000053545_.net_appsettings_c#_configuration_files_system.configuration.txt
Q: Standard way to merge Entities in LlblGenPro I start with an entity A with primary key A1; it has child collections B and C, but they are empty, because I haven't prefetched them. I now get a new occurrence of A (A prime) with primary key A1 with the child collections B and C filled. What is a good way to get the A and A prime to be the same object and to get A's collections of B and C filled? A: Once you have 2 separate objects in memory and you have references to both of them, the only way to merge them is to change all references to point to one of the objects, which might be impossible. However, there's something you can do to avoid arriving in this situation: you can use a SD.LLBLGen.Pro.ORMSupportClasses.Context class, which you can attach to an adapter and which acts as a caching layer; when entities are loaded it returns the same object for a unique entity. Basically, it doesn't let you duplicate entities in memory and always returns a reference to an already loaded entity.
Standard way to merge Entities in LlblGenPro
I start with an entity A with primary key A1; it has child collections B and C, but they are empty, because I haven't prefetched them. I now get a new occurrence of A (A prime) with primary key A1 with the child collections B and C filled. What is a good way to get the A and A prime to be the same object and to get A's collections of B and C filled?
[ "Once you have 2 separate objects in memory and you have references to both of them, the only way to merge them is to change all references to point to one of the objects, which might be impossible. However, there's something you can do to avoid arriving in this situation: you can use a SD.LLBLGen.Pro.ORMSupportClasses.Context class, which you can attach to an adapter and which acts as a caching layer; when entities are loaded it returns the same object for a unique entity. Basically, it doesn't let you duplicate entities in memory and always returns a reference to an already loaded entity.\n" ]
[ 3 ]
[]
[]
[ "c#", "llblgenpro", "orm" ]
stackoverflow_0000047414_c#_llblgenpro_orm.txt
Q: Best way to hide DB connection code in PHP5 for apps that only require one connection? Below I present three options for simplifying my database access when only a single connection is involved (this is often the case for the web apps I work on). The general idea is to make the DB connection transparent, such that it connects the first time my script executes a query, and then it remains connected until the script terminates. I'd like to know which one you think is the best and why. I don't know the names of any design patterns that these might fit, so sorry for not using them. And if there's any better way of doing this with PHP5, please share. To give a brief introduction: there is a DB_Connection class containing a query method. This is a third-party class which is out of my control and whose interface I've simplified for the purpose of this example. In each option I've also provided an example model for an imaginary DB "items" table to give some context. Option 3 is the one that provides me with the interface I like most, but I don't think it's practical, unfortunately. I've described the pros and cons (that I can see) of each in the comment blocks below. At the moment I lean towards Option 1 since the burden is put on my DB wrapper class instead of on the models. All comments appreciated! Note: For some reason, the Stack Overflow preview is showing an encoded HTML entity instead of underscores. If the post comes through like that, please take this into account. <?php /** * This is the 3rd-party DB interface I'm trying to wrap. * I've simplified the interface to one method for this example. * * This class is used in each option below. */ class DB_Connection { public function &query($sql) { } } /** * OPTION 1 * * Cons: Have to wrap every public DB_Connection method. * Pros: The model code is simple. */ class DB { private static $connection; private static function &getConnection() { if (!self::$connection) { self::$connection = new DB_Connection(); } return self::$connection; } public static function &query($sql) { $dbh = self::getConnection(); return $dbh->query($sql); } } class Item { public static function &getList() { return DB::query("SELECT * FROM items"); } } /** * OPTION 2 * * Pros: Don't have to wrap every DB_Connection function like in Option 1 * Cons: Every function in the model is responsible for checking the connection */ class DB { protected static $connection = null; public static function connect() { self::$connection = new DB_Connection(); } } class Item extends DB { public static function &getList() { if (!self::$connection) self::connect(); return self::$connection->query("SELECT * FROM items"); } } /** * OPTION 3 * * Use magic methods * * Pros: Simple model code AND don't have to reimplement the DB_Connection interface * Cons: __callStatic requires PHP 5.3.0 and its args can't be passed by reference. */ class DB { private static $connection = null; public static function &getConnection() { if (!self::$connection) { self::$connection = new DB_Connection(); } return self::$connection; } public static function __callStatic($name, $args) { if (in_array($name, get_class_methods('DB_Connection'))) { return call_user_func_array( array(self::getConnection(), $name), $args); } } } A: Based on your examples above, I'd say option 1 is the best - simplicity always wins, and you can handle a failed connection differently depending on the method (you might want to fail differently for a stored procedure call than a simple SELECT, for instance). 
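For illustration, here is a minimal sketch of the point made in the answer above: Option 1's wrapper can react to failure differently per method. It assumes (hypothetically, since the question's simplified interface doesn't specify this) that DB_Connection::query() returns false on failure.
<?php
// Sketch: Option 1 with per-method failure handling.
class DB {
    private static $connection;
    private static function &getConnection() {
        if (!self::$connection) {
            self::$connection = new DB_Connection();
        }
        return self::$connection;
    }
    // A plain SELECT can degrade gracefully on failure...
    public static function &query($sql) {
        $dbh = self::getConnection();
        $result =& $dbh->query($sql);
        if ($result === false) {
            trigger_error("Query failed: $sql", E_USER_WARNING);
        }
        return $result;
    }
    // ...while a call that must not fail silently can abort instead.
    public static function &queryOrDie($sql) {
        $dbh = self::getConnection();
        $result =& $dbh->query($sql);
        if ($result === false) {
            trigger_error("Required query failed: $sql", E_USER_ERROR);
        }
        return $result;
    }
}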
A: Semantically speaking I think option 1 makes the most sense: if you're treating DB as a resource, then the DB_Connection is an object that it uses but not necessarily the object itself. However, several things I caution you against. First, don't make your DB class have all static methods as it will strongly impact your ability to test your code. Consider instead a very simple inversion of control container like this: class DB { private $connection; public function &query($sql) { return $this->connection->query($sql); } public function __construct(&$db_connection) { $this->connection = $db_connection; } } class Item { public function &getList() { return ResourceManager::getDB()->query("SELECT * FROM items"); } } class ResourceManager { private static $db_connection; private static function &getDbConnection() { if (!self::$db_connection) { self::$db_connection = new DB_Connection(); } return self::$db_connection; } private static $db; public static function getDB() { if (!self::$db) self::$db = new DB(self::getDbConnection()); return self::$db; } } There are significant benefits. If you don't want DB to be used as a singleton you just make one modification to ResourceManager. If you decide it should not be a singleton - you make the modification in one place. If you want to return a different instance of DB based on some context - again, the change is in only one place. Now if you want to test Item in isolation of DB simply create a setDb($db) method in ResourceManager and use it to set a fake/mock database (simplemock will serve you well in that respect). Second - and this is another modification that this design eases - you might not want to keep your database connection open the entire time, as it can end up using far more resources than need be. Finally, as you mention that DB_Connection has other methods not shown, it seems like it might be being used for more than simply maintaining a connection. Since you say you have no control over it, might I recommend extracting an interface from it of the methods that you DO care about and making a MyDBConnection extends DB_Connection class that implements your interface. In my experience something like that will ultimately ease some pain as well.
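As a usage note on the mock swap suggested above, giving ResourceManager the mentioned setDb() method might look like this (setDB() and MockDb are hypothetical additions, not part of the original code):
public static function setDB($db) { self::$db = $db; } // added to ResourceManager
// In a test, swap in a stand-in before exercising Item:
class MockDb extends DB {
    public function __construct() {} // no real connection needed
    public function &query($sql) {
        $rows = array(array('id' => 1, 'name' => 'test item'));
        return $rows;
    }
}
ResourceManager::setDB(new MockDb());
$item = new Item();
$list =& $item->getList(); // hits MockDb, never the database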
Best way to hide DB connection code in PHP5 for apps that only require one connection?
Below I present three options for simplifying my database access when only a single connection is involved (this is often the case for the web apps I work on). The general idea is to make the DB connection transparent, such that it connects the first time my script executes a query, and then it remains connected until the script terminates. I'd like to know which one you think is the best and why. I don't know the names of any design patterns that these might fit, so sorry for not using them. And if there's any better way of doing this with PHP5, please share. To give a brief introduction: there is a DB_Connection class containing a query method. This is a third-party class which is out of my control and whose interface I've simplified for the purpose of this example. In each option I've also provided an example model for an imaginary DB "items" table to give some context. Option 3 is the one that provides me with the interface I like most, but I don't think it's practical, unfortunately. I've described the pros and cons (that I can see) of each in the comment blocks below. At the moment I lean towards Option 1 since the burden is put on my DB wrapper class instead of on the models. All comments appreciated! Note: For some reason, the Stack Overflow preview is showing an encoded HTML entity instead of underscores. If the post comes through like that, please take this into account. <?php /** * This is the 3rd-party DB interface I'm trying to wrap. * I've simplified the interface to one method for this example. * * This class is used in each option below. */ class DB_Connection { public function &query($sql) { } } /** * OPTION 1 * * Cons: Have to wrap every public DB_Connection method. * Pros: The model code is simple. */ class DB { private static $connection; private static function &getConnection() { if (!self::$connection) { self::$connection = new DB_Connection(); } return self::$connection; } public static function &query($sql) { $dbh = self::getConnection(); return $dbh->query($sql); } } class Item { public static function &getList() { return DB::query("SELECT * FROM items"); } } /** * OPTION 2 * * Pros: Don't have to wrap every DB_Connection function like in Option 1 * Cons: Every function in the model is responsible for checking the connection */ class DB { protected static $connection = null; public static function connect() { self::$connection = new DB_Connection(); } } class Item extends DB { public static function &getList() { if (!self::$connection) self::connect(); return self::$connection->query("SELECT * FROM items"); } } /** * OPTION 3 * * Use magic methods * * Pros: Simple model code AND don't have to reimplement the DB_Connection interface * Cons: __callStatic requires PHP 5.3.0 and its args can't be passed by reference. */ class DB { private static $connection = null; public static function &getConnection() { if (!self::$connection) { self::$connection = new DB_Connection(); } return self::$connection; } public static function __callStatic($name, $args) { if (in_array($name, get_class_methods('DB_Connection'))) { return call_user_func_array( array(self::getConnection(), $name), $args); } } }
[ "Based on your examples above, I'd say option 1 is the best - simplicity always wins, and you can handle a failed connection differently depending on the method (you might want to fail differently for a stored procedure call than a simple SELECT, for instance).\n", "Semantically speaking I think option 1 makes the most sense: if you're treating DB as a resource, then the DB_Connection is an object that it uses but not necessarily the object itself.\nHowever, several things I caution you against. First, don't make your DB class have all static methods as it will strongly impact your ability to test your code. Consider instead a very simple inversion of control container like this:\nclass DB {\n private $connection;\n public function &query($sql) {\n return $this->connection->query($sql);\n }\n public function __construct(&$db_connection) {\n $this->connection = $db_connection;\n }\n}\n\nclass Item {\n public function &getList() {\n return ResourceManager::getDB()->query(\"SELECT * FROM items\");\n }\n}\n\nclass ResourceManager {\n private static $db_connection;\n private static function &getDbConnection() {\n if (!self::$db_connection) {\n self::$db_connection = new DB_Connection();\n }\n return self::$db_connection;\n }\n private static $db;\n public static function getDB() {\n if (!self::$db) self::$db = new DB(self::getDbConnection());\n return self::$db;\n }\n}\n\nThere are significant benefits. If you don't want DB to be used as a singleton you just make one modification to ResourceManager. If you decide it should not be a singleton - you make the modification in one place. If you want to return a different instance of DB based on some context - again, the change is in only one place.\nNow if you want to test Item in isolation of DB simply create a setDb($db) method in ResourceManager and use it to set a fake/mock database (simplemock will serve you well in that respect).\nSecond - and this is another modification that this design eases - you might not want to keep your database connection open the entire time, as it can end up using far more resources than need be.\nFinally, as you mention that DB_Connection has other methods not shown, it seems like it might be being used for more than simply maintaining a connection. Since you say you have no control over it, might I recommend extracting an interface from it of the methods that you DO care about and making a MyDBConnection extends DB_Connection class that implements your interface. In my experience something like that will ultimately ease some pain as well.\n" ]
[ 1, 1 ]
[]
[]
[ "database", "oop", "php" ]
stackoverflow_0000053353_database_oop_php.txt
Q: Can you do "builds" with PHP scripts or an interpreted language? Correct me if I'm wrong, but a "build" is a "compile", and not every language compiles. Continuous Integration involves building components to see if they continue to work beyond unit tests, which I might be oversimplifying. But if your project involves a language that does not compile, how do you perform nightly builds or use continuous integration techniques? A: Hmm... I'd define "building" as something like "preparing, packaging and deploying all artifacts of a software system". The compilation to machine code is only one of many steps in the build. Others might be checking out the latest version of the code from scm-system, getting external dependencies, setting configuration values depending on the target the software gets deployed to and running some kind of test suite to ensure you've got a "working/running build" before you actually deploy. "Building" software can/must be done for any software, independent of your programming language. Interpreted languages have the "disadvantage" that syntactic or structural (meaning e.g. calling a method with wrong parameters etc.) errors normally will only be detected at runtime (if you don't have a separate step in your build which checks for such errors e.g. with PHPLint). Thus (automated) Testcases (like Unit-Tests - see PHPUnit or SimpleTest - and Frontend-Tests - see Selenium) are all the more important for big PHP projects to ensure the good health of the code. There's a great Build-Tool (like Ant for Java or Rake for Ruby) for PHP too: Phing CI-Systems like Xinc or Hudson are simply used to automagically (like anytime a change is checked into scm) package your code, check it for obvious errors, run your tests (in short: run your build) and report the results back to your development team. A: Create a daily tag of your current source control trunk?
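To make the first answer concrete, a "build" for PHP might be nothing more than a script the CI server runs on every check-in: lint every file (the compile-ish step an interpreted language still benefits from), then run the tests. A minimal sketch follows (the src/tests paths and the phpunit invocation are hypothetical; adapt them to your project layout):
<?php
// build.php - syntax-check all PHP sources, then run the test suite.
$failed = 0;
$it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator('src'));
foreach ($it as $file) {
    if (substr($file, -4) !== '.php') continue;
    // "php -l" lints (syntax-checks) a file without executing it.
    system('php -l ' . escapeshellarg($file), $status);
    if ($status !== 0) $failed++;
}
system('phpunit tests', $testStatus); // a non-zero exit code fails the build
exit(($failed > 0 || $testStatus !== 0) ? 1 : 0);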
Can you do "builds" with PHP scripts or an interpreted language?
Correct me if I'm wrong, but a "build" is a "compile", and not every language compiles. Continuous Integration involves building components to see if they continue to work beyond unit tests, which I might be oversimplifying. But if your project involves a language that does not compile, how do you perform nightly builds or use continuous integration techniques?
[ "Hmm... I'd define \"building\" as something like \"preparing, packaging and deploying all artifacts of a software system\". The compilation to machine code is only one of many steps in the build. Others might be checking out the latest version of the code from scm-system, getting external dependencies, setting configuration values depending on the target the software gets deployed to and running some kind of test suite to ensure you've got a \"working/running build\" before you actually deploy.\n\"Building\" software can/must be done for any software, independent of your programming language. Interpreted languages have the \"disadvantage\" that syntactic or structural (meaning e.g. calling a method with wrong parameters etc.) errors normally will only be detected at runtime (if you don't have a separate step in your build which checks for such errors e.g. with PHPLint). \nThus (automated) Testcases (like Unit-Tests - see PHPUnit or SimpleTest - and Frontend-Tests - see Selenium) are all the more important for big PHP projects to ensure the good health of the code.\nThere's a great Build-Tool (like Ant for Java or Rake for Ruby) for PHP too: Phing\nCI-Systems like Xinc or Hudson are simply used to automagically (like anytime a change is checked into scm) package your code, check it for obvious errors, run your tests (in short: run your build) and report the results back to your development team.\n", "Create a daily tag of your current source control trunk?\n" ]
[ 6, 1 ]
[]
[]
[ "build_process", "continuous_integration", "interpreted_language" ]
stackoverflow_0000053807_build_process_continuous_integration_interpreted_language.txt
Q: How to Test Web Code? Does anyone have some good hints for writing test code for database-backend development where there is a heavy dependency on state? Specifically, I want to write tests for code that retrieves records from the database, but the answers will depend on the data in the database (which may change over time). Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set? I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience. Are there good articles out there that discuss this issue of web-based development in general? I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic. A: You should look into DBUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data which represents your test data, and thus each test will no longer depend on the database and some existing state. This way, each test is self-contained and will not break during further database usage. Update: A quick google search showed a DB unit extension for PHPUnit. A: If you're mostly concerned with data layer testing, you might want to check out this book: xUnit Test Patterns: Refactoring Test Code. I was always unsure about it myself, but this book does a great job to help enumerate the concerns like performance, reproducibility, etc. A: I guess it depends on what database you're using, but Red Gate (www.red-gate.com) make a tool called SQL Data Generator. This can be configured to fill your database with sensible-looking test data. You can also tell it to always use the same seed in its random number generator so your 'random' data is the same every time. You can then write your unit tests to make use of this reliable, repeatable data. As for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them! A: We use an in-memory database (hsql : http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing db, with the added bonus that they run as quick as lightning. A: I have the exact same problem with my work and I find that the best idea is to have a PHP script to re-create the database and then a separate script where I throw crazy data at it to see if it breaks it. I have not ever used any Unit testing or suchlike so cannot say if it works or not, sorry. A: If you can set up the database with a known quantity prior to running the tests and tear down at the end, then you'll know what data you are working with. Then you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI-flavours) and detect the presence of certain records pulled back from the database. It's definitely worth setting up either a test version of the database - or making your test scripts populate the database with known data as part of the tests. 
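Building on the "known data" suggestions above, here is a minimal PHPUnit sketch of the seed-and-clean-up approach (the class, table, and connection details are hypothetical):
<?php
require_once 'PHPUnit/Framework.php';
class ItemQueryTest extends PHPUnit_Framework_TestCase
{
    private $pdo;
    protected function setUp()
    {
        // Seed the exact rows this test relies on.
        $this->pdo = new PDO('mysql:host=localhost;dbname=testdb', 'user', 'pass');
        $this->pdo->exec("INSERT INTO items (id, name) VALUES (1, 'widget')");
    }
    protected function tearDown()
    {
        // Remove them again so every test starts from the same known state.
        $this->pdo->exec('DELETE FROM items WHERE id = 1');
    }
    public function testFindsSeededItem()
    {
        $stmt = $this->pdo->query('SELECT name FROM items WHERE id = 1');
        $this->assertEquals('widget', $stmt->fetchColumn());
    }
}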
A: You could try http://selenium.openqa.org/; it is more for GUI testing than a data-layer testing application, but it does record your actions, which can then be played back to automate tests across different platforms. A: Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP): I have a method that runs before all of the Unit Tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them. A: I would propose to use three databases. One production database, one development database (filled with some meaningful data for each developer) and one testing database (with empty tables and maybe a few rows that are always needed). A way to test database code is: Insert a few rows (using SQL) to initialize state Run the function that you want to test Compare expected with actual results. Here you could use your normal unit testing framework Clean up the rows that were changed (so the next run won't see the previous run) The cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table. A: In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use some CRUD API that is used in the product to create data as similar to production as possible...
How to Test Web Code?
Does anyone have some good hints for writing test code for database-backend development where there is a heavy dependency on state? Specifically, I want to write tests for code that retrieves records from the database, but the answers will depend on the data in the database (which may change over time). Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set? I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience. Are there good articles out there that discuss this issue of web-based development in general? I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic.
[ "You should look into DBUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data which represents your test data, and thus each test will no longer depend on the database and some existing state. This way, each test is self-contained and will not break during further database usage.\nUpdate: A quick google search showed a DB unit extension for PHPUnit.\n", "If you're mostly concerned with data layer testing, you might want to check out this book: xUnit Test Patterns: Refactoring Test Code. I was always unsure about it myself, but this book does a great job to help enumerate the concerns like performance, reproducibility, etc.\n", "I guess it depends on what database you're using, but Red Gate (www.red-gate.com) make a tool called SQL Data Generator. This can be configured to fill your database with sensible-looking test data. You can also tell it to always use the same seed in its random number generator so your 'random' data is the same every time.\nYou can then write your unit tests to make use of this reliable, repeatable data.\nAs for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them!\n", "We use an in-memory database (hsql : http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing db, with the added bonus that they run as quick as lightning.\n", "I have the exact same problem with my work and I find that the best idea is to have a PHP script to re-create the database and then a separate script where I throw crazy data at it to see if it breaks it.\nI have not ever used any Unit testing or suchlike so cannot say if it works or not, sorry.\n", "If you can set up the database with a known quantity prior to running the tests and tear down at the end, then you'll know what data you are working with.\nThen you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI-flavours) and detect the presence of certain records pulled back from the database.\nIt's definitely worth setting up either a test version of the database - or making your test scripts populate the database with known data as part of the tests.\n", "You could try http://selenium.openqa.org/; it is more for GUI testing than a data-layer testing application, but it does record your actions, which can then be played back to automate tests across different platforms.\n", "Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP):\nI have a method that runs before all of the Unit Tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them.\n", "I would propose to use three databases. 
One production database, one development database (filled with some meaningful data for each developer) and one testing database (with empty tables and maybe a few rows that are always needed).\nA way to test database code is:\n\nInsert a few rows (using SQL) to initialize state\nRun the function that you want to test\nCompare expected with actual results. Here you could use your normal unit testing framework\nClean up the rows that were changed (so the next run won't see the previous run)\n\nThe cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table.\n", "In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use some CRUD API that is used in the product to create data as similar to production as possible...\n" ]
[ 6, 3, 2, 2, 1, 1, 1, 1, 1, 1 ]
[]
[]
[ "database", "testing" ]
stackoverflow_0000002913_database_testing.txt
Q: Java Compiler Options to produce .exe files What compiler (I'm using gcj 4.x) options should I use to generate an "exe" file for my Java application to run in Windows? A: To compile the Java program MyJavaProg.java, type: gcj -c -g -O MyJavaProg.java To link it into a Windows executable, use the command: gcj --main=MyJavaProg -o MyJavaProg.exe MyJavaProg.o
Java Compiler Options to produce .exe files
What compiler (I'm using gcj 4.x) options should I use to generate an "exe" file for my Java application to run in Windows?
[ "To compile the Java program MyJavaProg.java, type:\ngcj -c -g -O MyJavaProg.java\n\nTo link it into a Windows executable, use the command:\ngcj --main=MyJavaProg -o MyJavaProg.exe MyJavaProg.o\n\n" ]
[ 12 ]
[]
[]
[ "compiler_construction", "gnu", "java", "windows" ]
stackoverflow_0000053845_compiler_construction_gnu_java_windows.txt
Q: How to create Virtual COM ports I am trying to implement a software Null Modem. Any suggestion how to create virtual COM Ports? Looking for tips, strategy, known techniques, etc.. A: I have used the open-source com0com on windows for this before, and it worked well. The related com2tcp project was more challenging to get working reliably.
How to create Virtual COM ports
I am trying to implement a software Null Modem. Any suggestion how to create virtual COM Ports? Looking for tips, strategy, known techniques, etc..
[ "I have used the open-source com0com on windows for this before, and it worked well. The related com2tcp project was more challenging to get working reliably.\n" ]
[ 4 ]
[]
[]
[ "serial_port", "virtual_serial_port" ]
stackoverflow_0000053857_serial_port_virtual_serial_port.txt
Q: What are you currently using for data access? What particular method/application are you using to communicate between your application and a database? Custom code with stored procedures? SubSonic? nHibernate? Entity Framework? LINQ? A: I primarily use Microsoft Enterprise Library Data Access Block to access stored procedures in MS SQL Server databases. A: I've been using NHibernate for the last year or so, and it's proved to be a really quick way of getting basic CRUD (almost) for free. If this is something you're looking to get into, I can recommend Billy McCafferty's NHibernate best practices article on CodeProject: http://www.codeproject.com/KB/architecture/NHibernateBestPractices.aspx This has proven to be a great scalable and flexible solution and makes it easy to achieve a clear separation of the DAL from the other layers. A: We're using IdeaBlade on our projects. I've found it to be pretty easy to use. A: I used Hibernate in my previous job to connect to both MySql and Sql Server but I have since switched over to .NET so currently I work with LINQ and I really enjoy it. A: At work our code base is C++ and Perl and we talk to a MySQL database. For our interface we have some fairly thin custom classes wrapped around the basic MySQL client libraries for our C++ code and the DBI module for our Perl scripts. A: SubSonic and LINQ to SQL, hopefully one day soon LINQ to SubSonic though! A: I primarily use NHibernate, both at work and on my freetime projects. This started as an attempt to break out of the norm at work to use ADO.NET datareaders/datasets and we now have a few projects using Hibernate/NHibernate. A: SqlHelper class from the older version of the MS Enterprise App Blocks. It is far from perfect, but hard to beat its simplicity for simple CRUD apps. A: MS SQL Stored Procedures. A: I usually create a DataTier with LiNQ. It consist of repositories that implement composite interfaces, so I have total flexibility on how to use them. IPersonRepository : IReadRepository<Person>, ICreateRepository<Person>, IUpdateRepository<Person> //and so on.. They are mostly domain object centric, so they emit domain objects and take care of all the mapping logic themselves. They might also create some list dictionaries, f.ex a dictionary consisting of the id and name of a person, so I don't have to pull up too much from the db to display a drop down list. Although sometimes, for smaller projects, I just use Attribute base mapping without a .dbml. I feel that this approach gives a very clean application model, because all the messy data centric logic is hidden in the DataTier. The Business-/ServiceTier is pure business :) A: SQL Server All stored procedures Handrolled polymorphic entity framework that I reuse from project to project to handle the Sproc Resultset -> Object mapping. I guess that makes me oldschool. A: MVC framework where model's has datasource classes with the actual database language, the developer in most cases uses save, saveField, delete, find etc methods and the framework translates this to sql queries. This is not only safer and easier, it is also very convenient in that the code is datasource indepenedent, ie you can change database server and keep the code. A: I've started with Hibernate on Java project at my workplace, and then realized that there exist the .Net port (NHibernate) and used it again in a .Net project. 
I've also come across the article that joesteele mentions, and used it as a base for my projects with some minor modifications when needed, mostly when needed to target transaction beginning and ending manually. The same practice and library can be applied to both Java and C# platforms, targeting Windows or Linux as application platforms, which makes development on different platforms easier than needing to learn different frameworks. Although I'm planning to examine the Subsonic, iBatis and LINQ, for now Hibernate and NHibernate seem like the right tool for the job while I have to target both Windows and Linux platforms. A: We've got an oracle back end with something like 500 stored procedures where applications run directly against the data. I started building a custom or-mapped domain model that I've been integrating but I did it wrong initially and now am stuck dealing with that headache as well...ugh
What are you currently using for data access?
What particular method/application are you using to communicate between your application and a database? Custom code with stored procedures? SubSonic? nHibernate? Entity Framework? LINQ?
[ "I primarily use Microsoft Enterprise Library Data Access Block to access stored procedures in MS SQL Server databases.\n", "I've been using NHibernate for the last year or so, and it's proved to be a really quick way of getting basic CRUD (almost) for free.\nIf this is something you're looking to get into, I can recommend Billy McCafferty's NHibernate best practices article on CodeProject:\nhttp://www.codeproject.com/KB/architecture/NHibernateBestPractices.aspx\nThis has proven to be a great scalable and flexible solution and makes it easy to achieve a clear separation of the DAL from the other layers.\n", "We're using IdeaBlade on our projects. I've found it to be pretty easy to use. \n", "I used Hibernate in my previous job to connect to both MySql and Sql Server but I have since switched over to .NET so currently I work with LINQ and I really enjoy it. \n", "At work our code base is C++ and Perl and we talk to a MySQL database. For our interface we have some fairly thin custom classes wrapped around the basic MySQL client libraries for our C++ code and the DBI module for our Perl scripts.\n", "SubSonic and LINQ to SQL, hopefully one day soon LINQ to SubSonic though!\n", "I primarily use NHibernate, both at work and on my freetime projects. This started as an attempt to break out of the norm at work to use ADO.NET datareaders/datasets and we now have a few projects using Hibernate/NHibernate.\n", "SqlHelper class from the older version of the MS Enterprise App Blocks. It is far from perfect, but hard to beat its simplicity for simple CRUD apps.\n", "MS SQL Stored Procedures.\n", "I usually create a DataTier with LiNQ.\nIt consist of repositories that implement composite interfaces, so I have total flexibility on how to use them. \nIPersonRepository : IReadRepository<Person>, ICreateRepository<Person>, IUpdateRepository<Person> //and so on..\n\nThey are mostly domain object centric, so they emit domain objects and take care of all the mapping logic themselves.\nThey might also create some list dictionaries, f.ex a dictionary consisting of the id and name of a person, so I don't have to pull up too much from the db to display a drop down list.\nAlthough sometimes, for smaller projects, I just use Attribute base mapping without a .dbml.\nI feel that this approach gives a very clean application model, because all the messy data centric logic is hidden in the DataTier. The Business-/ServiceTier is pure business :)\n", "\nSQL Server\nAll stored procedures\nHandrolled polymorphic entity framework that I reuse from project to project to handle the Sproc Resultset -> Object mapping.\n\nI guess that makes me oldschool.\n", "MVC framework where model's has datasource classes with the actual database language, the developer in most cases uses save, saveField, delete, find etc methods and the framework translates this to sql queries. This is not only safer and easier, it is also very convenient in that the code is datasource indepenedent, ie you can change database server and keep the code.\n", "I've started with Hibernate on Java project at my workplace, and then realized that there exist the .Net port (NHibernate) and used it again in a .Net project. 
I've also come across the article that joesteele mentions, and used it as a base for my projects with some minor modifications when needed, mostly when needed to target transaction beginning and ending manually.\nThe same practice and library can be applied to both Java and C# platforms, targeting Windows or Linux as application platforms, which makes development on different platforms easier than needing to learn different frameworks.\nAlthough I'm planning to examine the Subsonic, iBatis and LINQ, for now Hibernate and NHibernate seem like the right tool for the job while I have to target both Windows and Linux platforms.\n", "We've got an oracle back end with something like 500 stored procedures where applications run directly against the data.\nI started building a custom or-mapped domain model that I've been integrating but I did it wrong initially and now am stuck dealing with that headache as well...ugh\n" ]
[ 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ ".net", "data_access", "data_binding", "data_structures", "devforce" ]
stackoverflow_0000045152_.net_data_access_data_binding_data_structures_devforce.txt
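To make the composite repository interfaces from the LINQ answer above concrete, here is a hedged C# sketch; the Person type, the method names and the drop-down lookup are illustrative assumptions rather than a prescribed API:

using System.Collections.Generic;

// Hypothetical domain type, used only for illustration.
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Small role interfaces; a concrete repository composes only the
// capabilities it actually supports.
public interface IReadRepository<T>
{
    T GetById(int id);
}

public interface ICreateRepository<T>
{
    void Create(T entity);
}

public interface IUpdateRepository<T>
{
    void Update(T entity);
}

// The person repository from the answer: readable, creatable and
// updatable, but deliberately not deletable.
public interface IPersonRepository :
    IReadRepository<Person>, ICreateRepository<Person>, IUpdateRepository<Person>
{
    // Narrow id-to-name lookup so callers don't pull whole entities
    // just to fill a drop-down list.
    IDictionary<int, string> GetNameLookup();
}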
Q: Can XpsDocuments be serialized to XML for storage in a database? And, if not, is the only other alternative a blob? A: XPS documents are zip files that contain XML. You could extract the contents of the zip file and store that in the database, but then you would need to unzip and re-zip every time data came in or out of the database. Edit: In other words, not in any practical manner.
Can XpsDocuments be serialized to XML for storage in a database?
And, if not, is the only other alternative a blob?
[ "XPS documents are zip files that contain XML. You could extract the contents of the zip file and store that in the database, but then you would need to unzip and re-zip every time data came in or out of the database.\nEdit: In other words, not in any practical manner.\n" ]
[ 1 ]
[]
[]
[ "sql_server", "xpsdocument" ]
stackoverflow_0000053841_sql_server_xpsdocument.txt
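If the blob route the question asks about is taken, the pragmatic option is to store the .xps package bytes untouched and only unzip when the XML inside is actually needed; a small C# sketch where the file paths are placeholders and the byte[] would be bound to a varbinary(max) parameter with ordinary ADO.NET:

using System.IO;

static class XpsBlobHelper
{
    // An .xps file is already a compressed zip package, so the raw
    // bytes are what gets written to and read from the blob column.
    public static byte[] Load(string xpsPath)
    {
        return File.ReadAllBytes(xpsPath);
    }

    public static void Save(string xpsPath, byte[] blob)
    {
        File.WriteAllBytes(xpsPath, blob);
    }
}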
Q: How to control IIS 5.1 from command line? I found some information about controlling IIS 5.1 from command line via adsutil.vbs (http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/d3df4bc9-0954-459a-b5e6-7a8bc462960c.mspx?mfr=true). The utility is available at c:\InetPub\AdminScripts. The utility throws only errors like the following: ErrNumber: -2147463164 (0x80005004) Error Trying To GET the Schema of the property: IIS://localhost/Schema/ROOT Can you tell me how to check if there exists a virtual directory and create it, if it does not exist? A: Hope this helps you. http://www.codeproject.com/KB/system/commandlineweb.aspx A: I could not comment your post, so I have to write a new message. I was able to use the script CreateWebDir.vbs from your link and use it to create/update my virtual directory with only one call: CreateWebDir.vbs DirName Path 80 If the virtual directory already exists, it changes the path and that's exactly what I need. Thank you!
How to control IIS 5.1 from command line?
I found some information about controlling IIS 5.1 from command line via adsutil.vbs (http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/d3df4bc9-0954-459a-b5e6-7a8bc462960c.mspx?mfr=true). The utility is available at c:\InetPub\AdminScripts. The utility throws only errors like the following: ErrNumber: -2147463164 (0x80005004) Error Trying To GET the Schema of the property: IIS://localhost/Schema/ROOT Can you tell me how to check if there exists a virtual directory and create it, if it does not exist?
[ "Hope this helps you.\nhttp://www.codeproject.com/KB/system/commandlineweb.aspx\n", "I could not comment your post, so I have to write a new message. I was able to use the script CreateWebDir.vbs from your link and use it to create/update my virtual directory with only one call:\nCreateWebDir.vbs DirName Path 80\nIf the virtual directory already exists, it changes the path and that's exactly, what I need. Thank you!\n" ]
[ 0, 0 ]
[]
[]
[ "iis", "windows_xp" ]
stackoverflow_0000053618_iis_windows_xp.txt
Q: Is there a way to make a constructor only visible to a parent class in C#? I have a collection of classes that inherit from an abstract class I created. I'd like to use the abstract class as a factory for creating instances of concrete implementations of my abstract class. Is there any way to hide a constructor from all code except a parent class. I'd like to do this basically public abstract class AbstractClass { public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return new ConcreteClassA(); if (args == "b") return new ConcreteClassB(); } } public class ConcreteClassA : AbstractClass { } public class ConcreteClassB : AbstractClass { } But I want to prevent anyone from directly instantiating the 2 concrete classes. I want to ensure that only the MakeAbstractClass() method can instantiate the base classes. Is there any way to do this? UPDATE I don't need to access any specific methods of ConcreteClassA or B from outside of the Abstract class. I only need the public methods my Abstract class provides. I don't really need to prevent the Concrete classes from being instantiated, I'm just trying to avoid it since they provide no new public interfaces, just different implementations of some very specific things internal to the abstract class. To me, the simplest solution is to make child classes as samjudson mentioned. I'd like to avoid this however since it would make my abstract class' file a lot bigger than I'd like it to be. I'd rather keep classes split out over a few files for organization. I guess there's no easy solution to this... A: To me, the simplest solution is to make child classes as samjudson mentioned. I'd like to avoid this however since it would make my abstract class' file a lot bigger than I'd like it to be. I'd rather keep classes split out over a few files for organization. No problem, just use partial keyword and you can split your inner classes into as many files as you wish. You don't have to keep it in the same file. Previous answer: It's possible but only with reflection public abstract class AbstractClass { public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return (AbstractClass)Activator.CreateInstance(typeof(ConcreteClassA), true); if (args == "b") return (AbstractClass)Activator.CreateInstance(typeof(ConcreteClassB), true); } } public class ConcreteClassA : AbstractClass { private ConcreteClassA() { } } public class ConcreteClassB : AbstractClass { private ConcreteClassB() { } } and here is another pattern, without ugly MakeAbstractClass(string args) public abstract class AbstractClass<T> where T : AbstractClass<T> { public static T MakeAbstractClass() { T value = (T)Activator.CreateInstance(typeof(T), true); // your processing logic return value; } } public class ConcreteClassA : AbstractClass<ConcreteClassA> { private ConcreteClassA() { } } public class ConcreteClassB : AbstractClass<ConcreteClassB> { private ConcreteClassB() { } } A: If the classes are in the same assembly, can you not make the constructors internal? A: You can make the sub classes child classes, something like this: public abstract class AbstractClass { public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return new ConcreteClassA(); if (args == "b") return new ConcreteClassB(); } private class ConcreteClassA : AbstractClass { } private class ConcreteClassB : AbstractClass { } } @Vaibhav This does indeed mean that the classes are also hidden. 
But this is as far as I am aware the only way to completely hide the constructor. Edit: As others have mentioned the same thing can be accomplished using Reflection, which might actually be closer to what you would like to be the case - for example the above method relies on the concrete classes being inside the same file as the Abstract class, which probably isn't very convenient. Having said that this way is a nice 'Hack', and good if the number and complexity of the concrete classes is low. A: No, I don't think we can do that. A: Following on from the accepted answer, if you had a public interface and made the private classes implement the interface, you could then return a pointer to the interface and anyone outside of your parent abstract class could then use them (whilst still hiding the child classes). A: Do you actually need to do this? If you're using some kind of pseudo factory pattern without a true design need for it, you're only going to make your code harder to understand, maintain and extend. If you don't need to do this, just implement a true factory pattern. Or, more ALTy, use a DI/IoC framework. A: Can't you use the keyword partial for splitting the code for a class into many files? A: If you are using this class in a separate service assembly, you can use the internal keyword. public class AbstractClass { public AbstractClass ClassFactory(string args) { switch (args) { case "A": return new ConcreteClassA(); case "B": return new ConcreteClassB(); default: return null; } } } public class ConcreteClassA : AbstractClass { internal ConcreteClassA(){ } } public class ConcreteClassB : AbstractClass { internal ConcreteClassB() {} }
Is there a way to make a constructor only visible to a parent class in C#?
I have a collection of classes that inherit from an abstract class I created. I'd like to use the abstract class as a factory for creating instances of concrete implementations of my abstract class. Is there any way to hide a constructor from all code except a parent class. I'd like to do this basically public abstract class AbstractClass { public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return new ConcreteClassA(); if (args == "b") return new ConcreteClassB(); } } public class ConcreteClassA : AbstractClass { } public class ConcreteClassB : AbstractClass { } But I want to prevent anyone from directly instantiating the 2 concrete classes. I want to ensure that only the MakeAbstractClass() method can instantiate the base classes. Is there any way to do this? UPDATE I don't need to access any specific methods of ConcreteClassA or B from outside of the Abstract class. I only need the public methods my Abstract class provides. I don't really need to prevent the Concrete classes from being instantiated, I'm just trying to avoid it since they provide no new public interfaces, just different implementations of some very specific things internal to the abstract class. To me, the simplest solution is to make child classes as samjudson mentioned. I'd like to avoid this however since it would make my abstract class' file a lot bigger than I'd like it to be. I'd rather keep classes split out over a few files for organization. I guess there's no easy solution to this...
[ "\nTo me, the simplest solution is to\n make child classes as samjudson\n mentioned. I'd like to avoid this\n however since it would make my\n abstract class' file a lot bigger than\n I'd like it to be. I'd rather keep\n classes split out over a few files for\n organization.\n\nNo problem, just use partial keyword and you can split your inner classes into as many files as you wish. You don't have to keep it in the same file.\nPrevious answer:\nIt's possible but only with reflection\npublic abstract class AbstractClass\n{\n public static AbstractClass MakeAbstractClass(string args)\n {\n if (args == \"a\")\n return (AbstractClass)Activator.CreateInstance(typeof(ConcreteClassA), true);\n if (args == \"b\")\n return (AbstractClass)Activator.CreateInstance(typeof(ConcreteClassB), true);\n }\n}\n\npublic class ConcreteClassA : AbstractClass\n{\n private ConcreteClassA()\n {\n }\n}\n\npublic class ConcreteClassB : AbstractClass\n{\n private ConcreteClassB()\n {\n }\n}\n\nand here is another pattern, without ugly MakeAbstractClass(string args)\npublic abstract class AbstractClass<T> where T : AbstractClass<T>\n{\n public static T MakeAbstractClass()\n {\n T value = (T)Activator.CreateInstance(typeof(T), true);\n // your processing logic\n return value;\n }\n}\n\npublic class ConcreteClassA : AbstractClass<ConcreteClassA>\n{\n private ConcreteClassA()\n {\n }\n}\n\npublic class ConcreteClassB : AbstractClass<ConcreteClassB>\n{\n private ConcreteClassB()\n {\n }\n}\n\n", "If the classes are in the same assembly, can you not make the constructors internal?\n", "You can make the sub classes child classes, something like this:\npublic abstract class AbstractClass\n{\n public static AbstractClass MakeAbstractClass(string args)\n {\n if (args == \"a\")\n return new ConcreteClassA();\n if (args == \"b\")\n return new ConcreteClassB();\n }\n\n private class ConcreteClassA : AbstractClass\n {\n }\n\n private class ConcreteClassB : AbstractClass\n {\n }\n}\n\n@Vaibhav This does indeed mean that the classes are also hidden. But this is as far as I am aware the only way to completely hide the constructor.\nEdit: As others have mentioned the same thing can be accomplished using Reflection, which might actually be closer to what you would like to be the case - for example the above method replies on the concrete classes being inside the same file as the Abstract class, which probably isn't very convenient. Having said that this way is a nice 'Hack', and good if the number and complexity of the concrete classes is low.\n", "No, I don't think we can do that.\n", "Following on from the accepted answer, if you had a public interface and made the private classes implement the interface, you could then return a pointer to the interface and anyone outside of your parent abstract class could then use them (whilst still hiding the child classes).\n", "Do you actually need to do this? If you're using some kind of pseudo factory pattern without a true design need for it, you're only going to make your code harder to understand, maintain and extend.\nIf you don't need to do this, just implement a true factory pattern. 
Or, more ALTy, use a DI/IoC framework.\n", "Can't you use the keyword partial for splitting the code for a class into many files?\n", "If you are using this class in a separate service assembly, you can use the internal keyword.\npublic class AbstractClass\n{\n public AbstractClass ClassFactory(string args)\n {\n switch (args)\n {\n case \"A\":\n return new ConcreteClassA(); \n case \"B\":\n return new ConcreteClassB(); \n default:\n return null;\n }\n }\n}\n\npublic class ConcreteClassA : AbstractClass\n{\n internal ConcreteClassA(){ }\n}\n\npublic class ConcreteClassB : AbstractClass\n{\n internal ConcreteClassB() {}\n}\n\n" ]
[ 6, 3, 2, 1, 1, 0, 0, 0 ]
[ "What you need to do is this to prevent the default constructor to be create. The internal can be change to public if the classes are not in the same assembly.\npublic abstract class AbstractClass{\n\n public static AbstractClass MakeAbstractClass(string args)\n {\n if (args == \"a\")\n return ConcreteClassA().GetConcreteClassA();\n if (args == \"b\")\n return ConcreteClassB().GetConcreteClassB();\n }\n}\n\npublic class ConcreteClassA : AbstractClass\n{\n private ConcreteClassA(){}\n\n internal static ConcreteClassA GetConcreteClassA(){\n return ConcreteClassA();\n }\n}\n\npublic class ConcreteClassB : AbstractClass\n{\n private ConcreteClassB(){}\n internal static ConcreteClassB Get ConcreteClassB(){\n return ConcreteClassB();\n }\n\n}\n\n" ]
[ -2 ]
[ "c#", "inheritance", "oop" ]
stackoverflow_0000020185_c#_inheritance_oop.txt
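The nested-private-class answer and the partial-class tip combine naturally, which also addresses the questioner's worry about file size; a C# sketch using the thread's class names (the ArgumentException for unknown arguments is an added assumption):

// AbstractClass.cs
public abstract partial class AbstractClass
{
    public static AbstractClass MakeAbstractClass(string args)
    {
        if (args == "a") return new ConcreteClassA();
        if (args == "b") return new ConcreteClassB();
        throw new System.ArgumentException("Unknown kind: " + args, "args");
    }
}

// ConcreteClassA.cs -- partial lets each nested class live in its own file.
public abstract partial class AbstractClass
{
    private class ConcreteClassA : AbstractClass { }
}

// ConcreteClassB.cs
public abstract partial class AbstractClass
{
    private class ConcreteClassB : AbstractClass { }
}

Because ConcreteClassA and ConcreteClassB are private nested types, nothing outside AbstractClass can name them, let alone construct them, yet the factory still hands them out as the public base type.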
Q: Can a Linq query retrieve BLOBs from a Sql Database? Can a Linq query retrieve BLOBs from a Sql Database? And how do they come out? A: LINQ-To-SQL classes create properties of type System.Data.Linq.Binary for all binary and varbinary fields in SQL-Server. The Binary type has a .ToArray() method that returns a byte[], and its constructor can take a byte[]. Older versions of SQLMetal generated properties of type byte[], but the problem with these was that they failed in any joins. I think that's the main reason they replaced it with the IEquatable Binary type. A: If I'm not mistaken LINQ to SQL treats BLOB as System.Byte[] I recall that there was some problem with SqlMetal, it generated wrong type for BLOB, but MSVS dbml designer should work.
Can a Linq query retrieve BLOBs from a Sql Database?
Can a Linq query retrieve BLOBs from a Sql Database? And how do they come out?
[ "LINQ-To-SQL classes create properties of type System.Data.Linq.Binary for all binary and varbinary fields in SQL-Server. The Binary type has a .ToArray() method that returns a byte[], and its constructor can take a byte[].\nOlder versions of SQLMetal generated properties of type byte[], but the problem with these was that they failed in any joins. I think that's the main reason they replaced it with the IEquatable Binary type.\n", "If I'm not mistaken LINQ to SQL teats BLOB as System.Byte[]\nI recall that there was some problem with SqlMetal, it generated wrong type for BLOB, but MSVS dmbl designer should work.\n" ]
[ 5, 1 ]
[]
[]
[ "blob", "linq_to_sql", "sql_server" ]
stackoverflow_0000053873_blob_linq_to_sql_sql_server.txt
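A short C# illustration of the Binary/byte[] round trip the accepted answer describes; it needs a reference to the System.Data.Linq assembly, and photo.jpg is a hypothetical stand-in for any blob payload:

using System;
using System.Data.Linq;
using System.IO;

class BlobRoundTrip
{
    static void Main()
    {
        byte[] original = File.ReadAllBytes("photo.jpg");

        // What LINQ to SQL hands you for binary/varbinary columns:
        Binary blob = new Binary(original);

        // And back to raw bytes for streaming, hashing or saving:
        byte[] roundTripped = blob.ToArray();

        Console.WriteLine("{0} bytes in, {1} bytes out",
            original.Length, roundTripped.Length);
    }
}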
Q: Best way to draw text with OpenGL and Cocoa? Does anyone know an easy way to draw arbitrary text in a Cocoa NSOpenGLView? I have a couple of constraints. The text on screen may change from frame to frame (for example, a framerate display in the corner) I would like to be able to select any font installed on the system at any size A: Have you taken a look at the Cocoa OpenGL sample code? It includes "a texture class for strings, showing how to use an NSImage to write a string into and then texture from for high quality font rendering."
Best way to draw text with OpenGL and Cocoa?
Does anyone know an easy way to draw arbitrary text in a Cocoa NSOpenGLView? I have a couple of constraints. The text on screen may change from frame to frame (for example, a framerate display in the corner) I would like to be able to select any font installed on the system at any size
[ "Have you taken a look at the Cocoa OpenGL sample code? It includes \"a texture class for strings, showing how to use an NSImage to write a string into and then texture from for high quality font rendering.\"\n" ]
[ 8 ]
[]
[]
[ "cocoa", "opengl" ]
stackoverflow_0000053309_cocoa_opengl.txt
Q: iTunes warning message on quit due to scripting Wrote the following in PowerShell as a quick iTunes demonstration: $iTunes = New-Object -ComObject iTunes.Application $LibrarySource = $iTunes.LibrarySource foreach ($PList in $LibrarySource.Playlists) { write-host $PList.name } This works well and pulls back a list of playlist names. However on trying to close iTunes a warning appears One or more applications are using the iTunes scripting interface. Are you sure you want to quit? Obviously I can just ignore the message and press [Quit] or just wait the 20 seconds or so, but is there a clean way to tell iTunes that I've finished working with it? Itunes 7.7.1, Windows XP A: Here is one thing that I did in a PowerShell script that adds podcasts to iTunes. I use Juice on a server to download all the podcasts that I listen to. The script uses .Net methods to release the COM objects. When I wrote my iTunes script I had read a couple of articles that stated you should release your COM objects using .NET. [void][System.Runtime.InteropServices.Marshal]::ReleaseComObject([System.__ComObject]$LibrarySource) [void][System.Runtime.InteropServices.Marshal]::ReleaseComObject([System.__ComObject]$iTunes) I also run my scripts the majority of time from a shortcut, not from the powershell prompt. Based on your comments, I did some testing and I determined that I would get the message when running against iTunes, if I ran my script in a way that leaves powershell running. iTunes seems to keep track of that. Running the script in a manner that exits its process after running, eliminated the message. One method of running your script from powershell, is to prefix your script with powershell. powershell .\scriptname.ps1 The above command will launch your script and then exit the process that was used to run it, but still leaving you at the powershell prompt. A: You should be able to set $itunes to $null. Alternatively, $itunes should have a quit method you can call. $itunes.quit()
iTunes warning message on quit due to scripting
Wrote the following in PowerShell as a quick iTunes demonstration: $iTunes = New-Object -ComObject iTunes.Application $LibrarySource = $iTunes.LibrarySource foreach ($PList in $LibrarySource.Playlists) { write-host $PList.name } This works well and pulls back a list of playlist names. However on trying to close iTunes a warning appears One or more applications are using the iTunes scripting interface. Are you sure you want to quit? Obviously I can just ignore the message and press [Quit] or just wait the 20 seconds or so, but is there a clean way to tell iTunes that I've finished working with it? Itunes 7.7.1, Windows XP
[ "Here is one thing that I did on my a Powershell script that adds podcasts to iTunes. I use Juice on a server to download all the podcasts that I listen to. The script uses .Net methods to release the COM objects. When I wrote my iTunes script I had read a couple of articles that stated you should release your COM objects using .NET.\n\n [void][System.Runtime.InteropServices.Marshal]::ReleaseComObject([System.__ComObject]$LibrarySource)\n [void][System.Runtime.InteropServices.Marshal]::ReleaseComObject([System.__ComObject]$iTunes)\n\nI also run my scripts the majority of time from a shortcut, not from the powershell prompt. \nBased on your comments, I did some testing and I determined that I would get the message when running against iTunes, if I ran my script in a way that leaves powershell running. iTunes seems to keep track of that. Running the script in a manner that exits it's process after running, eliminated the message.\nOne method of running your script from powershell, is to prefix your script with powershell.\npowershell .\\scriptname.ps1\nThe above command will launch your script and then exit the process that was used to run it, but still leaving you at the powershell prompt.\n", "You should be able to set $itunes to $null. Alternatively, $itunes should have a quit method you can call. $itunes.quit()\n" ]
[ 4, 0 ]
[]
[]
[ "itunes", "powershell" ]
stackoverflow_0000052286_itunes_powershell.txt
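The same explicit COM release works from C# when the automation is done there instead of PowerShell; a hedged sketch in which only the iTunes.Application ProgID comes from the thread, and the empty try block is where the playlist access would go:

using System;
using System.Runtime.InteropServices;

class ITunesComCleanup
{
    static void Main()
    {
        // Late-bound COM automation, the C# analogue of
        // New-Object -ComObject iTunes.Application (assumes iTunes
        // is installed and the ProgID is registered).
        Type t = Type.GetTypeFromProgID("iTunes.Application");
        object iTunes = Activator.CreateInstance(t);

        try
        {
            // ... use the scripting interface here ...
        }
        finally
        {
            // Tell COM we are done; without this iTunes may keep
            // warning on quit until the calling process exits.
            Marshal.ReleaseComObject(iTunes);
        }
    }
}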
Q: What is wrong with this PHP regular expression? $output = preg_replace("|(/D)(/s+)(/d+)(;)|", "//1,//3;", $output); I'm trying to replace all alphabetical character followed by one or more whitespace characters (tabs and/or spaces) followed by one or more numerical characters followed by a semicolon with the alphabetical character followed by a comma followed by the numerical digits and then the semicolon. I'll provide an example: Start: hello world 1007; End: hello world,1007; A: The two | at the start and end probably are incorrect - and should both be forward-slashes. All other forward slashes should be backward slashes (and need escaping). And since PHP 4.04 $n is the preferred way of referring to a capture group. $output = preg_replace("/(\\D)\\s+(\\d+;)/", "$1,$2", $output); If you use single quotes you don't need to escape your backslashes: $output = preg_replace('/(\D)\s+(\d+;)/', '$1,$2', $output); A: Should those forward-slashes be backslashes? You'll need to escape them for PHP too unless you change your double-quotes to single-quotes. A: You want backslashes in the regular expression, not forward slashes. The starting and ending pipes are needed (or another delimiter for the regex) $x = "hello world 1007;"; echo preg_replace('|(\D)(\s+)(\d+)(;)|','$1,$3',$x); echo preg_replace('/(\D)(\s+)(\d+)(;)/','$1,$3',$x); echo preg_replace('{(\D)(\s+)(\d+)(;)}','$1,$3',$x);
What is wrong with this PHP regular expression?
$output = preg_replace("|(/D)(/s+)(/d+)(;)|", "//1,//3;", $output); I'm trying to replace all alphabetical character followed by one or more whitespace characters (tabs and/or spaces) followed by one or more numerical characters followed by a semicolon with the alphabetical character followed by a comma followed by the numerical digits and then the semicolon. I'll provide an example: Start: hello world 1007; End: hello world,1007;
[ "The two | at the start and end probably are incorrect - and should both be forward-slashes.\nAll other forward slashes should be backward slashes (and need escaping).\nAnd since PHP 4.04 $n is the preferred way of referring to a capture group.\n$output = preg_replace(\"/(\\\\D)\\\\s+(\\\\d+;)/\", \"$1,$2\", $output);\n\nIf you use single quotes you don't need to escape your backslashes:\n$output = preg_replace('/(\\D)\\s+(\\d+;)/', '$1,$2', $output);\n\n", "Should those forward-slashes be backslashes? You'll need to escape them for PHP too unless you change your double-quotes to single-quotes.\n", "You want backslashes in the regular expression, not forward slashes. The starting and ending pipes are needed (or another delimiter for the regex)\n$x = \"hello world 1007;\"; \necho preg_replace('|(\\D)(\\s+)(\\d+)(;)|','$1,$3',$x);\necho preg_replace('/(\\D)(\\s+)(\\d+)(;)/','$1,$3',$x);\necho preg_replace('{(\\D)(\\s+)(\\d+)(;)}','$1,$3',$x);\n\n" ]
[ 6, 3, 1 ]
[]
[]
[ "php", "regex" ]
stackoverflow_0000053965_php_regex.txt
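For comparison, the corrected pattern behaves the same way outside PHP; a quick C# check using the example string from the question:

using System;
using System.Text.RegularExpressions;

class CollapseWhitespaceBeforeNumber
{
    static void Main()
    {
        string input = "hello world     1007;";

        // A non-digit, a run of whitespace, then digits ending in a
        // semicolon -- collapsed to "non-digit, comma, digits".
        string output = Regex.Replace(input, @"(\D)\s+(\d+;)", "$1,$2");

        Console.WriteLine(output); // hello world,1007;
    }
}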
Q: How can I lay images out in a grid? I'm trying to produce sheets of photographs with captions arranged in a grid using XSLT and XSL-FO. The photo URLs and captions are produced using a FOR XML query against an SQL Server database, and the number of photos returned varies from sheet to sheet. I want to lay the photos out in four columns, filling the grid from left to right and from top to bottom. In HTML I'd do this by putting each photo and caption into a div and using "float: left" to make them flow into the grid. Is there a similarly elegant method using XSL-FO? A: To keep life simple I would normally set up a table for this, it's quite simple and will ensure that things get laid out right. If you wanted to do it similarly to how you would do it in HTML then you should lay out block-container elements. However you decide to do it I would always recommend using the ZVON Reference site. Nice lookup of elements and available attributes, and while their XSL-FO doesn't include much in the way of explanation, every page deep links to the standards document. A: In the end I used a table with one row and four cells for this. In each one I selected the source elements with position() mod 4 equal to 0, 1, 2 or 3 as appropriate, and then made sure that the photo and caption was always the same height so the rows lined up correctly.
How can I lay images out in a grid?
I'm trying to produce sheets of photographs with captions arranged in a grid using XSLT and XSL-FO. The photo URLs and captions are produced using a FOR XML query against an SQL Server database, and the number of photos returned varies from sheet to sheet. I want to lay the photos out in four columns, filling the grid from left to right and from top to bottom. In HTML I'd do this by putting each photo and caption into a div and using "float: left" to make them flow into the grid. Is there a similarly elegant method using XSL-FO?
[ "To keep life simple I would normally setup a table for this, it's quite simple and will ensure that things get laid out right. If you wanted to do it similarly to how you would do it in HTML then you should layout block-container elements.\nHowever you decide to do it I would always recommend using the ZVON Reference site. Nice lookup of elements and available attributes, and while their XSL-FO doesn't include much in the way of explanation every page deep links to the standards document.\n", "In the end I used a table with one row and four cells for this. In each one I selected the source elements with position() mod 4 equal to 0, 1, 2 or 3 as appropriate, and then made sure that the photo and caption was always the same height so the rows lined up correctly.\n" ]
[ 4, 0 ]
[]
[]
[ "xml", "xsl_fo", "xslt" ]
stackoverflow_0000053913_xml_xsl_fo_xslt.txt
Q: Any restrictions on development in Vista I'm looking at a new computer which will probably have vista on it. But there are so many editions of vista; are there any weird restrictions on what you can run on the various editions? For instance you couldn't run IIS on Windows ME. Can you still run IIS on the home editions of vista? A: You can't run Aero on the 'basic' editions, and there are some 'extras' that only run in Ultimate. You probably won't care about those for development, though. The only thing to be careful of would be that it has the same client access restrictions that XP did. A: Vista Home Basic only has enough IIS features to host WCF services and does not have any of web server features for hosting static files, asp.net, etc. Here is a link to compare editions. I would recommend going with Home Premium or Ultimate depending on whether the computer will run on a domain. A: Get Home Premium unless you need to connect to a domain controller (if you don't know what that is, you don't need it).
Any restrictions on development in Vista
I'm looking at a new computer which will probably have vista on it. But there are so many editions of vista; are there any weird restrictions on what you can run on the various editions? For instance you couldn't run IIS on Windows ME. Can you still run IIS on the home editions of vista?
[ "You can't run Aero on the 'basic' editions, and there are some 'extras' that only run in Ultimate. You probably won't care about those for development, though. The only thing to be careful of would be that it has the same client access restrictions that XP did.\n", "Vista Home Basic only has enough IIS features to host WCF services and does not have any of web server features for hosting static files, asp.net, etc.\nHere is a link to compare editions. I would recommend going with Home Premium or Ultimate depending on whether the computer will run on a domain.\n", "Get Home Premium unless you need to connect to a domain controller (if you don't know what that is, you don't need it).\n" ]
[ 0, 0, 0 ]
[]
[]
[ "iis", "windows_vista" ]
stackoverflow_0000054068_iis_windows_vista.txt
Q: DataTable to readable text string This might be a bit on the silly side of things but I need to send the contents of a DataTable (unknown columns, unknown contents) via a text e-mail. Basic idea is to loop over rows and columns and output all cell contents into a StringBuilder using .ToString(). Formatting is a big issue though. Any tips/ideas on how to make this look "readable" in a text format ? I'm thinking on "padding" each cell with empty spaces, but I also need to split some cells into multiple lines, and this makes the StringBuilder approach a bit messy ( because the second line of text from the first column comes after the first line of text in the last column,etc.) A: Would converting the datatable to a HTML-table and sending HTML-mail be an alternative? That would make it much nicer on the receiving end if their client supports it. A: This will sound like a really horrible solution, but it just might work: Render the DataTable contents into a DataGrid/GridView (assuming ASP.NET) and then screen scrape that. I told you it would be messy. A: Get the max size for each column first. That way a varchar(255) column containing postal codes won't take up too much space. Maybe you can split the complete table instead of splitting single lines. Put the complete right part of the table in a second stringbuilder and put it beneath the first table. You can also give the user the option to create comma delimited text so the receiver can import the table into a spreadsheet. A: Loop through the datatable and send it as HTML email - generating html table from datatable & sending it as body of email. A: I got this working by writing a custom formatter specifically for this task. The code is about 120-130 lines long, so I don't know if I should post it here as an answer (maybe a feature to attach .cs files to a topic would be a good idea!). Anyway, if anyone is interested in this, let me know and I'll provide the code. A: Does it need to be formatted nicely, or will an automated system pick up the mail message on the other end? If the latter, just use the datatable's .WriteXml() method.
DataTable to readable text string
This might be a bit on the silly side of things but I need to send the contents of a DataTable (unknown columns, unknown contents) via a text e-mail. Basic idea is to loop over rows and columns and output all cell contents into a StringBuilder using .ToString(). Formatting is a big issue though. Any tips/ideas on how to make this look "readable" in a text format ? I'm thinking on "padding" each cell with empty spaces, but I also need to split some cells into multiple lines, and this makes the StringBuilder approach a bit messy ( because the second line of text from the first column comes after the first line of text in the last column,etc.)
[ "Would converting the datatable to a HTML-table and sending HTML-mail be an alternative? That would make it much nicer on the receiving end if their client supports it.\n", "This will sound like a really horrible solution, but it just might work: \nRender the DataTable contents into a DataGrid/GridView (assuming ASP.NET) and then screen scrape that. \nI told you it would be messy.\n", "Get the max size for each column first. That way a varchar(255) column containing postal codes won't take up too much space.\nMaybe you can split the complete table instead of splitting single lines. Put the complete right part of the table in a second stringbuilder and put it beneath the first table.\nYou can also give the user the option to create comma delimited text so the receiver can import the table into a spreadsheet.\n", "Loop through the datatable and send it as HTML email - generating html table from datatable & sending it as body of email.\n", "I got this working by writing a custom formatter specifically for this task. The code is about 120 -130 lines long, so I don't know if I should post it here as an answer (maybe a feature to attach .cs files to a topic would be a good ideea!) .\nAnyway, if anyone is interested in this, let me know and I'll provide the code.\n", "Does it need to be formatted nicely, or will an automated system pick up the mail message on the other end? If the latter, just use the datatable's .WriteXml() method.\n" ]
[ 1, 1, 1, 0, 0, 0 ]
[ "You can do smth like this (if VB):\nDim Str As String = \"\"\n 'Create File if doesn't exist\n Dim FILE_NAME As String = \"C:\\temp\\Custom.txt\"\n If System.IO.File.Exists(FILE_NAME) = False Then\n System.IO.File.Create(FILE_NAME)\n End If\n\n Dim objWriter As System.IO.StreamWriter\n Try\n objWriter = New System.IO.StreamWriter(FILE_NAME)\n Catch ex As System.IO.IOException\n MsgBox(\"Please close the file: (C:\\temp\\Custom.txt) before proceeding\" & vbCrLf & ex.Message.ToString, MsgBoxStyle.Exclamation)\n objWriter = Nothing\n Err = True\n End Try\n\n\n'I assume you know how to write to text file.\n'Say my datagridview is named \"dgrid\"\n\nDim x,y as integer\n\nFor x = 0 to dgrid.rows.count -1\n For y = 0 to dgrid.columns.count - 1\n Str = dgrid.Rows(x).Cells(y).Values & \" \"\n Next y\nNext x\n\nobjWriter.Close()\n\nResource.\nOr you can even generate an CSV file from your DataTable.\n" ]
[ -2 ]
[ "c#", "datatable", "formatting" ]
stackoverflow_0000053652_c#_datatable_formatting.txt
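A compact C# sketch of the measure-then-pad idea from the answers; it assumes single-line cell values, so cells containing newlines would still need the per-line splitting the question mentions:

using System;
using System.Data;
using System.Text;

static class DataTableFormatter
{
    // Render any DataTable as fixed-width text: measure each column's
    // widest cell first, then pad every cell to that width.
    public static string ToText(DataTable table)
    {
        int[] widths = new int[table.Columns.Count];
        for (int c = 0; c < table.Columns.Count; c++)
        {
            widths[c] = table.Columns[c].ColumnName.Length;
            foreach (DataRow row in table.Rows)
            {
                int len = Convert.ToString(row[c]).Length;
                if (len > widths[c]) widths[c] = len;
            }
        }

        StringBuilder sb = new StringBuilder();
        for (int c = 0; c < table.Columns.Count; c++)
            sb.Append(table.Columns[c].ColumnName.PadRight(widths[c] + 2));
        sb.AppendLine();

        foreach (DataRow row in table.Rows)
        {
            for (int c = 0; c < table.Columns.Count; c++)
                sb.Append(Convert.ToString(row[c]).PadRight(widths[c] + 2));
            sb.AppendLine();
        }
        return sb.ToString();
    }
}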
Q: Best tool to monitor network connection bandwidth I'm looking for a very simple tool to monitor the bandwidth of all my applications. No need for extra features like traffic spying, I'm just interested by bandwidth. I already know Wireshark (which is great), but what I'm looking for is more something like TcpView (great tool from Sysinternals) with current bandwidth indication. PS: I'm interested by Windows tools only A: Try NetLimiter, which is great for that and also allows you to limit bandwidth usage so that you can test your app in reduced bandwidth scenarios.
Best tool to monitor network connection bandwidth
I'm looking for a very simple tool to monitor the bandwidth of all my applications. No need for extra features like traffic spying, I'm just interested by bandwidth. I already know Wireshark (which is great), but what I'm looking for is more something like TcpView (great tool from Sysinternals) with current bandwidth indication. PS: I'm interested by Windows tools only
[ "Try NetLimiter, which is great for that and also allows you to limit bandwidth usage so that you can test your app in reduced bandwidth scenarios.\n" ]
[ 7 ]
[]
[]
[ "networking", "windows" ]
stackoverflow_0000054184_networking_windows.txt
Q: SharePoint: Using RichHTML field type in a custom content type I'd like to use the field type RichHTML in a custom content type that I'm making. However, I think that the RichHTML type comes with MOSS Publishing so I'm unsure how to add it to my content type. Right now I've tried with this: <Field ID="{7F55A8F0-4555-46BC-B24C-222240B862AF}" Type="RichHTML" Name="NewsBodyField" DisplayName="News Body" StaticName="NewsBodyField" Hidden="False" Required="True" Sealed="False" /> <Field ID="{7F55A8F0-4555-46BC-B24C-222240B862AF}" Type="RichHtmlField" Name="NewsBodyField" DisplayName="News Body" StaticName="NewsBodyField" Hidden="False" Required="True" Sealed="False" /> I know that when I want to access this custom field using a CQWP, I can export it and add it to my CommonViews using 'RichHTML', however that doesn't work here. Any help regarding how to add a Rich Html Field to a custom content type would be much appreciated. A: I figured it out myself. The type you're looking for is "HTML" not RichHTML.
SharePoint: Using RichHTML field type in a custom content type
I'd like to use the field type RichHTML in a custom content type that I'm making. However, I think that the RichHTML type comes with MOSS Publishing so I'm unsure how to add it to my content type. Right now I've tried with this: <Field ID="{7F55A8F0-4555-46BC-B24C-222240B862AF}" Type="RichHTML" Name="NewsBodyField" DisplayName="News Body" StaticName="NewsBodyField" Hidden="False" Required="True" Sealed="False" /> <Field ID="{7F55A8F0-4555-46BC-B24C-222240B862AF}" Type="RichHtmlField" Name="NewsBodyField" DisplayName="News Body" StaticName="NewsBodyField" Hidden="False" Required="True" Sealed="False" /> I know that when I want to access this custom field using a CQWP, I can export it and add it to my CommonViews using 'RichHTML', however that doesn't work here. Any help regarding how to add a Rich Html Field to a custom content type would be much appreciated.
[ "I figured it out myself. The type you're looking for is \"HTML\" not RichHTML.\n" ]
[ 4 ]
[]
[]
[ "content_type", "sharepoint" ]
stackoverflow_0000053935_content_type_sharepoint.txt
Q: HTML to Image .tiff File Is there a way to convert a HTML string into a Image .tiff file? I am using C# .NET 3.5. The requirement is to give the user an option to fax a confirmation. The confirmation is created with XML and a XSLT. Typically it is e-mailed. Is there a way I can take the HTML string generated by the transformation and convert that to a .tiff or any image that can be faxed? 3rd party software is allowed, however the cheaper the better. We are using a 3rd party fax library, that will only accept .tiff images, but if I can get the HTML to be any image I can convert it into a .tiff. A: Here are some free-as-in-beer possibilities: You can use the PDFCreator printer driver that comes with ghostscript and print directly to a TIFF file or many other formats. If you have MSOffice installed, the Microsoft Office Document Image Writer will produce a file you can convert to other formats. But in general, your best bet is to print to a driver that will produce an image file of some kind or a windows meta-file format (.wmf) file. Is there some reason why you can't just print-to-fax? Does the third-party software not support a printer driver? That's unusual these days. A: A starting point might be the software of WebSuperGoo, which provides rich image editing products, cheap or for free. I know for sure their PDF Writer can do basic HTML (http://www.websupergoo.com/helppdf6net/source/3-concepts/b-htmlstyles.htm). This should not be too hard to convert to TIFF. This does not include the full HTML subset or CSS. That might require using Microsoft's IE ActiveX component.
HTML to Image .tiff File
Is there a way to convert a HTML string into a Image .tiff file? I am using C# .NET 3.5. The requirement is to give the user an option to fax a confirmation. The confirmation is created with XML and a XSLT. Typically it is e-mailed. Is there a way I can take the HTML string generated by the transformation and convert that to a .tiff or any image that can be faxed? 3rd party software is allowed, however the cheaper the better. We are using a 3rd party fax library, that will only accept .tiff images, but if I can get the HTML to be any image I can convert it into a .tiff.
[ "Here are some free-as-in-beer possibilities:\nYou can use the PDFCreator printer driver that comes with ghostscript and print\ndirectly to a TIFF file or many other formats.\nIf you have MSOffice installed, the Microsoft Office Document Image Writer will produce\na file you can convert to other formats.\nBut in general, your best bet is to print to a driver that will produce and\nimage file of some kind or a windows meta-file format (.wmf) file.\nIs there some reason why you can't just print-to-fax? Does the third-party software not support a printer driver? That's unusual these days.\n", "A starting point might be the software of WebSuperGoo, which provide rich image editing products, cheap or for free.\nI know for sure their PDF Writer can do basic HTML (http://www.websupergoo.com/helppdf6net/source/3-concepts/b-htmlstyles.htm). This should not be too hard to convert to TIFF.\nThis does not include the full HTML subset or CSS. That might require using Microsofts IE ActiveX component.\n" ]
[ 4, 2 ]
[]
[]
[ ".net", "c#", "html", "image", "tiff" ]
stackoverflow_0000053956_.net_c#_html_image_tiff.txt
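Once any of the print-to-image routes above yields a bitmap, the last-mile conversion the fax library needs is a single GDI+ call; a C# sketch in which both file names are placeholders:

using System.Drawing;
using System.Drawing.Imaging;

class ToTiff
{
    static void Main()
    {
        // Re-save whatever bitmap the HTML was rendered to as the
        // TIFF the fax library insists on.
        using (Image page = Image.FromFile("confirmation.png"))
        {
            page.Save("confirmation.tif", ImageFormat.Tiff);
        }
    }
}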
Q: Asp.net path compaction I have an asp.net url path which is being generated in a web form, and is coming out something like "/foo/bar/../bar/path.aspx", and is coming out in the generated html like this too. It should be shortened to "/foo/bar/path.aspx". Path.Combine didn't fix it. Is there a function to clean this path up? A: You could create a helper class which wrapped the UriBuilder class in the System namespace public static class UriHelper { public static string NormalizeRelativePath(string path) { UriBuilder builder = new UriBuilder("http://localhost"); builder.Path = path; return builder.Uri.AbsolutePath; } } which could then be used like this: string url = "foo/bar/../bar/path.aspx"; Console.WriteLine(UriHelper.NormalizeRelativePath(url)); It is a bit hacky but it would work for the specific example you gave. EDIT: Updated to reflect Andrew's comments. A: Whatever you do, don't use a static UriBuilder. This introduces all sorts of potential race conditions that you might not detect until you are under heavy load. If two different threads called UriHelper.NormalizeRelativePath at the same time, the return value for one could be passed back to the other caller arbitrarily. If you want to use UriBuilder to do this, just create a new one when you need it (it's not expensive to create). A: Sarcastic's reply is so much better than mine, but if you were working with filesystem paths, my ugly hack below could turn out to be useful too. (Translation: I typed it, so I'll be damned if I don't post it :) Path.Combine just slaps two strings together, paying attention to leading or trailing slashes. As far as I know, the only Path method that does normalization is Path.GetFullPath. The following will give you the "cleaned up" version. myPath = System.IO.Path.GetFullPath(myPath); Of course, there is the small issue that the resulting path will be rooted and the forward slashes will be converted to back slashes (like "C:\foo\bar\path.aspx"). But if you know the parent root of the original path, stripping out the root should not be a big problem.
Asp.net path compaction
I have an asp.net url path which is being generated in a web form, and is coming out something like "/foo/bar/../bar/path.aspx", and is coming out in the generated html like this too. It should be shortened to "/foo/bar/path.aspx". Path.Combine didn't fix it. Is there a function to clean this path up?
[ "You could create a helper class which wrapped the UriBuilder class in System.Net\npublic static class UriHelper\n{ \n public static string NormalizeRelativePath(string path)\n {\n UriBuilder _builder = new UriBuilder(\"http://localhost\");\n builder.Path = path;\n return builder.Uri.AbsolutePath;\n }\n}\n\nwhich could then be used like this:\nstring url = \"foo/bar/../bar/path.aspx\";\nConsole.WriteLine(UriHelper.NormalizeRelativePath(url));\n\nIt is a bit hacky but it would work for the specific example you gave.\nEDIT: Updated to reflect Andrew's comments.\n", "Whatever you do, don't use a static UriBuilder. This introduces all sorts of potential race conditions that you might not detect until you are under heavy load. \nIf two different threads called UriHelper.NormalizeRelativePath at the same time, the return value for one could be passed back to the other caller arbitrarily.\nIf you want to use UriBuilder to do this, just create a new one when you need it (it's not expensive to create). \n", "Sarcastic's reply is so much better than mine, but if you were working with filesystem paths, my ugly hack below could turn out to be useful too. (Translation: I typed it, so I'll be damned if I don't post it :)\nPath.Combine just slaps two strings together, paying attention to leading or trailing slashes. As far as I know, the only Path method that does normalization is Path.GetFullPath. The following will give you the \"cleaned up\" version.\nmyPath = System.IO.Path.GetFullPath(myPath);\n\nOf course, there is the small issue that the resulting path will be rooted and the forward slashes will be converted to back slashes (like \"C:\\foo\\bar\\path.aspx\"). But if you know the parent root of the original path, stripping out the root should not be a big problem.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "asp.net", "c#", "path" ]
stackoverflow_0000054227_asp.net_c#_path.txt
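The reason the UriBuilder trick works is that System.Uri collapses "." and ".." segments during resolution; a tiny C# demonstration with the path from the question:

using System;

class PathNormalizationDemo
{
    static void Main()
    {
        // Resolving the relative path against any base URI removes
        // the dot-segments, which is what the helper above relies on.
        Uri baseUri = new Uri("http://localhost");
        Uri resolved = new Uri(baseUri, "/foo/bar/../bar/path.aspx");

        Console.WriteLine(resolved.AbsolutePath); // /foo/bar/path.aspx
    }
}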
Q: Ajax XMLHttpRequest object limit Is there a security limit to the number of Ajax XMLHttpRequest objects you can create on a single page? If so, does this vary from one browser to another? A: I don't think so, but there's a limit of two simultaneous HTTP connections per domain per client (you can override this in Firefox, but practically no one does so). A: I've found it easier to pool and reuse XMLHTTPRequest objects instead of creating new ones... A: Yes, as Kevin says, HTTP/1.1 specifications say "A single-user client should not maintain more than 2 connections with any server or proxy."
Ajax XMLHttpRequest object limit
Is there a security limit to the number of Ajax XMLHttpRequest objects you can create on a single page? If so, does this vary from one browser to another?
[ "I don't think so, but there's a limit of two simultaneous HTTP connections per domain per client (you can override this in Firefox, but practically no one does so).\n", "I've found it easier to pool and reuse XMLHTTPRequest objects instead of creating new ones...\n", "Yes, as Kevin says, HTTP/1.1 specifications say \"A single-user client should not maintain more than 2 connections with any server or proxy.\"\n" ]
[ 1, 1, 0 ]
[]
[]
[ "ajax" ]
stackoverflow_0000054217_ajax.txt
Q: Lightbox style dialogs in MFC App Has anyone implemented Lightbox style background dimming on a modal dialog box in a MFC/non .net app. I think the procedure would have to be something like: steps: Get dialog parent HWND or CWnd* Get the rect of the parent window and draw an overlay with a translucency over that window allow the dialog to do it's modal draw routine, e.g DoModal() Are there any existing libraries/frameworks to do this, or what's the best way to drop a translucent overlay in MFC? edit Here's a mockup of what i'm trying to achieve if you don't know what 'lightbox style' means Some App: with a lightbox dialog box A: Here's what I did* based on Brian's links First create a dialog resource with the properties: border FALSE 3D look FALSE client edge FALSE Popup style static edge FALSE Transparent TRUE Title bar FALSE and you should end up with a dialog window with no frame or anything, just a grey box. override the Create function to look like this: BOOL LightBoxDlg::Create(UINT nIDTemplate, CWnd* pParentWnd) { if(!CDialog::Create(nIDTemplate, pParentWnd)) return false; RECT rect; RECT size; GetParent()->GetWindowRect(&rect); size.top = 0; size.left = 0; size.right = rect.right - rect.left; size.bottom = rect.bottom - rect.top; SetWindowPos(m_pParentWnd,rect.left,rect.top,size.right,size.bottom,NULL); HWND hWnd=m_hWnd; SetWindowLong (hWnd , GWL_EXSTYLE ,GetWindowLong (hWnd , GWL_EXSTYLE ) | WS_EX_LAYERED ) ; typedef DWORD (WINAPI *PSLWA)(HWND, DWORD, BYTE, DWORD); PSLWA pSetLayeredWindowAttributes; HMODULE hDLL = LoadLibrary (_T("user32")); pSetLayeredWindowAttributes = (PSLWA) GetProcAddress(hDLL,"SetLayeredWindowAttributes"); if (pSetLayeredWindowAttributes != NULL) { /* * Second parameter RGB(255,255,255) sets the colorkey * to white LWA_COLORKEY flag indicates that color key * is valid LWA_ALPHA indicates that ALphablend parameter * is valid - here 100 is used */ pSetLayeredWindowAttributes (hWnd, RGB(255,255,255), 100, LWA_COLORKEY|LWA_ALPHA); } return true; } then create a small black bitmap in an image editor (say 48x48) and import it as a bitmap resource (in this example IDB_BITMAP1) override the WM_ERASEBKGND message with: BOOL LightBoxDlg::OnEraseBkgnd(CDC* pDC) { BOOL bRet = CDialog::OnEraseBkgnd(pDC); RECT rect; RECT size; m_pParentWnd->GetWindowRect(&rect); size.top = 0; size.left = 0; size.right = rect.right - rect.left; size.bottom = rect.bottom - rect.top; CBitmap cbmp; cbmp.LoadBitmapW(IDB_BITMAP1); BITMAP bmp; cbmp.GetBitmap(&bmp); CDC memDc; memDc.CreateCompatibleDC(pDC); memDc.SelectObject(&cbmp); pDC->StretchBlt(0,0,size.right,size.bottom,&memDc,0,0,bmp.bmWidth,bmp.bmHeight,SRCCOPY); return bRet; } Instantiate it in the DoModal of the desired dialog, Create it like a Modal Dialog i.e. on the stack(or heap if desired), call it's Create manually, show it then create your actual modal dialog over the top of it: INT_PTR CAboutDlg::DoModal() { LightBoxDlg Dlg(m_pParentWnd);//make sure to pass in the parent of the new dialog Dlg.Create(LightBoxDlg::IDD); Dlg.ShowWindow(SW_SHOW); BOOL ret = CDialog::DoModal(); Dlg.ShowWindow(SW_HIDE); return ret; } and this results in something exactly like my mock up above *there are still places for improvment, like doing it without making a dialog box to begin with and some other general tidyups. A: I think you just need to create a window and set the transparency. There is an MFC CGlassDialog sample on CodeProject that might help you. There is also an article on how to do this with the Win32 APIs.
Lightbox style dialogs in MFC App
Has anyone implemented Lightbox style background dimming on a modal dialog box in an MFC/non-.NET app? I think the procedure would have to be something like: steps: Get dialog parent HWND or CWnd* Get the rect of the parent window and draw an overlay with a translucency over that window allow the dialog to do its modal draw routine, e.g. DoModal() Are there any existing libraries/frameworks to do this, or what's the best way to drop a translucent overlay in MFC? edit Here's a mockup of what I'm trying to achieve if you don't know what 'lightbox style' means Some App: with a lightbox dialog box
[ "Here's what I did* based on Brian's links\nFirst create a dialog resource with the properties:\n\nborder FALSE\n3D look FALSE\nclient edge FALSE\nPopup style\nstatic edge FALSE\nTransparent TRUE\nTitle bar FALSE \n\nand you should end up with a dialog window with no frame or anything, just a grey box.\noverride the Create function to look like this: \nBOOL LightBoxDlg::Create(UINT nIDTemplate, CWnd* pParentWnd)\n{\n\n if(!CDialog::Create(nIDTemplate, pParentWnd))\n return false;\n RECT rect;\n RECT size;\n\n GetParent()->GetWindowRect(&rect);\n size.top = 0;\n size.left = 0;\n size.right = rect.right - rect.left;\n size.bottom = rect.bottom - rect.top;\n SetWindowPos(m_pParentWnd,rect.left,rect.top,size.right,size.bottom,NULL);\n\n HWND hWnd=m_hWnd; \n SetWindowLong (hWnd , GWL_EXSTYLE ,GetWindowLong (hWnd , GWL_EXSTYLE ) | WS_EX_LAYERED ) ;\n typedef DWORD (WINAPI *PSLWA)(HWND, DWORD, BYTE, DWORD);\n PSLWA pSetLayeredWindowAttributes;\n HMODULE hDLL = LoadLibrary (_T(\"user32\"));\n pSetLayeredWindowAttributes = \n (PSLWA) GetProcAddress(hDLL,\"SetLayeredWindowAttributes\");\n if (pSetLayeredWindowAttributes != NULL) \n {\n /*\n * Second parameter RGB(255,255,255) sets the colorkey \n * to white LWA_COLORKEY flag indicates that color key \n * is valid LWA_ALPHA indicates that ALphablend parameter \n * is valid - here 100 is used\n */\n pSetLayeredWindowAttributes (hWnd, \n RGB(255,255,255), 100, LWA_COLORKEY|LWA_ALPHA);\n }\n\n\n return true;\n}\n\nthen create a small black bitmap in an image editor (say 48x48) and import it as a bitmap resource (in this example IDB_BITMAP1)\noverride the WM_ERASEBKGND message with:\nBOOL LightBoxDlg::OnEraseBkgnd(CDC* pDC)\n{\n\n BOOL bRet = CDialog::OnEraseBkgnd(pDC);\n\n RECT rect;\n RECT size;\n m_pParentWnd->GetWindowRect(&rect);\n size.top = 0;\n size.left = 0;\n size.right = rect.right - rect.left;\n size.bottom = rect.bottom - rect.top;\n\n CBitmap cbmp;\n cbmp.LoadBitmapW(IDB_BITMAP1);\n BITMAP bmp;\n cbmp.GetBitmap(&bmp);\n CDC memDc;\n memDc.CreateCompatibleDC(pDC);\n memDc.SelectObject(&cbmp);\n pDC->StretchBlt(0,0,size.right,size.bottom,&memDc,0,0,bmp.bmWidth,bmp.bmHeight,SRCCOPY);\n\n return bRet;\n}\n\nInstantiate it in the DoModal of the desired dialog, Create it like a Modal Dialog i.e. on the stack(or heap if desired), call it's Create manually, show it then create your actual modal dialog over the top of it: \nINT_PTR CAboutDlg::DoModal()\n{\n LightBoxDlg Dlg(m_pParentWnd);//make sure to pass in the parent of the new dialog\n Dlg.Create(LightBoxDlg::IDD);\n Dlg.ShowWindow(SW_SHOW);\n\n BOOL ret = CDialog::DoModal();\n\n Dlg.ShowWindow(SW_HIDE);\n return ret;\n}\n\nand this results in something exactly like my mock up above \n*there are still places for improvment, like doing it without making a dialog box to begin with and some other general tidyups.\n", "I think you just need to create a window and set the transparency. There is an MFC CGlassDialog sample on CodeProject that might help you. There is also an article on how to do this with the Win32 APIs.\n" ]
[ 4, 2 ]
[]
[]
[ "c++", "mfc", "user_interface" ]
stackoverflow_0000051687_c++_mfc_user_interface.txt
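A side note on the accepted answer in the record above: the LoadLibrary/GetProcAddress dance is only needed so the binary still loads on pre-Windows-2000 systems. If Windows 2000 or later can be assumed, a direct call is a simpler hedged alternative (the helper name here is hypothetical; the white color key and the 100/255 alpha are taken from the answer):

    // Requires _WIN32_WINNT >= 0x0500 so <windows.h> declares
    // SetLayeredWindowAttributes; link against user32.lib.
    #define _WIN32_WINNT 0x0500
    #include <windows.h>

    void MakeDialogTranslucent(HWND hWnd)
    {
        // Mark the window as layered, then apply the color key and alpha.
        SetWindowLong(hWnd, GWL_EXSTYLE,
                      GetWindowLong(hWnd, GWL_EXSTYLE) | WS_EX_LAYERED);
        SetLayeredWindowAttributes(hWnd, RGB(255, 255, 255), 100,
                                   LWA_COLORKEY | LWA_ALPHA);
    }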
Q: Accessing System Databases/Tables using LINQ to SQL? Right now I have an SSIS package that runs every morning and gives me a report on the number of packages that failed or succeeded from the day before. The information for these packages is contained partly within the sysjobs table (a system table) within the msdb database (a system database) in SQL Server 2005. When trying to move the package to a C# executable (mostly to gain better formatting over the email that gets sent out), I wasn't able to find a way to create a dbml file that allowed me to access these tables through LINQ. I tried to look for any properties that would make these tables visible, but I haven't had much luck. Is this possible with LINQ to SQL? A: If you're in Server Explorer, you can make them visible this way: Create a connection to the server you want. Right-click the server and choose Change View > Object Type. You should now see System Tables and User Tables. You should see sysjobs there, and you can easily drag it onto a .dbml surface. A: It may not be available in the designer, but why not just add it to the DBML file itself?
Accessing System Databases/Tables using LINQ to SQL?
Right now I have an SSIS package that runs every morning and gives me a report on the number of packages that failed or succeeded from the day before. The information for these packages is contained partly within the sysjobs table (a system table) within the msdb database (a system database) in SQL Server 2005. When trying to move the package to a C# executable (mostly to gain better formatting over the email that gets sent out), I wasn't able to find a way to create a dbml file that allowed me to access these tables through LINQ. I tried to look for any properties that would make these tables visible, but I haven't had much luck. Is this possible with LINQ to SQL?
[ "If you're in Server Explorer, you can make them visible this way:\n\nCreate a connection to the server you want.\nRight-click the server and choose Change View > Object Type.\nYou should now see System Tables and User Tables. You should see sysjobs there, and you can easily drag it onto a .dbml surface.\n\n", "It may not be available in the designer, but why not just add it to the DBML file itself?\n" ]
[ 23, 0 ]
[]
[]
[ "c#", "linq", "linq_to_sql", "sql_server" ]
stackoverflow_0000054222_c#_linq_linq_to_sql_sql_server.txt
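To make the hand-edited-DBML suggestion in the record above concrete, here is a minimal hedged sketch that skips the designer entirely and maps msdb.dbo.sysjobs with LINQ to SQL attributes. The column names follow the documented sysjobs schema; the class names and the connection string are assumptions to adjust for your server:

    using System;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    // Hand-written mapping for the system table msdb.dbo.sysjobs.
    [Table(Name = "dbo.sysjobs")]
    public class SysJob
    {
        [Column(Name = "job_id", IsPrimaryKey = true)]
        public Guid JobId { get; set; }

        [Column(Name = "name")]
        public string Name { get; set; }

        [Column(Name = "enabled")]
        public byte Enabled { get; set; }
    }

    public static class JobReport
    {
        public static void Main()
        {
            // Placeholder connection string -- point it at the msdb database.
            using (var db = new DataContext(
                "Data Source=.;Initial Catalog=msdb;Integrated Security=True"))
            {
                // Query the system table like any other LINQ to SQL table.
                var jobs = from j in db.GetTable<SysJob>()
                           where j.Enabled == 1
                           select j.Name;

                foreach (string name in jobs)
                    Console.WriteLine(name);
            }
        }
    }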
Q: Building a custom Linux Live CD Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible. The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those all are designed for showing off Linux on the desktop - which means they take forever to boot. Can anyone fix me up with a current How-To? Update: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for RedHat-based distributions like CentOs, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file! A: There are a couple of interesting projects you could look into. But first: does it have to be a CD-ROM? That's probably the slowest possible storage (well, apart from tape, maybe) you could use. What about a fast USB stick, an IEEE 1394 hard-disk, or maybe even an eSATA hard-disk? Okay, there are several Live-CDs that are designed to be very small, in order to e.g. fit on a business card sized CD. Some were also designed to be booted from a USB stick, back when that meant 64-128 MiByte: Damn Small Linux is one of the best known ones, however it uses a 2.4 kernel. There is a sister project called Damn Small Linux - Not, which has a 2.6 kernel (although it seems it hasn't been updated in years). Another project worth noting is grml, a Live-CD for system administration tasks. It does not boot into a graphic environment, and is therefore quite fast; however, it still contains about 2 GiByte of software compressed onto a CD-ROM. But it also has a smaller flavor, aptly named grml-small, which only contains about 200 MiByte of software compressed into 60 MiByte. Then there is Morphix, which is a Live-CD builder toolkit based on Knoppix. ("Morphable Knoppix"!) Morphix is basically a tool to build your own special purpose Live-CD. The last thing I want to mention is MachBoot. MachBoot is a super-fast Live-CD. It uses various techniques to massively speed up the boot process. I believe they even trace the order in which blocks are accessed during booting and then remaster the ISO so that those blocks are laid out contiguously on the medium. Their current record is less than 6 seconds to boot into a full graphical desktop environment. However, this also seems to be stale. A: One key piece of advice I can give is that most LiveCDs use a compressed filesystem called squashfs to cram as much data on the CD as possible. Since you don't need compression, you could run the mksquashfs step (present in most tutorials) with -noDataCompression and -noFragmentCompression to save on decompression time. You may even be able to drop the squashfs approach entirely, but this would require some restructuring. This may actually be slower depending on your CD-ROM read speed vs. CPU speed, but it's worth looking into. This Ubuntu tutorial was effective enough for me to build a LiveCD based on 8.04. It may be useful for getting the feel of how a LiveCD is composed, but I would probably not recommend using an Ubuntu LiveCD. If at all possible, find a minimal LiveCD and build up with only minimal stripping out, rather than stripping down a huge LiveCD like Ubuntu. There are some situations in which the smaller distros are using smaller/faster alternatives rather than just leaving something out. If you want to get seriously hardcore, you could look at Linux From Scratch, and include only what you want, but that's probably more time than you want to spend. A: Creating Your Own Custom Ubuntu 7.10 Or Linux Mint 4.0 Live-CD With Remastersys A: Depends on your distro. Here's a good article you can check out from LWN.net There is a book I used which covers a lot of distros, though it does not cover creating a flash-bootable image. The book is Live Linux(R) CDs: Building and Customizing Bootables. You can use it with supplemental information from your distro of choice. A: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for RedHat-based distributions like CentOs, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file! A: Debian Live provides the best tools for building a Linux Live CD. Webconverger uses Debian Live for example. It's very easy to use. sudo apt-get install live-helper # from Debian unstable, which should work fine from Ubuntu lh_config # edit config/* to your liking sudo lh_build
Building a custom Linux Live CD
Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible. The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those all are designed for showing off Linux on the desktop - which means they take forever to boot. Can anyone fix me up with a current How-To? Update: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for RedHat-based distributions like CentOs, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file!
[ "There are a couple of interesting projects you could look into.\nBut first: does it have to be a CD-ROM? That's probably the slowest possible storage (well, apart from tape, maybe) you could use. What about a fast USB stick or a an IEE1394 hard-disk or maybe even an eSATA hard-disk?\nOkay, there are several Live-CDs that are designed to be very small, in order to e.g. fit on a business card sized CD. Some were also designed to be booted from a USB stick, back when that meant 64-128 MiByte: Damn Small Linux is one of the best known ones, however it uses a 2.4 kernel. There is a sister project called Damn Small Linux - Not, which has a 2.6 kernel (although it seems it hasn't been updated in years).\nAnother project worth noting is grml, a Live-CD for system administration tasks. It does not boot into a graphic environment, and is therefore quite fast; however, it still contains about 2 GiByte of software compressed onto a CD-ROM. But it also has a smaller flavor, aptly named grml-small, which only contains about 200 MiByte of software compressed into 60 MiByte.\nThen there is Morphix, which is a Live-CD builder toolkit based on Knoppix. (\"Morphable Knoppix\"!) Morphix is basically a tool to build your own special purpose Live-CD.\nThe last thing I want to mention is MachBoot. MachBoot is a super-fast Live-CD. It uses various techniques to massively speed up the boot process. I believe they even trace the order in which blocks are accessed during booting and then remaster the ISO so that those blocks are laid out contiguously on the medium. Their current record is less than 6 seconds to boot into a full graphical desktop environment. However, this also seems to be stale.\n", "One key piece of advice I can give is that most LiveCDs use a compressed filesystem called squashfs to cram as much data on the CD as possible. Since you don't need compression, you could run the mksquashfs step (present in most tutorials) with -noDataCompression and -noFragmentCompression to save on decompression time. You may even be able to drop the squashfs approach entirely, but this would require some restructuring. This may actually be slower depending on your CD-ROM read speed vs. CPU speed, but it's worth looking into.\nThis Ubuntu tutorial was effective enough for me to build a LiveCD based on 8.04. It may be useful for getting the feel of how a LiveCD is composed, but I would probably not recommend using an Ubuntu LiveCD. \nIf at all possible, find a minimal LiveCD and build up with only minimal stripping out, rather than stripping down a huge LiveCD like Ubuntu. There are some situations in which the smaller distros are using smaller/faster alternatives rather than just leaving something out. If you want to get seriously hardcore, you could look at Linux From Scratch, and include only what you want, but that's probably more time than you want to spend.\n", "Creating Your Own Custom Ubuntu 7.10 Or Linux Mint 4.0 Live-CD With Remastersys\n", "Depends on your distro. Here's a good article you can check out from LWN.net\nThere is a book I used which covers a lot of distros, though it does not cover creating a flash-bootable image. The book is Live Linux(R) CDs: Building and Customizing Bootables. 
You can use it with supplemental information from your distro of choice.\n", "So, just as a final update for anyone reading this later - the tool I ended up using was \"livecd-creator\".\nMy reason for choosing this tool was that it is available for RedHat-based distributions like CentOs, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell.\nThe whole job would have only taken an hour or two if there had been a README explaining the configuration file!\n", "Debian Live provides the best tools for building a Linux Live CD. Webconverger uses Debian Live for example.\nIt's very easy to use.\nsudo apt-get install live-helper # from Debian unstable, which should work fine from Ubuntu\nlh_config # edit config/* to your liking\nsudo lh_build\n" ]
[ 4, 3, 2, 1, 1, 1 ]
[]
[]
[ "linux" ]
stackoverflow_0000033117_linux.txt
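To make the -noDataCompression / -noFragmentCompression tip in the record above concrete, here is a hedged sketch of the relevant mksquashfs invocation; the directory and output filenames are placeholders, and the exact step depends on which tutorial's layout you follow:

    # Rebuild the live filesystem without compression so the firmware-update
    # CD spends no boot time decompressing blocks.
    mksquashfs squashfs-root/ filesystem.squashfs \
        -noDataCompression -noFragmentCompression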
Q: Enabling Hibernate second-level cache with JPA on JBoss 4.2 What are the steps required to enable Hibernate's second-level cache, when using the Java Persistence API (annotated entities)? How do I check that it's working? I'm using JBoss 4.2.2.GA. From the Hibernate documentation, it seems that I need to enable the cache and specify a cache provider in persistence.xml, like: <property name="hibernate.cache.use_second_level_cache" value="true" /> <property name="hibernate.cache.provider_class" value="org.hibernate.cache.HashtableCacheProvider" /> What else is required? Do I need to add @Cache annotations to my JPA entities? How can I tell if the cache is working? I have tried accessing cache statistics after running a Query, but Statistics.getSecondLevelCacheStatistics returns null, perhaps because I don't know what 'region' name to use. A: Follow-up: in the end, after adding annotations, I have it working with EhCache, i.e. <property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.EhCacheProvider" /> A: I believe you need to add the cache annotations to tell hibernate how to use the second-level cache (read-only, read-write, etc). This was the case in my app (using spring / traditional hibernate and ehcache, so your mileage may vary). Once the caches were indicated, I started seeing messages that they were in use from hibernate.
Enabling Hibernate second-level cache with JPA on JBoss 4.2
What are the steps required to enable Hibernate's second-level cache, when using the Java Persistence API (annotated entities)? How do I check that it's working? I'm using JBoss 4.2.2.GA. From the Hibernate documentation, it seems that I need to enable the cache and specify a cache provider in persistence.xml, like: <property name="hibernate.cache.use_second_level_cache" value="true" /> <property name="hibernate.cache.provider_class" value="org.hibernate.cache.HashtableCacheProvider" /> What else is required? Do I need to add @Cache annotations to my JPA entities? How can I tell if the cache is working? I have tried accessing cache statistics after running a Query, but Statistics.getSecondLevelCacheStatistics returns null, perhaps because I don't know what 'region' name to use.
[ "Follow-up: in the end, after adding annotations, I have it working with EhCache, i.e.\n<property name=\"hibernate.cache.provider_class\" \n value=\"net.sf.ehcache.hibernate.EhCacheProvider\" />\n\n", "I believe you need to add the cache annotations to tell hibernate how to use the second-level cache (read-only, read-write, etc). This was the case in my app (using spring / traditional hibernate and ehcache, so your mileage may vary). Once the caches were indicated, I started seeing messages that they were in use from hibernate.\n" ]
[ 4, 3 ]
[]
[]
[ "caching", "hibernate", "java", "jpa" ]
stackoverflow_0000053562_caching_hibernate_java_jpa.txt
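Following up the answers in the record above with a minimal sketch of the entity-side annotation (Hibernate 3.x-era API, which matches JBoss 4.2.2.GA; the entity itself is hypothetical). Note that hibernate.generate_statistics must be set to true for the statistics calls to report anything, and the cache region name then defaults to the entity's fully qualified class name, which is the string to pass to getSecondLevelCacheStatistics:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    // Hypothetical entity: @Cache opts the class into the second-level
    // cache and picks its concurrency strategy.
    @Entity
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    public class Customer {
        @Id
        private Long id;

        private String name;

        // getters and setters omitted for brevity
    }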
Q: Using ActiveDirectoryMembershipProvider with two domain controllers We have an ASP.NET application running at a customer site that uses ActiveDirectory for user login via the ActiveDirectoryMembershipProvider. Their primary domain controller that we were pointing to went down this morning, and in getting everything set back up the client was wondering if we could have a redundant connection to two domain controllers; i.e. specifying a primary and a backup AD server. A Google search proved fruitless - does anyone know if this can be done? A: If ActiveDirectory couldn't handle multiple domain controllers then it wouldn't be a very good technology. You just need to make sure in your Membership configuration you are pointing to the 'Domain' rather than the 'Server' and then add two or more controllers to your domain. Generally if you are referring to the domain as "LDAP://server/DC=domain,DC=com" then you should be able to remove the "server" part and refer simply to "LDAP://DC=domain,DC=com" The following code project gives a long list of things you can do in Active Directory from C#: http://www.codeproject.com/KB/system/everythingInAD.aspx#7 A: It can be done, it will just take some work. You will need to create a class that inherits off of the ActiveDirectoryMembershipProvider and use it as your provider instead. That way you can maintain most of the functionality. Then set up a way to specify two connectionStringName properties, one for primary and one for secondary. You will also need to create the code to read the information from the config since you are changing it. Then just override the methods where you need to catch when the primary is down and switch to the secondary. This will be the most reusable way of doing it. There are probably other ways of doing it, but it will probably be hacky and not very reusable. Like testing the connection before each request and then setting the connectionstring that way. Based on the MSDN documentation on the class, this will probably be the only way to do it. They don't provide the functionality internally.
Using ActiveDirectoryMembershipProvider with two domain controllers
We have an ASP.NET application running at a customer site that uses ActiveDirectory for user login via the ActiveDirectoryMembershipProvider. Their primary domain controller that we were pointing to went down this morning, and in getting everything set back up the client was wondering if we could have a redundant connection to two domain controllers; i.e. specifying a primary and a backup AD server. A Google search proved fruitless - does anyone know if this can be done?
[ "If ActiveDirectory couldn't handle multiple domain controllers then it wouldn't be a very good technology.\nYou just need to make sure in your Membership configuration you are pointing to the 'Domain' rather than the 'Server' and then add two or more controllers to your domain.\nGenerally if you are referring to the domain as \"LDAP://server/DC=domain,DC=com\" then you should be able to remove the \"server\" part and refer simply to \"LDAP://DC=domain,DC=com\"\nThe following code project gives a long list of things you can do in Active Directory from C#: http://www.codeproject.com/KB/system/everythingInAD.aspx#7\n", "It can be done, it will just take some work.\nYou will need to create a class that inherits off of the ActiveDirectoryMemberhsipProvider and use it has your provider instead. That way you can maintain most of the functionality. Then setup a way to specify two connectionStringName properties, one for primary and one for secondary. You will also need to create the code to read the information from the config since you are changing it. Then just override the methods where you need to catch when the primary is down and switch to the secondary. This will be the most reusable way of doing it.\nThere's probably other ways of doing it, but it will probably be hacky and not very reusable. Like testing the connection before each request and then setting the connectionstring that way.\nBased on the MSDN documentation on the class, this will probably be the only way to do it. They don't provide the functionality internal.\n" ]
[ 2, 0 ]
[]
[]
[ "active_directory", "asp.net", "directoryservices" ]
stackoverflow_0000054364_active_directory_asp.net_directoryservices.txt
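For reference, a hedged sketch of the web.config wiring described in the first answer of the record above, pointing the provider at the domain rather than at one controller (the connection string name and the example.com domain are placeholders):

    <connectionStrings>
      <!-- No specific server before the DC components, so the provider can
           bind to whichever domain controller is available. -->
      <add name="ADConnection" connectionString="LDAP://DC=example,DC=com" />
    </connectionStrings>
    <system.web>
      <membership defaultProvider="ADProvider">
        <providers>
          <add name="ADProvider"
               type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
               connectionStringName="ADConnection"
               attributeMapUsername="sAMAccountName" />
        </providers>
      </membership>
    </system.web>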
Q: What do you use to create a website architecture? Sure, we can use a simple bulleted list or a mindmap. But, there must be a better, more interactive way. What do you use when starting your website architecture? A: From a physical and logical architecture standpoint, nothing beats the whiteboard, drawing up the layers/tiers of the application in boxes. Then create an electronic copy using Visio. After that, iteratively dive into each layer and design it using appropriate tools and techniques. Here are what I commonly use: Database: ERD Business Objects (and Service Contracts): UML class diagrams UI: prototypes & wireframes Workflows and asynchronous operations: flowcharts and sequence diagrams A: I like to sketch out a design with pen & paper. Seriously. No computer. Layout the home screen, including a navigation bar. From here, think about what you'd like 2nd and 3rd tier pages to look like. I've found that this process of writing things out on paper really helps me think about what I want out of the site. Try to come up with templates for a few of the screens. Then create an outline of the content you would like to include. A: Paper prototyping baby - post-its on a whiteboard (so you can move them around and keep the relationships fluid without having to modify the page ideas themselves). Then of course, once you get down to the nitty-gritty of interface design, 'zoom in' on each of those post-its and use them to represent individual elements. Good for usability testing, avoiding code duplication, clean structures...just about everything. Plus it's cheap and very, very fast. A: By "architecture", do you mean the initial site map? If not, please post a clarification and I'll edit my response. Our tech team starts development after our creative department has done their stuff. Part of what we get is output from the information architect. He passes off a graphical sitemap, a detailed sitemap as an Excel sheet, and a set of wireframes in a PDF.
What do you use to create a website architecture?
Sure, we can use a simple bulleted list or a mindmap. But, there must be a better, more interactive way. What do you use when starting your website architecture?
[ "From a physical and logical architecture standpoint, nothing beats the whiteboard, drawing up the layers/tiers of the application in boxes. Then create an electronic copy using Visio.\nAfter that, iteratively dive into each layer and design it using appropriate tools and techniques. Here are what I commonly use:\n\nDatabase: ERD\nBusiness Objects (and Service Contracts): UML class diagrams\nUI: prototypes & wireframes\nWorkflows and asynchronous operations: flowcharts and sequence diagrams\n\n", "I like to sketch out a design with pen & paper. Seriously. No computer. Layout the home screen, including a navigation bar. From here, think about what you'd like 2nd and 3rd tier pages to look like.\nI've found that this process of writing things out on paper really helps me think about what I want out of the site. Try to come up with templates for a few of the screens. Then create an outline of the content you would like to include.\n", "Paper prototyping baby - post-its on a whiteboard (so you can move them around and keep the relationships fluid without having to modify the page ideas themselves). Then of course, once you get down to the nitty-gritty of interface design, 'zoom in' on each of those post-its and use them to represent individual elements. Good for usability testing, avoiding code duplication, clean structures...just about everything. Plus it's cheap and very, very fast.\n", "By \"architecture\", do you mean the initial site map? If not, please post a clarification and I'll edit my response.\nOur tech team starts development after our creative department has done their stuff. Part of what we get is output from the information architect. He passes off a graphical sitemap, a detailed sitemap as an Excel sheet, and a set of wireframes in a PDF.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "architecture", "project_planning" ]
stackoverflow_0000054277_architecture_project_planning.txt
Q: Is it possible to over OO? My question is simple; is it possible to over object-orient your code? How much is too much? At what point are you giving up readability and maintainability for the sake of OO? I am a huge OO person but sometimes I wonder if I am over-complicating my code.... Thoughts? A: is it possible to over object-orient your code Yes A: If you think more objects is more object-oriented then yes. When doing object oriented design there are a couple of forces you have to balance. Most of OO design is about reducing and handling complexity. So if you get very complex solutions you're not doing too much OO but you're doing it wrong. A: If you find that the time needed to fully implement OO in your project is needlessly causing missed deadlines, then yes. There has to be a trade off between releasing software and full OO fidelity. How to decide depends on the person, the team, the project and the organization running the project. A: Yes, of course there is :-) object oriented techniques are a tool ... if you use the wrong tool for a given job, you will be over complicating things (think spoon when all you need is a knife). To me, I judge "how much" by the size and scope of the project. If it is a small project, sometimes it does add too much complexity. If the project is large, you will still be taking on this complexity, but it will pay for itself in ease of maintainability, extensibility, etc. A: My advice is not to overthink it. That usually results in over or under doing SOMETHING (if not OO). The rule of thumb that I usually use is this: if it makes the problem easier to wrap my head around, I use an object. If another paradigm makes it easier to wrap my head around than it would be if I used an object, I use that. That strategy has yet to fail me. A: Yes, it's definitely possible -- common, even. I once worked with a guy who created a data structure to bind to a dropdown list -- so he could allow users to select a gender. True, that would be useful if the list of possible genders were to change, but they haven't as yet (we don't live in California) A: Yes, just as one can over-normalize a database design. This seems to be one of those purist vs. pragmatic debates that will never end. <:S A: A lot of people try to design their code for maximum flexibility and reuse without considering how likely that will be. Instead, break your classes up based on the program you're writing. If you will have exactly one instance of a particular object, you might consider merging it into the containing object. A: I think your question should read, "Can you over-architect your application?" And of course the answer is yes. OO is just an approach to design. If you spend your time building unnecessary complexity into a system because "Polymorphism Rocks!", then yes, maybe you're over-OOing. The very XP answer is that, regardless of what approach you favor (OO, procedural, etc.), the design should only be as complex as is demonstrably necessary. A: Yes, you can. As an example, if you find yourself creating Interfaces or abstract classes before you have two subtypes for them, then you're over-doing it. I see this kind of thinking often when developers (over)design up front. I use Test-Driven Development and Refactoring techniques to avoid this behavior. A: is it possible to over object-orient your code? No. But it is possible to over complicate your code. For example, you can use design patterns for the sake of using design patterns. But you cannot over object-orient your code. Your code is either object-oriented or it is not. Just as your code is either well designed or it is not. A: I guess it is possible, but it's hard to answer you in abstract terms (no pun intended). Give an example of over OO. A: Yes. See the concept of the "golden hammer antipattern" A: I think there are times when OO design can be taken to an extreme and even in very large projects make the code less readable and maintainable. For instance, there can be use in a footwear base class, and child classes of sneaker, dress shoe, etc. But I have read/reviewed people's code where it seemed as though they were going to the extreme of creating classes beneath that for NikeSneakers and ReebokSneakers. Certainly there are differences between the two, but to have readable, maintainable code, I believe it's important, at times, to expand a class to adapt to differences rather than creating new child classes to handle the differences. A: Yes, and it's easy. Code is not better simply because it's object-oriented, any more than it's better simply because it's modular or functional or generic or generative or dataflow-based or aspect-oriented or anything else. Good code is good code because it's well-designed in its programming paradigm. Good design requires care. Being careful takes time. An example for your case: I've seen horrific pieces of Java in which, in the name of being "object oriented", every class implements some interface, even when no other class will ever implement that interface. Sometimes it's a hack, but in others it really is gratuitous. In whatever paradigm or idiom you write code, going too far, partaking of too much of a good thing, can make the code more complicated than the problem. Some people will say, when that point is reached, that the code isn't even really, for example, object-oriented anymore. Object-oriented code is supposed to be better organized for the purpose of being simpler, more straight-forward, or easier to understand and digest in reasonably independent portions. Using the mechanisms of object oriented coding antithetically to this goal does not result in object oriented design. A: I think the clear answer is yes, but depending on the domain you are referring to, it could be REALLY yes, or less so yes. If you are building high level .Net or Java apps, then I think this is the latter, as OO is basically built into the language. On the other hand if you are working on embedded apps, then the dangers and likelihood that you are over-OO'ing are high. There is nothing worse than seeing a really high level person come onto an embedded project and over complicate things that they think are ugly, but are the most simple and fast ways to do things. A: I think anything can be "overdone". This is true with nearly every best practice I can think of. I've seen inheritance chains so complex, the code was virtually unmanageable.
Is it possible to over OO?
My question is simple; is it possible to over object-orient your code? How much is too much? At what point are you giving up readability and maintainability for the sake of OO? I am a huge OO person but sometimes I wonder if I am over-complicating my code.... Thoughts?
[ "\nis it possible to over object-orient your code\n\nYes\n", "If you think more objects is more object-oriented then yes.\nWhen doing object oriented design there are a couple of forces you have to balance. Most of OO design is about reducing and handling complexity. So if you get very complex solutions you're not doing too much OO but you're doing it wrong.\n", "If you find that the time needed to fully implement OO in your project is needlessly causing missed deadlines, then yes.\nThere has to be a trade off between releasing software and full OO fidelity. How to decide depends on the person, the team, the project and the organization running the project.\n", "Yes, of course there is :-) object oriented techniques are a tool ... if you use the wrong tool for a given job, you will be over complicating things (think spoon when all you need is a knife).\nTo me, I judge \"how much\" by the size and scope of the project. If it is a small project, sometimes it does add too much complexity. If the project is large, you will still be taking on this complexity, but it will pay for itself in ease of maintainability, extensibility, etc.\n", "My advice is not to overthink it. That usually results in over or under doing SOMETHING (if not OO). The rule of thumb that I usually use is this: if it makes the problem easier to wrap my head around, I use an object. If another paradigm makes it easier to wrap my head around than it would be if I used an object, I use that.\nThat strategy has yet to fail me.\n", "Yes, it's definitely possible -- common, either. I once worked with a guy who created a data structure to bind to a dropdown list -- so he could allow users to select a gender. True, that would be useful if the list of possible genders were to change, but they haven't as yet (we don't live in California)\n", "Yes, just as one can over-normalize a database design. \nThis seems to be one of those purist vs. pragmatic debates that will never end. <:S\n", "A lot of people try to design their code for maximum flexibility an reuse without considering how likely that will be. Instead, break your classes up based on the program you're writing. If you will have exactly one instance of a particular object, you might consider merging it into the containing object.\n", "I think your question should read, \"Can you over Architecture your application?\"\nAnd of course the answer is yet. OO is just an approach to design. If you spend your time building unnecessary complexity into a system because \"Polymorphism Rocks!\". Then yes maybe you're over OOing.\nThe very XP answer is that Regardless of what approach you favor (OO, procedural, etc.) the design should only be as complex as is demonstrably necessary.\n", "Yes, you can. As an example, if you find yourself creating Interfaces or abstract classes before you have two subtypes for them, then you're over-doing it. I see this kind of thinking often when developers (over)design up front. I use Test-Driven Development and Refactoring techniques to avoid this behavior.\n", "\nis it possible to over object-orient your code?\n\nNo. But it is possible to over complicate your code. For example, you can use design patterns for the sake of using design patterns. But you cannot over object-orient your code. Your code is either object-oriented or it is not. Just as your code is either well designed or it is not.\n", "I guess it is possible, but its hard to answer your in abstract terms (no pun intended). Give an example of over OO.\n", "Yes. 
See the concept of the \"golden hammer antipattern\"\n", "I think there are times when OO design can be taken to an extreme and even in very large projects make the code less readable and maintainable. For instance, there can be use in a footware base class, and child classes of sneaker, dress shoe, etc. But I have read/reviewed people's code where it seemed as though they were going to the extreme of creating classes beneath that for NikeSneakers and ReebokSneakers. Certainly there are differences between the two, but to have readable, maintainable code, I believe its important, at times, to expand a class to adapt to differences rather than creating new child classes to handle the differences.\n", "Yes, and it's easy. Code is not better simply because it's object-oriented, any more than it's better simply because it's modular or functional or generic or generative or dataflow-based or aspect-oriented or anything else.\nGood code is good code because it's well-designed in its programming paradigm.\nGood design requires care.\nBeing careful takes time.\nAn example for your case: I've seen horrific pieces of Java in which, in the name\nof being \"object oriented\", every class implements some interface, even when no other class will ever implement that interface. Sometimes it's a hack, but in others it really is gratuitous.\nIn whatever paradigm or idiom you write code, going too far, partaking of too much of a good thing, can make the code more complicated than the problem. Some people will say, when that point is reached, that the code isn't even really, for example, object-oriented anymore.\nObject-oriented code is supposed to be better organized for the purpose of being simpler, more straight-forward, or easier to understand and digest in reasonably independent portions. Using the mechanisms of object oriented coding antithetically to this goal does not result in object oriented design .\n", "I think the clear answer is yes, but depending on the domain you are referring to, it could be REALLY yes, or less so yes. If you are building high level .Net or Java apps, then I think this is the latter, as OO is basically built into the language. On the other hand if you are working on embedded apps, then the dangers and likelihood that your are over OO'ing are high. There is nothing worse than seeing a really high level person come onto a embedded project and over complicate things that they think are ugly, but are the most simple and fast ways to do things.\n", "I think anything can be \"overdone\". This is true with nearly every best practice I can think of. I've seen inheritance chains so complex, the code was virtually unmanageable.\n" ]
[ 28, 3, 2, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "oop" ]
stackoverflow_0000054299_oop.txt
Q: Conditional Number Formatting In Java How can I format Floats in Java so that the float component is displayed only if it's not zero? For example: 123.45 -> 123.45 99.0 -> 99 23.2 -> 23.2 45.0 -> 45 Edit: I forgot to mention - I'm still on Java 1.4 - sorry! A: If you use DecimalFormat and specify # in the pattern it only displays the value if it is not zero. See my question How do I format a number in java? Sample Code DecimalFormat format = new DecimalFormat("###.##"); double[] doubles = {123.45, 99.0, 23.2, 45.0}; for(int i=0;i<doubles.length;i++){ System.out.println(format.format(doubles[i])); } A: Check out the DecimalFormat class, e.g. new DecimalFormat("0.##").format(99.0) will return "99". A: new Formatter().format( "%f", myFloat )
Conditional Number Formatting In Java
How can I format Floats in Java so that the float component is displayed only if it's not zero? For example: 123.45 -> 123.45 99.0 -> 99 23.2 -> 23.2 45.0 -> 45 Edit: I forgot to mention - I'm still on Java 1.4 - sorry!
[ "If you use DecimalFormat and specify # in the pattern it only displays the value if it is not zero.\nSee my question How do I format a number in java?\nSample Code\n DecimalFormat format = new DecimalFormat(\"###.##\");\n\n double[] doubles = {123.45, 99.0, 23.2, 45.0};\n for(int i=0;i<doubles.length;i++){\n System.out.println(format.format(doubles[i]));\n }\n\n", "Check out the DecimalFormat class, e.g. new DecimalFormat(\"0.##\").format(99.0) will return \"99\".\n", "new Formatter().format( \"%f\", myFloat )\n\n" ]
[ 6, 2, 0 ]
[]
[]
[ "java", "java1.4" ]
stackoverflow_0000054487_java_java1.4.txt
Q: What's cleanest, shortest Javascript to submit a URL the user is at to another process via URL? Like the Delicious submission bookmark-let, I'd like to have some standard JavaScript I can use to submit any visited URL to a 3rd party site when that's possible by URL. Suggestions? For example, I've been using javascript:void(location.href="http://www.yacktrack.com/home?query="+encodeURI(location.href)) so far but wonder if there's something more sophisticated I could use or better practice. A: Do you want something exactly like the Delicious bookmarklet (as in, something the user actively clicks on to submit the URL)? If so, you could probably just copy their code and replace the target URL: javascript:(function(){ location.href='http://example.com/your-script.php?url='+ encodeURIComponent(window.location.href)+ '&title='+encodeURIComponent(document.title) })() You may need to change the query string names, etc., to match what your script expects. If you want to track a user through your website automatically, this probably won't be possible. You'd need to request the URL with AJAX, but the web browser won't allow Javascript to make a request outside of the originating domain. Maybe it's possible with iframe trickery. Edit: John beat me to it. A: document.location = "http://url_submitting_to.com?query_string_param=" + window.location; A: Another option would be to something like this: <form action="http://www.yacktrack.com/home" method="get" name="f"> <input type="hidden" name="query" /> </form> then your javascript would be: f.query.value=location.href; f.submit(); or you could combine the [save link] with the submit like this: <form action="http://www.yacktrack.com/home" method="get" name="f" onsubmit="f.query.value=location.href;"> <input type="hidden" name="query" /> <input type="submit" name="Save Link" /> </form> and if you're running server-side code, you can plug in the location so you can be JavaScript-free: <form action="http://www.yacktrack.com/home" method="get" name="f"> <input type="hidden" name="query" value="<%=Response.Url%>" /> <input type="submit" name="Save Link" /> </form>
What's cleanest, shortest Javascript to submit a URL the user is at to another process via URL?
Like the Delicious submission bookmark-let, I'd like to have some standard JavaScript I can use to submit any visited URL to a 3rd party site when that's possible by URL. Suggestions? For example, I've been using javascript:void(location.href="http://www.yacktrack.com/home?query="+encodeURI(location.href)) so far but wonder if there's something more sophisticated I could use or better practice.
[ "Do you want something exactly like the Delicious bookmarklet (as in, something the user actively clicks on to submit the URL)? If so, you could probably just copy their code and replace the target URL:\njavascript:(function(){\n location.href='http://example.com/your-script.php?url='+\n encodeURIComponent(window.location.href)+\n '&title='+encodeURIComponent(document.title)\n})()\n\nYou may need to change the query string names, etc., to match what your script expects.\nIf you want to track a user through your website automatically, this probably won't be possible. You'd need to request the URL with AJAX, but the web browser won't allow Javascript to make a request outside of the originating domain. Maybe it's possible with iframe trickery.\nEdit: John beat me to it.\n", "document.location = \"http://url_submitting_to.com?query_string_param=\" + window.location;\n\n", "Another option would be to something like this:\n<form action=\"http://www.yacktrack.com/home\" method=\"get\" name=\"f\">\n <input type=\"hidden\" name=\"query\" />\n</form>\n\nthen your javascript would be:\nf.query.value=location.href; f.submit();\n\nor you could combine the [save link] with the submit like this:\n<form action=\"http://www.yacktrack.com/home\" method=\"get\" name=\"f\" onsubmit=\"f.query.value=location.href;\">\n <input type=\"hidden\" name=\"query\" />\n <input type=\"submit\" name=\"Save Link\" />\n</form>\n\nand if you're running server-side code, you can plug in the location so you can be JavaScript-free:\n<form action=\"http://www.yacktrack.com/home\" method=\"get\" name=\"f\">\n <input type=\"hidden\" name=\"query\" value=\"<%=Response.Url%>\" />\n <input type=\"submit\" name=\"Save Link\" />\n</form>\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "javascript", "submission", "url" ]
stackoverflow_0000054426_javascript_submission_url.txt
Q: Tracking Refactorings in a Bug Database Let's say you work someplace where every change to source code must be associated with a bug-report or feature-request, and there is no way to get that policy reformed. In such an environment, what is the best way to deal with code refactorings (that is, changes that improve the code but do not fix a bug or add a feature)? Write up a bug-report and associate the refactoring with it. Write up a feature-request and associate the refactoring with it. Sneak in the refactorings while working on code that is associated with a bug-report/feature-request. Just don't do any refactoring. Other Note that all bug reports and feature descriptions will be visible to managers and customers. A: I vote for the "sneak in refactorings" approach, which is, I believe, the way refactoring is meant to be done in the first place. It's probably a bad idea to refactor just for the sake of "cleaning up the code." This means that you're making changes for no real reason. Refactoring is, by definition, modifying the code without the intent of fixing bugs or adding features. If you're following the KISS principle, any new feature is going to need at least some refactoring because you're not really thinking about how to make the most extensible system possible the first time around. A: The way we work it is: There must be a good reason to refactor the code, otherwise why? If the reason is to allow another feature to use the same code, associate the changes with the other feature's request. If it's to make something faster, create a feature request for faster 'xyz' and associate the changes with that - then the customers see you're improving the product. If it's to design out a bug, log the bug. It's worth noting that in my environment, the policy cannot be enforced. But clever managers can get reports of changes and if they don't have a bug/request reference in the commit text it's followed up. A: If you're working on a block of code, in most cases that's because there's either a bug fix or new feature that requires that block of code to change, and the refactoring is either prior to the change in order to make it easier, or after the change to tidy up the result. In either case, you can associate the refactoring with that bug fix or feature. A: Let's have a look at each option: Write up a bug-report and associate the refactoring with it. If you feel that, in your opinion, the original code poses a security risk or potential for crashing or instability, write a small bug report outlining the danger, and then fix it. Write up a feature-request and associate the refactoring with it. It might be harder to refactor code based on a feature request. But you could use a valid feature request to do this, which leads me onto the next point... Sneak in the refactorings while working on code that is associated with a bug-report/feature-request. If there is a valid bug or feature, state that function x had to be changed slightly to fix the bug or add the feature. Just don't do any refactoring. This seems to suggest that self-development through improving an application is not allowed. Developers should be allowed, if not encouraged, to explore new techniques and technologies. Other Perhaps you could discuss your improvement at a relevant meeting, giving convincing reasons why the changes should be made. Then at least you will have management backing for the change without having to sneak in the code through another method. A: Other If you work at a place with that kind of inflexible (and ridiculous) policy, the best solution is to find another job!
Tracking Refactorings in a Bug Database
Let's say you work someplace where every change to source code must be associated with a bug-report or feature-request, and there is no way to get that policy reformed. In such an environment, what is the best way to deal with code refactorings (that is, changes that improve the code but do not fix a bug or add a feature)? Write up a bug-report and associate the refactoring with it. Write up a feature-request and associate the refactoring with it. Sneak in the refactorings while working on code that is associated with a bug-report/feature-request. Just don't do any refactoring. Other Note that all bug reports and feature descriptions will be visible to managers and customers.
[ "I vote for the \"sneak in refactorings\" approach, which is, I believe, the way refactoring is meant to be done in the first place. It's probably a bad idea to refactor just for the sake of \"cleaning up the code.\" This means that you're making changes for no real reason. Refactoring is, by definition, modifying the without the intent of fixing bugs or adding features. If you're following the KISS principle, any new feature is going to need at least some refactoring because you're not really thinking about how to make the most extensible system possible the first time around.\n", "The way we work it is: There must be a good reason to refactor the code, otherwise why?\nIf the reason is to allow another feature to use the same code, associate the changes with the other feature's request.\nIf it's to make something faster, create a feature request for faster 'xyz' and associate the changes with that - then the customers see you're improving the product.\nIf it's to design out a bug, log the bug.\nIt's worth noting that in my environment, the policy cannot be enforced. But clever managers can get reports of changes and if they don't have a bug\\request reference in the commit text it's followed up.\n", "If you're working on a block of code, in most cases that's because there's either a bug fix or new feature that requires that block of code to change, and the refactoring is either prior to the change in order to make it easier, or after the change to tidy up the result. In either case, you can associate the refactoring with that bug fix or feature.\n", "Lets have a look at each option:\n\nWrite up a bug-report and associate the refactoring with it.\n\nIf you feel that, in your opinion, the original code poses a security risk or potential for crashing or instability. Write a small bug report outlining the danger, and then fix it.\n\nWrite up a feature-request and associate the refactoring with it.\n\nIt might be harder to reactor code based on a feature request. But you could use valid feature request to do this which leads me onto the next point...\n\nSneak in the refactorings while working on code that is associated with a bug-report/feature-request.\n\nIf there is a valid bug or feature, state that function x had to be change slightly to fix the bug or add the feature.\n\nJust don't do any refactoring.\n\nThis seems to suggest the self development through improving an application is not allowed. Developers should be allowed, if not, encourage to explorer new techniques and technologies.\n\nOther\n\nPerhaps you could discuss your improvement at relevant meeting, giving convincing reasons why the changes should be made. Then at least you will have management backing to the change without having to sneak in the code through another method.\n", "\nOther\n\nIf you work at a place with that kind of inflexible (and ridiculous) policy, the best solution is to find another job!\n" ]
[ 7, 2, 2, 0, 0 ]
[]
[]
[ "bug_tracking", "refactoring" ]
stackoverflow_0000054264_bug_tracking_refactoring.txt
Q: Printing data into a preprinted form in C# .Net 3.5 SP1 I need to print out data into a pre-printed A6 form (1/4 the size of a landscape A4). I do not need to print paragraphs of text, just short lines scattered about on the page. All the stuff on MSDN is about printing paragraphs of text. Thanks for any help you can give, Roberto A: You'll have to create a PrintDocument object, handle at least the PrintPage event and apply the appropriate changes to the PrinterSettings property. In your PrintPage event handler, do whatever you need to do with the PrintPageEventArgs.Graphics object; like drawing lines, drawing images, etc. A: When finding the x,y coordinates to use for lining up your new text with the pre-printed gaps, the default settings for the graphics object's Draw____() functions are 100 units per inch (hundredths of an inch). That might be subject to change based on your printer, but in my (very limited) experience that's always been the case.
Printing data into a preprinted form in C# .Net 3.5 SP1
I need to print out data into a pre-printed A6 form (1/4 the size of a landscape A4). I do not need to print paragraphs of text, just short lines scattered about on the page. All the stuff on MSDN is about printing paragraphs of text. Thanks for any help you can give, Roberto
[ "you'll have to create a PrintDocument object, handle the at least the PrintPage event and apply the appropriate changes to the PrinterSettings property.\nIn your PrintPage event handler, do whatever you need to do with the PringPageEventArgs.Graphics object; like drawing lines, drawing images, etc.\n", "When finding the x,y coordinates to use for lining up your new text with the pre-printed gaps, the default settings for the graphics object's Draw____() functions are 100 pixels per inch. That might be subject to change based on your printer, but in my (very limited) experience that's always been the case.\n" ]
[ 2, 2 ]
[]
[]
[ ".net", "c#", "printing" ]
stackoverflow_0000054522_.net_c#_printing.txt
Q: Getting DIV id based on x & y position The problem I'm trying to solve is "What's at this position?" It's fairly trivial to get the x/y position (offset) of a DIV, but what about the reverse? How do I get the id of a DIV (or any element) given an x/y position? A: Unfortunately, triggering a manufactured/simulated mouse event won't work, since when you dispatch it, you have to provide a target element. Since that element is the one you're trying to figure out, all you could do is dispatch it on the body, as if it had already bubbled. You really are left to do it on your own, that is manually walk through the elements you're interested in, and compare their position/size/zIndex to your x/y point and see if they overlap. Except in IE and more recently FF3, where you can use var el = document.elementFromPoint(x, y); See http://developer.mozilla.org/En/DOM:document.elementFromPoint http://msdn.microsoft.com/en-us/library/ms536417(VS.85).aspx A: function getDivByXY(x,y) { var alldivs = document.getElementsByTagName('div'); for(var d = 0; d < alldivs.length; d++) { if((alldivs[d].offsetLeft == x) && (alldivs[d].offsetTop == y)) { return alldivs[d]; } } return false; } A: Use a JQuery selector to filter the list of all DIVs for one that matches your position criteria? A: Create a mouse event listener, then trigger a mouse event at that location. This should give you the entire stack of elements at that location. Or, look at the source of Firebug. A: If all you have is the X and Y position, (and you can't track mouse movement like you mentioned) then you will have to traverse the DOM, looping through every DIV. For each DIV you will need to compare its X and Y coordinates against those you have. This is an expensive operation, but it is the only way. I suggest you might be better off rethinking your problem instead of coming up with a solution for it. A: One option is to build an array of "div-dimension" objects. (Not to be confused with the divs themselves... IE7 perf is frustrating when you read dimensions off of the object.) These objects consist of a pointer to the div, their dimensions (four points... say top, left, bottom, and right), and possibly a dirty bit. (The dirty bit is only really needed if the sizes change.) You could then iterate through the array and check dimensions. It requires O(n) to do that on each mouse move. You might be able to do slightly better with a binary search style approach... maybe. If you do a binary search style approach, one way is to store 4 arrays. Each with a single point of the dimension, and then binary search on all four. O(4logn) = O(logn). I'm not saying I recommend any of these, but they MIGHT work. A: I think what John is saying is that you can use document.createEvent() to simulate a mousemove at the location you want. If you capture that event, by adding an eventlistener to the body, you can look at the event.target and see what element was at that position. I'm unsure as to what degree IE supports this method, maybe someone else knows? http://developer.mozilla.org/en/DOM/document.createEvent Update: Here's a jquery plugin that simulates events: http://jquery-ui.googlecode.com/svn/trunk/tests/simulate/jquery.simulate.js A: This might be a little too processor intensive, but you could go over the whole list of div elements on a page, finding their positions and sizes, then testing if they're under the mouse. I don't think I'd want to do that to a browser though.
A: You might find it's more efficient to traverse the DOM tree once when the page is loaded, get all elements' positions and sizes, and store them in an array/hash/etc. If you design the data structure well, you should be able to find an element at the given coordinates fairly quickly when you need it later. Consider how often you will need to detect an element, and compare that to how often the elements on the page will change. You would be balancing the number of times you have to re-compute all the element locations (an expensive computation) against the number of times you'd actually use the computed information (relatively cheap, I hope).
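A rough JavaScript sketch of that last answer's idea (compute all positions once, look them up later); the overlap handling is naive and z-index is ignored, so treat it as a starting point rather than a finished lookup:

var divRects = [];

function cacheDivPositions() {
    var divs = document.getElementsByTagName('div');
    for (var i = 0; i < divs.length; i++) {
        var d = divs[i], x = 0, y = 0, el = d;
        while (el) { // walk up offsetParents to get page coordinates
            x += el.offsetLeft;
            y += el.offsetTop;
            el = el.offsetParent;
        }
        divRects.push({ div: d, left: x, top: y,
                        right: x + d.offsetWidth, bottom: y + d.offsetHeight });
    }
}

function divAtPoint(x, y) {
    // iterate backwards so later (usually topmost) divs win ties
    for (var i = divRects.length - 1; i >= 0; i--) {
        var r = divRects[i];
        if (x >= r.left && x <= r.right && y >= r.top && y <= r.bottom)
            return r.div;
    }
    return null;
}

The cache must be rebuilt whenever the layout changes, which is exactly the trade-off the answer describes.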
Getting DIV id based on x & y position
The problem I'm trying to solve is "What's at this position?" It's fairly trivial to get the x/y position (offset) of a DIV, but what about the reverse? How do I get the id of a DIV (or any element) given an x/y position?
[ "Unfortunately, triggering a manufactured/simulated mouse event won't work, since when you dispatch it, you have to provide a target element. Since that element is the one you're trying to figure out, all you could do is dispatch it on the body, as if it had already bubbled.\nYou really are left to do it on your own, that is manually walk through the elements you're interested in, and compare their position/size/zIndex to your x/y point and see if they overlap. Except in IE and more recently FF3, where you can use\nvar el = document.elementFromPoint(x, y);\n\nSee\nhttp://developer.mozilla.org/En/DOM:document.elementFromPoint\nhttp://msdn.microsoft.com/en-us/library/ms536417(VS.85).aspx\n", "function getDivByXY(x,y) {\n var alldivs = document.getElementsByTagName('div');\n\n for(var d = 0; d < alldivs.length; d++) {\n if((alldivs[d].offsetLeft == x) && (alldivs[d].offsetTop == y)) {\n return alldivs[d];\n }\n }\n\n return false;\n}\n\n", "Use a JQuery selector to filter the list of all DIVs for one that matches your position criteria?\n", "Create a mouse event listener, then trigger a mouse event at that location. This should give you the entire stack of elements at that location.\nOr, look at the source of Firebug.\n", "If all you have is the X and Y position, (and you can't track mouse movement like you mentioned) then you will have to traverse the DOM, looping through every DIV. For each DIV you will need to compare its X and Y coordinates against those you have. This is an expensive operation, but it is the only way. I suggest you might be better off rethinking your problem instead of coming up with a solution for it.\n", "One option is to build an array of \"div-dimension\" objects. (Not to be confused with the divs themselves... IE7 perf is frustrating when you read dimensions off of object.)\nThese objects consist of a pointer to the div, their dimensions (four points... say top, left, bottom, and right), and possibly a dirty bit. (Dirty bit is only really needed if the sizes change. \nYou could then iterate through the array and check dimensions. It requires O(n) to do that on each mouse move. You might be able to do slightly better with a binary search style approach... maybe.\nIf you do a binary search style approach, one way is to store 4 arrays. Each with a single point of the dimension, and then binary search on all four. O(4logn) = O(logn).\nI'm not saying I recommend any of these, but they MIGHT work. \n", "I think what John is saying is that you can use document.createEvent() to simulate a mousemove at the location you want. If you capture that event, by adding an eventlistener to the body, you can look at the event.target and see what element was at that position. I'm unsure as to what degree IE supports this method, maybe someone else knows?\nhttp://developer.mozilla.org/en/DOM/document.createEvent\nUpdate:\nHere's a jquery plugin that simulates events:\nhttp://jquery-ui.googlecode.com/svn/trunk/tests/simulate/jquery.simulate.js\n", "this might be a little too processor intensive but going over the whole list of div elements on a page, finding their positions and sizes then testing if they're under the mouse. i don't think i'd want to do that to a browser though.\n", "You might find it's more efficient to traverse the DOM tree once when the page is loaded, get all elements' positions and sizes, and store them in an array/hash/etc. 
If you design the data structure well, you should be able to find an element at the given coordinates fairly quickly when you need it later.\nConsider how often you will need to detect an element, and compare that to how often the elements on the page will change. You would be balancing the number of times you have to re-compute all the element locations (an expensive computation) against the number of times you'd actually use the computed information (relatively cheap, I hope).\n" ]
[ 4, 3, 2, 2, 2, 1, 1, 0, 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0000048999_javascript.txt
Q: What is the best practice for writing Registry calls/File System calls/Process creation filter for WinXP, Vista? We need to monitor all processes' Registry calls/File System calls/Process creations in the system (for the antivirus HIPS module). Also, from time to time it will be necessary to delay or decline some calls. A: The supported method of doing this is RegNotifyChangeKeyValue. Most virus checkers likely perform some sort of API hooking instead of using this function. There's lots of information out there about API hooking, like http://www.codeproject.com/KB/system/hooksys.aspx, http://www.codeguru.com/cpp/w-p/system/misc/article.php/c5667
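For the monitoring part, a minimal C sketch of RegNotifyChangeKeyValue follows. Note this is a user-mode API: it observes changes after the fact and cannot delay or deny them; blocking calls requires kernel-mode filtering (registry callbacks, a file system minifilter, process-creation notify routines), which is beyond this sketch:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    if (RegOpenKeyEx(HKEY_LOCAL_MACHINE, TEXT("SOFTWARE"), 0,
                     KEY_NOTIFY, &hKey) != ERROR_SUCCESS)
        return 1;

    /* Manual-reset event signalled when the key or its subtree changes. */
    HANDLE hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    RegNotifyChangeKeyValue(hKey, TRUE,
                            REG_NOTIFY_CHANGE_NAME | REG_NOTIFY_CHANGE_LAST_SET,
                            hEvent, TRUE);

    WaitForSingleObject(hEvent, INFINITE);
    printf("Something under HKLM\\SOFTWARE changed.\n");

    CloseHandle(hEvent);
    RegCloseKey(hKey);
    return 0;
}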
What is the best practice for writing Registry calls/File System calls/Process creation filter for WinXP, Vista?
We need to monitor all processes' Registry calls/File System calls/Process creations in the system (for the antivirus HIPS module). Also, from time to time it will be necessary to delay or decline some calls.
[ "The supported method of doing this is RegNotifyChangeKeyValue\nMost virus checkers likely perform some sort of API hooking instead of using this function. There's lots of information out there about API hooking, like http://www.codeproject.com/KB/system/hooksys.aspx, http://www.codeguru.com/cpp/w-p/system/misc/article.php/c5667\n" ]
[ 1 ]
[]
[]
[ "drivers", "winapi" ]
stackoverflow_0000054504_drivers_winapi.txt
Q: Modifying a spreadsheet using a VB macro I have two spreadsheets... when one gets modified in a certain way I want to have a macro run that modifies the second in an appropriate manner. I've already isolated the event I need to act on (the modification of any cell in a particular column), I just can't seem to find any concrete information on accessing and modifying another spreadsheet (this spreadsheet is located on a different LAN share also... the user has access to both, though). Any help would be great. References on how to do this or something similar are just as good as concrete code samples. A: In Excel, you would likely just write code to open the other worksheet, modify it and then save the data. See this tutorial for more info. I'll have to edit my VBA later, so pretend this is pseudocode, but it should look something like: Dim xl: Set xl = CreateObject("Excel.Application") xl.Open "\\the\share\file.xls" Dim ws: Set ws = xl.Worksheets(1) ws.Cells(0,1).Value = "New Value" ws.Save xl.Quit constSilent A: You can open a spreadsheet in a single line: Workbooks.Open FileName:="\\the\share\file.xls" and refer to it as the active workbook: Range("A1").value = "New value" A: Copy the following into your ThisWorkbook object to watch for specific changes, in this case when you increase a numeric value to another numeric value. NB: you will have to replace Workbook-SheetChange and Workbook-SheetSelectionChange with an underscore. Ex: Workbook_SheetChange and Workbook_SheetSelectionChange (the underscore gets escaped in Markdown code). Option Explicit Dim varPreviousValue As Variant ' required for IsThisMyChange() . This should be made more unique since it's in the global space. Private Sub Workbook-SheetChange(ByVal Sh As Object, ByVal Target As Range) ' required for IsThisMyChange() IsThisMyChange Sh, Target End Sub Private Sub Workbook-SheetSelectionChange(ByVal Sh As Object, ByVal Target As Range) ' This implements an awful way of accessing the previous value via a global. ' not pretty but required for IsThisMyChange() varPreviousValue = Target.Cells(1, 1).Value ' NB: This is used so that if a merged set of cells is referenced only the first cell is used End Sub Private Sub IsThisMyChange(Sh As Object, Target As Range) Dim isMyChange As Boolean Dim dblValue As Double Dim dblPreviousValue As Double isMyChange = False ' Simple catch all. If either number can't be expressed as doubles, then exit. On Error GoTo ErrorHandler dblValue = CDbl(Target.Value) dblPreviousValue = CDbl(varPreviousValue) On Error GoTo 0 ' This turns off "On Error" statements in VBA. If dblValue > dblPreviousValue Then isMyChange = True End If If isMyChange Then MsgBox ("You've increased the value of " & Target.Address) End If ' end of normal execution Exit Sub ErrorHandler: ' Do nothing much. Exit Sub End Sub If you are wishing to change another workbook based on this, I'd think about checking to see if the workbook is already open first... or even better design a solution that can batch up all your changes and do them at once. Continuously changing another spreadsheet based on you listening to this one could be painful. A: After playing with this for a while, I found that Michael's pseudo-code was the closest, but here's how I did it: Dim xl As Excel.Application Set xl = CreateObject("Excel.Application") xl.Workbooks.Open "\\owghome1\bennejm$\testing.xls" xl.Sheets("Sheet1").Select Then, manipulate the sheet...
maybe like this: xl.Cells(x, y).Value = "Some text" When you're done, use these lines to finish up: xl.Workbooks.Close xl.Quit If changes were made, the user will be prompted to save the file before it's closed. There might be a way to save automatically, but this way is actually better so I'm leaving it like it is. Thanks for all the help!
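On the automatic-save point: one hedged way to avoid the prompt is to keep a reference to the workbook and close it with SaveChanges, roughly like this (untested sketch, same share path as above):

Dim wb As Excel.Workbook
Set wb = xl.Workbooks.Open("\\owghome1\bennejm$\testing.xls")
' ... make the changes ...
wb.Close SaveChanges:=True   ' saves silently instead of prompting the user
xl.Quit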
Modifying a spreadsheet using a VB macro
I have two spreadsheets... when one gets modified in a certain way I want to have a macro run that modifies the second in an appropriate manner. I've already isolated the event I need to act on (the modification of any cell in a particular column), I just can't seem to find any concrete information on accessing and modifying another spreadsheet (this spreadsheet is located on a different LAN share also... the user has access to both, though). Any help would be great. References on how to do this or something similar are just as good as concrete code samples.
[ "In Excel, you would likely just write code to open the other worksheet, modify it and then save the data.\nSee this tutorial for more info.\nI'll have to edit my VBA later, so pretend this is pseudocode, but it should look something like:\nDim xl: Set xl = CreateObject(\"Excel.Application\")\nxl.Open \"\\\\the\\share\\file.xls\"\n\nDim ws: Set ws = xl.Worksheets(1)\nws.Cells(0,1).Value = \"New Value\"\nws.Save\n\nxl.Quit constSilent\n\n", "You can open a spreadsheet in a single line:\nWorkbooks.Open FileName:=\"\\\\the\\share\\file.xls\"\n\nand refer to it as the active workbook:\nRange(\"A1\").value = \"New value\"\n\n", "Copy the following in your ThisWorkbook object to watch for specific changes. In this case when you increase a numeric value to another numeric value. \nNB: you will have to replace Workbook-SheetChange and Workbook-SheetSelectionChange with an underscore. Ex: Workbook_SheetChange and Workbook_SheetSelectionChange the underscore gets escaped in Markdown code.\nOption Explicit\nDim varPreviousValue As Variant ' required for IsThisMyChange() . This should be made more unique since it's in the global space.\n\n\nPrivate Sub Workbook-SheetChange(ByVal Sh As Object, ByVal Target As Range)\n ' required for IsThisMyChange()\n IsThisMyChange Sh, Target\nEnd Sub\n\nPrivate Sub Workbook-SheetSelectionChange(ByVal Sh As Object, ByVal Target As Range)\n ' This implements and awful way of accessing the previous value via a global.\n ' not pretty but required for IsThisMyChange()\n varPreviousValue = Target.Cells(1, 1).Value ' NB: This is used so that if a Merged set of cells if referenced only the first cell is used\nEnd Sub\n\nPrivate Sub IsThisMyChange(Sh As Object, Target As Range)\n Dim isMyChange As Boolean\n Dim dblValue As Double\n Dim dblPreviousValue As Double\n\n isMyChange = False\n\n ' Simple catch all. If either number cant be expressed as doubles, then exit.\n On Error GoTo ErrorHandler\n dblValue = CDbl(Target.Value)\n dblPreviousValue = CDbl(varPreviousValue)\n On Error GoTo 0 ' This turns off \"On Error\" statements in VBA.\n\n\n If dblValue > dblPreviousValue Then\n isMyChange = True\n End If\n\n\n If isMyChange Then\n MsgBox (\"You've increased the value of \" & Target.Address)\n End If\n\n\n ' end of normal execution\n Exit Sub\n\n\nErrorHandler:\n ' Do nothing much.\n Exit Sub\n\nEnd Sub\n\nIf you are wishing to change another workbook based on this, i'd think about checking to see if the workbook is already open first... or even better design a solution that can batch up all your changes and do them at once. Continuously changing another spreadsheet based on you listening to this one could be painful.\n", "After playing with this for a while, I found the Michael's pseudo-code was the closest, but here's how I did it:\nDim xl As Excel.Application\nSet xl = CreateObject(\"Excel.Application\")\nxl.Workbooks.Open \"\\\\owghome1\\bennejm$\\testing.xls\"\nxl.Sheets(\"Sheet1\").Select\n\nThen, manipulate the sheet... maybe like this:\nxl.Cells(x, y).Value = \"Some text\"\n\nWhen you're done, use these lines to finish up:\nxl.Workbooks.Close\nxl.Quit\n\nIf changes were made, the user will be prompted to save the file before it's closed. There might be a way to save automatically, but this way is actually better so I'm leaving it like it is.\nThanks for all the help!\n" ]
[ 5, 0, 0, 0 ]
[]
[]
[ "excel", "vba" ]
stackoverflow_0000051098_excel_vba.txt
Q: Embed asp page without iframe I want to embed an .asp page on an html page. I cannot use an iframe. I tried: <object width="100%" height="1500" type="text/html" data="url.asp"> alt : <a href="url.asp">url</a> </object>" works great in ff but not ie7. Any ideas? Is it possible to use the object tag to embed .asp pages for IE or does it only work in ff? A: You might be able to fake it using javascript. You could either use AJAX to load the page, then insert the HTML, or load "url.asp" in a hidden iframe and copy the HTML from there. One downside (or maybe this is what you want) is that the pages aren't completely independent, so CSS rules from the outer page will affect the embedded page. A: I've solved it in the past using Javascript and XMLHttp. It can get a bit hacky depending on the circumstances. In particular, you have to watch out for the inner page failing and how it affects/downgrades the outer one (hopefully you can keep it downgrading elegantly). Search for XMLHttp (or check this great tutorial) and request the "child" page from the outer one, rendering the HTML you need. Preferably you can get just the specific data you need and process it in Javascript. A: Well, after searching around and testing I don't think it is possible. It looks to me like IE does not allow the object tag access to a resource that is not on the same domain as the parent. It would have worked for me if the content I was trying to pull in was on the same domain but it wasn't. If anyone could confirm my interpretation of this it would be appreciated.
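A hedged sketch of the XMLHttp approach from the answers (IE6/7-era syntax, hence the ActiveXObject fallback); note the last answer's finding still applies, since XMLHttpRequest is also restricted to the same domain:

function embedPage(url, targetId) {
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            // inject the fetched markup into a placeholder element
            document.getElementById(targetId).innerHTML = xhr.responseText;
        }
    };
    xhr.open("GET", url, true);
    xhr.send(null);
}

// usage, assuming <div id="host"></div> somewhere on the page:
// embedPage("url.asp", "host");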
Embed asp page without iframe
I want to embed an .asp page on an html page. I cannot use an iframe. I tried: <object width="100%" height="1500" type="text/html" data="url.asp"> alt : <a href="url.asp">url</a> </object>" works great in ff but not ie7. Any ideas? Is it possible to use the object tag to embed .asp pages for IE or does it only work in ff?
[ "You might be able to fake it using javascript. You could either use AJAX to load the page, then insert the HTML, or load \"url.asp\" in a hidden iframe and copy the HTML from there.\nOne downside (or maybe this is what you want) is that the pages aren't completely independent, so CSS rules from the outer page will affect the embedded page.\n", "I've solved it in the past using Javascript and XMLHttp. It can get a bit hacky depending on the circumstances. In particular, you have to watch out for the inner page failing and how it affects/downgrades the outer one (hopefully you can keep it downgrading elegantly).\nSearch for XMLHttp (or check this great tutorial) and request the \"child\" page from the outer one, rendering the HTML you need. Preferably you can get just the specific data you need and process it in Javascript.\n", "Well, after searching around and testing I don't think it is possible. It looks to me like IE does not allow the object tag access to a resource that is not on the same domain as the parent. It would have worked for me if the content I was trying to pull in was on same domain but it wasn't. If anyone could confirm my interpretation of this it would be appreciated.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "asp.net", "iframe" ]
stackoverflow_0000053064_asp.net_iframe.txt
Q: How should I handle a situation where I need to store several unrelated types but provide specific types on demand? I'm working on an editor for files that are used by an important internal testing tool we use. The tool itself is large, complicated, and refactoring or rewriting would take more resources than we are able to devote to it for the foreseeable future, so my hands are tied when it comes to large modifications. I must use a .NET language. The files are XML serialized versions of four classes that are used by the tool (let's call them A, B, C, and D). The classes form a tree structure when all is well. Our editor works by loading a set of files, deserializing them, working out the relationships between them, and keeping track of any bad states it can find. The idea is for us to move away from hand-editing these files, which introduces tons of errors. For a particular type of error, I'd like to maintain a collection of all files that have the problem. All four classes can have the problem, and I'd like to reduce duplication of code as much as possible. An important requirement is the user needs to be able to get the items in sets; for example, they need to get all A objects with an error, and telling them to iterate over the whole collection and pick out what they want is unacceptable compared to a GetAs() method. So, my first thought was to make a generic item that related the deserialized object and some metadata to indicate the error: public class ErrorItem<T> { public T Item { get; set; } public Metadata Metadata { get; set; } } Then, I'd have a collection class that could hold all of the error items, with helper methods to extract the items of a specific class when the user needs them. This is where the trouble starts. None of the classes inherit from a common ancestor (other than Object). This was probably a mistake of the initial design, but I've spent a few days thinking about it and the classes really don't have much in common other than a GUID property that uniquely identifies each item so I can see why the original designer did not relate them through inheritance. This means that the unified error collection would need to store ErrorItem<Object> objects, since I don't have a base class or interface to restrict what comes in. However, this makes the idea of this unified collection a little sketchy to me: public class ErrorCollection { public ErrorItem<Object> AllItems { get; set; } } However, this has consequences on the public interface. What I really want is to return the appropriate ErrorItem generic type like this: public ErrorItem<A>[] GetA() This is impossible because I can only store ErrorItem<Object>! I've gone over some workarounds in my head; mostly they include creating a new ErrorItem of the appropriate type on-the-fly, but it just feels kind of ugly. Another thought has been using a Dictionary to keep items organized by type, but it still doesn't seem right. Is there some kind of pattern that might help me here? I know the easiest way to solve this is to add a base class that A, B, C, and D derive from, but I'm trying to have as small an impact on the original tool as possible. Is the cost of any workaround great enough that I should push to change the initial tool? A: Is this what you are looking for?
private List<ErrorItem<object>> _allObjects = new List<ErrorItem<object>>(); public IEnumerable<ErrorItem<A>> ItemsOfA { get { foreach (ErrorItem<object> obj in _allObjects) { if (obj.Item is A) yield return new ErrorItem<A>((A)obj.Item, obj.Metadata); } } } If you want to cache the ItemsOfA you can easily do that: private List<ErrorItem<A>> _itemsOfA = null; public IEnumerable<ErrorItem<A>> ItemsOfACached { get { if (_itemsOfA == null) _itemsOfA = new List<ErrorItem<A>>(ItemsOfA); return _itemsOfA; } } A: The answer I'm going with so far is a combination of the answers from fryguybob and Mendelt Siebenga. Adding a base class would just pollute the namespace and introduce a similar problem, as Mendelt Siebenga pointed out. I would get more control over what items can go into the collection, but I'd still need to store ErrorItem<BaseClass> and still do some casting, so I'd have a slightly different problem with the same root cause. This is why I selected the post as the answer: it points out that I'm going to have to do some casts no matter what, and KISS would dictate that the extra base class and generics are too much. I like fryguybob's answer not for the solution itself but for reminding me about yield return, which will make a non-cached version easier to write (I was going to use LINQ). I think a cached version is a little bit wiser, though the expected performance parameters won't make the non-cached version noticeably slower. A: If A, B, C and D have nothing in common then adding a base class won't really get you anything. It will just be an empty class and in effect will be the same as object. I'd just create an ErrorItem class without the generics, make Item an object and do some casting when you want to use the objects referenced. If you want to use any of the properties or methods of the A, B, C or D class other than the Guid you would have had to cast them anyway.
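For the Dictionary idea the question mentions, a hedged C# 2.0 sketch follows. It assumes the ErrorItem<T> class from the question; the collection class and method names here are invented for illustration:

using System;
using System.Collections.Generic;

public class ErrorCollection
{
    // one bucket per concrete item type
    private readonly Dictionary<Type, List<object>> _items =
        new Dictionary<Type, List<object>>();

    public void Add<T>(ErrorItem<T> item)
    {
        List<object> list;
        if (!_items.TryGetValue(typeof(T), out list))
        {
            list = new List<object>();
            _items[typeof(T)] = list;
        }
        list.Add(item);
    }

    public IEnumerable<ErrorItem<T>> Get<T>()
    {
        List<object> list;
        if (_items.TryGetValue(typeof(T), out list))
            foreach (object o in list)
                yield return (ErrorItem<T>)o; // safe: each bucket only ever holds ErrorItem<T>
    }
}

This gives GetA()-style access (errors.Get<A>()) without a common base class, at the cost of one cast per item retrieved.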
How should I handle a situation where I need to store several unrelated types but provide specific types on demand?
I'm working on an editor for files that are used by an important internal testing tool we use. The tool itself is large, complicated, and refactoring or rewriting would take more resources than we are able to devote to it for the foreseeable future, so my hands are tied when it comes to large modifications. I must use a .NET language. The files are XML serialized versions of four classes that are used by the tool (let's call them A, B, C, and D). The classes form a tree structure when all is well. Our editor works by loading a set of files, deserializing them, working out the relationships between them, and keeping track of any bad states it can find. The idea is for us to move away from hand-editing these files, which introduces tons of errors. For a particular type of error, I'd like to maintain a collection of all files that have the problem. All four classes can have the problem, and I'd like to reduce duplication of code as much as possible. An important requirement is the user needs to be able to get the items in sets; for example, they need to get all A objects with an error, and telling them to iterate over the whole collection and pick out what they want is unacceptable compared to a GetAs() method. So, my first thought was to make a generic item that related the deserialized object and some metadata to indicate the error: public class ErrorItem<T> { public T Item { get; set; } public Metadata Metadata { get; set; } } Then, I'd have a collection class that could hold all of the error items, with helper methods to extract the items of a specific class when the user needs them. This is where the trouble starts. None of the classes inherit from a common ancestor (other than Object). This was probably a mistake of the initial design, but I've spent a few days thinking about it and the classes really don't have much in common other than a GUID property that uniquely identifies each item so I can see why the original designer did not relate them through inheritance. This means that the unified error collection would need to store ErrorItem<Object> objects, since I don't have a base class or interface to restrict what comes in. However, this makes the idea of this unified collection a little sketchy to me: public class ErrorCollection { public ErrorItem<Object> AllItems { get; set; } } However, this has consequences on the public interface. What I really want is to return the appropriate ErrorItem generic type like this: public ErrorItem<A>[] GetA() This is impossible because I can only store ErrorItem<Object>! I've gone over some workarounds in my head; mostly they include creating a new ErrorItem of the appropriate type on-the-fly, but it just feels kind of ugly. Another thought has been using a Dictionary to keep items organized by type, but it still doesn't seem right. Is there some kind of pattern that might help me here? I know the easiest way to solve this is to add a base class that A, B, C, and D derive from, but I'm trying to have as small an impact on the original tool as possible. Is the cost of any workaround great enough that I should push to change the initial tool?
[ "Is this what you are looking for?\nprivate List<ErrorItem<object>> _allObjects = new List<ErrorItem<object>>();\n\npublic IEnumerable<ErrorItem<A>> ItemsOfA\n{\n get\n {\n foreach (ErrorItem<object> obj in _allObjects)\n {\n if (obj.Item is A)\n yield return new ErrorItem<A>((A)obj.Item, obj.MetaData);\n }\n }\n}\n\nIf you want to cache the ItemsOfA you can easily do that:\nprivate List<ErrorItem<A>> _itemsOfA = null;\n\npublic IEnumerable<ErrorItem<A>> ItemsOfACached\n{\n if (_itemsOfA == null)\n _itemsOfA = new List<ErrorItem<A>>(ItemsOfA);\n return _itemsOfA;\n}\n\n", "The answer I'm going with so far is a combination of the answers from fryguybob and Mendelt Siebenga.\nAdding a base class would just pollute the namespace and introduce a similar problem, as Mendelt Siebenga pointed out. I would get more control over what items can go into the collection, but I'd still need to store ErrorItem<BaseClass> and still do some casting, so I'd have a slightly different problem with the same root cause. This is why I selected the post as the answer: it points out that I'm going to have to do some casts no matter what, and KISS would dictate that the extra base class and generics are too much.\nI like fryguybob's answer not for the solution itself but for reminding me about yield return, which will make a non-cached version easier to write (I was going to use LINQ). I think a cached version is a little bit more wise, though the expected performance parameters won't make the non-cached version noticably slower.\n", "If A, B, C and D have nothing in common then adding a base class won't really get you anything. It will just be an empty class and in effect will be the same as object.\nI'd just create an ErrorItem class without the generics, make Item an object and do some casting when you want to use the objects referenced. If you want to use any of the properties or methods of the A, B, C or D class other than the Guid you would have had to cast them anyway.\n" ]
[ 1, 1, 0 ]
[]
[]
[ ".net", "generics" ]
stackoverflow_0000054219_.net_generics.txt
Q: How Do You Insert XML Into an existing XML node I'm not even sure if it's possible but say I have some XML: <source> <list> <element id="1"/> </list> </source> And I would like to insert into list: <element id="2"/> Can I write an XSLT to do this? A: Add these 2 template definitions to an XSLT file: <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> <xsl:template match="list"> <list> <xsl:apply-templates select="@* | *"/> <element id="2"/> </list> </xsl:template>
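For context, a complete stylesheet built around those two templates might look like this (the first template is the standard identity transform, which copies everything it doesn't otherwise match):

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <xsl:template match="list">
    <list>
      <xsl:apply-templates select="@* | *"/>
      <element id="2"/>
    </list>
  </xsl:template>
</xsl:stylesheet>

Run against the sample input, this copies everything unchanged except <list>, which is rebuilt with the extra <element id="2"/> appended after the existing children.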
How Do You Insert XML Into an existing XML node
I'm not even sure if it's possible but say I have some XML: <source> <list> <element id="1"/> </list> </source> And I would like to insert into list: <element id="2"/> Can I write an XSLT to do this?
[ "Add these 2 template definitions to an XSLT file:\n<xsl:template match=\"@*|node()\">\n <xsl:copy>\n <xsl:apply-templates select=\"@*|node()\"/>\n </xsl:copy>\n</xsl:template>\n<xsl:template match=\"list\">\n <list>\n <xsl:apply-templates select=\"@* | *\"/>\n <element id=\"2\"/>\n </list>\n</xsl:template> \n\n" ]
[ 37 ]
[]
[]
[ "xml", "xslt" ]
stackoverflow_0000054683_xml_xslt.txt
Q: Hibernate crops clob values oddly I have a one to many relationship between two tables. The many table contains a clob column. The clob column looks like this in hibernate: @CollectionOfElements(fetch = EAGER) @JoinTable(name = NOTE_JOIN_TABLE, joinColumns = @JoinColumn(name = "note")) @Column(name = "substitution") @IndexColumn(name = "listIndex", base = 0) @Lob private List<String> substitutions; So basically I may have a Note with some substitutions, say "foo" and "fizzbuzz". So in my main table I could have a Note with id 4 and in my NOTE_JOIN_TABLE I would have two rows, "foo" and "fizzbuzz" that both have a relationship to the Note. However, when one of these is inserted into the DB the larger substitution values are cropped to be as long as the shortest. So in this case I would have "foo" and "fiz" in the DB instead of "foo" and "fizzbuzz". Do you have any idea why this is happening? I have checked and confirmed they aren't being cropped anywhere in our code, it's definitely Hibernate. A: The LOB/CLOB column may not be large enough. Hibernate has some default column sizes for LOB/CLOB that are relatively small (may depend on db). Anyway, try something like this: @Lob @Column(length=2147483647) Adjust the length (in bytes) based on your needs. A: Many JDBC drivers, early versions of Oracle in particular, have problems while inserting LOBs. Did you make sure that the query Hibernate fires, with the same parameters bound, works successfully in your JDBC driver?
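Applied to the mapping from the question, the suggested fix might look like this (untested sketch; 2147483647 is Integer.MAX_VALUE, the largest value the length attribute accepts as a Java int):

@CollectionOfElements(fetch = EAGER)
@JoinTable(name = NOTE_JOIN_TABLE, joinColumns = @JoinColumn(name = "note"))
@Column(name = "substitution", length = 2147483647)
@IndexColumn(name = "listIndex", base = 0)
@Lob
private List<String> substitutions;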
Hibernate crops clob values oddly
I have a one to many relationship between two tables. The many table contains a clob column. The clob column looks like this in hibernate: @CollectionOfElements(fetch = EAGER) @JoinTable(name = NOTE_JOIN_TABLE, joinColumns = @JoinColumn(name = "note")) @Column(name = "substitution") @IndexColumn(name = "listIndex", base = 0) @Lob private List<String> substitutions; So basically I may have a Note with some substitutions, say "foo" and "fizzbuzz". So in my main table I could have a Note with id 4 and in my NOTE_JOIN_TABLE I would have two rows, "foo" and "fizzbuzz" that both have a relationship to the Note. However, when one of these is inserted into the DB the larger substitution values are cropped to be as long as the shortest. So in this case I would have "foo" and "fiz" in the DB instead of "foo" and "fizzbuzz". Do you have any idea why this is happening? I have checked and confirmed they aren't being cropped anywhere in our code, it's definitely Hibernate.
[ "LOB/CLOB column may not be large enough. Hibernate has some default column sizes for LOB/CLOB that are relatively small (may depend on db). Anyway, try something like this:\n@Lob \n@Column(length=2147483648)\n\nAdjust the length (in bytes) based on your needs.\n", "Many JDBC drivers, early versions of Oracle in particular, have problems while inserting LOBs. Did you make sure that the query Hibernate fires, with the same parameters bound works successfully in your JDBC driver?\n" ]
[ 0, 0 ]
[]
[]
[ "hibernate", "java", "oracle" ]
stackoverflow_0000053316_hibernate_java_oracle.txt
Q: How do you implement Levenshtein distance in Delphi? I'm posting this in the spirit of answering your own questions. The question I had was: How can I implement the Levenshtein algorithm for calculating edit-distance between two strings, as described here, in Delphi? Just a note on performance: This thing is very fast. On my desktop (2.33 Ghz dual-core, 2GB ram, WinXP), I can run through an array of 100K strings in less than one second. A: function EditDistance(s, t: string): integer; var d : array of array of integer; i,j,cost : integer; begin { Compute the edit-distance between two strings. Algorithm and description may be found at either of these two links: http://en.wikipedia.org/wiki/Levenshtein_distance http://www.google.com/search?q=Levenshtein+distance } //initialize our cost array SetLength(d,Length(s)+1); for i := Low(d) to High(d) do begin SetLength(d[i],Length(t)+1); end; for i := Low(d) to High(d) do begin d[i,0] := i; for j := Low(d[i]) to High(d[i]) do begin d[0,j] := j; end; end; //store our costs in a 2-d grid for i := Low(d)+1 to High(d) do begin for j := Low(d[i])+1 to High(d[i]) do begin if s[i] = t[j] then begin cost := 0; end else begin cost := 1; end; //to use "Min", add "Math" to your uses clause! d[i,j] := Min(Min( d[i-1,j]+1, //deletion d[i,j-1]+1), //insertion d[i-1,j-1]+cost //substitution ); end; //for j end; //for i //now that we've stored the costs, return the final one Result := d[Length(s),Length(t)]; //dynamic arrays are reference counted. //no need to deallocate them end;
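A quick usage example for the function above (a console-style sketch; it assumes the function is in scope and, as the inline comment notes, that Math is in the uses clause):

var
  dist: integer;
begin
  dist := EditDistance('kitten', 'sitting');
  // dist = 3: substitute k->s, substitute e->i, insert g
  WriteLn(dist);
end;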
How do you implement Levenshtein distance in Delphi?
I'm posting this in the spirit of answering your own questions. The question I had was: How can I implement the Levenshtein algorithm for calculating edit-distance between two strings, as described here, in Delphi? Just a note on performance: This thing is very fast. On my desktop (2.33 Ghz dual-core, 2GB ram, WinXP), I can run through an array of 100K strings in less than one second.
[ "function EditDistance(s, t: string): integer;\nvar\n d : array of array of integer;\n i,j,cost : integer;\nbegin\n {\n Compute the edit-distance between two strings.\n Algorithm and description may be found at either of these two links:\n http://en.wikipedia.org/wiki/Levenshtein_distance\n http://www.google.com/search?q=Levenshtein+distance\n }\n\n //initialize our cost array\n SetLength(d,Length(s)+1);\n for i := Low(d) to High(d) do begin\n SetLength(d[i],Length(t)+1);\n end;\n\n for i := Low(d) to High(d) do begin\n d[i,0] := i;\n for j := Low(d[i]) to High(d[i]) do begin\n d[0,j] := j;\n end;\n end;\n\n //store our costs in a 2-d grid \n for i := Low(d)+1 to High(d) do begin\n for j := Low(d[i])+1 to High(d[i]) do begin\n if s[i] = t[j] then begin\n cost := 0;\n end\n else begin\n cost := 1;\n end;\n\n //to use \"Min\", add \"Math\" to your uses clause!\n d[i,j] := Min(Min(\n d[i-1,j]+1, //deletion\n d[i,j-1]+1), //insertion\n d[i-1,j-1]+cost //substitution\n );\n end; //for j\n end; //for i\n\n //now that we've stored the costs, return the final one\n Result := d[Length(s),Length(t)];\n\n //dynamic arrays are reference counted.\n //no need to deallocate them\nend;\n\n" ]
[ 17 ]
[]
[]
[ "algorithm", "delphi", "edit_distance", "levenshtein_distance" ]
stackoverflow_0000054797_algorithm_delphi_edit_distance_levenshtein_distance.txt
Q: How do I get InputVerifier to work with an editable JComboBox I've got a JComboBox with a custom InputVerifier set to limit MaxLength when it's set to editable. The verify method never seems to get called. The same verifier gets invoked on a JTextField fine. What might I be doing wrong? A: I found a workaround. I thought I'd let the next person with this problem know about it. Basically, instead of setting the InputVerifier on the ComboBox, you set it on its editor component. JComboBox combo = new JComboBox(); JTextField tf = (JTextField)(combo.getEditor().getEditorComponent()); tf.setInputVerifier(verifier); A: Show us a small section of your code. package inputverifier; import javax.swing.*; class Go { public static void main(String[] args) { java.awt.EventQueue.invokeLater(new Runnable() { public void run() { runEDT(); }}); } private static void runEDT() { new JFrame("combo thing") {{ setLayout(new java.awt.GridLayout(2, 1)); add(new JComboBox() {{ setEditable(true); setInputVerifier(new InputVerifier() { @Override public boolean verify(JComponent input) { System.err.println("Hi!"); return true; } }); }}); add(new JTextField()); setDefaultCloseOperation(EXIT_ON_CLOSE); pack(); setVisible(true); }}; } } Looks like it's a problem with JComboBox being a composite component. I'd suggest avoiding such nasty UI solutions.
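A hedged sketch combining the workaround with the max-length goal from the question (the class name and the limit are invented for illustration):

import javax.swing.*;

public class MaxLengthVerifier extends InputVerifier {
    private final int maxLength;

    public MaxLengthVerifier(int maxLength) {
        this.maxLength = maxLength;
    }

    @Override
    public boolean verify(JComponent input) {
        // the editor component of an editable JComboBox is a JTextField
        String text = ((JTextField) input).getText();
        return text.length() <= maxLength;
    }
}

// wiring it up:
// JComboBox combo = new JComboBox();
// combo.setEditable(true);
// JTextField editor = (JTextField) combo.getEditor().getEditorComponent();
// editor.setInputVerifier(new MaxLengthVerifier(10));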
How do I get InputVerifier to work with an editable JComboBox
I've got a JComboBox with a custom InputVerifier set to limit MaxLength when it's set to editable. The verify method never seems to get called. The same verifier gets invoked on a JTextField fine. What might I be doing wrong?
[ "I found a workaround. I thought I'd let the next person with this problem know about. \nBasically. Instead of setting the inputVerifier on the ComboBox you set it to it's \"Editor Component\". \nJComboBox combo = new JComboBox();\nJTextField tf = (JTextField)(combo.getEditor().getEditorComponent());\ntf.setInputVerifier(verifyer);\n\n", "Show us a small section of your code.\npackage inputverifier;\n\nimport javax.swing.*;\n\n class Go {\n public static void main(String[] args) {\n java.awt.EventQueue.invokeLater(new Runnable() { public void run() {\n runEDT();\n }});\n }\n private static void runEDT() {\n new JFrame(\"combo thing\") {{\n setLayout(new java.awt.GridLayout(2, 1));\n add(new JComboBox() {{\n setEditable(true);\n setInputVerifier(new InputVerifier() {\n @Override public boolean verify(JComponent input) {\n System.err.println(\"Hi!\");\n return true;\n }\n });\n }});\n add(new JTextField());\n setDefaultCloseOperation(EXIT_ON_CLOSE);\n pack();\n setVisible(true);\n }};\n } \n}\n\nLooks like it's a problem with JComboBox being a composite component. I'd suggest avoiding such nasty UI solutions.\n" ]
[ 8, 1 ]
[]
[]
[ "java", "jcombobox", "swing" ]
stackoverflow_0000054567_java_jcombobox_swing.txt
Q: Should I choose scripting or compiled code for small tasks? I'm a Java programmer, and I like my compiler, static analysis tools and unit testing frameworks as tools that help me quickly deliver robust and efficient code. The JRE is pretty much everywhere I would work, too. Given that situation, I can't see a reason why I would ever choose to use shell scripting, vb scripting etc, no matter how small the task is if I wear one of my other hats like my cool black sysadmin fedora. I don't wear the other hats too often, under what circumstances should I choose scripting over writing compiled code? A: If you are comfortable with Java, and the JRE is everywhere you work, then I would say keep using it. There are, however, languages like Perl and Python that are particularly suited to quickly solving problems. I would suggest learning either Perl or Python, and then use your judgement on when to use it. A: If I have a small problem that I'd like to solve quickly, I tend to use a scripting language. The code tax is smaller, and, for me at least, the result comes faster. A: Whatever you think will be most efficient for you! I had a co-worker who seemed to use a different language for every task; Perl for quick text processing, PHP for small internal web applications, .NET for our main product, cygwin for filesystem stuff. He preferred to use the technology which was most specific to the task at hand. Personally, I find that context switching between technologies is painful. My day-to-day work is in .NET, so that's pretty much the terms I think in. For most tasks I find it more efficient to knock something up in C# using SnippetCompiler than I would to hack around in PowerShell or a scripting environment. A: I would say use whichever makes sense. If it's going to take you longer to open up your IDE, compile the script, etc. than it would to edit a script file and be done with it, then use a script file. If you're not going to be changing the thing often and are quicker at Java coding then go that route :) A: It is usually quicker to write scripts than compiled programmes. You don't have to worry so much about portability between different platforms and environments. A shell script will run pretty much everywhere on most platforms. Because you're a Java developer and you mention that you have Java everywhere, you might look at Groovy (http://groovy.codehaus.org/). It is a scripting language written in Java with the ability to use Java libraries. A: The way I see it (others disagree), all your code needs to be maintainable. The smallest useful collection of code is that which a single person maintains. Even that benefits from the language and tools you mentioned. However, there may obviously be tasks where specialised languages are more advantageous than a single general purpose language. A: If you can write it quicker in Java, then go for it. Just try and be aware of what the various scripting languages can do. e.g. Don't make a full blown Java app when you can do the same with a bash one-liner. A: Weigh the importance of the tool against popping open a text editor for a quick edit vs. opening IDE, recompiling, redeploying, etc. A: Of course, the prime directive should be to "use whatever you're comfortable with." If Java is getting the job done right and on time, stick to it. But a lot of the scripting languages could save you some time because they're attuned to different problems. If you're using regular expressions, the scripting languages are a good fit.
If you're dropping into shell commands, scripts are nice. I tend to use Ruby scripts whenever I'm writing something that's small, because it's quick to write, easy to maintain, and (with Gems) easy to bolt on additional functionality without needing to use JARs or anything. Your mileage will, of course, vary. A: At the end of the day this is a question that only you can answer for yourself. Based on the fact that you said "I can't see a reason why I would ever choose to use shell scripting , ..." then it's probably the case that you should never choose it right now. But if I were you I would pick a scripting language like Python, Ruby or Perl and start trying to solve some of these small problems with this language. Over time you will start to get a feel for when it is more appropriate to write a quick script than build a full-blown solution. A: I use scripting languages for writing programs which are not expected to be maintained beyond a few executions. Most of these languages are light on boiler-plate syntax and do have a REPL. Both these features enable rapid prototyping. Since you already know Java, you can try JVM languages like Groovy, JRuby, BeanShell etc. Scala has much lighter syntax than Java, has a REPL, is statically typed and runs on the JVM - you might give that a shot as well.
Should I choose scripting or compiled code for small tasks?
I'm a Java programmer, and I like my compiler, static analysis tools and unit testing frameworks as tools that help me quickly deliver robust and efficient code. The JRE is pretty much everywhere I would work, too. Given that situation, I can't see a reason why I would ever choose to use shell scripting, vb scripting etc, no matter how small the task is if I wear one of my other hats like my cool black sysadmin fedora. I don't wear the other hats too often, under what circumstances should I choose scripting over writing compiled code?
[ "If you are comfortable with Java, and the JRE is everywhere you work, then I would say keep using it. There are, however, languages like perl and python that are particularly suited to quickly solving problems. I would suggest learning either perl or python, and then use your judgement on when to use it.\n", "If I have a small problem that I'd like to solve quickly, I tend to use a scripting language. The code tax is smaller, and, for me at least, the result comes faster.\n", "Whatever you think will be most efficient for you!\nI had a co-worker who seemed to use a different language for every task; Perl for quick text processing, PHP for small internal web applications, .NET for our main product, cygwin for filesystem stuff. He preferred to use the technology which was most specific to the task at hand.\nPersonally, I find that context switching between technologies is painful. My day-to-day work is in .NET, so that's pretty much the terms I think in. For most tasks I find it more efficient to knock something up in C# using SnippetCompiler than I would to hack around in PowerShell or a scripting environment.\n", "I would say where it makes sense. If it's going to take you longer to open up your IDE, compile the script, etc. than it would to edit a script file and be done with it than use script file. If you're not going to be changing the thing often and are quicker at Java coding then go that route :)\n", "It is usually quicker to write scripts than compiled programmes. You don't have to worry so much about portability between different platforms and environments. A shell script will run pretty much every where on most platforms. Because you're a java developer and you mention that you have java everywhere you might look at groovy (http://groovy.codehaus.org/). It is a scripting language written in java with the ability to use java libraries. \n", "The way I see it (others disagree) all your code needs to be maintainable. The smallest useful collection of code is that which a single person maintains. Even that benefits from the language and tools you mentioned.\nHowever, there may obviously be tasks where specialised languages are more advantageous than a single general purpose language.\n", "If you can write it quicker in Java, then go for it.\nJust try and be aware of what the various scripting languages can do.\ne.g. Don't make a full blown Java app when you can do the same with a bash one-liner.\n", "Weigh the importance of the tool against popping open a text editor for a quick edit vs. opening IDE, recompiling, redeploying, etc.\n", "Of course, the prime directive should be to \"use whatever you're comfortable with.\" If Java is getting the job done right and on time, stick to it. But a lot of the scripting languages could save you some time because they're attuned to different problems. If you're using regular expressions, the scripting languages are a good fit. If you're dropping into shell commands, scripts are nice.\nI tend to use Ruby scripts whenever I'm writing something that's small, because it's quick to write, easy to maintain, and (with Gems) easy to bolt on additional functionality without needed to use JARs or anything. Your milage will, of course, vary.\n", "At the end of the day this is a question that only you can answer for yourself. Based on the fact that you said \"I can't see a reason why I would ever choose to use shell scripting , ...\" then it's probably the case that you should never choose it right now. 
\nBut if I were you I would pick a scripting language like Python, Ruby or Perl and start trying to solve some of these small problems with this language. Over time you will start to get a feel for when it is more appropriate to write a quick script than build a full-blown solution.\n", "I use scripting languages for writing programs which are not expected to be maintained beyond a few executions. Most of these languages are light on boiler-plate syntax and do have a REPL. Both these features enable rapid prototyping.\nSince you already know Java, you can try JVM languages like Groovy, JRuby, BeanShell etc. Scala has much lighter syntax than Java, has a REPL, is statically typed and runs on the JVM - you might give that a shot as well.\n" ]
[ 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "scripting", "testing" ]
stackoverflow_0000054703_scripting_testing.txt
Q: What is the best style/syntax to use with Rhino Mocks? Multiple approaches exist to write your unit tests when using Rhino Mocks: The Standard Syntax Record/Replay Syntax The Fluent Syntax What is the ideal and most frictionless way? A: For .NET 2.0, I recommend the record/playback model. We like this because it clearly separates your expectations from your verifications. using(mocks.Record()) { Expect.Call(foo.Bar()); } using(mocks.Playback()) { MakeItAllHappen(); } If you're using .NET 3.5 and C# 3, then I'd recommend the fluent syntax. A: Interesting question! My own preference is for the reflection-based syntax (what I guess you mean by the Standard Syntax). I would argue that this is the most frictionless, as it does not add much extra code: you reference the stubs directly on your interfaces as though they were properly implemented. I do also quite like the Fluent syntax, although this is quite cumbersome. The Record/Replay syntax is as cumbersome as the Fluent syntax (if not more so, seemingly), but less intuitive (to me at least). I've only used NMock2, so the Record/Replay syntax is a bit alien to me, whilst the Fluent syntax is quite familiar. However, as this post suggests, if you prefer separating your expectations from your verifications/assertions, you should opt for the Fluent syntax. It's all a matter of style and personal preference, ultimately :-)
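For comparison, the fluent (arrange-act-assert) style in Rhino Mocks 3.5 looks roughly like this; it assumes an IFoo interface whose Bar() method returns an int, and a using Rhino.Mocks; directive:

var foo = MockRepository.GenerateMock<IFoo>();

foo.Stub(f => f.Bar()).Return(42);   // arrange

MakeItAllHappen();                   // act

foo.AssertWasCalled(f => f.Bar());   // assert

Expectations and verifications stay separated here too, just without the explicit record/playback blocks.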
What is the best style/syntax to use with Rhino Mocks?
Multiple approaches exist to write your unit tests when using Rhino Mocks: The Standard Syntax Record/Replay Syntax The Fluent Syntax What is the ideal and most frictionless way?
[ "For .NET 2.0, I recommend the record/playback model. We like this because it separates clearly your expectations from your verifications.\nusing(mocks.Record())\n{\n Expect.Call(foo.Bar());\n}\nusing(mocks.Playback())\n{\n MakeItAllHappen();\n}\n\nIf you're using .NET 3.5 and C# 3, then I'd recommend the fluent syntax.\n", "Interesting question! My own preference is the for the reflection-based syntax (what I guess you mean by the Standard Syntax). I would argue that this is the most frictionless, as it does not add much extra code: you reference the stubs directly on your interfaces as though they were properly implemented. \nI do also quite like the Fluent syntax, although this is quite cumbersome. The Record/Replay syntax is as cumbersome as the Fluent syntax (if not more so, seemingly), but less intuitive (to me at least). I've only used NMock2, so the Record/Replay syntax is a bit alien to me, whilst the Fluent syntax is quite familiar. \nHowever, as this post suggests, if you prefer separating your expectations from your verifications/assertions, you should opt for the Fluent syntax. It's all a matter of style and personal preference, ultimately :-)\n" ]
[ 1, 0 ]
[]
[]
[ "c#", "mocking", "rhino_mocks", "syntax", "unit_testing" ]
stackoverflow_0000054709_c#_mocking_rhino_mocks_syntax_unit_testing.txt
Q: As an ASP.NET programmer, do I need to be concerned about email injection attacks? There are lots of PHP articles about the subject, so is this a PHP-only problem? I am sending emails using System.Net.Mail after some regular expression checks, of course. Similar to http://weblogs.asp.net/scottgu/archive/2005/12/10/432854.aspx A: The PHP email injection attack works because of a weakness in the PHP Mail() function. As a .NET developer you need not worry. A: I've never heard of that issue in ASP.NET. However, you should trust user input about as much as you'd trust a hooker with your wallet. A: As long as you are using the MailAddress object, I think you're fine, because injections will only manage to throw FormatExceptions for the specified address. Examples of how to properly use the System.Net.Mail components are included in that MSDN page; be sure to follow them and you will be fine.
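A hedged sketch of the MailAddress-based check from the last answer (the helper name is invented; the newline test guards against the classic header-injection vector):

using System.Net.Mail;

static bool TryCreateAddress(string input, out MailAddress address)
{
    address = null;
    if (input == null || input.IndexOf('\r') >= 0 || input.IndexOf('\n') >= 0)
        return false; // embedded newlines are how extra headers get injected

    try
    {
        // MailAddress throws FormatException on anything that isn't a valid address
        address = new MailAddress(input.Trim());
        return true;
    }
    catch (FormatException)
    {
        return false;
    }
}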
As an ASP.NET programmer, do I need to be concerned about email injection attacks?
There are lots of PHP articles about the subject, so is this a PHP-only problem? I am sending emails using System.Net.Mail after some regular expression checks, of course. Similar to http://weblogs.asp.net/scottgu/archive/2005/12/10/432854.aspx
[ "the PHP email injection attack works because of a weakness in the PHP Mail() function. As a .net developer you need not worry.\n", "I've never heard of that issue in ASP.NET. However, you should trust user input about as much as you'd trust a hooker with your wallet.\n", "As long as you are using the MailAddress object, I think you're fine, because injections will only manage to throw FormatExceptions for the specified address.\nExamples of how to properly use the System.Net.Mail components are included in that MSDN page; be sure to follow them and you will be fine.\n" ]
[ 6, 4, 4 ]
[]
[]
[ "asp.net", "email", "security" ]
stackoverflow_0000054889_asp.net_email_security.txt
Q: Programmatically accessing Data in an ASP.NET 2.0 Repeater This is an ASP.Net 2.0 web app. The Item template looks like this, for reference: <ItemTemplate> <tr> <td class="class1" align=center><a href='url'><img src="img.gif"></a></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field1") %></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field2") %></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field3") %></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field4") %></td> </tr> </ItemTemplate> Using this in codebehind: foreach (RepeaterItem item in rptrFollowupSummary.Items) { string val = ((DataBoundLiteralControl)item.Controls[0]).Text; Trace.Write(val); } I produce this: <tr> <td class="class1" align=center><a href='url'><img src="img.gif"></a></td> <td class="class1">23</td> <td class="class1">1/1/2000</td> <td class="class1">-2</td> <td class="class1">11</td> </tr> What I need is the data from Field1 and Field4. I can't seem to get at the data the way I would in say a DataList or a GridView, and I can't seem to come up with anything else on Google or quickly leverage this one to do what I want. The only way I can see to get at the data is going to be using a regex to go and get it (Because a man takes what he wants. He takes it all. And I'm a man, aren't I? Aren't I?). Am I on the right track (not looking for the specific regex to do this; forging that might be a followup question ;) ), or am I missing something? The Repeater in this case is set in stone so I can't switch to something more elegant. Once upon a time I did something similar to what Alison Zhou suggested using DataLists, but it's been some time (2+ years) and I just completely forgot about doing it this way. Yeesh, talk about overlooking something obvious. . . So I did as Alison suggested and it works fine. I don't think the viewstate is an issue here, even though this repeater can get dozens of rows. I can't really speak to the question of doing it that way versus using the instead (but that seems like a fine solution to me otherwise). Obviously the latter is less of a viewstate footprint, but I'm not experienced enough to say when one approach might be preferable to another without an extreme example in front of me. Alison, one question: why literals and not labels? Euro Micelli, I was trying to avoid a return trip to the database. Since I'm still a little green relative to the rest of the development world, I admit I don't necessarily have a good grasp of how many database trips is "just right". There wouldn't be a performance issue here (I know the app's load well enough to know this), but I suppose I was trying to avoid it out of habit, since my boss tends to emphasize fewer trips where possible.
A: Off the top of my head, you can try something like this: <ItemTemplate> <tr> <td class="class1"><asp:Literal ID="litField1" runat="server" Text='<%# Bind("Field1") %>'/></td> <td class="class1"><asp:Literal ID="litField2" runat="server" Text='<%# Bind("Field2") %>'/></td> <td class="class1"><asp:Literal ID="litField3" runat="server" Text='<%# Bind("Field3") %>'/></td> <td class="class1"><asp:Literal ID="litField4" runat="server" Text='<%# Bind("Field4") %>'/></td> </tr> </ItemTemplate> Then, in your code behind, you can access each Literal control as follows: foreach (RepeaterItem item in rptrFollowupSummary.Items) { Literal lit1 = (Literal)item.FindControl("litField1"); string value1 = lit1.Text; Literal lit4 = (Literal)item.FindControl("litField4"); string value4 = lit4.Text; } This will add to your ViewState but it makes it easy to find your controls. A: Since you are working with tabular data, I'd recommend using the GridView control. Then you'll be able to access individual cells. Otherwise, you can set the td's for Field1 and Field4 to runat="server" and give them ID's. Then in the codebehind, access the InnerText property for each td. A: If you can afford a smidge more overhead in the generation, go for DataList and use the DataKeys property, which will save the data fields you need. You could also use labels in each of your table cells and be able to reference items with e.Item.FindControl("LabelID"). A: The <%#DataBinder.Eval(...) %> mechanism is not Data Binding in a "strict" sense. It is a one-way technique to put text in specific places in the template. If you need to get the data back out, you have to either: Get it from your source data Populate the repeater with a different mechanism Note that the Repeater doesn't save the DataSource between postbacks; you can't just ask it to give you the data later. The first method is usually easier to work with. Don't assume that it's too expensive to reacquire your data from the source, unless you prove it to yourself by measuring; it's usually pretty fast. The biggest problem with this technique is if the source data can change between calls. For the second method, a common technique is to use a Literal control. See Alison Zhou's post for an example of how to do it. I personally prefer to fill the Literal controls inside OnItemDataBound instead. A: @peacedog: Correct; Alison's method is perfectly acceptable. The trick with the database roundtrips: they are not free, obviously, but web servers tend to be very "close" (fast, low-latency connection) to the database, while your users are probably "far" (slow, high-latency connection). Because of that, sending data to/from the browser via cookies, ViewState, hidden fields or any other method can actually be "worse" than reading it again from your database. There are also security implications to keep in mind (Can an "evil" user fake the data coming back from the browser? Would it matter if they do?). But quite often it doesn't make any difference in performance. That's why you should do what works more naturally for your particular problem and worry about it only if performance starts to be a real-world issue. Good luck!
Programmatically accessing Data in an ASP.NET 2.0 Repeater
This is an ASP.NET 2.0 web app. The Item template looks like this, for reference: <ItemTemplate> <tr> <td class="class1" align=center><a href='url'><img src="img.gif"></a></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field1") %></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field2") %></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field3") %></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field4") %></td> </tr> </ItemTemplate> Using this in codebehind: foreach (RepeaterItem item in rptrFollowupSummary.Items) { string val = ((DataBoundLiteralControl)item.Controls[0]).Text; Trace.Write(val); } I produce this: <tr> <td class="class1" align=center><a href='url'><img src="img.gif"></a></td> <td class="class1">23</td> <td class="class1">1/1/2000</td> <td class="class1">-2</td> <td class="class1">11</td> </tr> What I need is the data from Field1 and Field4. I can't seem to get at the data the way I would in, say, a DataList or a GridView, and I can't seem to come up with anything else on Google or quickly leverage this one to do what I want. The only way I can see to get at the data is going to be using a regex to go and get it (Because a man takes what he wants. He takes it all. And I'm a man, aren't I? Aren't I?). Am I on the right track (not looking for the specific regex to do this; forging that might be a follow-up question ;) ), or am I missing something? The Repeater in this case is set in stone, so I can't switch to something more elegant. Once upon a time I did something similar to what Alison Zhou suggested using DataLists, but it's been some time (2+ years) and I just completely forgot about doing it this way. Yeesh, talk about overlooking something obvious. . . So I did as Alison suggested and it works fine. I don't think the viewstate is an issue here, even though this repeater can get dozens of rows. I can't really speak to the question of doing it that way versus using the alternatives instead (but that seems like a fine solution to me otherwise). Obviously the latter is less of a viewstate footprint, but I'm not experienced enough to say when one approach might be preferable to another without an extreme example in front of me. Alison, one question: why literals and not labels? Euro Micelli, I was trying to avoid a return trip to the database. Since I'm still a little green relative to the rest of the development world, I admit I don't necessarily have a good grasp of how many database trips is "just right". There wouldn't be a performance issue here (I know the app's load well enough to know this), but I suppose I was trying to avoid it out of habit, since my boss tends to emphasize fewer trips where possible.
[ "Off the top of my head, you can try something like this:\n<ItemTemplate>\n <tr>\n <td \"class1\"><asp:Literal ID=\"litField1\" runat=\"server\" Text='<%# Bind(\"Field1\") %>'/></td>\n <td \"class1\"><asp:Literal ID=\"litField2\" runat=\"server\" Text='<%# Bind(\"Field2\") %>'/></td>\n <td \"class1\"><asp:Literal ID=\"litField3\" runat=\"server\" Text='<%# Bind(\"Field3\") %>'/></td>\n <td \"class1\"><asp:Literal ID=\"litField4\" runat=\"server\" Text='<%# Bind(\"Field4\") %>'/></td>\n </tr>\n</ItemTemplate>\n\nThen, in your code behind, you can access each Literal control as follows:\nforeach (RepeaterItem item in rptrFollowupSummary.Items)\n{ \n Literal lit1 = (Literal)item.FindControl(\"litField1\");\n string value1 = lit1.Text;\n Literal lit4 = (Literal)item.FindControl(\"litField4\");\n string value4 = lit4.Text;\n}\n\nThis will add to your ViewState but it makes it easy to find your controls.\n", "Since you are working with tabular data, I'd recommend using the GridView control. Then you'll be able to access individual cells.\nOtherwise, you can set the td's for Field1 and Field4 to runat=\"server\" and give them ID's. Then in the codebehind, access the InnerText property for each td.\n", "If you can afford a smidge more overhead in the generation, go for DataList and use the DataKeys property, which will save the data fields you need.\nYou could also use labels in each of your table cells and be able to reference items with e.Item.FindControl(\"LabelID\").\n", "The <%#DataBinder.Eval(...) %> mechanism is not Data Binding in a \"strict\" sense. It is a one-way technique to put text in specific places in the template.\nIf you need to get the data back out, you have to either:\n\nGet it from your source data\nPopulate the repeater with a different mechanism\n\nNote that the Repeater doesn't save the DataSource between postbacks, You can't just ask it to give you the data later.\nThe first method is usually easier to work with. Don't assume that it's too expensive to reacquire your data from the source, unless you prove it to yourself by measuring; it's usually pretty fast. The biggest problem with this technique is if the source data can change between calls.\nFor the second method, a common technique is to use a Literal control. See Alison Zhou's post for an example of how to do it. I usually personally prefer to fill the Literal controls inside of the OnItemDataBound instead\n", "@peacedog: \nCorrect; Alison's method is perfectly acceptable.\nThe trick with the database roundtrips: they are not free, obviously, but web servers tend to be very \"close\" (fast, low-latency connection) to the database, while your users are probably \"far\" (slow, high-latency connection).\nBecause of that, sending data to/from the browser via cookies, ViewState, hidden fields or any other method can actually be \"worse\" than reading it again from your database. There are also security implications to keep in mind (Can an \"evil\" user fake the data coming back from the browser? Would it matter if they do?).\nBut quite often it doesn't make any difference in performance. That's why you should do what works more naturally for your particular problem and worry about it only if performance starts to be a real-world issue.\nGood luck!\n" ]
[ 6, 2, 0, 0, 0 ]
[]
[]
[ "asp.net", "data_access", "repeater" ]
stackoverflow_0000054708_asp.net_data_access_repeater.txt
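For reference, a sketch of the OnItemDataBound alternative mentioned in the last two answers, reusing the litField IDs from Alison Zhou's markup. The handler name and its wiring (OnItemDataBound="rptr_ItemDataBound" on the Repeater tag) are assumptions; RepeaterItemEventArgs, ListItemType, FindControl, and DataBinder.Eval are standard ASP.NET.

// In the page's code-behind:
protected void rptr_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    // Only data rows carry a DataItem; skip headers, footers, and separators.
    if (e.Item.ItemType != ListItemType.Item &&
        e.Item.ItemType != ListItemType.AlternatingItem)
        return;

    Literal lit1 = (Literal)e.Item.FindControl("litField1");
    lit1.Text = Convert.ToString(DataBinder.Eval(e.Item.DataItem, "field1"));
}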
Q: What are some best practices for creating my own custom exception? In a follow-up to a previous question regarding exceptions, what are best practices for creating a custom exception in .NET? More specifically, should you inherit from System.Exception, System.ApplicationException or some other base exception? A: In the C# IDE, type 'exception' and hit TAB. This will expand to get you started in writing a new exception type. There are comments with links to some discussion of exception practices. Personally, I'm a big fan of creating lots of small classes, and that extends to exception types. For example, in writing the Foo class, I can choose between: throw new Exception("Bar happened in Foo"); throw new FooException("Bar happened"); throw new FooBarException(); where class FooException : Exception { public FooException(string message) ... } and class FooBarException : FooException { public FooBarException() : base ("Bar happened") { } } I prefer the 3rd option, because I see it as being an OO solution. A: Inherit from System.Exception. System.ApplicationException is useless and the design guidelines say "Do not throw or derive from System.ApplicationException." See http://blogs.msdn.com/kcwalina/archive/2006/06/23/644822.aspx A: There is a code snippet for it. Use that. Plus, check your code analysis afterwards; the snippet leaves out one of the constructors you should implement. A: I think the single most important thing to remember when dealing with exceptions at any level (making custom, throwing, catching) is that exceptions are only for exceptional conditions. A: The base exception from which all other exceptions inherit is System.Exception, and that is what you should inherit, unless of course you have a use for things like, say, default messages of a more specific exception.
What are some best practices for creating my own custom exception?
In a follow-up to a previous question regarding exceptions, what are best practices for creating a custom exception in .NET? More specifically, should you inherit from System.Exception, System.ApplicationException or some other base exception?
[ "In the C# IDE, type 'exception' and hit TAB. This will expand to get you started in writing a new exception type. There are comments withs links to some discussion of exception practices.\nPersonally, I'm a big fan of creating lots of small classes, at that extends to exception types. For example, in writing the Foo class, I can choose between:\n\nthrow new Exception(\"Bar happened in Foo\");\nthrow new FooException(\"Bar happened\");\nthrow new FooBarException();\n\nwhere\nclass FooException : Exception \n{\n public FooException(string message) ... \n}\n\nand\nclass FooBarException : FooException \n{\n public FooBarException() \n : base (\"Bar happened\") \n {\n }\n}\n\nI prefer the 3rd option, because I see it as being an OO solution.\n", "Inherit from System.Exception. System.ApplicationException is useless and the design guidelines say \"Do not throw or derive from System.ApplicationException.\" \nSee http://blogs.msdn.com/kcwalina/archive/2006/06/23/644822.aspx\n", "There is a code snippet for it. Use that. Plus, check your code analysis afterwards; the snippet leaves out one of the constructors you should implement. \n", "I think the single most important thing to remember when dealing with exceptions at any level (making custom, throwing, catching) is that exceptions are only for exceptional conditions.\n", "The base exception from where all other exceptions inherit from is System.Exception, and that is what you should inherit, unless of course you have a use for things like, say, default messages of a more specific exception.\n" ]
[ 26, 16, 5, 1, 1 ]
[]
[]
[ ".net", "c#", "exception" ]
stackoverflow_0000054851_.net_c#_exception.txt
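For reference, this is roughly the shape the 'exception' snippet expands to: a System.Exception subclass with the standard constructor set that code analysis checks for. The class name here is a placeholder; the serialization constructor is the usual candidate for the one the answer says the snippet omits.

using System;
using System.Runtime.Serialization;

[Serializable]
public class MyAppException : Exception
{
    public MyAppException() { }
    public MyAppException(string message) : base(message) { }
    public MyAppException(string message, Exception inner) : base(message, inner) { }

    // Required for exceptions crossing serialization boundaries (e.g. remoting).
    protected MyAppException(SerializationInfo info, StreamingContext context)
        : base(info, context) { }
}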
Q: What is a prepared statement? I see a bunch of lines in the .log files in the postgres pg_log directory that say something like: ERROR: prepared statement "pdo_pgsql_stmt_09e097f4" does not exist What are prepared statements, and what kinds of things can cause these error messages to be displayed? A: From the documentation: A prepared statement is a server-side object that can be used to optimize performance. When the PREPARE statement is executed, the specified statement is parsed, rewritten, and planned. When an EXECUTE command is subsequently issued, the prepared statement need only be executed. Thus, the parsing, rewriting, and planning stages are only performed once, instead of every time the statement is executed. Searching the net, I found that the "pdo_pgsql_stmt" command is from some sort of PHP connection to your database. Maybe this link can help you find a suitable mailing list or issue tracker that you can send your error messages to? EDIT: I think I found your bug here: http://bugs.php.net/bug.php?id=37870
What is a prepared statement?
I see a bunch of lines in the .log files in the postgres pg_log directory that say something like: ERROR: prepared statement "pdo_pgsql_stmt_09e097f4" does not exist What are prepared statements, and what kinds of things can cause these error messages to be displayed?
[ "From the documentation:\n\nA prepared statement is a server-side\n object that can be used to optimize\n performance. When the PREPARE\n statement is executed, the specifie\n statement is parsed, rewritten, and\n planned. When an EXECUTE command is\n subsequently issued, the prepared\n statement need only be executed. Thus,\n the parsing, rewriting, and planning\n stages are only performed once,\n instead of every time the statement is\n executed.\n\nSearching the net, I found that the \"pdo_pgsql_stmt\" command is from some sort of PHP-connection to your database. Maybe this link can help you find a suiteable mailing-list or issue-tracker that you can send your error-messages to?\n\nEDIT: I think I found your bug here:\nhttp://bugs.php.net/bug.php?id=37870\n" ]
[ 4 ]
[]
[]
[ "postgresql" ]
stackoverflow_0000054955_postgresql.txt
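For reference, the PREPARE/EXECUTE cycle the quoted documentation describes looks like this in PostgreSQL (statement, table, and column names here are made up):

PREPARE get_price (integer) AS
    SELECT price FROM products WHERE id = $1;

EXECUTE get_price(42);  -- reuses the already-parsed, already-planned statement
EXECUTE get_price(43);

DEALLOCATE get_price;   -- otherwise the statement lives until the session ends

A prepared statement belongs to one session, so errors like the one in the question typically mean an EXECUTE arrived on a connection that never ran (or no longer has) the corresponding PREPARE.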
Q: How to use jQuery "after" selector I can't seem to figure out a good way to do this, but it seems like it should be simple. I have an element that I want to append a div to. Then I have another element that I want to clone and shove into that intermediate div. Here's what I was hoping to do: $("#somediv > ul").after("<div id='xxx'></div>").append($("#someotherdiv").clone()); This seems to be close, but not quite there. The problem with this is that the "append" seems to be operating on the original #somediv > ul selector. This sort of makes sense, but it's not what I wanted. How can I most efficiently select that intermediate div that I added with the after and put my #someotherdiv into it? A: Go the other way around and use insertAfter(). $("<div id='xxx'></div>") .append($("#someotherdiv").clone()) .insertAfter("#somediv > ul") Try to add your generated DOM nodes to the document only after finishing your work. Once the nodes are added to the displayed document, the browser starts listening to any change to refresh the view. Doing all the work before adding the nodes to the displayed document does improve browser performance. A: use insertAfter(): $("<div id='xxx'></div>").insertAfter("#somediv > ul").append($("#someotherdiv").clone()) A: How can I most efficiently select that intermediate div that I added with the "after" and put my "#someotherdiv" into it? @Vincent's solution is probably the fastest way to get the same result. However, if for whatever reason you need to add the div with after() and then need to select it and operate on it, you can use .nextAll( [expr] ) Find all sibling elements after the current element. Use an optional expression to filter the matched set. So your js becomes: $("#somediv > ul") .after("<div id='xxx'></div>") .nextAll('#xxx') .append($("#someotherdiv").clone());
How to use jQuery "after" selector
I can't seem to figure out a good way to do this, but it seems like it should be simple. I have an element that I want to append a div to. Then I have another element that I want to clone and shove into that intermediate div. Here's what I was hoping to do: $("#somediv > ul").after("<div id='xxx'></div>").append($("#someotherdiv").clone()); This seems to be close, but not quite there. The problem with this is that the "append" seems to be operating on the original #somediv > ul selector. This sort of makes sense, but it's not what I wanted. How can I most efficiently select that intermediate div that I added with the after and put my #someotherdiv into it?
[ "Go the other way around and use insertAfter().\n$(\"<div id='xxx'></div>\")\n .append($(\"#someotherdiv\").clone())\n .insertAfter(\"#somediv > ul\")\n\nTry to add your generated DOM nodes to the document only after finishing your work.\nOnce the nodes are added to the displayed document, the browser starts listening to any change to refresh the view. Doing all the work before adding the nodes to the displayed document does improve browser performance.\n", "use insertAfter():\n$(\"<div id='xxx'></div>\").insertAfter(\"#somediv > ul\").append($(\"#someotherdiv\").clone())\n\n", "\nHow can I most efficiently select that intermediate div that I added with the \"after\" and put my \"#someotherdiv\" into it?\n\n@Vincent's solution is probably the fastest way to get the same result. However if for whatever reason you need add the div with after() then need to select it and operate on it you can use \n\n.nextAll( [expr] )\nFind all sibling elements after the current element.\n Use an optional expression to filter the matched set.\n\nSo your js becomes:\n$(\"#somediv > ul\")\n .after(\"<div id='xxx'></div>\")\n .nextAll('#xxx')\n .append($(\"#someotherdiv\").clone());\n\n" ]
[ 9, 5, 0 ]
[]
[]
[ "css_selectors", "dom", "jquery" ]
stackoverflow_0000054877_css_selectors_dom_jquery.txt
Q: Count the number of nodes that match a given XPath expression in XmlSpy I am using XmlSpy to analyze an XML file, and I want to get a quick count of the number of nodes that match a given XPath. I know how to enter the XPath and get the list of nodes, but I am really just interested in the count. Is it possible to get this? I'm using XmlSpy Professional Edition version 2007 sp2, if it matters. A: I just figured it out. I just needed to put count() around my XPath, like so: count(//my/node)
Count the number of nodes that match a given XPath expression in XmlSpy
I am using XmlSpy to analyze an XML file, and I want to get a quick count of the number of nodes that match a given XPath. I know how to enter the XPath and get the list of nodes, but I am really just interested in the count. Is it possible to get this? I'm using XmlSpy Professional Edition version 2007 sp2, if it matters.
[ "I just figureed it out. I just needed to put count() around my xpath, like so:\ncount(//my/node)\n\n" ]
[ 6 ]
[]
[]
[ "xml", "xmlspy", "xpath" ]
stackoverflow_0000054953_xml_xmlspy_xpath.txt
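The same count() wrapper accepts any XPath, including predicates; for example (element and attribute names here are hypothetical):

count(//order/item[@status='shipped'])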
Q: Replacing Windows Explorer With Third Party Tool How would I go about replacing Windows Explorer with a third-party tool such as Total Commander, Explorer++, etc.? I would like to have one of those load instead of Windows Explorer when I type "C:\directoryName" into the Run window. Is this possible? A: From a comment on the first LifeHacker link, How to make x² your default folder application As part of the installation process, x² adds "open with xplorer2" in the context menu for filesystem folders. If you want to make this the default action (so that folders always open in x2 when you click on them) then make sure this is the default verb, either using Folder Options ("file folder" type) or editing the registry: [HKEY_CLASSES_ROOT\Directory\shell] @="open_x2" If you want some slightly different command line options, you can add any of the supported options by editing the following registry key: [HKEY_CLASSES_ROOT\Directory\shell\open\command] @="C:\Program files\zabkat\xplorer2\xplorer2_UC.exe" /T /1 "%1" Notes: Please check your installation folder first: your installation path may be different. Secondly, your executable may be called xplorer2.exe, if it is the non-Unicode version. Note that "%1" is required (including the quotation marks), and is replaced by the folder path you are trying to open. The /T switch causes no tabs to be restored and the /1 switch puts x² in single-pane mode. (You do not have to use these switches, but they make sense.) (The above is from the xplorer2 user manual) A: Go to Control Panel -> Folder Options and open the File Types tab. Find the "Folder" file type (with "(NONE)" as the extension). Go to Advanced and create a new action that uses your program (I tried it with FreeCommander). Make sure you set it as default. That should do it.
Replacing Windows Explorer With Third Party Tool
How would I go about replacing Windows Explorer with a third-party tool such as Total Commander, Explorer++, etc.? I would like to have one of those load instead of Windows Explorer when I type "C:\directoryName" into the Run window. Is this possible?
[ "From a comment on the first LifeHacker link,\nHow to make x² your default folder application\nAs part of the installation process, x² adds \"open with xplorer2\" in the context menu for\nfilesystem folders.\nIf you want to have this the default action (so that folders always open in x2 when you click on\nthem) then make sure this is the default verb, either using Folder Options (\"file folder\" type) or\nediting the registry:\n[HKEY_CLASSES_ROOT\\Directory\\shell]\n@=\"open_x2\"\n\nIf you want some slightly different command line options, you can add any of the supported\noptions by editing the following registry key:\n[HKEY_CLASSES_ROOT\\Directory\\shell\\open\\command]\n@=\"C:\\Program files\\zabkat\\xplorer2\\xplorer2_UC.exe\" /T /1 \"%1\"\n\nNotes:\n\nPlease check your installation folder first: Your installation path may be different.\nSecondly, your executable may be called xplorer2.exe, if it is the non-Unicode version.\nNote that \"%1\" is required (including the quotation marks), and is replaced by the folder path you are trying to open.\nThe /T switch causes no tabs to be restored and the /1 switch puts x² in single pane mode. (You do not have to use these switches, but they make sense).\n\n(The above are from xplorer2 user manual)\n", "If you go to Control Panel -> Folder Options And go to the File Types tab. You can go to the \"Folder\" file type (with \"(NONE)\" as the extension). Go to Advanced, create a new action that uses your program (I tried it with FreeCommander). Make sure you set it as default.\nThat should do it.\n" ]
[ 3, 0 ]
[]
[]
[ "file_management", "windows", "windows_vista" ]
stackoverflow_0000054966_file_management_windows_windows_vista.txt
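For reference, the registry edits quoted in the first answer could be packaged as a .reg file (the path is the default quoted there and may differ on your machine; note how backslashes and inner quotes must be escaped in .reg syntax):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Directory\shell]
@="open_x2"

[HKEY_CLASSES_ROOT\Directory\shell\open\command]
@="\"C:\\Program files\\zabkat\\xplorer2\\xplorer2_UC.exe\" /T /1 \"%1\""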
Q: How do prepared statements work? I'm writing some DB routines and I'm using prepared statements. My environment is PDO with PHP5. I understand prepared statements primarily provide a performance benefit, as well as some auxiliary bonuses such as not having to manually SQL-escape input data. My question is about the performance part. I have two implementations of a getPrice function below that takes a product id and returns its price. getPrice_A reuses the same PDOStatement object across subsequent calls within the same script execution. Is this necessary or recommended? If so, is there any way to avoid duplicating this extra code across every single get*() in every single model? getPrice_B creates a new PDOStatement object on every call. Will the DBMS recognize this statement has already been prepared and still be able to skip some work? In other words, does this implementation properly take advantage of the performance benefits of prepared statements? Having written all this out and read it over, I imagine getPrice_B is fine and getPrice_A is providing a negligible benefit on top of that, which may or may not be worth the extra complication. I'd still like to hear for sure from someone more knowledgeable, though. Assume that $pdo is a valid, connected PDO object in the examples below. <?php class Product { static function &getPrice_A($id) { static $stmt; if (!$stmt) { $stmt = $pdo->prepare('SELECT price FROM products WHERE id = ?'); } $stmt->execute(array($id)); return $stmt->fetchColumn(0); } static function &getPrice_B($id) { $stmt = $pdo->prepare('SELECT price FROM products WHERE id = ?'); $stmt->execute(array($id)); return $stmt->fetchColumn(0); } } // example usage: $price = Product::getPrice(4982); echo "Product 4982 costs $price\n";
How do prepared statements work?
I'm writing some DB routines and I'm using prepared statements. My environment is PDO with PHP5. I understand prepared statements primarily provide a performance benefit, as well as some auxiliary bonuses such as not having to manually SQL-escape input data. My question is about the performance part. I have two implementations of a getPrice function below that takes a product id and returns its price. getPrice_A reuses the same PDOStatement object across subsequent calls within the same script execution. Is this necessary or recommended? If so, is there any way to avoid duplicating this extra code across every single get*() in every single model? getPrice_B creates a new PDOStatement object on every call. Will the DBMS recognize this statement has already been prepared and still be able to skip some work? In other words, does this implementation properly take advantage of the performance benefits of prepared statements? Having written all this out and read it over, I imagine getPrice_B is fine and getPrice_A is providing a negligible benefit on top of that, which may or may not be worth the extra complication. I'd still like to hear for sure from someone more knowledgeable, though. Assume that $pdo is a valid, connected PDO object in the examples below. <?php class Product { static function &getPrice_A($id) { static $stmt; if (!$stmt) { $stmt = $pdo->prepare('SELECT price FROM products WHERE id = ?'); } $stmt->execute(array($id)); return $stmt->fetchColumn(0); } static function &getPrice_B($id) { $stmt = $pdo->prepare('SELECT price FROM products WHERE id = ?'); $stmt->execute(array($id)); return $stmt->fetchColumn(0); } } // example usage: $price = Product::getPrice(4982); echo "Product 4982 costs $price\n";
[ "From what I understand, prepared statements will reuse the generated SQL plan if it is the same statement, so the database will see the same prepared statement and not have to do the work to figure out how to query the database. I would say the extra work of saving the prepared statement in Product::getPrice_A is not typically very helpful, more because it can obscure the code rather than an issue of performance. When dealing with performance, I feel it's always best to focus on code clarity and then performance when you have real statistics that indicate a problem.\nI would say \"yes, the extra work is unnecessary\" (regardless of if it really boosts performance). Also, I am not a very big DB expert, but the performance gain of prepared statements is something I heard from others, and it is at the database level, not the code level (so if the code is actually invoking a parameterized statement on the actual DB, then the DB can do these execution plan caching... though depending on the database, you may get the benefit even without the parameterized statement).\nAnyways, if you are really worried about (and seeing) database performance issues, you should look into a caching solution... of which I would highly recommend memcached. With such a solution, you can cache your query results and not even hit the database for things you access frequently.\n" ]
[ 3 ]
[]
[]
[ "pdo", "php", "sql" ]
stackoverflow_0000054980_pdo_php_sql.txt
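On the follow-up question of avoiding the getPrice_A boilerplate in every get*(): one option, sketched here rather than taken from the answer above, is to cache PDOStatement objects keyed by their SQL string in a small helper. The class and method names are hypothetical; prepare(), execute(), and fetchColumn() are standard PDO.

<?php
class StatementCache
{
    private $pdo;
    private $cache = array();

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    // Prepare each distinct SQL string at most once per request, then reuse it.
    public function run($sql, array $params)
    {
        if (!isset($this->cache[$sql])) {
            $this->cache[$sql] = $this->pdo->prepare($sql);
        }
        $stmt = $this->cache[$sql];
        $stmt->execute($params);
        return $stmt;
    }
}

// Usage sketch:
// $db = new StatementCache($pdo);
// $price = $db->run('SELECT price FROM products WHERE id = ?', array(4982))->fetchColumn(0);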
Q: Better way of opening a Document from Java? I've been using the following code to open Office Documents, PDF, etc. on my Windows machines using Java and it's working fine, except, for some reason, when a filename has multiple contiguous spaces embedded within it, like "File[SPACE][SPACE]Test.doc". How can I make this work? I'm not averse to canning the whole piece of code... but I'd rather not replace it with a third party library that calls JNI. public static void openDocument(String path) throws IOException { // Make forward slashes backslashes (for windows) // Double quote any path segments with spaces in them path = path.replace("/", "\\").replaceAll( "\\\\([^\\\\\\\\\"]* [^\\\\\\\\\"]*)", "\\\\\\\"$1\""); String command = "C:\\Windows\\System32\\cmd.exe /c start " + path + ""; Runtime.getRuntime().exec(command); } EDIT: When I run it with the errant file, Windows complains that it can't find the file. But... when I run the command line directly from the command line it runs just fine.
Better way of opening a Document from Java?
I've been using the following code to open Office Documents, PDF, etc. on my Windows machines using Java and it's working fine, except, for some reason, when a filename has multiple contiguous spaces embedded within it, like "File[SPACE][SPACE]Test.doc". How can I make this work? I'm not averse to canning the whole piece of code... but I'd rather not replace it with a third party library that calls JNI. public static void openDocument(String path) throws IOException { // Make forward slashes backslashes (for windows) // Double quote any path segments with spaces in them path = path.replace("/", "\\").replaceAll( "\\\\([^\\\\\\\\\"]* [^\\\\\\\\\"]*)", "\\\\\\\"$1\""); String command = "C:\\Windows\\System32\\cmd.exe /c start " + path + ""; Runtime.getRuntime().exec(command); } EDIT: When I run it with the errant file, Windows complains that it can't find the file. But... when I run the command line directly from the command line it runs just fine.
[ "If you are using Java 6 you can just use the open method of java.awt.Desktop to launch the file using the default application for the current platform.\n", "Not sure if this will help you much... I use java 1.5+'s ProcessBuilder to launch external shell scripts in a java program. Basically I do the following: ( although this may not apply because you don't want to capture the commands output; you actually wanna fire up the document - but, maybe this will spark something that you can use )\nList<String> command = new ArrayList<String>();\ncommand.add(someExecutable);\ncommand.add(someArguemnt0);\ncommand.add(someArgument1);\ncommand.add(someArgument2);\nProcessBuilder builder = new ProcessBuilder(command);\ntry {\nfinal Process process = builder.start();\n... \n} catch (IOException ioe) {}\n\n", "The issue may be the \"start\" command you are using, rather than your file name parsing. For example, this seems to work well on my WinXP machine (using JDK 1.5)\nimport java.io.IOException;\nimport java.io.File;\n\npublic class test {\n\n public static void openDocument(String path) throws IOException {\n path = \"\\\"\" + path + \"\\\"\";\n File f = new File( path );\n String command = \"C:\\\\Windows\\\\System32\\\\cmd.exe /c \" + f.getPath() + \"\";\n Runtime.getRuntime().exec(command); \n }\n\n public static void main( String[] argv ) {\n test thisApp = new test();\n try {\n thisApp.openDocument( \"c:\\\\so\\\\My Doc.doc\");\n }\n catch( IOException e ) {\n e.printStackTrace();\n }\n }\n}\n\n" ]
[ 5, 0, 0 ]
[]
[]
[ "java", "process", "windows" ]
stackoverflow_0000054861_java_process_windows.txt
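For reference, a minimal sketch of the java.awt.Desktop approach from the accepted answer (requires Java 6). Because no shell is involved, filenames with multiple contiguous spaces need no quoting at all.

import java.awt.Desktop;
import java.io.File;
import java.io.IOException;

public class OpenDoc {
    public static void openDocument(String path) throws IOException {
        File file = new File(path); // spaces in the name are fine; no cmd.exe parsing
        if (Desktop.isDesktopSupported()) {
            // Opens the file with the platform's default application.
            Desktop.getDesktop().open(file);
        }
    }
}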